Where Does AI Fit in If Art is How We Express Our Humanity
Art and the Science of Generative AI
Published: 2023-06-16 - Updated: 2023-06-27
Author: Massachusetts Institute of Technology - Zach Winn, MIT News Office - Contact: mit.edu
Peer-Reviewed: Yes - Publication Type: Opinion Piece / Editorial
Synopsis: Ziv Epstein, an MIT Media Lab researcher, discusses issues arising from the use of generative artificial intelligence (AI) to make art and other media. He and 13 co-authors from a number of organizations published a commentary article in Science that helps set the stage for discussions about generative AI's immediate impact on creative work and society more broadly.
Definition
- Generative Artificial Intelligence
Generative artificial intelligence (generative AI or GenAI) is a type of artificial intelligence (AI) system capable of generating images, videos, audio, text, 3D models, or other media in response to prompts. Generative AI models learn the patterns and structure of their input training data and then generate new data with similar characteristics. A generative AI system is constructed by applying unsupervised or self-supervised machine learning to a data set, and its capabilities depend on the modality or type of the data set used.
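To make the prompt-in, media-out loop described above concrete, here is a minimal Python sketch. It assumes the open-source Hugging Face transformers library and the small GPT-2 checkpoint as illustrative stand-ins; the systems discussed in this article, such as ChatGPT, are far larger and are accessed as services rather than run locally.

```python
# A minimal sketch of prompt-driven text generation (assumes the
# Hugging Face "transformers" library and the small "gpt2" checkpoint).
from transformers import pipeline

# Load a text-generation pipeline backed by a model that has already
# learned the patterns and structure of its training text.
generator = pipeline("text-generation", model="gpt2")

# Prompt the model; it produces new text whose characteristics resemble
# the data it was trained on.
result = generator(
    "Art is how we express our humanity, and generative AI",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

The same pattern applies to other modalities: an image or audio generator swaps in a different model and output type, but the prompt-to-generated-media flow is the same.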
Main Digest
"Art and the Science of Generative AI" - Science.
The rapid advance of artificial intelligence has generated a lot of buzz, with some predicting it will lead to an idyllic utopia and others warning it will bring the end of humanity. But speculation about where AI technology is going, while important, can also drown out important conversations about how we should be handling the AI technologies available today.
One such technology is generative AI, which can create content including text, images, audio, and video. Popular generative AIs like the chatbot ChatGPT generate conversational text based on training data taken from the internet.
A group of 14 researchers from a number of organizations including MIT have published a commentary article in Science that helps set the stage for discussions about generative AI's immediate impact on creative work and society more broadly. The paper's MIT-affiliated co-authors include Media Lab postdoctoral researcher Ziv Epstein SM '19, PhD '23; recent graduate Matt Groh SM '19, PhD '23; MIT PhD candidate Rob Mahari '17; and Media Lab research assistant Hope Schroeder.

MIT News Speaks with Epstein, the Lead Author of the Paper
Q: Why did you write this paper?
A: Generative AI tools are doing things that even a few years ago we never thought would be possible. This raises a lot of fundamental questions about the creative process and the human's role in creative production. Are we going to get automated out of jobs? How are we going to preserve the human aspect of creativity with all of these new technologies?
The complexity of black-box AI systems can make it hard for researchers and the broader public to understand what's happening under the hood, and what the impacts of these tools on society will be. Many discussions about AI anthropomorphize the technology, implicitly suggesting these systems exhibit human-like intent, agency, or self-awareness. Even the term "artificial intelligence" reinforces these beliefs: ChatGPT uses first-person pronouns, and we say AIs "hallucinate." The agentic roles we give AIs can undermine the credit owed to the creators whose labor underlies the systems' outputs, and can deflect responsibility from the developers and decision makers when the systems cause harm.
We're trying to build coalitions across academia and beyond to help think about the interdisciplinary connections and research areas necessary to grapple with the immediate dangers to humans coming from the deployment of these tools, such as disinformation, job displacement, and changes to legal structures and culture.
Q: What do you see as the gaps in research around generative AI and art today?
A: The way we talk about AI is broken in many ways. We need to understand how perceptions of the generative process affect attitudes toward outputs and authors, and also design the interfaces and systems in a way that is really transparent about the generative process and avoids some of these misleading interpretations. How do we talk about AI and how do these narratives cut along lines of power? As we outline in the article, there are these themes around AI's impact that are important to consider: aesthetics and culture; legal aspects of ownership and credit; labor; and the impacts to the media ecosystem. For each of those we highlight the big open questions.
With aesthetics and culture, we're considering how past art technologies can inform how we think about AI. For example, when photography was invented, some painters said it was "the end of art." But instead it ended up being its own medium and eventually liberated painting from realism, giving rise to Impressionism and the modern art movement. We're saying generative AI is a medium with its own affordances. The nature of art will evolve with that.
Q: How will artists and creators express their intent and style through this new medium?
A: Issues around ownership and credit are tricky because we need copyright law that benefits creators, users, and society at large. Today's copyright laws might not adequately apportion rights to artists when these systems are trained on their styles.
Q: When it comes to training data, what does it mean to copy?
A: That's a legal question, but also a technical question. We're trying to understand if these systems are copying, and when.
For labor economics and creative work, the idea is that these generative AI systems can accelerate the creative process in many ways, but they can also remove the ideation process that starts with a blank slate. Sometimes, there's actually good that comes from starting with a blank page. We don't know how it's going to influence creativity, and we need a better understanding of how AI will affect the different stages of the creative process. We need to think carefully about how we use these tools to complement people's work instead of replacing it.
In terms of generative AI's effect on the media ecosystem, with the ability to produce synthetic media at scale, the risk of AI-generated misinformation must be considered. We need to safeguard the media ecosystem against the possibility of massive fraud on one hand, and people losing trust in real media on the other.
Q: How do you hope this paper is received - and by whom?
A: The conversation about AI has been very fragmented and frustrating. Because the technologies are moving so fast, it's been hard to think deeply about these ideas. To ensure the beneficial use of these technologies, we need to build shared language and start to understand where to focus our attention. We're hoping this paper can be a step in that direction. We're trying to start a conversation that can help us build a roadmap toward understanding this fast-moving situation.
Artists are often at the vanguard of new technologies. They're playing with the technology long before there are commercial applications. They're exploring how it works, and they're wrestling with the ethics of it. AI art has been going on for over a decade, and for just as long those artists have been grappling with the questions we now face as a society. I think it is critical to uplift the voices of the artists and other creative laborers whose jobs will be impacted by these tools. Art is how we express our humanity. It's a core human, emotional part of life. In that way we believe it's at the center of broader questions about AI's impact on society, and hopefully this paper can help ground that discussion.
Written by Zach Winn, MIT News Office
Attribution/Source(s):
This peer-reviewed opinion piece / editorial article relating to our AI and Disabilities section was selected for publishing by the editors of Disabled World due to its likely interest to our disability community readers. Though the content may have been edited for style, clarity, or length, the article "Where Does AI Fit in If Art is How We Express Our Humanity" was originally written by Massachusetts Institute of Technology - Zach Winn, MIT News Office, and published by Disabled-World.com on 2023-06-16 (Updated: 2023-06-27). Should you require further information or clarification, Massachusetts Institute of Technology - Zach Winn, MIT News Office can be contacted at mit.edu. Disabled World makes no warranties or representations in connection therewith.
Permalink: Where Does AI Fit in If Art is How We Express Our Humanity - https://www.disabled-world.com/assistivedevices/ai/ai-art.php
Cite This Page (APA): Massachusetts Institute of Technology - Zach Winn, MIT News Office. (2023, June 16). Where Does AI Fit in If Art is How We Express Our Humanity. Disabled World. Retrieved September 24, 2023 from www.disabled-world.com/assistivedevices/ai/ai-art.php