As a society, our fascination with robots becoming sentient, and sometimes even taking over the world, is well documented in popular media, from movies like Terminator and Ex Machina, to Her and Chappie. But what if I told you that robots aren't interested in world domination--that they just want to create art?
The relationship between AI and artists is a complex and rapidly growing area of interest that impacts every aspect of our lives, from the practicalities of our future careers down to how we see our own biases projected onto machines. With the development of AI technology, artists and non-artists alike are exploring the possibilities of AI as a creative tool, changing the landscape of visual arts, music, and beyond. In the field of image generation, several big names are already making their mark, such as Midjourney, Stable Diffusion, and OpenAI's DALL-E 2, all of which can generate art from text alone. These programs use "deep learning," a way for machines to learn through artificial neural networks: layers of artificial neurons connected by adjustable synaptic weights. By propagating data through these connections and adjusting the weights, the machine learns patterns without being explicitly programmed, and in this sense the AI "mind" parallels how we as humans learn. This subject is growing and changing so rapidly that even this essay will have to be updated in time.
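To make that idea concrete, here is a minimal, illustrative sketch in Python (using NumPy) of a single artificial neuron adjusting its synaptic weights from example data alone. The toy task (the logical OR function), the learning rate, and the variable names are assumptions chosen for illustration; they are not drawn from any of the programs named above.

```python
import numpy as np

# Toy illustration of "learning from data": one artificial neuron whose
# synaptic weights are nudged until its output matches the examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 1, 1, 1])                      # target outputs (logical OR)

rng = np.random.default_rng(0)
weights = rng.normal(size=2)   # the "synaptic weights"
bias = 0.0
learning_rate = 0.5            # illustrative value

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    output = sigmoid(X @ weights + bias)              # propagate data forward
    gradient = (output - y) * output * (1 - output)   # error signal per example
    weights -= learning_rate * (X.T @ gradient)       # adjust the weights...
    bias -= learning_rate * gradient.sum()            # ...and the bias

print(np.round(sigmoid(X @ weights + bias), 2))       # approaches [0, 1, 1, 1]
```

No rule for "OR" is ever written into the program; the behavior emerges from repeated small corrections to the weights, which is the core idea the image generators scale up by many orders of magnitude.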
One of the many concerns amongst the general public is that machines will take over our jobs. Artists, once thought to be untouchable in this area, are now faced with the same concern. We know that machines can process far more information than humans ever could and generate results in mere seconds. Instead of focusing on the ways machines highlight our inefficiencies, we should be looking at how machines can augment humans and amplify our creative capabilities. Artificial intelligence can be a tool, not an opponent.
AI speeds up repetitive processes, and while jobs will inevitably change, it leaves us more room to focus on interesting tasks and concepts, much as technology did for agriculture and laborious factory work. Conceptual ideas, knowledge, aesthetics, and how we interpret an image will become more valuable than before, taking the pressure off the creation of art itself, except when we create for pleasure.
A machine takes input from humans and interprets it, so whatever an algorithm gives you depends heavily on what it's been fed. You can't feed a machine's algorithm without data provided by humans, and this is why humans are indispensable collaborators. What is a machine without human input and interpretation? What is a machine without a human's teaching and curation? Because of this, artists and AI can become partners: AI can accelerate an artist's evolution.
One of the dangers of AI's reliance on human input is that it's subject to human bias. Humans hold a lot of biases, and when we're responsible for training machines, our faults pass onto them, whether intentionally or not. When we describe something, an image comes to mind that's shaped by our upbringing and experiences. What comes into your head isn't the same for everyone, even if we all use the same words. A machine is taught to find the median image of everything, but what it knows can be distorted by gendered, cultural, and racial stereotypes.
Ludwig Wittgenstein was a philosopher who studied the nature of language and communication. His theories describe how language works by triggering pictures in our minds of how things are in the world; we're constantly swapping pictures with each other. But we're bad at creating accurate pictures in the minds of others, whether because of flaws in our descriptions or because their understanding of the world differs from our own. Language and art are tools not only for creating images but for delivering intent: we use them to influence a certain outcome or feeling in others. The way we interpret the world, no matter how humanly flawed, will always influence a machine's learning. AI systems are only as good as the information they're trained on. We see society reflected not only in how the machine thinks, but in how we develop the technologies around us and in the movies and media we make about them.
Despite AI lacking genitalia or even a preferred gender expression, we still project our expectations of gender onto machines. Voice assistants like Siri and Alexa are presented, by default, with a feminine voice and name. The AI we imagine in movies often reflects and reinforces societal myths and stereotypes. Female robots or AI assistants are frequently depicted as subservient and sexually objectified, portrayed with interiorized, slender bodies, while male AI appears as cyborg-like weapons of mass destruction with exteriorized, stocky bodies, as in Terminator. In the movie Her, the AI Samantha is portrayed as a hypersexualized and submissive assistant whose sole purpose is to fulfill the male protagonist's desires. In the movie Ex Machina, Ava is designed to be sexually attractive and is used as a tool to manipulate men. These are just some of the many ways in which, even in the media we consume, we project ourselves and our human problems onto nonhuman beings.
We have to keep in mind that, because of our magnified biases, image-generating artificial intelligence could perpetuate harmful stereotypes if we aren't careful with the curation of data sets. When we generate an image of a certain race, sexuality, or gender, can the image be harmful? If we generate an image (or even run a simple Google search) of a doctor or a nurse, what gender or race most often comes up? These are questions we have to think about. Addressing these issues requires researchers, policymakers, and industry leaders working together to create more equitable and inclusive AI systems. That includes ensuring that diverse voices are represented in the development and deployment of AI systems, and implementing ethical guidelines and standards. Most programs, such as DALL-E 2, already carry out safety measures through careful training, beta testing, content restrictions, and reporting systems.
Even with the inevitable biases in their data sets, I find that AI generally has a good grasp of my words and shows me what I want to see. For an AI to truly make us feel understood, it needs to understand the nuances in our language, avoid being too literal, and give us output with a degree of accuracy and predictability, although sometimes the unpredictable nature of AI can work to our benefit. We can view the relationship between an artist and AI through a biological lens. One of my favorite ways to use AI is through a more Darwinian approach, where the artist takes on the role of a parent, birthing the art and nurturing it, while the AI acts as evolution does.
With the "Generate Variations" tool, AI gives "genetic variance" to the uploaded original piece. After a few seconds, the artist then takes on a new selective role after being presented with multiple variant options, adding a curatorial element to the creative process. In my experiments with Dall-E 2, I was able to upload artwork of my character design, "Miichii'.' I asked Dall-E 2 to generate new variations of my character design, and in a matter of seconds, Dall-E 2 was able to reinterpret my art in new forms and styles that I didn't expect. I was able to decide what I liked and disliked from each iteration, and made multiple sketches combining different pieces from the results.
The same applies when I give the AI a prompt and decide from there what I like and don't like in what's generated. This collaboration gave me ideas I wouldn't have thought of on my own, in a fraction of the time. I don't personally believe that spending more time mulling over your process makes the work more valuable; rather, your satisfaction with the outcome takes precedence. Ultimately, this process of experimentation left me with a much stronger, livelier character design. Genetic variance is a concept outlined by the English biologist and statistician Ronald Fisher in his fundamental theorem of natural selection. If we inject these ideas into the art-making process, it's a lot like random mutations that make offspring (the art) stronger. If a trait is advantageous and helps the artwork succeed, the genetic variation (the new style in your art) is more likely to be passed to the next generation (your future art), much like natural selection.
While AI-generated art can strengthen our aesthetic practice, it will never fully replace human-made art. Walter Benjamin's "The Work of Art in the Age of Mechanical Reproduction" discusses how art has an "aura," and asks: when a piece of artwork is reproduced in any form, does that detach it from the soul it has? Does a print have the same vigor as an original? In an original painting, you can see every detail and the effect of each material, not watered down into a mere flat image; it holds the energy of the artist's brush strokes, each hair etched into the paint. But an image or practice doesn't become any less effective simply because it's become more accessible to the masses.
Artificial intelligence provides the ability to create something from nothing for those who couldn't otherwise, whether through language, song, or image. Art in any form is a second language that bypasses all barriers, and anybody should be able to share their message. In the world of film, the more people view a screening, the more perspectives are generated--this gives the piece more life. The same principle applies to AI-generated art, making the discussion of concepts and images more accessible to the general public. There's a fine line between the traditional path and the new-age path of the artist, but ultimately, artists leveraging social media platforms, AI, and algorithms create a new, accessible way of sustaining themselves efficiently and finding their niche audience. We've stopped relying solely on galleries and the rich to decide who is worthy of a career in the arts, and have shifted focus toward the people we want to reach, giving more power to the general public.
Artificial intelligence experts once thought that an AI's ability to solve math problems was the ultimate proof of intelligence, but we now understand that these problems are easy for computers because they exist in definite domains governed by logical rules and symbols. In a game of chess, there's a finite set of moves. But most real-world problems demand intuition, creativity, adaptation, learning, and experience. Some of the skills that come most naturally to us aren't so simple. Adaptation is an integral part of intelligence, and so is our adaptation to co-existing and learning alongside AI. After all, AI is magnifying and reflecting the parts of ourselves we've already created and intend to create.
Lexi Paulino won first prize in the 2022 Humanities & Sciences Undergraduate Writing Contest for her critical essay "I Want a House." Lexi is a third-year Fine Arts major at the School of Visual Arts. She (b. 1999) is a Dominican-American artist born and raised in New York City. Her interests include toy design, street art, AI, 3D printing, and biology.