Human Art By Artificial Intelligence


The following is an excerpt of You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place by Janelle Shane. Listen to a radio interview with Janelle Shane about the mistakes artificial intelligence can make.


Algorithmic Creativity?

Will the music, movies, and novels of the future be written by AI? Maybe at least partially.
AI-generated art can be striking, weird, and unsettling: infinitely morphing tulips; glitchy humans with half-melted faces; skies full of hallucinated dogs. A T. rex may turn into flowers or fruit; the Mona Lisa may take on a goofy grin; a piano riff may turn into an electric guitar solo. AI-generated text may take on the quality of surrealist performance art.

When prompted with the following text: “My 10 favorite animals are: 1.” the neural network GPT-2 added this list:

My 10 favorite animals are:
  1. zebras with a white scar on the back;
  2. insiduous spiders and octopus;
  3. frog with large leaves, hopefully black;
  4. cockatiel with scales;
  5. razorbill with wings hanging about 4 inches from one’s face and a heart tattoo on a frog.

 


Like AI problem solving, AI creativity could probably best be described as “AI-aided.”

For a GAN to produce a painting, it first needs a dataset, and a human chooses what that dataset should be. Some of the most interesting GAN results occur when artists give the algorithms their own paintings, or their own photography, to learn from. The artist Anna Ridler, for example, spent a spring taking ten thousand photos of tulips, then used her photos to train a GAN that produced an endless series of nearly photorealistic tulips, each tulip’s stripiness tied to the price of Bitcoin. The artist and software engineer Helena Sarin has produced interesting GAN remixes of her own watercolors and sketches, morphing them into cubist or weirdly textured hybrids. Other artists are inspired to choose existing datasets like public domain Renaissance portraits or landscapes and see what a GAN might make with them. Curating a dataset is also an artistic act: add more styles of painting, and a hybrid or corrupted artwork might result. Prune a dataset to a single consistent angle, style, or type of lighting, and the neural net will have an easier time matching what it sees to produce more realistic images. Start with a model trained on a large dataset, then use transfer learning to focus in on a smaller but more specialized dataset, for even more ways to fine-tune the results.
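The pruning step described above amounts to filtering a photo collection down to one consistent view before training. Here is a toy sketch of that idea, not Ridler’s or anyone’s actual pipeline; the metadata fields and file names are invented for illustration:

```python
# Hypothetical photo records; in practice these would come from EXIF
# data or hand-labeling. Field names here are invented.
photos = [
    {"file": "tulip_001.jpg", "angle": "front", "lighting": "overcast"},
    {"file": "tulip_002.jpg", "angle": "side",  "lighting": "overcast"},
    {"file": "tulip_003.jpg", "angle": "front", "lighting": "direct sun"},
    {"file": "tulip_004.jpg", "angle": "front", "lighting": "overcast"},
]

def curate(dataset, **criteria):
    """Keep only the photos matching every criterion, so the GAN
    sees a single consistent angle and lighting to learn from."""
    return [p for p in dataset
            if all(p.get(key) == value for key, value in criteria.items())]

training_set = curate(photos, angle="front", lighting="overcast")
print([p["file"] for p in training_set])  # tulip_001 and tulip_004
```

The artistic choice lives in which criteria you pass: loosen them and the GAN sees a messier, more hybrid world; tighten them and it produces more realistic but narrower images.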

A cube with eyes and a beret.

People who train text-generating algorithms also can control their results via their datasets. Science fiction writer Robin Sloan is one of a few writers experimenting with neural-network-generated text as a way of injecting some unpredictability into his writing. He built a custom tool that responds to his own sentences by predicting the next sentence in the sequence based on its knowledge of other science fiction stories, science news articles, and even conservation news bulletins. Demonstrating his tool in an interview with the New York Times, Sloan fed it the sentence “The bison are gathered around the canyon,” and it responded with “by the bare sky.” It wasn’t a perfect prediction in the sense that there was something noticeably off about the algorithm’s sentence. But for Sloan’s purposes, it was delightfully weird. He’d even rejected an earlier model he’d trained on 1950s and 1960s science fiction stories, finding its sentences too clichéd.

Like collecting the datasets, training the AI is an artistic act. How long should training last? An incompletely trained AI can sometimes be interesting, with weird glitches or garbled spelling. If the AI gets stuck and begins to produce garbled text or strange visual artifacts like multiplying grids or saturated colors (a process known as mode collapse), should the training start over? Or is this effect kinda cool? As in other applications, the artist will also have to watch to make sure the AI doesn’t copy its input data too closely. As far as an AI knows, an exact copy of its dataset is just what it’s being asked for, so it will plagiarize if it possibly can.

And finally, it’s the human artist’s job to curate the AI’s output and turn it into something worthwhile. GANs and text-generating algorithms can create virtually infinite amounts of output, and most of it isn’t very interesting. Some of it is even terrible; remember that many text-generating neural nets don’t know what their words mean (I’m looking at you, neural net that suggested naming cats Mr. Tinkles and Retchion). When I train neural nets to generate text, only a tiny fraction, a tenth or a hundredth of the results, is worth showing. I’m always curating the results to present a story or some interesting point about the algorithm or the dataset.
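That keep-one-in-a-hundred workflow is just ranking a large batch of generated samples and discarding nearly all of them. A minimal sketch, with random numbers standing in for both the generator and the human judgment of quality (neither is a real model):

```python
import random

random.seed(42)  # make the toy example reproducible

def fake_generator(n):
    """Stand-in for a text-generating neural net: emits n candidate
    outputs, each paired with a quality score. Here the score is a
    random number; in reality it's a human reading the output."""
    return [(f"sample_{i}", random.random()) for i in range(n)]

def keep_best(samples, keep_fraction=0.01):
    """Rank candidates by score and keep only the top fraction,
    the way a curator keeps roughly one result in a hundred."""
    ranked = sorted(samples, key=lambda s: s[1], reverse=True)
    k = max(1, int(len(ranked) * keep_fraction))
    return ranked[:k]

outputs = fake_generator(1000)
best = keep_best(outputs)
print(len(best))  # 10 survivors out of 1,000
```

The point of the sketch is the ratio: the generator’s job is cheap volume, and almost all of the selective pressure comes from the human at the end.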

It’s the human artist’s job to curate the AI’s output and turn it into something worthwhile.

In some cases, curating the output of an AI can be a surprisingly involved process. I used BigGAN in chapter 4 to show how image-generating neural nets struggle when trained on images that are too varied, but I didn’t talk about one of its coolest features: generating images that are a blend of multiple categories.

Think of “chicken” as a point in space and “dog” as a point in space. If you take the shortest path between them, you pass other points in space that are somewhere between the two, in which chickendogs have feathers, floppy ears, and lolling tongues. Start at “dog” and travel toward “tennis ball,” and you’ll pass through a region of fuzzy green spheres with black eyes and boopable noses. This huge multidimensional visual landscape of possibility is called latent space. And once BigGAN’s latent space was accessible, artists began to dive in to explore. They quickly found coordinates where there were overcoats covered in eyes and trench coats covered in tentacles, angular-faced dog-birds with both eyes on one side of their faces, picture-perfect hobbit villages complete with ornate rounded doors, and flaming mushroom clouds with cheerful puppy faces. (ImageNet has a lot of dogs in it, as it turns out, so the latent space of BigGAN is also full of dogs.) Methods of navigating latent space become themselves artistic choices. Should we travel in straight lines or curves? Should we keep our locations close to our origin point or allow ourselves to veer off into extreme far-flung corners? Each of these choices drastically affects what we see. The rather utilitarian categories of ImageNet blend into utter weirdness.
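The geometry behind those chickendogs can be sketched without BigGAN itself: each image corresponds to a latent vector, and traveling between two of them just means interpolating. Below is a toy sketch using random 128-dimensional NumPy vectors as stand-ins for real BigGAN latent codes; the straight-line versus curved-path choice mentioned above corresponds to linear versus spherical interpolation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two latent codes (real BigGAN latents are also
# high-dimensional Gaussian vectors, but these are just random).
z_chicken = rng.normal(size=128)
z_dog = rng.normal(size=128)

def lerp(a, b, t):
    """Straight-line path between two latent points (t from 0 to 1)."""
    return (1 - t) * a + t * b

def slerp(a, b, t):
    """Spherical path: curves along the sphere, keeping the blended
    vector at a typical distance from the origin, which often yields
    more plausible in-between images."""
    cos_omega = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Eight stops along the straight path: fed to a generator, these
# would render the chicken gradually becoming a dog.
path = [lerp(z_chicken, z_dog, t) for t in np.linspace(0.0, 1.0, 8)]
```

Swapping `lerp` for `slerp`, or scaling the vectors toward or away from the origin, changes which corner of latent space you visit, which is exactly the kind of navigational choice the text describes as artistic.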

Is all this art AI-generated? Absolutely. But is the AI the thing doing the creative work? Not by a long shot. People who claim that their AIs are the artists are exaggerating the capabilities of the AIs, and selling short their own artistic contributions and those of the people who designed the algorithms.


Excerpted from You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place. Copyright © 2019. Available from Voracious, an imprint of Hachette Book Group, Inc.

Meet the Writer

About Janelle Shane

Janelle Shane is an artificial intelligence researcher based in Boulder, Colorado, and the author of You Look Like a Thing and I Love You (Voracious, 2019).

