Icelandic composer and musician Olafur Arnalds is bringing a small ensemble to the Luminato Festival in Toronto on June 24 for a concert with a conceptual and technological twist. Arnalds, who is best known for TV and movie scores (notably for the British series Broadchurch), makes melodic, melancholic, simple and repetitive music that sits on the line between classical and pop. It owes a debt both to the rhythmic minimalism of the classical tradition (think Arvo Part) and to ambient electronica. The twist in his recent work is a pair of player pianos – mechanically operated instruments – controlled by a computer algorithm.
The concert is called All Strings Attached. The program Arnalds and a coder friend devised responds to a chord played by Arnalds at a different (ordinary) piano. The chord triggers a musical response, to which Arnalds then responds, developing a kind of improvised duet of human and machine. The variations are quite subtle – they play with the same basic melody Arnalds has written – but there is an element of chance in what ends up being played.
“Each note I play triggers one sequencer,” the composer explains over the phone from Bali, where he is on holiday. “The sequencer plays, on the player pianos, a series of notes that are both predetermined and controlled by some changing values. Each sequencer can have a different time signature, so they can vary rhythmically.”
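Arnalds does not publish his software, but the mechanism he describes – a performed note firing a sequencer that plays a predetermined series in its own meter, with a few values left to chance – can be sketched in miniature. Everything here (the note-to-sequencer mapping, the velocity range, the patterns) is a hypothetical illustration, not his actual system:

```python
import random

class Sequencer:
    """One sequencer per trigger note: a fixed series of pitches played
    in its own meter, with velocity left as the 'changing value'."""

    def __init__(self, pattern, beats_per_bar, seed=None):
        self.pattern = pattern              # predetermined MIDI note series
        self.beats_per_bar = beats_per_bar  # this sequencer's own time signature
        self.rng = random.Random(seed)

    def bar(self):
        """Return one bar of (note, velocity) events."""
        return [(self.pattern[i % len(self.pattern)],
                 self.rng.randint(50, 90))      # chance element: dynamics vary
                for i in range(self.beats_per_bar)]

def respond(performed_note, sequencers):
    """Nothing plays unless the performer plays: the note he strikes
    selects and fires its sequencer; an unmapped note yields silence."""
    seq = sequencers.get(performed_note)
    return seq.bar() if seq else []

# Hypothetical mapping: two trigger notes driving two meters (3 against 4),
# which is where the rhythmic variation he mentions comes from.
sequencers = {
    60: Sequencer([60, 64, 67], beats_per_bar=3, seed=1),
    62: Sequencer([62, 65, 69, 72], beats_per_bar=4, seed=2),
}

print(respond(60, sequencers))  # one three-beat bar for the player piano
print(respond(99, sequencers))  # no trigger, no response
```

The key property matches his description: the machine is purely reactive, and when the performer stops supplying notes, the sequencers stop.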
He is quick to stress that it is still human-composed music we are talking about. “This doesn’t do anything unless I play it. As the performer, I insert the notes, and it reacts to it. If I stop playing, the player pianos stop playing.”
A light show is also matched to the software. “Some of the lights are being generated by the same generative algorithms. The rhythms that are generated in the pianos are the same rhythms generated in the light show and they are completely synchronized.”
And yet despite all this Arnalds does not see his music as dependent on technology. “I’m not married to technology,” he says. “I’m just always trying to find ways to make my music interesting and unique.”
One can see why he wants to distance himself from the field of “AI art” – a notoriously unsuccessful venture so far. Programmers have taught computers to play chess and drive cars, and have almost conquered lifelike conversation (see the Google Duplex demonstration, in which a program booked a haircut at a California salon over the phone without anyone suspecting it was not a person). Many therefore see the creation of compelling art – a uniquely human activity – as the final frontier of AI, and are working feverishly to get programs to write poems and paint pictures. So far, music has proven the most difficult art form in which to produce anything convincing.
You can get a computer to draw a pleasant abstract painting or even imagine a real-looking and idiosyncratic human face. You can get a computer to write passable surrealist poetry (an easy one, actually, as the genre relies on the aleatory and the nonsensical). You can get a computer to analyze several thousand science-fiction or romance stories and then produce a typical plot structure and character list for those genres. But get it pretending to be a symphony orchestra and anybody can tell right away that it doesn’t have a clue. It sounds off, unnatural, like biting into a synthetic steak.
So why try? Aside from the thrill of accomplishing the difficult, is there an advantage to having machines dabble in art? Would anyone be interested in consuming such art as pure pleasure, without fascination at the process? Could it be compelling in any way other than as a novelty?
One thing AI does while attempting to create art is analyze it – often large quantities of it – in the most inhuman of ways. This in itself is useful to scholars. The recent field of “digital humanities” uses computers to speedily “read” (i.e., scan) and prepare complicated concordances of large bodies of work. You can get computers to digest all of Shakespeare, for example, and tell you not only how often he uses adjectives but in conjunction with what nouns, what genders or what dramatic situations. You can do the same for whole genres. Such analysis can tell you what characters are most likely to say in what situations in Western novels or in young adult novels about illness. It’s a quick way of seeing trends and themes that emotional readings might not reveal.
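The concordance work described here is, at bottom, collocation counting. A toy sketch, assuming a small hand-made adjective list in place of the part-of-speech tagger a real digital-humanities pipeline would use:

```python
import re
from collections import Counter

# Toy list for illustration; real concordance work tags parts of
# speech automatically rather than enumerating adjectives by hand.
ADJECTIVES = {"fair", "sweet", "noble", "false"}

def adjective_noun_pairs(text):
    """Count each known adjective together with the word that follows it –
    the simplest form of the collocation tables scholars build."""
    words = re.findall(r"[a-z']+", text.lower())
    pairs = Counter()
    for word, following in zip(words, words[1:]):
        if word in ADJECTIVES:
            pairs[(word, following)] += 1
    return pairs

sample = "O fair Ophelia! Sweet Ophelia, the noble prince and the false friend."
print(adjective_noun_pairs(sample).most_common())
```

Run over a complete corpus instead of one sentence, tables like this are what let a scholar say which nouns an author habitually pairs with which adjectives.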
This is the analysis that AI art-making is based on, and often the resulting statistics are more interesting than the machine-art itself. Canadian writer Stephen Marche attempted an interesting example of this, publishing an experimental science-fiction story in Wired magazine last year. He wrote the story based on suggestions for plot, character and language given to him by an algorithm that analyzed 50 classic sci-fi stories. The idea was to produce a typical story built on the tropes of the greats. What he learned from it, and explained in footnotes throughout the resulting text, was that the great stories were defined by rather florid language (lots of adjectives such as “fantastic” and “spooky”) and a dearth of active female characters (to match the greats, only 16 per cent of his dialogue could come from a female character). His story is good, but the footnotes – a metatext – are even better.
This is why Arnalds is so insistent that his soft music is not really an example of AI art. He says, “I am not interested in a computer making music.”
This is the real value of the interaction of AI and art: not to make non-human art, but to exploit and expand the ideas already in art.
If it’s just a game, it rapidly becomes dull.