Professor and entrepreneur Francisco Vico is staking out new ground for machines by proving they can reach beyond artificial intelligence to a higher plane: artificial creativity.
Vico’s research team at the University of Malaga in Spain created a computer program, Iamus, that can compose music. The Mozart-like machine recently released an album of its compositions, performed by the London Symphony Orchestra, and has “written” over 1 billion songs across a wide range of genres.
“Sooner or later, computers will be doing art in every sense, not just music,” predicts Vico.
Vico is using the technology behind Iamus to seed a new startup, Melomics Media, which will sell royalty-free versions of its more than 1 billion songs online. Songs will retail for around $2 each, and buyers will receive all rights to any song they buy.
“We’re offering music as a commodity,” he said.
For our “Life As” series, HuffPost Tech asked Vico about the subliminal advertising he predicts will be coming soon to our favorite songs; how music will mimic our emotions; and why artificial creativity may be a boon for human ingenuity.
You were able to reverse-engineer the Nokia cellphone ring and then mutate that musical “genome” to create 1 million different variations of that tune. Why?
This opens up a very interesting way of advertising: delivering an ad without the listener noticing she’s getting one.
Imagine you’re playing music, and at some point in the song, you hear something somewhat familiar. It’s not the Nokia tune, but it’s close enough to elicit this concept in your mind, and it’s subconsciously representing the Nokia brand in your brain.
Say you have an earworm [a piece of music that you can’t get out of your head] for a brand or product and you promote it in a song. Mutations of that tune could be played everywhere, in different songs, and you’d be getting advertisement even when you didn’t realize it.
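The idea of mutating a tune’s “genome” into many recognizable variants can be sketched in a few lines. Melomics’ actual genome encoding is not described here, so this is only a toy illustration in which a tune is a list of (MIDI pitch, duration) pairs and a mutation nudges individual pitches or rhythms:

```python
import random

# Toy illustration of mutating a musical "genome". The real Melomics
# encoding is not public; here a tune is a list of (midi_pitch, beats)
# pairs, and the melody below is the familiar Nokia ringtone excerpt.
NOKIA_TUNE = [
    (76, 0.5), (74, 0.5), (66, 1.0), (68, 1.0),
    (73, 0.5), (71, 0.5), (62, 1.0), (64, 1.0),
    (71, 0.5), (69, 0.5), (61, 1.0), (64, 1.0), (69, 2.0),
]

def mutate(genome, rate=0.2, rng=random):
    """Return a variant: each note may shift pitch by a step or two,
    or have its duration halved/doubled, with probability `rate`."""
    variant = []
    for pitch, dur in genome:
        if rng.random() < rate:
            pitch += rng.choice([-2, -1, 1, 2])   # small transposition
        if rng.random() < rate:
            dur *= rng.choice([0.5, 2.0])         # rhythmic tweak
        variant.append((pitch, dur))
    return variant

# Producing a million variants is then just a loop over mutate():
variants = [mutate(NOKIA_TUNE) for _ in range(1000)]  # scale up as needed
```

Because each variant keeps most notes intact, the result stays close enough to the original to remain recognizable, which is the effect Vico describes.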
What other new abilities do you plan to give Iamus?
We are working on creating empathetic music that adapts to you and the evolution of your physiological states. The music player will learn from your current situation to know what you need to hear.
Say you’re in bed but you can’t fall asleep. The program running on your smartphone could sense your current state, and the music could evolve accordingly, changing the volume, tempo and instruments. It’s exactly as if you had a violinist looking at you and trying to play music to help you fall asleep.
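A hypothetical sketch of the “empathetic music” idea: map a physiological signal (here, heart rate in BPM) to playback targets so the music winds down as the listener relaxes. The sensor and audio plumbing are assumptions; only the mapping logic is shown:

```python
# Hypothetical mapping from a physiological signal to playback targets.
# Nothing here reflects Melomics' actual algorithm; it only illustrates
# how tempo and volume could track the listener's state.

def playback_targets(heart_rate_bpm, resting_bpm=60.0):
    """Scale tempo and volume toward calm as heart rate approaches rest."""
    # Normalize arousal to [0, 1]: 0 at rest, 1 at resting + 60 BPM.
    arousal = max(0.0, min(1.0, (heart_rate_bpm - resting_bpm) / 60.0))
    tempo_bpm = 60 + 40 * arousal   # slow toward 60 BPM as listener calms
    volume = 0.2 + 0.5 * arousal    # quieter as the listener relaxes
    return tempo_bpm, volume
```

In a real player, this function would be re-evaluated on each sensor reading and the generative engine would interpolate smoothly toward the new targets.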
How do you hope to improve the computer's ability to compose music?
I think there is a huge bias against computer composers. But even putting that bias aside, many people still think the music [made by computers] seems to go nowhere. It’s empty music: enjoyable, but in principle there’s no message behind it, because the computer did not mean anything by it. The computer did not encode an intention into the music, like, “try to evoke this feeling in the listener at this point, then at this other point, change to this other feeling.”
In the future we could add that layer of feelings, of intentionality. Introducing that intentionality into music is something we plan to do over the next few years with Iamus. This will be very, very easy compared to what we have already done.
There were some experts in artificial intelligence and cognition who considered chess a creative, intellectual endeavor on par with music and literature -- until IBM’s Deep Blue beat world chess champion Garry Kasparov. How will Iamus’ achievements change how we think about music?
With this tool that we have now, we could really explore the music space much faster and more deeply.
In the case of music, technology will help us discover new genres, new instruments, new structures, new ways of playing, new ways of experiencing music and, of course, it will greatly affect the music industry. Imagine you have a genome in front of you that represents a rock song, and another one that represents flamenco music. You can create an entirely new genre by combining the genres you already have. This is a very powerful tool that will speed up the development of music.
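Combining two musical “genomes” to get a hybrid genre is, in evolutionary-computation terms, a crossover operation. Real Melomics genomes are developmental programs rather than note lists, so this is only a minimal sketch with notes as (MIDI pitch, duration) pairs:

```python
import random

# Toy sketch of combining two musical "genomes" via one-point crossover.
# Real Melomics genomes are developmental programs, not note lists; the
# melodies below are placeholder stand-ins for a rock and a flamenco tune.
rock = [(40, 0.5), (40, 0.5), (43, 0.5), (45, 1.0), (40, 0.5), (47, 0.5)]
flamenco = [(64, 0.5), (65, 0.5), (64, 0.5), (62, 1.0), (60, 1.0)]

def crossover(a, b, rng=random):
    """One-point crossover: take the opening of one genome and the
    continuation of the other, yielding a hybrid 'genre'."""
    cut_a = rng.randint(1, len(a) - 1)
    cut_b = rng.randint(1, len(b) - 1)
    return a[:cut_a] + b[cut_b:]

hybrid = crossover(rock, flamenco)
```

Repeating crossover and mutation over many generations, with a fitness measure for what counts as “good” music, is the standard evolutionary loop this kind of system builds on.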
What will be the most significant change in music we see five years from now?
The arrival of computers will democratize music: Everyone will be able to produce music, just like everyone is able to take wonderful photographs. It will be disruptive because there will be many more musicians. Anybody who has some musical sensibility and ear will be able to produce wonderful music.
When people got cheap digital cameras, they started taking pictures of everything. Now, when you go to Flickr, you can see professional-quality pictures that weren’t taken by professional photographers.
The main contribution of artificial intelligence to the music industry will be that anybody will be able to pick up a song and either leave it as it is, or slightly adapt the raw material, with simple tools, into something very beautiful.
Your computer can compose a song in seconds. So when will Melomics have its first Lady Gaga?
Hopefully never. But I predict that this year, you’ll be able to download pieces from Melomics that can be taken directly to a discothèque, and people will think the songs were made by a human DJ.
This interview has been edited and condensed for clarity.