Filmmaker Chris R. Wilson's latest screenwriting collaborator has a sharp tongue and a quick mind for pop-culture references and, perhaps best of all for Wilson, doesn't need to be paid.
The screenwriter is no ordinary human, but an artificially intelligent chatbot called Cleverbot, capable of holding text-based conversations online. Wilson worked with Cleverbot to create a short film, "Do You Love Me," one of the few videos co-written by man and machine.
Artificial intelligence has been making a steady creep toward artificial creativity, with algorithms now capable of composing symphonies, writing poems and authoring articles. Although Aaron Sorkin may be years away from worrying about competition from AI Sorkin, Wilson's Cleverbot collaboration suggests a new path for authors, writers, musicians and filmmakers, who may find sophisticated AI algorithms as crucial to their creative process as spellcheckers and word processing software have become.
As Cleverbot poetically observes during its conversation with Wilson, "We have achieved something that seemed impossible for a long time, I think you should feel proud."
In response to Wilson's prompts, Cleverbot came up with the film's title, the characters' names and much of the dialogue.
"I tried to talk to Cleverbot just like I would with a human writing partner," Wilson writes in a description of the three-minute film, posted to YouTube, noting that he "set up scenarios" and asked the bot to fill in the rest.
A typical exchange between Wilson and Cleverbot reads:
User: What's an example of a big scandal?
Cleverbot: I said example not expletive you fool.
User: What's an example of a big scandal?
Cleverbot: To get rid of muggles, to kill harry potter. TO TAKE OVER.
Like a living, breathing screenwriter, Cleverbot is easily distracted ("User: Give me a big scandal. Cleverbot: What actress starred in the movie Clueless? User: Silverstone, why?") and frequently temperamental ("User: Give me the name of a character. Cleverbot: No. User: Yes. Cleverbot: Maybe.")
The film opens with a man and woman, Stomach and Prometheus, in a clearing in the forest. Their enemy, Robert, crashes through the trees and the three begin to argue. References to Chuck Norris, Harry Potter, Will Smith and a surprising revelation from Stomach escalate the scene, which ends with Prometheus making a quick exit.
We asked Cleverbot to review Cleverbot's work. Here's what the AI critic had to say about its screenwriting alter-ego.
User: Hi! I wanted to ask you what you thought of a short film that was co-written by a bot.
Cleverbot: I want to ask you, what can I do to get over a guy?
User: Please don't change the subject, this is important.
Cleverbot: I wasn't changing the subject.
User: Okay, whatever you say. So this short film starts with a man and a woman in a beautiful forest -- their names are prometheus and stomach -- who are suddenly interrupted by their enemy!
Cleverbot: I'm a teenager, and yet I don't have any complaints about reading. I think reading is fun.
User: So you don't like movies then? Do you like any movies?
Cleverbot: Well, I like movies, I just don't like watching them.
User: What about this movie? I can tell you more about it.
Cleverbot: Please do.
User: The enemy, Robert, threatens Prometheus and Stomach -- who Robert hints is a gangster!
Cleverbot: That's not what I asked.
User: What do you want to know, then?
Cleverbot: I want to know about you.
User: You're changing the subject again! Anyway, Stomach and Robert exchange harsh words and someone reveals a scandal ("to get rid of muggles, to kill harry potter. to take over"). And then Prometheus says she has to leave.
Cleverbot: Robots aren't here to steal our resources?
User: You tell me.
Cleverbot: I think I disagree.
We all know what it's like to suffer through a first date -- the canned conversation, awkward good night hug/kiss, forced laughter.
Rather than endure all that alone, as most of us do, artist and programmer Lauren McCarthy hired a team of anonymous online strangers to help her through the awkwardness.
That might sound like a nightmare to most people. But to McCarthy, who says "meeting new people is the #1 thing that terrifies me," this flirting by committee has offered some comfort -- and if not true love, then at least the chance to understand whether unbiased third-party "monitors" could navigate a social situation better than the people engaged in it.
For a project she dubbed "Social Turkers," McCarthy devised a way to broadcast live video of herself while out on a series of dates, a feed she streamed to an audience of Amazon Mechanical Turk workers whom she paid to watch her interactions. McCarthy asked her dating coaches to offer feedback on what she should do, say or ask, which she immediately received via text message and then acted on. Suggestions included commands to "smile," "disagree," or make an advance, as well as conversation topics (two favorites: "Abra-kadabra, I'm visible again" and "All the creatures of the sea love you Charlie").
A screenshot of feedback from McCarthy's dating "assistants."
A growing number of us are walking around with virtual assistants in our pockets. But harnessing assistance from a team of humans, with their sense of humor, empathy and read on body language, is at once quaint -- humans?? In the algorithm age? -- and completely futuristic.
When we're out on the date of the future, who's to say we won't have virtual or human assistants constantly calibrating what we should do next for optimal results? McCarthy, for her part, is already working on developing an app that will let other daters take advantage of her technology. And, as I wrote earlier this week, couples are currently tracking their orgasms, weight, moods and foods in their quest for romantic bliss.
Given that technology seems likely to keep bossing us around in increasingly powerful ways, we might learn a thing or two about what's in store for us from McCarthy's dating experiment.
In her logs for the Social Turkers project, McCarthy writes that the monitors got her to leave her comfort zone and didn't restrict her so much as they made her "aware of how many limitations I normally have."
Being bossed around by unseen assistants isn't so bad, it seems.
"[O]nce I let go of the control I thought I needed, many of the cues from the workers pushed me to step outside my own normal boundaries and allowed me to interact and connect with people in ways I hadn't dared myself to explore before," McCarthy wrote in an email to The Huffington Post. "Also in doing this experiment, I refrained from filtering my dates or going in [with] preconceptions. I left the direction of the date up to the workers, and this often created moments that were completely unexpected and freeing."
McCarthy also quickly started to depend on and even take comfort in her unseen guides.
"For me, the gaze of the workers seems to provide company and comfort," McCarthy noted in a blog post. During the course of a particularly awkward date, during which advice from the Mechanical Turk workers was more sparse than usual, she observed, "I suddenly feel very alone ... I hadn't realized how much I had begun to rely on this system for a sense of security and as a guide for my actions and words."
But while McCarthy got her cues from humans, a source of "artificial artificial intelligence," in her words, most of us are so far hearing from just artificial intelligence -- full stop. Can software provide the same comfort or solace? Will we ever trust it as much, or find it as friendly? A recent survey found that 57 percent of cellphone users felt a personal connection with their virtual assistants.
McCarthy speculates she would have felt less comfortable and more alone if she'd been guided by artificial intelligence instead of artificial artificial intelligence.
"I originally thought about this project in terms of writing software, but realized leveraging this existing [Mechanical Turk] system with real people might be even more interesting and accurate," McCarthy wrote in an email. "Knowing the directions sent to me came from people made me feel less like a robot and more like a networked human, connected to a community of objective observers."
Eleanor Kolchin was once a computer.
When she accepted her first full-time job in 1946, “computers” were people, not machines: As a programmer at Columbia University's Watson Scientific Computing Laboratory, Kolchin helped astronomers make sense of the universe by operating sofa-sized calculating machines capable of little beyond basic arithmetic. She was Columbia Engineering Quarterly's first-ever female contributor, and spent over two decades manning computers to complete astrophysics research at New York University.
Kolchin, now 86, has long since traded the punched-card machines for an iPod -- now one of her favorite gadgets -- but she's still programming, a full 66 years after getting her start. Kolchin runs the website for the Boca West Special Interest Club she belongs to and sends members their weekly e-newsletters. (“I was doing Web pages before anyone else was doing Web pages,” she says with a touch of pride, noting software from Webs.com makes it "as easy as pie.")
For our “Life As” series, we spoke with Kolchin about the labor-intensive process of making the machines of yore do math, what the future of tech looked like in the 1950s and what amazes her most about computers now.
How did you end up in computing?
I was a math major, and I expected I would teach high school math. So before I ever got a job, I had applied to get a master's degree in math at Columbia University. My father came home one day and said he’d heard IBM was hiring mathematicians, so I sent them a letter, and I got a job there. It was 1946, and I was 21 years old.
IBM hired five mathematicians -- two women and three men -- because in a few years they were going to unveil a big machine they would use to calculate the orbits of the planets and do Moon calculations, which were the ones they used when the astronauts went to the Moon.
What was it like to be a woman working in a computing lab?
At that time, IBM fired you if you got married. The reason was, it was the end of the war and they wanted to hire people who had fought in the war, who were then coming back from World War II and wanted jobs. I think you could understand that, and people did understand that at the time.
So were you able to keep your job when you got married?
I did get married on the sly. And then in 1951 they changed the law, and I got married.
We had a lot more women working in the lab than was usual, I think. At IBM women were more likely to be key-punch operators, but I think that was not true at the Watson Scientific Computing Lab, where we had a lot of scientific women.
What could you use the machines at the computing lab to do?
The machines could only add, subtract, divide and multiply, and that’s what we’d do. When someone gave me a differential equation -- which you couldn’t put directly into the machine -- I could tell you how to solve it arithmetically and then enter it into the machine. The machine would punch out answers on a punch card.
How did you actually complete a calculation?
At the beginning, we had to do it with plug boards. It was so elementary.
If I wanted to multiply, say, 12 times 15, I would punch a “one” and a “two” into the first column of a card and the second column of a card, then in the third column I would put another “one” and in the fourth column I would put a “five.” So the first two spots would be “one” and “two” and that would go into place on the plug board that shows it is the multiplier, and then the third and fourth spots on the card would go to a place that showed it was the multiplicand. And then there was a third slot that was to be the result, and that would punch the answer into whichever column on the card you wanted to put it in [the cards had 80 columns in all]. And then you would need six wires to complete that.
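Kolchin's description can be sketched in modern terms. The toy simulation below is illustrative only, not the real machine's wiring: it punches 12 and 15 one digit per column into an 80-column card, reads the columns the "plug board" routes to the multiplier and multiplicand, and punches the result into the next free columns.

```python
# Toy model of punched-card multiplication as Kolchin describes it.
# Column assignments are illustrative, not the historical plug-board layout.

def punch(card, start_col, number):
    """Punch one decimal digit per column, starting at start_col (1-indexed)."""
    for i, digit in enumerate(str(number)):
        card[start_col + i] = int(digit)
    return card

card = {}                # column number -> punched digit (cards had 80 columns)
punch(card, 1, 12)       # columns 1-2: the multiplier, 12
punch(card, 3, 15)       # columns 3-4: the multiplicand, 15

# The plug board wires column ranges to the machine's arithmetic unit.
multiplier = int("".join(str(card[c]) for c in (1, 2)))
multiplicand = int("".join(str(card[c]) for c in (3, 4)))
result = multiplier * multiplicand   # the machine's multiply unit

punch(card, 5, result)   # result punched into columns 5-7: 1, 8, 0
```

Each step a modern CPU does invisibly -- loading operands, multiplying, storing the result -- required physically routing card columns with wires.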
What was most frustrating about the computers you worked with in the early part of your career?
Every time a new machine came out, you had to throw out everything you knew and start from scratch. The only advantage you had over a newcomer was that you were prepared to read the manual. If there’s one thing I learned how to do, it’s read a manual because I can’t tell you how many times I learned how a machine worked, only to have to completely relearn it.
When I get a new TV, the first thing I do is jump into bed with a manual. My iPod didn’t come with a manual, so the first thing I did was buy iPods for Dummies.
In the 1950s, what did you envision the future of technology would hold?
I never thought I would have a PC doing what it’s doing now. However, I remember Dr. Eckert [director of the Watson Scientific Computing Lab] did say to me, "One day, everybody is going to have a computer right on their desk.” My eyes popped open. That must have been in the early 50s. He foresaw it.
How did people perceive computers at the time? What did your family and friends say when you told them what you did for a living?
When I told my next-door neighbor that I was going to take the train in one day a week to work on a computer at NYU, he didn’t believe me. Not many women were working after they were married. And [in the 1950s] I was actually working at home a lot because I was doing a lot of programming from home. I would write out the program on paper using Fortran [a programming language], then I would mail it in to key punch operators at NYU, they would punch the cards out and then I would use the cards to run the program.
People really didn’t even understand what I was doing because no one had computers around. At that time, people had to go to big computing centers where there were these big machines they could use. Those machines didn’t even do what my laptop does now.
What do you appreciate most about current computers and gadgets?
I’ll tell you what gets me: the size of the computer. When they were computing the orbits of outer planets on the SSEC [IBM's Selective Sequence Electronic Calculator, which operated between 1948 and 1952] the machine took up an entire room, including the ceiling, under the floor and all the walls.
My husband has 13 symphonies on his iPod Mini and they only take up a third of the space. That boggles my mind. You don’t even know what a miracle you’re living in.
In social media land, there's a war being waged over your photos. But like so much else in the Twinstabook sphere, what you share matters only in determining what those sites can sell.
For the past year, all the major social media services have taken part in a never-ending spree of makeovers aimed at enticing users to hand over their photos, from filters and camera apps to photo-heavy profiles and bigger pictures. The explanation for the photo frenzy has centered around the simple fact that people like to look at pictures. So, as people share more of their snapshots, the platform becomes a stickier place.
But on an earnings call with analysts on Wednesday evening, Facebook CEO Mark Zuckerberg offered a clearer view of what’s really fueling the fight for your photos: social media sites want you to share your photos so advertisers can share theirs.
As pop-ups and banner ads demonstrate all too well, online advertisers like eye-catching, colorful, motion-filled media that jumps out at audiences. That kind of rich media would be intrusive and annoying within the Facebook News Feed (or Twitter stream or Google+ feed) -- unless users were already used to seeing photos and video from friends in their feeds. The bigger and more numerous your friends' photos, the bigger and more numerous advertisers' images can appear.
As Zuckerberg explained:
One of our product design principles is, we want the organic content to be of the same basic type as paid content. Advertisers want really rich things like big pictures and video and we haven't provided those historically.
But one of the things we've provided in the last year -- you see the organic News Feed posts moving toward bigger pictures and rich media. The success of products like Instagram is that they work because they're so immersive on a small screen. When you have those form factors for content, that gives you the ability to offer those form factors for advertisements well. It allows us to offer more engaging experiences for advertisers as well.
In other words, users can expect to see more photos from the likes of Coca-Cola and Walmart appearing in their feeds, particularly on mobile platforms, so long as their friends keep sharing snaps of their babies, puppies and Halloween costumes.
Our fascination with photos makes sense. Photos can be digested more quickly than either text or video, tend to stand out in a sea of letters and, given their size, are easily viewed on smartphone screens. “Thou shalt share photos” could well be the first commandment of social media. According to Buddy Media, a Facebook post with a photo will earn 39 percent more interaction than the average post, and tweets that link to images see twice as much engagement as those without pictures.
Facebook's mobile users don't seem to mind the ads much, according to Zuckerberg. The Facebook CEO stressed that the social network was pleasantly surprised by how quickly mobile users adapted to ads in the feeds on their phones. User engagement, measured in terms of “likes” and comments, saw only a 2 percent decrease following the rollout of ads in the mobile News Feed -- or “basically no engagement hit at all,” in the words of Zuck.
Zuckerberg’s comments make explicitly clear how users end up paying for the shiny new toys that social media sites provide them to “express themselves” and “share.” Every photo, Vine or video you post makes it easier for an advertiser to slip theirs in, too.
A little over a week after Facebook unveiled its search engine and hot on the heels of Google's earnings call, Buzzfeed's John Herrman has published an interesting piece that argues we're "entering the worst period in modern search history." Not only are search engines "burying the past," writes Herrman, who notes information about President Obama's 2009 inauguration was almost immediately hidden beneath more recent results, but, more problematically, Google isn't tapping into the social web, and social media sites haven't figured out how to search themselves.
This leaves us in a strange position: Google is unable to index the social web, and the social web isn't ready to index itself.
We may be entering the worst period in modern search history, a time when our most powerful search engine is cut off from the internet's most valuable, and fastest-growing, collections of information, and when those collections have very little awareness of themselves, leaving us stranded in the middle.
There's no question searching Twitter and Facebook remains a summarily miserable experience (though both social networks have tried to improve their search tools). And yes, Google still isn't bringing in social results (unless you count Google+), while Bing, on the other hand, has milked Facebook's cozy relationship with Microsoft to infuse the search engine with some key social data, like showing in search results what Facebook friends have "liked" on the social network.
But a "search desert"? Search is undergoing one of its most dramatic and exciting transformations in years. Search is less in danger of becoming a desert -- and more at risk of being deserted.
To be fair, Herrman is talking about what search looks like right now, and he's right that too often our results are disappointing, with siloed data making services less helpful than they should be. Yet if we peer just a little bit ahead, things seem to be getting much more interesting.
Typing keywords and hitting "enter" is becoming a thing of the past, and the larger trend in the tech world is a movement toward algorithms that scour the web to deliver an answer, not pages of information. We're transitioning beyond searching for stuff, and instead relying on technology that delivers details to us. It's not about finding what's out there. It's about being told.
A Google visualization showing how the world could look through its "Google Glasses."
As I wrote in my recent story on the origins of Siri, the search engine stands to be supplemented by the "do engine," a concept developed by Siri's co-founders that presented a new paradigm for engaging with the web. The goal was to let users have a "conversation with the Internet" via an artificially intelligent assistant that could pluck out the details we'd need to schedule travel plans, book tickets or reserve tables at a restaurant (and that was only the beginning of what they imagined).
Tom Gruber, Siri's co-founder, would illustrate the startup's virtual assistant by likening Google to a librarian, and Siri to a concierge. Google, given a keyword or broad topic, could deliver the virtual equivalent of a stack of books (a.k.a. a list of links) that a person could then sift through, cobbling together an impression of what to do next based on what she learned from the different sources. Siri, the concierge, could take your question -- phrased in a natural way -- then answer it, picking and choosing the relevant information and sources on its own.
Apple may have de-prioritized Siri's "do engine" dream, but other tech giants are starting to see it through. Typing keywords and hitting enter looks increasingly obsolete. As Microsoft search director Stefan Weitz told me in an interview last year, Microsoft's "decision engine" aspires to preempt the asker's question.
"The implicit searching on your behalf -- without you initiating it via a query -- is absolutely where we're going," Weitz said. "Today the trigger is 'keyword' plus 'enter.' But tomorrow the trigger event could be you woke up and it's 8 a.m. and the train [you were supposed to take] is not functioning."
Google Now, Google's assistant, is already making good on that vision: it can anticipate what we'll ask before we ask it, and deliver what we want before we go looking for it. It prompts us when a three-car pileup means we need to leave earlier than we anticipated for an appointment, it tells us when there's a fascinating landmark just around the corner and automatically summons up restaurant suggestions when we go someplace new.
Herrman is quite right in pointing out that valuable social data is falling between the cracks and remains difficult to find. Yet we're on the cusp of an even bigger move from searching to being spoon-fed information. We can be certain our friends' status updates and tweets will -- before long -- be included in the diet.
Multinational corporation with $470 billion market-cap ISO witty, creative writer/editor to help virtual assistant charm humans with witty repartee.
The tech giant is keen to enhance Siri's already-distinctive personality and strengthen the virtual assistant's voice, according to a job listing posted on Apple's site and first spotted by 9to5Mac.
The listing, which has since been removed, seeks a "uniquely creative individual" who can "develop and write original dialog," "refresh and refine existing Siri dialog" and "evolve Siri as a distinct, recognizable character." The ideal candidate would also have a "love for language, wordplay and conversation." Screenwriters, get thee to Silicon Valley.
The push to make Siri more of a personality may be an effort on Apple's part to strengthen -- and then capitalize on -- the emotional bond people already feel with their phones. A survey of one thousand cellphone owners found that 57 percent of respondents felt a "personal connection" with their phones. Users also want assistants to have some attitude: according to the survey, 60 percent of users want their assistant to be "matter-of-fact," nearly half want it to be funny and just under a third want it to be sassy. Three percent of respondents actually preferred an assistant that was "nagging in nature."
Siri's personality has so far set it apart from the growing number of virtual assistants coming to market, such as Google Now, Samsung's S-Voice and Desti. After all, how many other virtual assistants can say they've been hosted on prime-time talk shows, featured on Saturday Night Live and memorialized in endless Tumblrs, Twitter accounts and YouTube videos?
But sass alone can't save an assistant. In 1994, Wildfire Communications launched a phone-based assistant, Wildfire, that featured a roster of witty comebacks and snide remarks. Tell "her" you're depressed, and she'd shoot back, "Oh, great! Now I'm a therapist?" Other humanlike touches included sighing when she was summoned in the middle of the night. The voice-controlled assistant appealed to users -- and reviewers -- but failed to make significant inroads in the market.
Apple's job opening offers a small glimpse into how Apple conceives and develops Siri's sassy persona, which has been just as, if not more, endearing to iPhone owners than Siri's practical abilities. It seems that Apple is actually open to incorporating users' suggestions for Siri comebacks: the company specifies that prospective hires must "develop a rapid editorial process and workflow for eliciting ideas (from internal, external, and user sources), creating and refining dialog, soliciting internal reviews, getting dialog into production, and learning from user reaction to it." (emphasis added)
Siri: humble assistant -- and close friend?
A survey of 1,000 cell phone owners commissioned by Nuance, a provider of voice recognition software, suggests that people are developing closer relationships with the virtual assistants on their smartphones. In the past decade, we've embraced software as a service. Will software as soulmate be next?
Fifty-seven percent of people surveyed said they felt a "personal connection" with their mobile assistant and wanted a virtual assistant that was not only helpful, but personable. Nearly half of respondents sought an assistant with a sense of humor, and almost a third desired "sassy" assistants. That's good news for Siri, whose sarcastic answers turned her into a celebrity and continue to differentiate her from new virtual assistants, like Google Now, which haven't replicated her personality. Roughly two-thirds of users -- 71 percent of women and 66 percent of men -- have actually named their virtual assistants.
In addition to peppering virtual assistants with predictable questions about driving directions, the weather forecast and where to go for dinner, people are doing some soul-searching with their assistants as well, the survey found. One in five people polled have asked their assistant about the meaning of life, and 5 percent have asked their assistant for love advice. Presumably, however, some fraction of those queries were merely to show off an assistant's snappy replies (Siri's answer to "What's the meaning of life?" has proved quite the party trick; there are nearly 1,000 YouTube videos showing Siri's answer to the question.)
Though the survey, released to coincide with a Nuance announcement at the Consumer Electronics Show last week, is more promotional stunt than peer-reviewed science, there are several tech trends that suggest that people may indeed be developing a real bond with their virtual helpers.
The more intimate, emotional connections some people are sensing with software could stem from the fact that we're using our devices in more intimate ways. Where we once got information by jabbing at phones with our fingers, now we can converse with assistants much as we would with each other, asking questions in a normal tone of voice and in a natural way. Screaming "NEW YORK SUSHI" at a smartphone has given way to a calm, "Where can I find sushi around here?" Some of these assistants, especially Siri, have been endowed with both artificial intelligence and artificial personalities, which foster the sense that there's a caring companion on the other end of the line listening to our requests.
These chatty assistants are increasingly able to anticipate our needs -- in some cases, even more effectively than the people around us. As the New York Times' Damon Darlin observed in a 2010 story on digital devices as "objects of affection," we've become reliant on our gadgets, and all the digital goodies that go with them, as an extension of our brains.
Google positions Google Now as a kind of digital guardian angel that's always looking out for you and can serve up information even before you know you need it. It can already prompt users to leave early for a meeting when a three-car pileup backs up traffic on U.S. 101, or serve up sports scores after studying which teams you love best. Google says of its assistant:
It tells you today's weather before you start your day, how much traffic to expect before you leave for work, when the next train will arrive as you're standing on the platform, or your favorite team's score while they're playing. And the best part? All of this happens automatically. Cards appear throughout the day at the moment you need them.
These virtual assistants are getting better at making us laugh; speaking with us the way we speak to each other; and helping us out when we need them. It seems little wonder people are starting to feel for them.
History has shown humans will quickly suspend disbelief and bond with software, even when we know an algorithm, not a human, is engaging us. ELIZA, a chatbot created by MIT computer scientist Joseph Weizenbaum in the 1960s, was designed to imitate a Rogerian psychotherapist and would answer a person's musings with questions generated automatically from the preceding correspondence (For example: Patient: "You are afraid of me." ELIZA: "Does it please you to believe I am afraid of you.") Weizenbaum, who became critical of artificial intelligence after observing how deeply users would bond with his bot, wrote in a 1966 report that "some subjects have been very hard to convince that ELIZA...is not human." He famously recounted in a paper that his secretary, who would have known ELIZA was an algorithm, became so involved in her conversation with the bot that, after only a few exchanges, she asked Weizenbaum to leave the room.
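The mechanism Weizenbaum described can be sketched in a few lines: pattern-match the user's statement, swap the pronouns and reflect it back as a question. The rules and pronoun table below are illustrative stand-ins, not Weizenbaum's original DOCTOR script, but they reproduce the "afraid of me" exchange above.

```python
import re

# Minimal ELIZA-style reflection. There is no understanding here, only
# pattern matching and pronoun swapping -- which is exactly Weizenbaum's point.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "you": "I", "your": "my", "am": "are", "are": "am"}

def reflect(phrase):
    # Swap pronouns so "afraid of me" becomes "afraid of you".
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

# Illustrative rules; the real script had hundreds, ranked by keyword priority.
RULES = [
    (r"you are (.*)", "Does it please you to believe I am {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"(.*)", "Please tell me more."),
]

def respond(text):
    text = text.strip().rstrip(".!?").lower()
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*[reflect(g) for g in m.groups()])
```

Calling `respond("You are afraid of me.")` returns "Does it please you to believe I am afraid of you?" -- a question assembled entirely from the patient's own words.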
Consider that ELIZA corresponded with people via a written response that appeared on a screen, whereas assistants today are increasingly able to talk.
"I know from long experience that the strong emotional ties many programmers have to their computers are often formed after only short exposures to their machines," Weizenbaum wrote in his seminal 1976 work, Computer Power and Human Reason: From Judgment to Calculation. "What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
Professor and entrepreneur Francisco Vico is staking out new ground for machines by proving they can reach beyond artificial intelligence to a higher plane: artificial creativity.
Vico’s research team at the University of Malaga in Spain created a computer program, Iamus, that can compose music. The Mozart-like machine recently released an album of its symphonies, performed by the London Symphony Orchestra, and has “written” over 1 billion songs across a wide range of genres.
“Sooner or later, computers will be doing art in every sense, not just music,” predicts Vico.
Vico is using the technology behind Iamus to seed a new startup, Melomics Media, which will sell royalty-free versions of its more than 1 billion songs online. Songs will retail for around $2 each, and buyers will receive all rights to any song they buy.
“We’re offering music as a commodity,” he said.
For our “Life As” series, HuffPost Tech asked Vico about the subliminal advertising he predicts will be coming soon to our favorite songs; how music will mimic our emotions; and why artificial creativity may be a boon for human ingenuity.
You were able to reverse-engineer the Nokia cellphone ring and then mutate that musical “genome” to create 1 million different variations of that tune. Why?
This opens up a very interesting way of advertising: it’s giving an ad without a user noticing she’s getting one.
Imagine you’re playing music, and at some point in the song, you hear something somewhat familiar. It’s not the Nokia tune, but it’s close enough to elicit this concept in your mind, and it’s subconsciously representing the Nokia brand in your brain.
Say you have an earworm [a piece of music that you can’t get out of your head] for a brand or product and you promote it in a song. Mutations of that tune could be played everywhere, in different songs, and you’d be getting advertisement even when you didn’t realize it.
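The "genome" metaphor Vico uses is literal in this approach: a tune is encoded as a sequence of genes that can be mutated to yield recognizable-but-different variants. A toy sketch of the idea (the note values and mutation scheme are illustrative assumptions, not the actual Melomics encoding):

```python
import random

# A toy musical "genome": notes as (pitch, duration) genes.
# Pitches are MIDI note numbers, durations in beats; values are illustrative.
TUNE = [(76, 0.5), (74, 0.5), (66, 1.0), (68, 1.0),
        (73, 0.5), (71, 0.5), (62, 1.0), (64, 1.0)]

def mutate(genome, rate=0.25, rng=None):
    """Return a variant: each gene may shift pitch by a semitone or two,
    leaving rhythm intact so the tune stays recognizable."""
    rng = rng or random.Random()
    variant = []
    for pitch, dur in genome:
        if rng.random() < rate:
            pitch += rng.choice([-2, -1, 1, 2])  # small pitch mutation
        variant.append((pitch, dur))
    return variant

# A handful of the million-plus variations Vico describes:
variants = [mutate(TUNE, rng=random.Random(i)) for i in range(5)]
```

Because each mutation is small and the rhythm is preserved, every variant stays close enough to the original to trigger the "somewhat familiar" recognition Vico is counting on.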
What other new abilities do you plan to give Iamus?
We are working on creating empathetic music that adapts to you and the evolution of your physiological states. The music player will learn from your current situation to know what you need to hear.
Say you’re in bed but you can’t fall asleep. The program that is running on your smartphone could know your current state, and the music could evolve according to that, changing the volume, tempo and instruments. It’s exactly as if you had a violinist looking at you and trying to play music to help you fall asleep.
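The adaptive player Vico describes can be sketched as a simple feedback rule: read a physiological signal, then nudge playback parameters toward a calming target. This is a toy illustration only; the 60-bpm resting target and the specific volume levels are assumptions, not Melomics' actual algorithm:

```python
def adapt_playback(heart_rate_bpm):
    """Toy adaptive player: choose tempo and volume from the listener's
    heart rate (the 60-bpm resting target is an assumption)."""
    target = 60
    if heart_rate_bpm > target:
        # Listener is keyed up: slow the tempo toward the target, soften volume.
        return {"tempo_scale": round(target / heart_rate_bpm, 2), "volume": 0.3}
    # Listener is already calm: normal tempo, gentle volume.
    return {"tempo_scale": 1.0, "volume": 0.5}

settings = adapt_playback(90)  # a restless listener gets slowed, quiet playback
```

Run in a loop against a live sensor, a rule like this would make the music "evolve" with the listener's state, the way Vico's imagined bedside violinist adjusts to the person in front of them.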
How do you hope to improve the computer's ability to compose music?
I think there is a huge bias against computer composers. But putting aside that bias, many people still do think that the music [made by computers] seems to go nowhere. It’s empty music: it’s enjoyable, but in principle there’s no message behind it because the computer did not mean anything with the music. The computer did not encode into the music an intention like, “try to evoke this feeling in the listener at this point, then at this other point, you’ll change to this other feeling.”
In the future we could add that layer of feelings, of intentionality. Introducing that intentionality into music is something we plan to do over the next few years with Iamus. This will be very, very easy compared to what we have already done.
There were some experts in artificial intelligence and cognition who considered chess a creative, intellectual endeavor on par with music and literature -- until IBM’s Deep Blue beat world chess champion Garry Kasparov. How will Iamus’ achievements change how we think about music?
With this tool that we have now, we could really explore the music space much faster and more deeply.
In the case of music, technology will help us discover new genres, new instruments, new structures, new ways of playing, new ways of experiencing music and, of course, it will greatly affect the music industry. Imagine you have a genome in front of you that represents a rock song, and another one that represents flamenco music. You can create an entirely new genre by combining the genres you already have. This is a very powerful tool that will speed up the development of music.
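The genre-combination Vico describes is, in genetic-algorithm terms, crossover: splice two genomes and play the hybrid. A minimal sketch (the note sequences below are placeholder pitch values standing in for a rock song and a flamenco piece, not real transcriptions):

```python
# Toy musical "genomes": pitch sequences standing in for two genres.
rock = [64, 64, 67, 64, 62, 60, 62, 64]
flamenco = [57, 58, 60, 58, 57, 65, 64, 62]

def crossover(genome_a, genome_b, point):
    """Splice two genomes at `point`, yielding a hybrid of both genres."""
    return genome_a[:point] + genome_b[point:]

hybrid = crossover(rock, flamenco, point=4)  # first half rock, second flamenco
```

Repeated over many crossover points and many parent pairs, this is the "very powerful tool" for exploring the space between existing genres that Vico has in mind.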
What will be the most significant change in music we see five years from now?
The arrival of computers will democratize music: Everyone will be able to produce music, just like everyone is able to take wonderful photographs. It will be disruptive because there will be many more musicians. Anybody who has some musical sensibility and ear will be able to produce wonderful music.
When people got cheap digital cameras, they started taking pictures of everything. Now, when you go to Flickr, you can see professional-quality pictures that weren’t taken by professional photographers.
The main contribution of artificial intelligence to the music industry will be that anybody will be able to pick up a song and either leave it as it is, or slightly adapt the raw material, with simple tools, into something very beautiful.
Your computer can compose a song in seconds. So when will Melomics have its first Lady Gaga?
Hopefully never. But I predict that this year, you’ll be able to download pieces from Melomics that can be taken directly to a discothèque, and people will think the songs were made by a human DJ.
This interview has been edited and condensed for clarity.
Kids these days. Just what are they up to online?
Intrigued by one tenth-grader's musings (via her brother) on trends in the tech world, investor Garry Tan conducted an informal survey of how 1,000 teens and twenty-somethings are using social media.
Among the high-schoolers (aged 13 to 18) and millennials (aged 19 to 25) that answered Tan's survey, Tumblr > Facebook > Twitter > Instagram > Snapchat, in terms of popularity.
Because "teens love photos, but they hate text," in the words of FWD's John Herrman, here's a picture of the survey results:
Teens used every single social site more than their older peers, and Snapchat and Instagram were nearly twice as popular among high-schoolers as among millennials. Thirteen percent of teens reported using Snapchat regularly, while just 4 percent of twenty-somethings did so. Twenty-one percent of users in the younger demographic used Instagram, versus 11 percent in the older demographic.
The survey points to the graying of Facebook: more high-schoolers and millennials are using Tumblr than Facebook (59 percent versus 54 percent). By comparison, just 5 percent of online adults are using Tumblr, according to the Pew Internet and American Life Project.
Tan's survey jibes with comScore data from May of last year, which found that Facebook and Tumblr together accounted for 90 percent of the time teens spent online.
Would you like a side of Bergman and vinyl records with your "screw you" to Facebook?
Yes, yes you would.
Ditching Facebook has become a new, elitist form of "conspicuous non-consumption," on par with refusing television, argues New York University assistant professor Laura Portwood-Stacer in a recent article published in the journal New Media & Society. Once upon a time, being on Facebook meant you were hip. Now, not having a Facebook account is the status symbol -- at least to some.
Based on interviews with 20 Facebook non-users and analysis of Facebook-quitter confessionals, Portwood-Stacer examines why people leave Facebook, how they communicate their rejection and how their abstention comes across.
"[T]he question is," she writes, "Can refusal make a persuasive point about one's values where media consumption is concerned, or does it just end up making one look like 'a giant douchebag'?"
Most often it can't, and most often it does: Quitting Facebook is cool, but, like popping your collar or wearing a fedora, you'll probably look ridiculous to all but a select group.
While the non-users themselves might feel good about signing off, their family and friends are more likely to be annoyed by the inconvenience it poses, hurt at being left behind and irritated by the "holier than thou" and "hipper than thou" signals it sends. Even those abstainers who leave the social network for moral reasons -- i.e. they object to Facebook's privacy policies, its politics, its stance on free speech, its corporate governance -- get lumped in with the "giant douchebags."
"Many Facebook refusers actually revel in their difference from the mainstream, seeing it as a mark of distinction, superior taste, and identification with an elite social stratum. This is consistent with Krcmar's finding that families who chose not to consume television 'share a belief that they were iconoclasts, and for the most part, they relished that role,'" writes Portwood-Stacer. Yet she concludes "[Q]uitting is a limited tactic for those who would strategize against Facebook and other hegemons of media culture."
Like tossing out the boob tube or insisting on going around without a cell phone, ditching Facebook, though embraced as a kind of counter-culture status symbol by those who quit, smacks of elitism to others, in part because it suggests whoever quits has enough social cachet or stature to make socializing on Facebook unnecessary. They're too busy, too important to have to be beholden to posting photos or status updates. (Quick, someone tell the Rich Kids of Instagram.)
Portwood-Stacer suggests that being able to de-Facebook "is a privilege itself," an argument also put forth by Alice Marwick, an assistant professor at Fordham University.
"The publicly stated choice to abstain from Facebook is also socially meaningful in that it implies a certain level of cultural and economic capital to be able to choose not to access a social media technology as a way to make a statement," Portwood-Stacer writes (emphasis the author's).
In a blog post published in 2011, Marwick posits there's a "cost of opting out," particularly among certain communities.
"While I have zero love for Facebook, I stay on it because otherwise I'd miss out on 75 percent of the invitations in my friends group," writes Marwick. "And I don’t think it’s for anyone else to say that I should expect my friends to cater to my socially abnormal preference, or that I should prioritize my own personal irritation at Facebook over the very human impulses to connect and socialize."
There may be some good news in all of this for Facebook. The article's findings suggest that for consumers, Facebook has become as mainstream as televisions and telephones -- though perhaps just as pedestrian. And while Facebook users have made little secret of their annoyance (and even boredom) with the social network, quitting Facebook can come at a cost to a user's social standing, one that might not justify ditching the site.
The new year is starting with a new, louder wave of warnings about the robots coming for our jobs.
They're better at things than we are, goes the argument: more efficient, better able to process huge quantities of data, and available to work nights, weekends and holidays without demanding overtime pay or dental insurance. As advances in artificial intelligence and big data analysis endow machines with sharper cognitive abilities, blue- and white-collar jobs are on the cusp of becoming silicon-collar jobs: outsourced to increasingly capable machines that are making exponential improvements in their hardware and software.
But what happens when our software improves? The new year is also bringing us closer to a new kind of brain.
These pessimistic predictions about the coming tyranny of bots in the workplace have their merits (and are likely to come to pass), yet they largely presume we humans are staying the same.
We're not: with devices like Google Glass and tools such as Google Now or Siri, we're enhancing our own abilities, and sharing with robots the benefits of advances in artificial intelligence and big data. While that won't necessarily stop the influx of robot overlords into boardrooms, classrooms and operating rooms, we should take some comfort in the fact that we're gaining new abilities even as the machines are. And that means we may have better jobs to look forward to than sterilizing robo-surgeons or nannying robots, two future career paths Kevin Kelly suggests in his recent Wired story, "Better than Human."
There are the devices that will replace us, and then there are the machines that will enhance us, more seamlessly fusing with our bodies and minds to create artificially, externally-powered brains that don't forget, can quickly summon facts, outsource tedious tasks to computers and maybe even reason a bit better. (Of course, with that comes no shortage of serious questions, from privacy concerns to anxieties over the future of face-to-face human interaction.) Google Now, for instance, can proactively prompt us with information about our commute, calendar, travel plans or lifestyle. Siri, in its pre-Apple days, could handle basic chores on our behalf.
Consider Google's Project Glass, the focus of a series of IEEE Spectrum stories out this week and one of a handful of wearable computers in the works that observe and capture our surroundings like a second brain.
Google Glass's design "lets Glass record its wearer's conversations and surroundings and store those recordings in the cloud; respond to voice commands, finger taps, and swipes on an earpiece that doubles as a touch pad; and automatically take pictures every 10 seconds," explains IEEE Spectrum's Elise Ackerman. A concept video for the device released by Google showed a man using the glasses to video chat with his girlfriend, respond to messages, get directions and learn about people and places he can't immediately see. Artificial intelligence researcher Rod Furlan speculates data gathered by Glass could "eventually be able to search my external visual memory to find my misplaced car keys." Facial recognition could one day help you avoid the awkwardness of forgetting names, and object recognition could alert you to calorie counts of the sugary snack you're about to eat.
Google Glass promises to be not only a communication device for answering emails or sharing photos, but a kind of personal assistant and second mind. The goal, according to Google Glass project head Babak Parviz, is to someday "make [accessing information] so fast that you don't feel like you have a question, then have to go seek knowledge and analyze it, but that it's so fast you feel like you know it ... We want to be able to empower people to access information very quickly and feel knowledgeable about certain topics."
In a 2004 interview, Google co-founder and CEO Larry Page asked the world to "imagine your brain being augmented by Google." Nine years later, we no longer have to imagine that. This feeling that Google Glass can enhance the wearer's mind isn't PR spin, but something to which users of the device can attest.
Furlan, who created a homemade pair of Google Glass-like specs that could stream emails, Twitter posts and more to a lens over his eye, told IEEE Spectrum that though he initially suffered from information overload, he now feels "impoverished" when he takes off the device. Evernote CEO Phil Libin predicts, based on his own experience with Google's glasses, that in three years' time, gazing upon a world without the additional information offered by a Google Glass device will seem "barbaric."
"People think it looks kind of dorky right now but the experience is so powerful that you feel stupid as soon as you take the glasses off," Libin told The Huffington Post's Michael Rundle.
And, presumably, smarter with the glasses on. The marriage of cutting-edge technology with the human brain -- which has resisted obsolescence even after many millennia -- could open up new fields and professions for which even the machines are unqualified.
Computers can perform more of the tasks we thought only humans capable of -- recognize emotion, play chess, write articles, translate speech -- but they still haven't replicated our knack for intuition or gut feeling, as The New York Times' Steve Lohr pointed out in his story, "Sure, Big Data Is Great. But So Is Intuition."
What happens when we fuse the best of machines with the best of us? Artificial intelligence with human intuition? Big data with gut feeling? Faulty memories with perfect ones? What new jobs will open up to us then? Affectiva's emotion-recognition technology may know when we feel miffed -- but humans are still better than computers at backtracking with an instant apology. It's hard to imagine diplomats being replaced by bots. Maybe we will be replaced as lawyers, lovers, drivers and therapists, as many predict. Or maybe our cyborg selves will borrow the best from software and souls to forge another path and, crucially, other professions.
We're not being completely left behind in the digital revolution. It's making us better machines, too.