
Will Robo-Writers Put Humans Out of Work?

Karen Frankola   |   March 25, 2015    2:58 PM ET

Answer: I, for one, welcome our new computer overlords.
Question: What did Ken Jennings write in his response to Final Jeopardy!, knowing he had no hope of defeating Watson, IBM's artificially intelligent computer system, in a Jeopardy! tournament?

Jennings had already won millions as a game show contestant, so he could afford to joke about losing to a machine. But later in a TED talk, Jennings said, "I felt like 'Quiz Show Contestant' was now the first job that had become obsolete under this new regime of thinking computers."

Could those computers also put those of us who write for a living out of a job? I have worn different hats during my career -- journalist, corporate communicator, author -- but my core skill set has always been the ability to gather and synthesize information to create stories that are relevant to an audience. I never thought that I could lose my job to a machine, but computer algorithms are starting to replace fingers on keyboards.

The Associated Press uses Automated Insights' Wordsmith platform to create some 3,000 stories on company earnings reports every quarter. Narrative Science's Quill platform provides financial reports to Forbes and a number of Wall Street firms. It also churns out more than a million accounts of Little League games every year.

Data-driven topics like finance and sports are the sweet spot for these robo-journalists. They use algorithms and natural language generators to create articles that are virtually indistinguishable from those written by humans. Can you guess which of these story leads was written by a machine?

1. Optimism surrounds Costco Wholesale, as it gets ready to report its second quarter results on Thursday, March 5, 2015. Analysts are expecting the company to book a profit of $1.18 a share, up from $1.05 a year ago.

2. Thursday before the markets open, Costco Wholesale Corp. will report its fiscal second-quarter earnings. Thomson Reuters has consensus estimates of $1.18 in earnings per share (EPS) and $27.7 billion in revenue.

The first lead is from a Forbes article generated by the Narrative Science platform, while the second was written by a 24/7 Wall St. journalist.
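For a sense of how such leads can be produced, here is a toy template-filling sketch in Python. It is not Wordsmith or Quill, whose internals are proprietary; the function name and templates are invented for illustration, loosely modeled on the Narrative Science lead above.

```python
# Toy template-based natural language generation: turn one row of earnings
# data into a story lead. Real platforms use far larger template libraries
# and richer data; this is a minimal sketch.

def earnings_lead(company, day, date, eps_estimate, eps_prior):
    # Pick tone and direction from the data itself.
    improving = eps_estimate > eps_prior
    tone = "Optimism surrounds" if improving else "Caution surrounds"
    direction = "up" if improving else "down"
    return (
        f"{tone} {company}, as it gets ready to report results on {day}, {date}. "
        f"Analysts are expecting the company to book a profit of ${eps_estimate:.2f} "
        f"a share, {direction} from ${eps_prior:.2f} a year ago."
    )

print(earnings_lead("Costco Wholesale", "Thursday", "March 5, 2015", 1.18, 1.05))
```

Swap in a different data row and the same template yields a different, grammatical lead, which is the whole economic appeal: one template, thousands of stories.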

The media outlets using robo-journalism claim it's not a job-killer; rather, it's freeing up reporters to write more analytical pieces while the algorithm bangs out the basics. These writing programs don't conduct interviews, so there would still seem to be some job security for journalists who gather information independently rather than simply process data that has been handed to them.

But what about those of us who work in corporate communications? We speak for organizations, writing things like press releases, website copy, intranet articles, and executive emails and speeches. In many organizations, executives like to tightly control the messages sent to employees and the public. I would hate to see a future in which company leaders feed data and quotes to a robo-writer, rather than working with a communications expert who might question their thinking and press for more information.

The best way for both journalists and corporate communicators to keep their jobs is to do what computers can't. Seek out information that isn't easy to find. Badger people who would prefer not to talk to us. Challenge our bosses about what they want to say or aren't saying. Go beyond formulaic writing. If we just spew out the information someone gives us without analyzing and supplementing it, we deserve to let our computer overlords do the talking for us.


Aisle View: Calling Mr. Watson

Steven Suskin   |   December 9, 2013   11:00 PM ET

"Mr. Watson -- come here -- I want to see you!" is repeated eight times in Madeleine George's new play, The (Curious Case of the) Watson Intelligence at Playwrights Horizons. Historically-minded viewers will immediately recognize that phrase as the 19th century equivalent of "one small step for man"; that is to say, these were the first words successfully transmitted over Alexander Graham Bell's newly-invented acoustic telephone on March 10, 1876. Not-so-learned patrons might guess that it derives from the Sherlock Holmes stories, with good old Dr. Watson being the fellow summoned. Computer dweebs might immediately center on Watson, the IBM computer that in 2011 defeated the all-time champions on the television game-show Jeopardy!

Ms. George, as it turns out, is referring to them all, along with a contemporary computer dweeb named -- naturally enough -- Watson. Her play is an intricate puzzle built around the quest to invent machines that bring "better living through technology." There are four Watson characters -- one a robotic machine -- as the play jumps back and forth between 1876, 1891, 1931 and today.

And there, alas, is the fly in the ointment or -- more aptly -- the virus in the computer. The playwright has built a sturdy-seeming house of cards, supported much of the way by intriguing characters and inviting dialogue. The second act starts with one of the strongest scenes, a bedroom tryst which begins with the modern-day Watson saying "I started out training to be a phlebotomist..." and somehow winds up with Billy Joel. And then, The Watson Intelligence turns baffling. There is too much talk; specifically, so much time spent on philosophizing by the 1891 and 1876 characters that your ears could glaze over. If we stick with the puzzle analogy, it's as if we've patched together all the distinctive sections of a 1000-piece jigsaw and are suddenly faced with 275 pieces of deep blue sea. Ms. George's delightful conceit turns irremediably nondelightful, and there's nothing to be done but wait for everyone to stop talking.

The four Watsons are played very nicely by John Ellison Conlee, a Tony-nominee for his role as the overweight steelworker in The Full Monty. Conlee effortlessly switches from Watson to Watson, contributing an especially droll impersonation as the computer-Watson. Amanda Quaid, who was memorable as the girl in Mike Bartlett's Cock, humanizes the play and does a fine job as the several Elizas. They are joined by David Costabile as the third, crotchety side of the triangle.

Director Leigh Silverman (Well, Chinglish) helps the play and the actors along. She has also devised a workable production scheme with set designer Louisa Thompson and some well-choreographed curtains on traveler tracks. But the last forty minutes of The Watson Intelligence -- which lasts slightly over two hours, not including intermission -- are mighty foggy.

Betsy Isaacson   |   November 15, 2013    3:36 PM ET

Daring developers will soon have a new tool at their disposal: IBM's Watson, the supercomputer that won "Jeopardy!" in 2011.

In an interview with Computerworld, Watson CTO Rob High discussed IBM's plans to open the computer to developers in 2014. "[Watson is] stable and mature enough to support an ecosystem now. We've become convinced there's something very special here and we shouldn't be holding it back," High told Computerworld.

Developers will be able to access Watson's power via the cloud. According to the IBM press release, the Watson cloud package will include "a development toolkit" and "access to Watson's API" -- in other words, developers will be able to create apps that interact with the software. The cloud package also provides access to an "application marketplace," Computerworld reported -- in effect, something resembling an App Store for Watson-based apps.

So what could developers do with Watson that they can't do with regular data-crunching computers? In a 2012 interview with GigaOM, Dan Cerutti, IBM's vice president of Watson commercialization, laid out some possibilities. The most radical: Watson could be used as an "adviser" in situations where humans don't know (or can't process) all the relevant information. "If a human being was able to read everything that was relevant and remember it, would they make a better decision once in awhile? We think so."

ExtremeTech reported earlier this year on Watson's technology being used in the medical field. In response to certain variables a doctor inputs, the computer can scan through medical data to return potential diagnoses. "[Health care company] WellPoint points out that doctors miss early stage lung cancer diagnoses about half the time. Watson, on the other hand, is able to get the right diagnosis on these same cases 90% of the time," ExtremeTech reported.

But even developers uninterested in Watson's unprecedented facility with natural language -- the computer used this ability to win on "Jeopardy!" -- could do some fairly innovative things with the machine. In 2012, students at the University of Rochester Business School suggested Watson could construct a system for optimizing organizational responses in the face of natural disasters. "The idea is to combine weather data with census information so that organizations can prepare for and better manage and allocate resources during weather crises."

The price and the release date for the Watson developer package haven't been announced yet, but we're sure developers will jump at the chance to try it out when they can.

How I Shamelessly Exploited Twitter (and Don't Anymore)

Stephen Baker   |   November 9, 2013    3:30 PM ET

Five years ago, I was the Twitter guy at BusinessWeek. I wandered around the offices telling colleagues to tweet. Now, as the new Twitter stock soars, I barely tweet anymore. The reason: Much as I'd like to, I don't participate anymore in the "nugget economy."

I'll explain. When you tweet, you send out a nugget of information wrapped in self-branding. If people like that nugget, they retweet, and the information spreads, along with the branding. Maybe they respond with interesting information, or a relevant link. Those nuggets can be valuable. When I was at BusinessWeek, the nuggets I harvested turned into blog posts and stories. And the branding was vital for me. BusinessWeek was in late stages of collapse, and I needed the branding to promote my post-BW career, and (hopefully) to sell books. My brand, as I saw it, had been locked up in the magazine for 20 comfortable years. But I suddenly needed to fashion it into a lifeboat.

An example of how shamelessly I used Twitter for my own ends: I started on Twitter on Jan. 5, 2008. I was in Steve Rubel's office at Edelman, above Times Square, asking him how Heather Green and I could update our three-year-old story on blogs. (I remember the day because Barack Obama had just won the Iowa caucuses, and his face was on every television in the lobby.) Steve urged me to jump onto Twitter. At that point, I remember, he had 2,400 followers. And he asked them with a tweet why @stevebaker should get onto Twitter. Responses poured in. He was clearly at the controls of a powerful tool. I had a book, The Numerati, coming out later that year and wanted some of that network magic. But how was I going to get thousands of followers?

After a month on Twitter, I had barely 200. But then I came up with a plan to leverage my mainstream journalism asset. I would write a BusinessWeek article explaining "Why Twitter Matters." But instead of calling up the usual sources, like @jayrosen_nyu, @jeffjarvis and @biz (Twitter co-founder Biz Stone), I would research the piece on Twitter. I would tweet topic sentences for each paragraph, and the Twittersphere would respond with examples, links and insights. Hopefully, they'd discuss and argue. Through this process, Twitter would write the story. Word would quickly spread about this story, and people who wanted to participate would follow me. I would catch up to Steve Rubel, or even pass him! I'd be hoisted up in the nugget economy.

It turned out that organizing a boatload of tweets into a coherent article took a lot of work. But it came together. The article went mildly viral and my Twitter following quintupled, finally topping 1,000. My evil strategy worked. And I even won a minor magazine award for the story. (I'll note, in passing, that traditional journalism awards carry zero weight in the nugget economy unless they're branding giants, like Pulitzers. If I were still focused on nuggets, I'd trade my dusty old Overseas Press Award for 10,000 Twitter followers in a minute.)

Months after that triumph, the economy cratered and BusinessWeek spiraled toward death. I left in late 2009, after Bloomberg snapped up the magazine for barely the price of a Super Bowl commercial, and I got a book contract to write about IBM's Jeopardy computer, Watson. Since then, I've been doing books. That has removed me from the nugget economy. Much of what I'm doing is vaguely secret, and timed by months, not minutes. For instance, I'm co-writing a healthcare book that Penguin will publish next spring. But they're not publicizing it, and I guess they have their reasons. So I don't either. I have a couple of book proposals brewing, also secret. As a result, I don't generate good targeted nuggets. And my Twitter presence has degenerated into the occasional note about my life, a wine I drank in France, a slideshow from Africa. I'm a scattered Tweeter, virtually lapsed and widely ignored.

Now that I think about it, though, I should jump back on. I have a novel coming out next spring, The Boost. Maybe if I break down the first chapter into 150 nuggets.... No, really, I should get serious about this.

But this social media marketing is so exhausting, don't you think?

When Big Brother Meets Big Data

Rep. Rush Holt   |   June 27, 2013   12:17 PM ET

In 2011, shortly after IBM's supercomputer Watson defeated two human champions on the game show Jeopardy!, I had the chance to face off against the machine in a simulated match on Capitol Hill. I got lucky -- I won my round -- but I remember being awed at Watson's ability to draw upon massive troves of data to answer complex, unpredictable questions.

In the context of Jeopardy!, Watson was amusing and impressive. In the context of the machine's current efforts to treat lung cancer, Watson is inspiring. But there may be a dark side to Watson's abilities. The New York Times reported last week that, according to a government consultant, "Both the N.S.A. and the Central Intelligence Agency have been testing Watson in the last two years."

To me, this revelation adds a new layer of concern to disclosures that the NSA has, apparently, been recording the metadata on every phone call in the country.

Why is Watson's involvement so troubling? If the NSA truly possesses a record of every phone call made in the United States, that database would be so large as to be practically unusable by ordinary humans -- ensuring that law-abiding citizens could expect a degree of "privacy through obscurity." Watson-style technology has no difficulty sorting through billions of records, but in the end it's what the computer is told to look for that opens the door to error or even mischief.

Even if you are guilty of nothing, a simple inquiry to a supercomputer could reveal deeply personal, private information. If you send a message to a mental health provider, these supercomputers could know it. If you called your parents while they were vacationing overseas, these supercomputers could know it. If you expressed a view to your House or Senate representatives, these supercomputers could know it.

Personally, I believe that the best way forward is to prohibit the government from creating such all-encompassing permanent databases in the first place. That is why I opposed the FISA Amendments Act, which provided the legal basis for the NSA's dragnet surveillance, when it came to a vote in the House in 2008.

I raise this concern not as someone who fears technology. To the contrary, I am a research scientist, a patent-holder, and a great believer in the power of technology to create jobs and improve our lives. But our legal system is falling hopelessly behind the capabilities of our technology, and we must reform our laws to meet modern-day challenges.

Interestingly, a group of intrepid, patriotic public servants with real computer expertise and an understanding of the law showed us over a decade ago how all of this could be done without violating the privacy of American citizens.

In the early part of the last decade, a group of researchers at NSA developed a program called THINTHREAD that had the ability to sort through the mass of data NSA receives and pick out items requiring further attention -- all without compromising the Fourth Amendment rights of Americans. Unfortunately, their effort came to naught because of internal politics at NSA and competition from a Beltway-bandit boondoggle called TRAILBLAZER. The whole episode became public and ultimately led to a Defense Department Inspector General report, the declassified portions of which paint a damning picture of mismanagement at NSA and retaliation against Thomas Drake and others who reported these problems to the IG.

For the last several years, I have offered amendments to either the annual defense policy or intelligence authorization bills to protect whistleblowers like Drake, and every time the current House majority has refused to even allow those amendments to be considered on the House floor. Real oversight of the intelligence community is impossible so long as the Thomas Drakes of our national security establishment are treated like criminals instead of the public servants they are. Getting those kinds of protections into law remains one of my top legislative priorities.

What about other entities designed to protect the civil liberties of Americans?

At a June 18 hearing before the House Permanent Select Committee on Intelligence, members of Congress were told repeatedly that there are multiple layers of "oversight" for the surveillance programs now in the news. I heard those same assurances repeatedly during the eight years I spent on HPSCI. But as I discovered back then and as some of my colleagues pointed out this week, the reality is that nearly all of the alleged "oversight" is internal to the NSA or the Justice Department. Congress's watchdog, the Government Accountability Office (GAO), is statutorily prohibited from auditing these surveillance programs -- a grave omission that I tried to correct when I served on HPSCI.

Congress created a Privacy and Civil Liberties Oversight Board in the same legislation that created the office of the Director of National Intelligence in 2004. Unfortunately, the Board was never fully staffed and under President Bush was sufficiently politicized that board member Lanny Davis quit in protest. Although the Board was taken out of the Executive Office of the President in subsequent legislation in 2007, it remains understaffed and underfunded nearly a decade after its creation. And only in the wake of the New York Times' revelations is it beginning to focus on our latest surveillance controversy.

And as for the Foreign Intelligence Surveillance Court (FISC), the judicial body designed to review -- and if necessary refuse -- government surveillance requests? Saying "no" to the executive branch is something this court rarely does.

According to data obtained by the Electronic Privacy Information Center, the FISC has approved over 33,000 FISA applications since 1979 and rejected only 11. As the judges on the FISC rotate on and off the court every few years, the ability of the court to maintain a genuine institutional memory and expertise on these issues is compromised. Part of the solution would be for Congress to mandate permanent, independent "special masters" for the FISC.

There is precedent for drawing upon specialized experts to make such weighty determinations. In the Microsoft anti-trust case in the 1990s, for instance, Judge Thomas Penfield Jackson utilized "special masters" with deep knowledge of computer software. In the complex field of medical malpractice, advocacy groups have supported the creation of special courts staffed by medically trained judges.

We should also consider modifying the statute governing the FISC to ensure that judges assigned to it serve for longer terms (say, ten years instead of the current seven) and that they can be reappointed to the FISC at a subsequent date. Additionally, we should change the law to allow the Government Accountability Office to audit surveillance programs. Finally, Congress should prohibit any attempts to place limits on the ability of American citizens to encrypt their private communications and data, or to require companies in the electronics or telecommunications business to build in "back door" mechanisms that disable encryption used by American citizens.

If federal authorities want to see the data of an American citizen, they should be forced to come through the front door -- and only with a court order based on probable cause, as our Founders intended.

Rep. Rush Holt (D-NJ) represents New Jersey's 12th Congressional District. He is a former member of the House Permanent Select Committee on Intelligence and the former chairman of the House Select Intelligence Oversight Panel.

F.R.E.U.D.: Fetal Reconstructive Emotional Unalienable Deity (Or, Can a Machine Ever Love Us?)

Oren Frank   |   February 25, 2013   10:38 AM ET

Many years ago, I was fascinated by a program named Dr. Sbaitso, an offspring of the famous ELIZA pattern-matching program developed at MIT during the '60s. Dr. Sbaitso was a simple natural-language text-to-speech program that pretended to be a psychotherapist. The kind doctor would mostly rephrase any input into a question, so if I'd write "I hate my boss," it would reply, "Why do you think you hate your boss?" or "Are there other people that you hate?" Download it and see why ELIZA and other chatbots managed to pass the basic Turing test, in which subjects believed they were conversing with a real person.
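ELIZA's trick is little more than pattern matching plus pronoun reflection. The sketch below is a hypothetical miniature, not the original MIT program: two rules instead of ELIZA's hundreds, enough to reproduce the "I hate my boss" exchange described above.

```python
import random
import re

# Pronoun reflection: "my boss" becomes "your boss" when echoed back.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(phrase):
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in phrase.split())

# A tiny rule set: each pattern maps to one or more response templates.
RULES = [
    (re.compile(r"\bi hate (.+)", re.IGNORECASE),
     ["Why do you think you hate {0}?", "Are there other people that you hate?"]),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     ["How long have you felt {0}?"]),
]

def respond(text):
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Tell me more."  # the therapist's all-purpose fallback

print(respond("I hate my boss"))
```

The illusion of understanding comes entirely from the user: the program never models meaning, it only rearranges the user's own words.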

We live in an era where communicating with machines, like the one you're reading this blog on, is second nature. As with our friends and loved ones, our machines can bring out the best and the worst in us; we all want to do something stupid to our smartphone, hate the nice lady inside our GPS device with a vengeance, find ourselves talking fondly to our laptop, or get addicted to a game or some other nice piece of code. We also know that we don't really need a human face for communications to become emotional; think of the texts and emoticons we trade with people we've never met in forums, chats, or social networks -- messages about our lives that sometimes carry more emotion and generate more relief than the means available to us in "real life." Come to think of it, it's easy to imagine that some of our Facebook "friends" are actually natural language programs.

Cognitive scientist Marvin Lee Minsky argues that emotions are actually a way of thinking -- our very human way of thinking. It's a very small surprise, then, that machines and our interfaces with them are carefully designed to evoke our emotions. They achieve it by showing "empathy" (via positive feedback) and by establishing a "relationship" with us, the users. We now want our machines to "recognize" us and "listen" to us, to "understand" and proactively "help" with what we want and need, and modern devices are beginning to carry just enough sensors and computing power to start delivering all that. "Siri" is an attempt at creating a "personality" (mainly by using humor), and a line of new robot toys, pets, and assistants to the elderly or post-traumatic patients is another small beginning, signaling a wave of new artificial intelligence (AI) machines that will actively employ emotions when communicating with us humans.

We're not fantasizing about the AI usually associated with science fiction, called "strong AI" (artificial intelligence that matches or exceeds human intelligence and is conscious). Our "weak AI" (machines that can demonstrate intelligence but do not necessarily have a mind, mental states, or consciousness) is already very common and responsible for many aspects of our lives, from weather forecasting through algo-trading and surveillance all the way to video games.

With every accumulated petabyte of research and scientific knowledge, it seems more and more that we ourselves are sort of "machines," physically and perhaps also emotionally. True, endlessly complex and still beyond our own understanding, but whether you believe we were created by a god or by nature, there's certainly a design -- a magnificent blueprint. We are sometimes reluctant to acknowledge it, but although we're all singular entities, we're also very similar in many ways; many of our behaviours and issues can be "categorized." All colors are made from three basic ones, and perhaps there's a similar, though more complex, hierarchy to our emotions.

If this is true, it's just a matter of time before machines can really help us improve our emotional well-being. As math and computer science, machine learning and algo-art, hardware and cognitive science exponentially accelerate toward each other, it will soon become possible for emotional intelligence to be augmented by "artificial emotional intelligence." This new AEI will not need to be aware or conscious; the amount of data already available about us, combined with clever questionnaires, can be used to create a basic personality analysis. The AEI algorithms will then be able to follow our behaviour, "read" our emotions, and react to them in a very complex way: be supportive when needed, or opinionated and even conflictual where appropriate, and yes, perhaps even insightful. Such an entity could be a great listener and provide some of the fundamental value of talk therapy, and with time, much more than that. A similar approach is already applied by IBM with "Watson" -- the program that won "Jeopardy!" is now going through medical school, and when it "graduates" it will offer its diagnoses and prognoses to physicians.

The possibilities and implications are beautiful and plentiful. Think of a therapist that is "with you" whenever and wherever you need it/him/her/whatever. It almost sounds obvious for such a machine to practice positive psychology by constantly refocusing us on our strengths. Think of a machine that can help autistic children interpret and understand facial expressions, of veterans who never get left behind with their PTSD, of stress that can be alleviated in real time with the help of a friendly machine. Think of a time when many of our daily emotional challenges can be dealt with and contained before they escalate into something much more painful and potentially harmful.

Perhaps the fundamental question is whether a machine can ever really care for us, or more precisely, can we really ever feel loved by a machine? Therapy helps us because it allows us to feel we're accepted for who we are despite our faults. Simply put, a good therapist will care for us, accept and even love us, in her or his own way, and thus allow us to accept and love ourselves a little more. I don't know if we can ever achieve such a relationship without a human connection. On the other hand, I can definitely imagine a contextual AI machine that "lives" in our mobile device and knows so much about us that it can actually sense us: On a Monday morning, when our self-esteem hits bottom, it will remind us of our professional achievements. If we feel lonely on Valentine's Day (not a very complicated algorithm), it can help us fondly recall the meaningful relationship we recently had and promise us it will all be all right.
Isn't that somewhat human? A little bit like love?

Oren Frank is the co-founder and CEO of


Bonnie Kavoussi   |   March 6, 2012    8:45 AM ET

Banks don't exactly have stellar reputations for customer service.

Enter the robots.

On Monday, Citigroup announced that it would try to figure out how to advance "customer interactions" by using IBM's supercomputer 'Watson,' the robot that made its name by dominating humans on Jeopardy!

The nation's third largest bank, Citigroup, is the first to tap Watson's enormous data-crunching capabilities and is planning on using the supercomputer to "analyze customer needs and process financial, economic and client data to advance and personalize digital banking."

(We are not sure what it says about the state of banking that it takes a robot to make it personal.)

'Watson' can read 200 million pages in three seconds and learn information and answer questions like a human being. Citigroup said in a press release that it aims to become "the leading digital bank."

Citigroup can use the help. Its profit was 11 percent lower at the end of 2011 than over the same period a year earlier, and it plans to cut 4,500 jobs. Its chairman of 16 years, Richard Parsons, announced on Friday that he is stepping down.

IBM's stock price hit an all-time high at the end of trading on Monday, at $200.66 per share, according to The Wall Street Journal. The price of IBM shares is about 24 percent higher than it was a year ago, according to the Associated Press.

'Watson' already has started working in health care. IBM formed a board on Friday that will explore how 'Watson' can help the health care industry. 'Watson' started working for WellPoint, one of the country's largest health insurers, in September.

This is not the first time that IBM has worked with Citigroup. In 1954, IBM reduced the time necessary for a cost-benefit analysis at Citigroup from 1,000 man-hours to 9.5 minutes, according to the Associated Press.

'Watson' first caught the nation's attention last February by earning more than three times as much as either of its competitors on 'Jeopardy!,' both of them past champions. Ken Jennings, who came in a distant second, wrote next to his correct Final Jeopardy answer, "I for one welcome our new computer overlords."

As Computers Get Smart, We're Getting Dumb

Stephen Baker   |   September 29, 2011    1:14 PM ET

Rules are dumb. We all know it. Each of us has a magnificent brain, the most intricately engineered known artifact in the universe, and yet in a world of rules, we're not trusted to exercise our judgment.

For the last half century, it's been the computer that enforces countless inflexible rules for the masses. The bank's computer remorselessly levies a fee if the credit card payment comes in five minutes late. The insurance company's computer determines that the specialists we see are "off the plan" and automatically fires off hideously high invoices. We object to the rules and resent their senseless electronic administrators. We appeal to humans. Surely they'll understand.

But now things are turning around. Computers are learning about us, and focusing on exceptions. Humans, meanwhile, are binding themselves to inflexible rules. In other words, while machines grow smarter, we're getting dumber. This is especially clear in politics.

As IBM's Watson demonstrated on Jeopardy!, today's advanced machines are evaluating evidence. Watson makes its bets based on probabilities. It's never 100 percent sure of anything. That's partly because it doesn't know or understand things the way we do. But still, it's a smart way to look at the world. If you're not sure about something, after all, you'll give it some analysis. That's what Watson does, and it's not a bad thing.
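The betting idea can be made concrete with a small sketch. This is not IBM's code; the threshold and the expected-value formula are illustrative assumptions about how a confidence-based player might decide to buzz in and size a wager.

```python
# A probabilistic player never acts on certainty, only on estimated odds.

def should_buzz(confidence, threshold=0.5):
    # Buzz in only when a correct answer is judged more likely than not.
    return confidence >= threshold

def expected_value(confidence, wager):
    # Win the wager with probability `confidence`; lose it otherwise.
    return confidence * wager - (1 - confidence) * wager

print(should_buzz(0.85))           # high-confidence clue: buzz in
print(should_buzz(0.30))           # shaky clue: stay silent
print(expected_value(0.85, 1000))  # expected winnings on a $1,000 bet
```

The contrast with the rule-bound politics the author describes is the point: the machine weighs each case, while the blanket rule ("keep every prisoner to the last day") behaves like `should_buzz` with the threshold welded to zero or one.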

Humans are heading in the other direction. The game of politics, for example, is to find a disastrous example of someone's judgment. Say a governor implements an amnesty program for aged inmates. Several hundred are released, and one of them commits a horrible crime. The governor's opponent promptly promises to keep every single prisoner in jail to the last day of his or her sentence. Forget probabilities. Toss human judgment out the window. These people will all fall under the same rule. The system will operate like an old-fashioned computer.

Every time we use our judgment, we run the risk of making an error. That's life. And in areas in which errors are unforgivable, we hide behind rules. The rules are often idiotic. But they cannot be blamed. Rules are rules. The more we rely on them, the more we cede our intelligence and act like yesterday's machines.

'Person' Of The Year?

Catharine Smith   |   June 2, 2011    6:38 PM ET

(JAKE COYLE, AP/THE HUFFINGTON POST) NEW YORK -- The "Jeopardy"-playing IBM computer Watson has been named person of the year by the Webby Awards.

The Webbys, which honor Internet achievement, announced their special honorees Thursday.

February 27, 2011    9:49 AM ET

Watson, the IBM supercomputer, last week beat two of Jeopardy!'s most successful contestants on the television game show. Now officials at IBM are beginning to think about how Watson, which can answer questions posed to it in natural language by using algorithms to sort through reams and reams of information, might be able to help alleviate social problems.

Watson Is No Match for Humanity

John Maeda   |   February 23, 2011    2:58 PM ET

The Watson craze last week didn't fully hit me until my cab driver got lost and cheerily exclaimed in thickly accented English, "Watson! Heeeeelp me!" I find it interesting how the so-called "artificial intelligence" (AI) systems I studied decades ago at MIT are on their way to becoming the Fonzies (Watson can tell you who that is) of our times. There are a few misconceptions about our "new overlord" that I attempted to clarify within the confines of my taxi ride lost in a suburb of DC. Here they are:

1/ The computer is as smart as us, and as dumb as us. When Watson slipped up with the Oreo/crossword puzzle answer of "19-teens," it was our fault for not teaching Watson what that means. And if you do a Web search for "19-teens," it's brutally clear that the invention of Oreos or other innocent games doesn't come first to mind in the darkness of the online world.

2/ The computer never makes mistakes -- or the same mistake over and over -- unless we let it do so. If left alone, like the proverbial broken record, a computer will do the exact same thing it has always done. There is a construct in computer programming called "the infinite loop" which enables a computer to do what no other physical machine can do -- to operate in perpetuity without tiring. In the same way it doesn't know exhaustion, it doesn't know when it's wrong and it can keep doing the wrong thing over and over without tiring.

3/ The computer still needs us to make the right decision. That little exercise you do countless times with the computer on a daily basis of clicking "Yes," "No," or "Cancel" is the important moment when you are able to prevent the computer from doing harm to you or to itself. Were it to decide to, say, show up on Jeopardy unannounced and without asking, that's a completely different story for Watson 5.0 -- a world where Watson can click its own Yes/No/Cancel buttons.

4/ The computer doesn't care -- at best it can act like it cares. In the movie WALL-E we see a trash collector robot that breaks out of its daily routine and discovers consciousness through love. Given that we humans still don't understand how love works (and doesn't work), it's impossible to imagine that we could ever program a computer to truly love the way that we do -- and yes -- in that special case we can't seem to press our own Yes/No/Cancel buttons.
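The "broken record" behavior in point 2/ can be sketched in a few lines of Python. This is a hedged illustration of the infinite-loop construct the author describes, not anything from Watson's actual code; the function name and the safety cap are inventions for the demonstration.

```python
# A minimal sketch of the "infinite loop" from point 2/: left alone,
# a computer repeats the same action -- right or wrong -- without tiring.

def broken_record(max_spins):
    """Repeat the same mistaken answer over and over.

    max_spins is a safety cap added purely for demonstration; a true
    infinite loop would omit it and never return.
    """
    spins = 0
    answers = []
    while True:                      # the loop has no exit condition of its own
        answers.append("19-teens")   # the same mistake, every time
        spins += 1
        if spins >= max_spins:       # demonstration-only escape hatch
            break
    return answers

print(broken_record(3))  # ['19-teens', '19-teens', '19-teens']
```

Without the escape hatch, the `while True:` loop would run in perpetuity, which is exactly the point: the machine doesn't know exhaustion, and it doesn't know it's wrong.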

The taxi driver seemed to nod in disinterest until he asked me, "So, where is this place you're looking for?" I solved the problem by pulling out my iPhone and asking my "other overlord", Google, how to get there. S/he delivered the right answer.

PS I suggested the cab driver visit one of the many sites running the original software system "Eliza" from the '60s, and tell Eliza, "The first modern crossword is published and Oreo cookies are introduced." When I tried that just now, Eliza simply responded, "I see."

Man Vs. Machine: Watson Supercomputer A Reminder Of Education Shortcomings

John Rogers   |   February 22, 2011   12:40 PM ET

This week, IBM's supercomputer Watson had quite a successful appearance in the man-versus-machine Jeopardy! showdown. Even with a couple of flubs, Watson was able to handily beat two of the trivia game show's most prolific winners. Unfortunately, Watson's winnings won't make up for the cancellation of IBM's hefty contract with the California Department of Education if the company doesn't meet deadlines to fix the state's data system. Maybe that's not so important to IBM, but it's not a trivial matter for California students.

We'll admit to being excited -- even thrilled -- by Watson's sheer computing power. However, that impressive display makes it even more frustrating to witness California's failure to get a fully functional education data system from IBM. That system should be able to answer fairly straightforward questions, such as:

* How many students who enter elementary school with limited English skills are still designated as English Language Learners when they arrive in middle school?

* Do eighth-grade students enrolled in Algebra 1 perform better, on average, if their teacher has a credential in math?

* Which California high schools graduate the highest proportion of young women who move on to major in computer science in California public universities?

The system should be able to follow students from kindergarten through high school graduation and beyond. It can't.

IBM has been beset with delays and technical complications in its contract with the state to create the California Longitudinal Pupil Achievement Data System, or CALPADS. The delays led Gov. Schwarzenegger last year to eliminate $6.8 million earmarked for the project. Of course, we don't know IBM's side. California's policy environment and historic disinterest in gathering good data might well contribute to delays. But, as we are fond of telling students, "No excuses."

A report out this week on states' capacity to collect data shows that California compares poorly to other states. The Data Quality Campaign's sixth annual report finds that half of the states are collecting the full 10 "essential elements" of data tracking. California was missing the ability to match student K-12 records with higher education records.

Getting basic data is only an early step in a much longer process. Once CALPADS is in place, there are some difficult learning and political challenges. "States were looking at these 10 elements as a checklist and saying, 'OK, we can collect these 10 things; we're done,'" Aimee Guidera, executive director of the Data Quality Campaign, told Education Week. "We're saying, 'No, you're just beginning to be able to tap in and leverage the investments you've made.'"

Tapping into the full potential of data systems will require California to move beyond a narrow focus on outcomes data. Improving educational practice demands that we know more about the opportunities present in different schools and neighborhoods that lead to desired outcomes. That additional data must come from new sources, including students and educators, about the conditions that shape teaching and learning in their classrooms.

Even when IBM overcomes its technical difficulties for California, our data system will still be no Watson. Yet just this one prototype machine has a lot to teach our practical-minded policymakers and communities. As stated on PBS's Inside NOVA, "The significance of Watson goes beyond public perception... Watson isn't a single computer program, but a very large number of programs running simultaneously on different computers that communicate with each other."

Watson, in other words, isn't confined to preset programming of, for example, 10 conditions for this or that solution. To answer its questions, Watson seeks and communicates with new sources, penetrates the nuances of written and spoken language, and uses its power to arrive at trustworthy, best-bet answers.

Ultimately, the value of any super machine lies in whether humans can use it as a tool for problem solving and not confuse our basic tools with the solutions we seek. As IBM engineers complete California's longitudinal data system, California educators and community members will need professional development and public engagement to make use of the technology, reach beyond it, and arrive at human decisions.