Man Versus Machine

The recent economic meltdown has renewed the urgency of questions about the relationship between human beings and computers. A survey of experts in the fields of economics, information science, and artificial intelligence (AI) reveals general agreement that substantial responsibility for the global crisis belongs to people, not programs -- but beyond that, the consensus breaks down.

The effect of advances in information technology on the financial sector raises more questions than it answers: How can transactions understood by few -- if any -- experts be subject to effective government regulation? Is the current crisis a symptom of human dependency on information-processing machines created by man but now beyond man's control?

More immediately, these questions point to the enormous complexities the next president will face, starting the day after the election, and the inherent difficulty of developing an effective and politically viable approach to dealing with them.

The Huffington Post communicated (by email and in interviews) with nine specialists in computers, finance, economics, and artificial intelligence to get their take on these issues. The nine:

*Nathan Myhrvold is the CEO and managing director of Intellectual Ventures, which he founded with Edward Jung, a former Microsoft colleague. When Myhrvold left Microsoft in 2000, he was Chief Technology Officer. Myhrvold holds a doctorate in theoretical and mathematical physics and two master's degrees, one in mathematical economics, the other in geophysics and space physics.

*Cassio Pennachin is the cofounder of Novamente LLC, a software company focused on Artificial Intelligence, and the nonprofit Artificial General Intelligence Research Institute; the chief technology officer of Biomind LLC, a bioinformatics firm; and co-editor of Artificial General Intelligence.

*Martin Baily, a Brookings scholar, was chairman of the Council of Economic Advisers during the Clinton administration (1999-2001) and is co-author of a prescient book, The Great Credit Squeeze (2008).

*Ben Goertzel is director of research at the Singularity Institute for Artificial Intelligence; his writings include Chaotic Logic and Creating Internet Intelligence. He holds a Ph.D. in mathematics from Temple University.

*Peter Hartley, professor of economics at Rice University, specializes in applied microeconomics and in money and banking, and is the academic director of the Shell Center for Sustainability (SCS).

*George Dyson, "a historian among futurists" and a Director's Visitor at the Institute for Advanced Study in Princeton, is the author of Baidarka, Project Orion, and Darwin Among the Machines.

*Ray Kurzweil is a pioneer in the field of artificial intelligence; the inventor of the first print-to-speech reading machine for the blind; the founder of companies specializing in the application of evolutionary algorithms to stock market decisions; the author of The Age of Spiritual Machines: When Computers Exceed Human Intelligence; and the recipient of the Lemelson-MIT Prize, the nation's largest award for invention and innovation, and of the 1999 National Medal of Technology. In 2002 he was inducted into the National Inventors Hall of Fame.

*Jaron Lanier, Interdisciplinary Scholar-in-Residence at the University of California, Berkeley, is a specialist in virtual reality research. He was the chief scientist of Advanced Network and Services and the lead scientist of the National Tele-immersion Initiative, a coalition of research universities studying advanced applications for Internet2.

*Tom Edsall (despite the coincidence, no known relation to the author of this article) is Senior Vice President and Chief Technology Officer of Cisco Systems' Data Center, Switching, and Security Technology Group, where he was chief architect of the Cisco MDS 9000 storage networking platform. Before that, he was Chief Technology Officer and cofounder of Andiamo Systems, Inc., a storage networking startup acquired by Cisco.

*Robert Solow was awarded the Nobel Prize in Economics in 1987 "for his contributions to the theory of economic growth." He has been an economics professor at MIT since 1950; from 1961 to 1963 he was a senior economist with President John F. Kennedy's Council of Economic Advisers; in 1961 he received the American Economic Association's John Bates Clark Medal, given to the best economist under age forty; in 1979 he was president of that association.

The line of inquiry in this article was prompted, in part, by an October 11, 2008 New York Times op-ed by Richard Dooling, who provocatively argued:

As the current financial crisis spreads (like a computer virus) on the earth's nervous system (the Internet), it's worth asking if we have somehow managed to colossally outsmart ourselves using computers. After all, the Wall Street titans loved swaps and derivatives because they were totally unregulated by humans. That left nobody but the machines in charge.

One of the questions the Huffington Post posed to these nine experts was: Is the current economic crisis in fact attributable to inadequate information fed into the super-fast computers that drive electronic trading -- and particularly to computer-driven algorithmic trading based on erroneous assumptions?

As Alan Greenspan argued in his October 2008 testimony to Congress:

The best insights of mathematicians and finance experts, supported by major advances in computer and communications technology, [had a fatal flaw] . . . . The whole intellectual edifice collapsed . . . because the data inputted into the risk management models generally covered only the past two decades, a period of euphoria.

Solow argues that

financial types . . . may, through self-selection, have a bias toward underestimating risks that they do not fully understand. . . . The complexity and opaqueness of the financial instruments created by financial engineering outran the capacity of money managers and others to understand what they were doing.

Pennachin sees a subtle interaction of computer programs, complex instruments, and huge accelerations in speed and volume all combining to make accurate risk assessment exceptionally difficult:

Analysts did understand what their computer programs did, but they didn't fully understand what the programs were supposed to do. Very few people really understood the risk implications of asset-backed derivatives, the explosion in securitization, and the subsequent highly leveraged trading on these securities. This trading is all done over the counter (OTC), which makes value and risk assessment difficult for at least two reasons:

1. No standard contracts. Since you trade directly with your counterparty, it's easy to tailor agreements to each party's needs at the time. This is useful on an individual contract basis, but dangerous at scale, because it's a lot harder to model the impact of price fluctuations when each contract is a little different. These little differences reduce confidence in the final results, which gives analysts wider error bars. Results that come with wide error bars aren't very useful and tend to be discarded. Imagine if election polls came with 10% error bars in both directions, for instance: you'd need Obama (or McCain) to have a 20-point lead in order to say anything with the degree of confidence required by standard statistical analysis. Most analysts would look at the similarly wide error bars in risk models and shrug them off.

2. No information on holdings and pairings. Since all contracts are executed directly by the trading parties, it's very hard for market participants to know who's holding which positions. So when some derivatives went sour because the original debtors were defaulting on their subprime mortgages, no one really knew how to estimate the impact on the market as a whole, and no one knew who was holding good or bad debt. This led to the credit crisis that has hurt the financial markets, as no one would trust anyone else and people stopped lending amongst themselves.

Finally, there's another inherent challenge brought by securitization -- as debt was packaged and sold into so-called tranches, most investors had no idea whose debt they were holding, and had to blindly trust the rating agencies' risk assessment of the paper they held. It turns out the rating agencies didn't know how to measure that risk either, and many made a key mistake, known as the independence assumption. Naively put, under an independence assumption, a default by one borrower doesn't influence the probability of default by other borrowers. This turned out to be wrong. Incidentally, a similar assumption was made by Long Term Capital Management, and caused their much publicized downfall.

In summary, analysts didn't know what their programs should be doing, and made overoptimistic assumptions for key inputs into their models. It's for reasons like this that they say: 'artificial intelligence is no match for natural stupidity'.
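
Pennachin's independence-assumption point is easy to see in a small simulation. Below is a minimal sketch -- the parameters (a pool of 100 loans, a 5% default probability, a 0.3 asset correlation) are invented for illustration, not drawn from any firm's actual model -- comparing tail losses when defaults are assumed independent versus when they share a common factor via a one-factor Gaussian copula, the same model family that comes up in Baily's account below:

```python
# Hypothetical Monte Carlo sketch of the independence assumption.
# Each loan defaults when a latent normal variable falls below a threshold;
# rho is the correlation induced by a shared "market" factor.
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(seed=0)
n_loans, n_sims, p_default = 100, 100_000, 0.05
c = NormalDist().inv_cdf(p_default)  # latent-variable default threshold

def tail_loss(rho):
    """99.9th-percentile fraction of the pool defaulting, given correlation rho."""
    market = rng.standard_normal((n_sims, 1))        # shared macro factor
    idio = rng.standard_normal((n_sims, n_loans))    # loan-specific noise
    latent = np.sqrt(rho) * market + np.sqrt(1 - rho) * idio
    loss = (latent < c).mean(axis=1)                 # default rate per scenario
    return np.percentile(loss, 99.9)

print(f"independent defaults (rho=0.0): worst-case loss ~ {tail_loss(0.0):.0%} of pool")
print(f"correlated defaults  (rho=0.3): worst-case loss ~ {tail_loss(0.3):.0%} of pool")
```

With these made-up numbers, the independent model caps the 99.9th-percentile loss near 12% of the pool; adding modest correlation pushes it past half the pool -- exactly the kind of error Pennachin describes.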

Baily's bottom line is similar, but he reaches his conclusion by following a different path. Baily contends that computer-based innovations were "a necessary condition" for the current breakdown to take place:

Collateralized debt obligations took off when an analyst at the Canadian Imperial Bank of Commerce developed the Gaussian copula model in the '90s, allowing risk assessment and pricing. The risk assessment was faulty, as it turned out.

Baily argues that

it is the case that computerized Monte Carlo studies may have persuaded people that risk assessment was better than it really was. Monte Carlo studies have been around for a long time. Computers made them easier and cheaper to do.... If artificial intelligence had been properly used, many of the subprime mortgages would not have been issued in recent years. [But] there was a severe incentive problem among brokers and originators.
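
Baily's caution about computerized Monte Carlo studies echoes Greenspan's point about models fed only two placid decades of data. A toy illustration -- the return figures here are invented for the example, not drawn from anyone's data -- shows how a 99% value-at-risk estimate calibrated on a calm regime is breached constantly once volatility shifts:

```python
# Toy example: a 99% one-day VaR fitted to a calm "period of euphoria"
# is breached far more than 1% of the time in a crisis-like regime.
import numpy as np

rng = np.random.default_rng(seed=1)

calm = rng.normal(loc=0.0003, scale=0.007, size=5000)   # ~20 years of mild daily returns
var99 = np.percentile(calm, 1)                          # model's 99% VaR estimate

crisis = rng.normal(loc=-0.001, scale=0.03, size=250)   # one year of crisis-like returns
breach_rate = (crisis < var99).mean()

print(f"fitted 99% VaR: a {-var99:.1%} daily loss")
print(f"breach rate expected: 1%  --  observed in crisis regime: {breach_rate:.0%}")
```

The model does exactly what it was built to do; the failure lies in the human choice of calibration window.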

On this point, Baily has very strong views:

The biggest problem was that the financial institutions did not follow their own risk management policies. If they had, they would have behaved differently. . . . CEOs of the companies and other top management failed to supervise the traders or control them, because for a while they were making so much money. There are technical problems in risk assessment, but the management failures were much more important in practice.

Edsall, Senior Vice President of Cisco Systems, voices stronger concerns over the role of computers as essential participants, although not co-conspirators, in the financial meltdown:

In my opinion the current crisis is not an artifact of these algorithms and systems at all. However, the electronic or programmatic trading is likely an important lubricant that allows a crisis like the current one to first be so global, and second, to happen so fast. Most of the high speed electronic trading is trying to take advantage of small inefficiencies in the market that historically have been too small or too fleeting to use. Now, with computers, they are being taken advantage of. It is regulation and policy, or lack thereof, that has caused this crisis to occur. Computers and networks just help make it big.

Many of the systems are so complex that no single person understands them completely. However, this should not be interpreted to mean that there is some artificial intelligence program out there that is taking on a life of its own. Each one is doing exactly what its creator wanted it to do. What is often not understood is the interactions, and the implications of the interactions, between different programs. It is like the software from Microsoft. Each of the products from Microsoft works very well by itself. It is only when you are running a bunch of them at the same time that problems start to arise. This is generally because no one considered the interaction of various components from different programs. . . . An analogy that might make sense is our environment. We have often done things to our environment, such as removing wolves from Yellowstone, that had completely unforeseen and non-intuitive consequences (reduction of aspen groves, in this example). I think the same can happen when you have different companies developing applications that interact in the market with very little understanding of each other's applications.

You could call this being out of control. Certainly there is no one in control.
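
Edsall's interaction problem can be sketched with a deliberately crude toy model -- entirely hypothetical, and far simpler than any real trading system. Here, two funds each run a stand-alone stop-loss rule that is sensible in isolation, but each fund's forced selling moves the price enough to trip the other's rule, so a modest shock cascades in a way neither program's author modeled:

```python
# Toy cascade: two individually sensible stop-loss programs interact badly.
price = 100.0
funds = [
    {"name": "A", "stop": 97.0, "impact": 2.5, "sold": False},  # sells everything below its stop
    {"name": "B", "stop": 94.5, "impact": 2.5, "sold": False},
]

price -= 3.5  # modest external shock
print(f"shock: price -> {price}")

fired = True
while fired:  # re-check rules until no further selling is triggered
    fired = False
    for f in funds:
        if not f["sold"] and price < f["stop"]:
            f["sold"] = True
            price -= f["impact"]  # forced selling pushes the price down further
            print(f"fund {f['name']} stopped out: price -> {price}")
            fired = True
```

A 3.5-point shock ends as an 8.5-point decline once the two programs feed each other -- each doing, as Edsall says, exactly what its creator intended.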

Ben Goertzel suggests that computers played a significant role in the current economic debacle, but that the assumptions on which the computers calculated risk were developed by human beings:

There's no doubt that advanced software programs using AI and other complex techniques played a major role in the current global financial crisis. However, it's also true that the risks and limitations of these software programs were known by many of the people involved, and in many cases were ignored intentionally rather than out of ignorance. . . . Sometimes humans assigned the job of assessing risk are given a choice between 1) assessing risk according to a technique whose assumptions don't really apply to the real-world situation, or whose applicability is uncertain, or 2) saying 'Sorry, I don't have any good technique for assessing the risk of this particular financial instrument.' Naturally, the choice commonly taken is 1 rather than 2.

Peter Hartley has a different take:

Computer models that are not well-understood by their users have certainly played a role in the crisis. In some ways the universities are at fault. We have increasingly trained students (e.g. MBAs) to manipulate formulae and use algorithms without requiring them to understand the basic economics underlying the models and thus where the weaknesses in the models may lie. This is partly a response to student demands for easy courses with no abstractions or difficult concepts to master. It also is true that computers have enabled more complex financial instruments to be developed and traded, while also accelerating the speed with which disturbances are transmitted around the world.

Computers and computer models or their ignorant users are not the only source of the problem, however, and I agree with one comment below that government pressure to provide mortgages to unqualified buyers was also a significant factor. I would add that implicit government guarantees of Fannie Mae and Freddie Mac (which later became explicit guarantees) also contributed to reckless under-assessment of risks.

Nathan Myhrvold, formerly top dog at Microsoft, told the Huffington Post:

First, there are two separate aspects of this crisis. The main driving force was the collapse of the housing market bubble. This market has no computers in it -- it is millions of individual Americans buying houses. They bid houses higher and higher, driven by their own instincts and a faith that the market would keep going up. So computers played no part in this aspect. Indeed, commentators tend to overlook the fact that this crisis started on Main Street, not Wall Street, with people overpaying for houses, or taking second mortgages they couldn't afford.

Downstream of the homeowners, there were sophisticated computer models used on Wall Street. Indirectly this was one of the reasons they kept lending to homeowners, and it is how they traded the resulting securities. However, I don't think that there is any reasonable way to call this a computer-generated crisis. In particular, it does NOT meet your test that people failed to understand what the computers were doing. Computers were used quite narrowly to analyze the value of mortgage-backed securities -- and everybody knew that is what they were doing. They had some bad HUMAN assumptions.

Today, everybody asks about market 'capitulation,' meaning the point at which people have stopped despairing. Computers don't capitulate. So net-net, while computers surely were tools in the hands of people, they had zero role in the homeowners' buying binge, and only a supporting role on Wall Street.

Jaron Lanier takes on the debate about the role and power of computers in shaping human finances, behavior, and prospects from a radically different vantage point, faulting -- in an article published on the Edge web site -- "cybernetic totalists" who absolve from responsibility for "whatever happens" the

individual people who do specific things. I think that treating technology as if it were autonomous is the ultimate self-fulfilling prophecy. There is no difference between machine autonomy and the abdication of human responsibility. . . . There is a real chance that evolutionary psychology, artificial intelligence, Moore's law fetishizing, and the rest of the package will catch on in a big way, as big as Freud or Marx did in their times. Or bigger, since these ideas might end up essentially built into the software that runs our society and our lives. If that happens, the ideology of cybernetic totalist intellectuals will be amplified from novelty into a force that could cause suffering for millions of people. The greatest crime of Marxism wasn't simply that much of what it claimed was false, but that it claimed to be the sole and utterly complete path to understanding life and reality. Cybernetic eschatology shares with some of history's worst ideologies a doctrine of historical predestination. There is nothing more gray, stultifying, or dreary than a life lived inside the confines of a theory. Let us hope that the cybernetic totalists learn humility before their day in the sun arrives.

George Dyson contends that the current crisis can in no way be blamed on computers:

The solution to the present crisis is more computerization of the banking system, not less. It was human beings, not computer programs, that robbed the banks. Instead of pouring good money after bad into failing banks, we should be launching new ones, and making a fresh start.

'A Banke is a certain number of sufficient men of Credit and Estates joyned together in a stock, as it were for keeping several mens Cash in one Treasury, and letting out imaginary money at interest... and making payment thereof by Assignation, passing each mans Accompte from one to another, yet paying little money,' explained Francis Cradocke, in 1660, in 'An Expedient For taking away all Impositions, and raising a Revenue without Taxes, By Erecting Bankes for the Encouragement of Trade.'

To start a bank requires secure information storage to keep accounts, a license from the government (or an entity beyond government), and trust. The systems that failed were far too opaque. This is our chance to reboot the economy, and do it right. Who do you trust? If you trust someone with your entire life's e-mail, or last month's search history, or your Amazon wish list, would you trust them with your cash?

Ray Kurzweil has expanded and developed the concept of artificial intelligence reaching a key turning point -- the Singularity -- sometime in the next three decades. Here is Kurzweil's explanation, from his own website, kurzweilai.net:

I think that once a nonbiological intelligence (i.e., a machine) reaches human intelligence in its diverse dimensions, it will necessarily soar past it because (i) computational and communication power will continue to grow exponentially, (ii) machines can already master information with far greater capacity and accuracy, and (iii), most importantly, machines can share their knowledge. We don't have quick downloading ports on our neurotransmitter concentration patterns, or interneuronal connection patterns. Machines will.

We have hundreds of examples of "narrow AI" today, and I believe we'll have "strong AI" (capable of passing the Turing test) and thereby soaring past human intelligence for the reasons I stated above by 2029. But that's not the Singularity. This is 'merely' the means by which technology will continue to grow exponentially in its power.

. . . If we can combine strong AI, nanotechnology, and other exponential trends, technology will appear to tear the fabric of human understanding by around the mid-2040s, by my estimation. However, the event horizon of the Singularity can be compared to the concept of a singularity in physics. As one gets near a black hole, what appears to be an event horizon from outside the black hole appears differently from inside. The same will be true of this historical Singularity.

"Once we get there, if one is not crushed by it (which will require merging with the technology), then it will not appear to be a rip in the fabric; one will be able to keep up with it.

Kurzweil told the Huffington Post:

There have been derivatives created and traded by computer intelligence, but in my view this was not the cause of the crisis. The problem was excessive leverage, just as in 1929 and the 1930s. We put bank regulations in place that limited leverage, and they worked for many decades, but recently a shadow banking system arose that was not covered by the banking regulations. Wall Street firms were leveraged 30:1. It was not just an American problem; European banks were leveraged 60:1. That's extremely unstable. It only worked as long as the real estate assets used to collateralize the loans kept going up in value. When that bubble burst, the excessive leverage caused the meltdown. But this was not a computer problem. The humans running these organizations knew full well what their leverage ratios were. Another aspect of the problem is a lack of transparency. Many of these derivatives have no clearing house, and firms that are in a position to lend today don't know what the liabilities are of the firms they might lend to. So the lack of information causes them not to lend to anyone.

It is true that there was a lot of computer decision-making. The computers were reporting what they were doing, and humans were asleep at the switch, failing to see the excessive exposure to leverage. It is not as if the computers hid this reality. But people were lulled by the apparent positive results. So there is a risk that as computers handle a lot of our apparently routine decision-making, we are lulled into a complacent lack of attention. The computers were not claiming to handle the higher-level decision-making.
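
The arithmetic behind Kurzweil's leverage figures is stark: at N:1 leverage, equity is only 1/N of assets, so a decline of just 1/N in asset values wipes a firm out. A quick check, using the ratios Kurzweil cites (the 10:1 row is added for comparison):

```python
# At N:1 leverage, a 1/N fall in asset values erases all equity.
for leverage in (10, 30, 60):  # 30:1 (Wall Street firms), 60:1 (European banks), per Kurzweil
    wipeout = 1 / leverage
    print(f"{leverage}:1 leverage -> a {wipeout:.1%} drop in asset values erases all equity")
```

At 30:1, a 3.3% fall in asset prices is fatal; at 60:1, 1.7% is. Hence Kurzweil's 'extremely unstable.'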

Do the current economic problems have larger implications for the notion that artificial intelligence could not only surpass human intelligence, but become independent of it -- a view publicized in an April 2000 Wired article by Bill Joy, co-founder of Sun Microsystems, "Why the Future Doesn't Need Us"?

Cassio Pennachin argues, more prescriptively than others, that human beings will soon be at

a point at which technology becomes self-improving, bringing extremely rapid advances. Obviously, such an inflection point would imply many serious risks. This means that research in AI, nanotech, and other fields with similarly risky outcomes should be coupled with research on safety and what Eliezer Yudkowsky calls 'friendly AI' -- technology that's mathematically guaranteed to be well-motivated and well-behaved toward humans and other species. I'm not convinced this level of safety is even possible, but I'm convinced safety should play an increasingly larger role as we get closer and closer to human level AI, autonomous nanobots and such.
