Here we go again.
Computers run amok.
No personal accountability, only finger pointing -- maybe they confused digits with digital.
No doubt you have seen the news and followed the story -- the vaunted automated trading system of Knight Capital Group seemed to have a "nervous breakdown," and the company, responsible for some 10% of all U.S. equity trading volume, booked hundreds of millions of dollars of faulty trades.
The company has declined to comment.
A chilling first glance might invite parallels to Stanley Kubrick's legendary movie 2001: A Space Odyssey, based on Arthur C. Clarke's short story The Sentinel. If you haven't seen it, you must -- it's a classic. For those who need a brief reminder, the villain of the movie is a central computer system named HAL that, in single-minded pursuit of its programmed mission, kills anyone it deems to be in its way.
And to give you a taste of what makes it so spooky (and, I might add, what draws incredibly scary parallels to Knight), listen to these three clips -- in order, please ...
See ... or should I say hear ... what I mean?
However, what is truly troubling in the Knight case, with shades of HAL thrown in for good measure, is that there is supposed to be -- allegedly is -- an off switch: a simple button, lever or plug to be pushed, moved or pulled that would have, should have, shut the system down. A kill switch, in the vernacular. And what makes it worse is that employees noticed within minutes that the system had gone haywire, yet it took them 45 minutes -- an eternity plus another eternity at the speeds at which the system traded -- to take action, and damage piled upon damage.
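To make the point concrete: the whole idea of a kill switch is that it trips automatically, without waiting 45 minutes for a human to act. Here is a minimal sketch of such a circuit breaker -- all names and limits are hypothetical illustrations, not Knight's actual system:

```python
class KillSwitch:
    """Hypothetical circuit breaker: trips when realized losses or
    the order rate exceed preset limits (illustrative only)."""

    def __init__(self, max_loss, max_orders_per_sec):
        self.max_loss = max_loss
        self.max_orders_per_sec = max_orders_per_sec
        self.halted = False

    def check(self, realized_loss, orders_last_sec):
        # Trip automatically -- no human has to notice first.
        if realized_loss > self.max_loss or orders_last_sec > self.max_orders_per_sec:
            self.halted = True
        return self.halted


def run_trading_loop(switch, feed):
    """feed yields (realized_loss, orders_last_sec) snapshots;
    stop routing orders the instant the switch trips."""
    executed = 0
    for loss, rate in feed:
        if switch.check(loss, rate):
            break  # halt immediately, then let humans investigate
        executed += 1
    return executed
```

The design choice is the point: the machine halts first and asks questions later, which is exactly the inversion of what happened at Knight.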
And now the regulators will step in. But before they start pontificating, perhaps they should start with Asimov's Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Frankly, it also reminds me of the nuclear disaster in Japan during and after the tsunami, when human error and indecision caused more damage than the failure of the technology itself.
And therein maybe lies the hope.
The problem is not the systems -- it's us.
Computers give us immense opportunities, enhance our ability to perform and achieve, enable outcomes previously considered impossible and ultimately, as tools, allow us to excel in being human.
However, when we outsource our own brains and abdicate common sense to HAL and Knight and whomever, we squander the true positive power inherent in what we have created and unleash the monster that can destroy.
I always find it ironic that since H. G. Wells, and maybe before, the apocalyptic vision of the future has been technology out of control -- our pursuit of high tech aimed at everything but peace, health and feeding the needy.
Let's not blame the technology -- let's not regulate ourselves out of the ability to progress -- rather let us be aware that we are the limiting factor, and the more we blindly rely on technology without any sense of human need and limitation the more it will run amok.
Listen: "I ought to be thy Adam, but I am rather the fallen angel..." -- Mary Shelley, Frankenstein
The analysts who gave Knight a ludicrous market cap, the managers who didn't have the guts to pull the plug, the regulators who will now all weigh in ... there's the message for you.
What do you think?