
I often joke with my students that they should consult the "God of Google" when they face a problem, need information, or have not yet made a first attempt to answer their own questions. However, my somewhat sardonic, though I think affectionate, term for this vastly expanding company is not far off the mark.

Recently Google purchased eight leading robotics companies (Boston Dynamics, Bot and Dolly, Autofuss, Holomni, Redwood Robotics, Meka Robotics, Schaft, and Industrial Perception), all of which are involved in creating cutting-edge robotics. Though this news slid by rather silently, we should look at some of Google's other recent activities to try to understand what the tech giant is planning. In particular, a few days ago reports emerged that Google also bought DeepMind, an artificial intelligence (AI) company, for $400 million. This move comes alongside the creation of Google's "Open Automotive Alliance," a partnership between the company and a group of car manufacturers who agree to use Google's Android operating system as a platform for apps in their cars. Some see this move as a potential avenue to foster deeper relationships with automakers as Google pursues its well-researched and well-funded pet project: the self-driving car. That project required a cadre of experts in robotics, machine learning, and engineering so that the car would not merely navigate obstacles but mimic the reaction times and agility of a human driver. However, those experts were on staff well before the acquisition of the robotics firms and DeepMind.

While Google's activities taken in isolation may not raise too many eyebrows, taken together they point to some sort of strategic vision of which we are currently unaware. That vision, though, may require careful execution if it is to uphold the company's motto of "do no evil." Indeed, reports that part of the DeepMind deal was the creation of an "ethics board" point to some awareness that this technology has wide-ranging and potentially dangerous consequences. However, creating an ethics board is one thing, and having one's motto be "do no evil" is another. For if Google is cornering the market on the creation of artificially intelligent, or at least learning, machines, we may want to press the company on what types of ethical boundaries it wants to impose on these future (or present?) creations.

Ethics is a messy, messy business. For one thing, ethics is an attempt to help a person (or perhaps now a machine) understand the vast, and rather complex, world in which we live and to provide action-guiding principles. Ethics is not merely for thinking, pondering, isolated sorts; it is for moral agents living in and amongst other moral agents. (I say moral agent here because it is unclear whether we should reserve the word "person" for humans anymore, or when that word may extend to artificially created beings and intelligences, given the trajectory of machine learning and Google's interest in exploring and funding it.) So ethics is an attempt at providing moral agents with action-guiding principles. What does this entail? Well, for instance, the question "what should I do?" presupposes that the context of the situation is at least relatively clear, that one understands the variety of available options, that one has a fairly robust understanding of the relevant moral rules or precepts, and that one is capable of making a reasoned judgment. That one might make the wrong judgment is also, always, a possibility.

Here is the other thing about ethics: there is no single agreed-upon, universal conception of what is right and wrong. If Sally is a consequentialist, she believes that the consequences of her acts dictate their moral worth. If Bob is a deontologist, he believes that it is the motives, or the maxim, of his action, and not its expected effects, that give the act moral worth. These are, of course, gross oversimplifications, but there is something we can learn from boiling centuries of debate down to two sentences: the same act, in the same set of circumstances, with the same available options, may be immoral to one person and moral to another. What is more, depending upon whom you ask, there may be some situations in which there is no wholly correct moral answer to a problem. One might face a moral dilemma, where no matter what one does, harm, wrong, or a "moral remainder" will fall somewhere.

What does this have to do with the God of Google? Well, if one's motto is "do no evil," that position presupposes, it seems to me, an understanding of what evil is and of how to avoid doing it. If we apply this observation to Google's recent acquisitions of robotics and artificial intelligence companies, it seems that some sort of plan is at work that we cannot quite see yet. That plan may be one in which a private corporation shoulders the responsibility of guiding the creation of technology fraught with moral questions, problems, and dilemmas. Indeed, if Google is going to spearhead the creation of artificially intelligent agents, then it sits in a position very similar to that of the Judeo-Christian god: it now has the choice to pursue the creation of an intelligence that may or may not flirt with free will (and thus the ability to do evil), or to try to circumscribe the actions of artificial agents so that they do no evil. Of course, all of this still depends on one's vision for the future and one's definition of evil.
