04/17/2013 04:19 pm ET | Updated Jun 17, 2013

Algorithms are the foundation of your computing interactions. An algorithm is the means by which a computer program can make decisions about you, for you, or decisions that affect you. Algorithms are the translation of what you do into rules and policies that a computer understands (i.e., 0s and 1s). Like it or not, you are influenced by them as much as you influence them.

Algorithms need data -- they use digital data that you give, leave or have tracked about you (willingly or not). This input into an algorithm is your digital footprint, which comes from Facebook, Twitter, text messages, email, keystrokes, swipes, gestures, playlists, payment records, your routes, navigation -- indeed anything you do that involves an interaction with an electronic device. This is the basis of what an algorithm knows about you. It is how an algorithm can model you: it takes input and makes predictions based on what you have done and what you will do -- it is you.

Adding digital data about you generated by your friends helps refine and confirm how well you and your preferences can be, and have been, modeled. I can now compare your actions and reactions to others', group you into a segment that behaves the same way, and model this group with an algorithm.
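As a toy illustration of that grouping step -- with made-up people and action counts, not any real footprint data -- a few lines of Python can score how similar two behavioral profiles are and find the closest behavioral match:

```python
from math import sqrt

# Hypothetical, simplified footprints: counts of actions per person.
# The names and action categories are illustrative inventions.
footprints = {
    "alice": {"likes_music": 9, "late_night_posts": 1, "shopping": 4},
    "bob":   {"likes_music": 8, "late_night_posts": 2, "shopping": 5},
    "carol": {"likes_music": 1, "late_night_posts": 9, "shopping": 0},
}

def cosine(a, b):
    """Cosine similarity between two sparse action-count vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(person, footprints):
    """Return the other person whose behavior looks most like `person`'s."""
    others = {p: f for p, f in footprints.items() if p != person}
    return max(others, key=lambda p: cosine(footprints[person], others[p]))

print(most_similar("alice", footprints))  # alice's behavior matches bob's
```

A real system would use far richer features and proper clustering, but the principle is the same: once your actions are counted, you can be placed next to the people who act like you.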

So, we love algorithms that help and save time. The supermarket shopping cart that says, "Because you like this you will like that," or "Last time you bought this, you might need it again," is fun, helpful and not invasive. We are less enthusiastic when an algorithm determines our credit-worthiness unfavorably. So, how do we feel about algorithms that make decisions for us in a self-driving car?
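The "because you like this you will like that" prompt can be sketched with simple co-occurrence counting -- here over invented shopping baskets, not any real retailer's data:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase baskets; the product names are made up.
baskets = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"tea", "biscuits"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def because_you_like(item):
    """Suggest the item most often bought alongside `item`."""
    partners = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            partners[b] += n
        elif b == item:
            partners[a] += n
    return partners.most_common(1)[0][0] if partners else None

print(because_you_like("tea"))
```

Production recommenders weight these counts, handle millions of items and fold in your personal history, but the core signal is just this: what tends to go with what.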

Dan Ariely's work (Truth, Predictably Irrational, Behavior, Desire Engines) concludes that we are creatures of habit. Habits can be modeled and coded into an algorithm. In reality, we are not as irrational as we would like to think. We are indeed creatures of habit; once a habit is formed, we find it very difficult to break. Once habits are known, it is possible to predict your actions, reactions and most probable outcomes with a high degree of accuracy.
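How little machinery habit prediction needs can be shown with a toy first-order model over an invented daily log -- count what usually follows what, then predict the most frequent follow-up:

```python
from collections import Counter, defaultdict

# A hypothetical log of one person's actions, in order (invented data).
history = ["wake", "coffee", "email", "coffee", "email", "lunch",
           "coffee", "email", "gym", "coffee", "email"]

# First-order model: count which action tends to follow which.
transitions = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    transitions[current][nxt] += 1

def predict_next(action):
    """Predict the most habitual follow-up to `action`."""
    followers = transitions[action]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("coffee"))  # the habit: coffee is always followed by email here
```

The stronger the habit, the more lopsided the counts -- and the more confident the prediction.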
If we reject the viewpoint that we are predictable, why not start with chemistry? Humans, in our bodily form, are essentially a complex algorithm of chemicals. The levels of chemicals in your body interact with your cells, causing reactions. Those reactions form certain biases and can be modeled -- witness fear-and-flight experiments. Experience is composed of how we reacted to the environment and how our unique body made a chemical cocktail to react to that same environment. Some chemical structure has learned (remembered) how that reaction worked (whatever "worked" means!). Again we reach the same conclusion -- it is difficult to break a habit -- as the habit is chemical.

A question these points raise is: can "data" be human? Given that we can be modeled, based on who and what we are, how we react, that chemical cocktail -- could we model human behavior? Furthermore, would that model be human? The reason to ask this is how we make decisions. Can a computer make the same decision?
None of this thinking is new. As Keynes observed (in his General Theory): "Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist." So the question is not if you can become an algorithm, but how accurately can an algorithm model you?
If sensors collected all your data from all interactions and reactions, then it is probable that the model would reflect your behavior with a high degree of accuracy. The issue is that today's sensors cannot...

• Determine the chemical cocktail in your body, only the reaction (though this will happen).
• See the very subtle facial expressions that another human can see.
• Blend your biases with others' to predict how you will react together.
• Tell the difference between signal and noise.

Today's sensors are blind to most human interactions; however, that will change. Apple already has patents on sensors for blood sugar and oxygen levels based on your earbuds. Facial muscle movement can be tracked through cameras. In the long term, these sensors, and the ability to determine what the computer is seeing, will arrive. The question remains: who will write the model about how you will react to a new situation, a new environment, a new complex inter-relationship of small changes?

Steven Lukes (sociologist) indicates that power comes in three varieties: the ability to stop people doing what they want to do; the ability to compel them to do things that they don't want to do; and the ability to shape the way they think.

Given that algorithms are now doing all three in your life, who is in control?

And in what order should my interests be served?

• Me as an individual -- I code for myself based on my data and my desired outcomes. This allows me to misrepresent myself.
• Me as a group of like-minded people -- we test the algorithm to determine if I (we) like or dislike the implied outcomes, then we refine.
• An organization acts on my behalf to determine if harm is done and sets guidelines (best working practices).
• A government sets up a regulatory body to provide guidance and enforce the law.
• A programmer who is outside of my jurisdiction does what they like.
• A company outside of my control that wants a desired outcome.
So who plays God? Who determines whose interests are best served by a machine? On whose behalf does the machine act? Where an organization can exploit your data sets, there is an interesting dynamic...

The company is coding the algorithm; the focus is profit -- there is an incentive to get it right. The efficiency games come to town.

Or, the company itself is coding the algorithm; the focus is volume -- we'll find a different bias.
What if the code is outsourced or the algorithm purchased? Where is the alignment, where are the policies defined?

From the perspective of organizational behavior and directors' fiduciary duties, this raises some unanswered questions. How do the strategy, style, culture and purpose of a company affect the algorithm and coding principles? Short-term demands, once coded into algorithms, will drive user decisions in a certain way. Over time, if enshrined in the very fabric of the company, this can be difficult to change.


Your digital footprint forms the data set that enables companies to personalize your experience. However, if your data set, when passed through the algorithm, produces a credit rating that many lenders will not accept, one party may choose to decline, but another may not. The outcome is both the prerogative and the bias of the company (coder/algorithm), and in reality it is not controlled by you. How do you feel about that, and what should we do?