What the Machine Learning Revolution Means for Today's Marketer

This post was published on the now-closed HuffPost Contributor platform.

Scientists save lives.

Marketers...maybe don't save lives, but they would be a lot more successful at influencing desired outcomes if they, just like scientists, used a scalable approach to experimentation and discovery.

Most marketers today strive for personalization, or at least high relevance, in their marketing efforts to better engage and retain customers. If you can create the right intervention that reaches the right customer at the right time, you can engage customers, increase sales, and prevent churn. But just like the side effects of a medicine, ill-timed or poorly chosen interventions can make things worse. With less-than-optimal targeting of marketing offers, companies may cannibalize future purchases or create negative customer experiences: experiences that don't delight the customer and actually hurt key business metrics such as retention. So how does today's marketer determine the best, most effective intervention for influencing a desired behavior in every customer scenario?

Observation is good, but not enough
Before the scientific method of controlled experimentation became the standard for testing, we gained knowledge of the world from observation.

When I give this patient a medicine he feels better, but when I give another patient a placebo he does not.

However, observation alone doesn't get the job done. You can see that a change occurred, but you can't isolate what caused it. Plus, you still have a lot of unanswered questions: Did the medicine actually cause him to heal or just feel better temporarily? Would he have gotten better without the medicine? Did he need as much? Will the medicine hurt him in the long term?

It's the same scenario for marketing. When you "intervene" with an offer, and a customer redeems it, what exactly does it mean? Could the offer have been less? Would the customer have behaved in the desired manner even without the offer? Which customers would have responded better to other offers? Is the impact of the offer positive over the long term?

Predictive and propensity models don't work either
The ultimate goal of the marketer is not to observe outcomes but to cause them. Yet many predictive models currently used by marketers don't reveal enough to identify actionable causality. These models typically score customers on how likely they are to exhibit, or whether they have exhibited, certain behaviors. For example, a high churn score means a customer is likely to leave the company, or has already left but doesn't yet meet the formal definition of "churned." However, predictive modeling cannot predict which interventions might or might not alter the course of the customer's behavior. As a result, marketers have to guess how to most effectively influence, nudge, coax, tempt, or convince a given customer to encourage a specific behavior or action that otherwise would not occur.

A/B testing can't identify every winner

One of the most common ways marketers try to determine the best marketing tactics is through A/B testing. With A/B testing, marketers focus on identifying the "winner" among a hypothesized set of potentially good marketing interactions, targeted at certain customers and delivered in certain execution contexts. The test identifies the offer that performs best in terms of take rate or another KPI and, therefore, should be launched broadly. But with each test there are questions:
  • Is the winning offer the best for everyone? Would another message have worked better for customers who behave differently?
  • Is there another incentive that would have worked better in different contexts?
  • Would delivering the offer via a different channel have been better for some customers?
A/B testing is better than guessing, but it is limited to what the marketer can hypothesize and construct. Because A/B tests are designed and executed manually, and rely on the marketer to design a clean measurement for each test, identifying the winner for each customer in every context across a customer base of millions becomes impossible.

While A/B testing can be scientifically rigorous, it is insufficient: it cannot possibly test and optimize all of the possible combinations of targeting conditions (e.g., those involving the customer, the marketing experience itself, and the execution of the experience: day, time, channel, location, etc.) that affect the performance of each and every marketing interaction.
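To make the measurement step concrete, here is a minimal sketch in Python, with invented conversion numbers, of how a single A/B comparison identifies a winner by take rate and checks whether the difference is statistically meaningful using a two-proportion z-test:

```python
import math

def ab_winner(conversions_a, n_a, conversions_b, n_b):
    """Compare two offers by take rate using a two-proportion z-test.

    Returns the variant with the higher observed take rate and the z
    statistic; |z| > 1.96 corresponds to roughly 95% confidence that
    the observed difference is not just noise.
    """
    rate_a = conversions_a / n_a
    rate_b = conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    winner = "B" if rate_b > rate_a else "A"
    return winner, z

# Hypothetical test: offer A converts 200/5000, offer B converts 260/5000.
winner, z = ab_winner(200, 5000, 260, 5000)
print(winner, round(z, 2))
```

Even this single comparison requires a designed test and enough traffic to reach significance, which is why running one such test for every combination of customer segment, offer, and channel quickly becomes intractable.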

So what is the alternative approach to enabling personalization at scale?

The evolution of hypothesis testing
Statistical hypothesis testing was a huge leap forward in science. With it, we no longer had to rely on observation alone, looking for correlations. We could test whether one intervention worked better than another side by side and measure the results with confidence. It's what has enabled A/B testing as we know it today.

But for both modern science and modern marketing, it's easy to see why this type of testing may have limitations. Setting up tests can be time consuming and complex, and setting up enough of them to explore all possibilities related to a desired outcome is often impossible.

What has evolved to address these challenges and replace one-shot hypothesis testing is the multi-armed bandit approach to experimentation. With this approach, the notion of a marketer waiting until the end of an experiment for a "final answer" essentially goes away. Unlike A/B tests, multi-armed bandit experimentation maintains quantified best guesses about which marketing interactions are working and which aren't, and refines those guesses continuously. Dynamic testing never stops, and the marketer's desire to exploit what is working is balanced against the desire to continually explore new customer experiences that may or may not generate the desired outcome (e.g., revenue lift). As experiments progress, they yield more and more learning that the marketer can leverage to decide what to test next.
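As an illustration of this explore/exploit balance (a generic sketch, not any particular vendor's algorithm), a Beta-Bernoulli Thompson sampling bandit can be written in a few lines; the offer take rates below are invented for the simulation:

```python
import random

def thompson_sampling(true_take_rates, rounds=5000, seed=42):
    """Allocate traffic across offers with Beta-Bernoulli Thompson sampling.

    Each arm (offer) keeps a Beta(wins + 1, losses + 1) posterior over its
    take rate. Every round we sample one draw per posterior and serve the
    offer with the highest draw, so traffic shifts toward the best offer
    while weaker offers keep receiving occasional exploratory traffic.
    """
    rng = random.Random(seed)
    n = len(true_take_rates)
    wins, losses, pulls = [0] * n, [0] * n, [0] * n
    for _ in range(rounds):
        draws = [rng.betavariate(wins[i] + 1, losses[i] + 1) for i in range(n)]
        arm = draws.index(max(draws))
        pulls[arm] += 1
        if rng.random() < true_take_rates[arm]:  # simulated customer response
            wins[arm] += 1
        else:
            losses[arm] += 1
    return pulls

# Hypothetical offers with true take rates of 2%, 10%, and 4%.
pulls = thompson_sampling([0.02, 0.10, 0.04])
print(pulls)
```

There is no fixed end date here: at any point during the run, the pull counts themselves are the quantified best guess about which offer is winning.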

The machine learning revolution
In light of how testing has evolved, the giant leap forward today is the coupling of multi-armed bandit experimentation and machine learning. What this means for today's marketer is the ability to glean customer insights that can be automatically applied to personalize and optimize marketing interactions, and do so at enormous scale.

Let's say you are running a new marketing campaign and, rather than using A/B testing, you have a machine-designed experimentation capability. By leveraging behavioral analytics and an understanding of available marketing assets and execution capabilities, the machine can create a very high-dimensional fabric of possible A/B tests.

What we are seeing when deploying our Amplero product is the ability to create 2200+ tests and learn, based upon empirical feedback after marketing executions are randomly targeted, which dimensions are associated with a causal, positive or negative marketing response. This makes it possible to boil the 2200+ possibilities down to the thousands that actually matter when marketing to individual customers across a customer base in the millions.

What's powerful about this machine-driven revolution is the dynamic learning and branching of concurrent bandits that happens automatically across all of the individual experimental hypotheses being tested. Rather than the marketer having to manually design new experiments, the machine recursively discovers the next experiment to execute across behaviors, offers, messages, contexts, execution parameters, etc., and builds new control groups on its own without marketer intervention. The machine learning automatically and continuously feeds a closed-loop experimentation cycle that allows ongoing optimization of marketing interactions, executed on a per-customer and per-context basis.
This is true personalized marketing achieved at scale, made possible only by machine-designed testing and self-learning optimization.
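A toy sketch of the per-context idea, assuming just two hypothetical customer segments and two offers (the segment names, offers, and take rates are all invented): run an independent epsilon-greedy bandit per segment, so each context converges on its own best offer.

```python
import random

# Invented per-segment take rates: "new" customers respond to the upgrade
# offer, "loyal" customers respond to the discount.
TRUE_RATES = {"new": [0.01, 0.12], "loyal": [0.12, 0.01]}
OFFERS = ["discount", "upgrade"]

def per_segment_bandit(rounds=4000, epsilon=0.1, seed=7):
    """One epsilon-greedy bandit per customer segment (context).

    With probability epsilon we explore a random offer; otherwise we
    exploit the offer with the best observed take rate in that segment.
    """
    rng = random.Random(seed)
    counts = {s: [0] * len(OFFERS) for s in TRUE_RATES}
    successes = {s: [0] * len(OFFERS) for s in TRUE_RATES}

    def rate(seg, i):
        return successes[seg][i] / counts[seg][i] if counts[seg][i] else 0.0

    for _ in range(rounds):
        seg = rng.choice(list(TRUE_RATES))       # a customer arrives
        if rng.random() < epsilon:
            arm = rng.randrange(len(OFFERS))     # explore a random offer
        else:
            arm = max(range(len(OFFERS)), key=lambda i: rate(seg, i))
        counts[seg][arm] += 1
        if rng.random() < TRUE_RATES[seg][arm]:  # simulated response
            successes[seg][arm] += 1

    # Report the best-observed offer per segment.
    return {s: OFFERS[max(range(len(OFFERS)), key=lambda i: rate(s, i))]
            for s in TRUE_RATES}

print(per_segment_bandit())
```

Scaling the same pattern from two hand-picked segments to machine-discovered combinations of behaviors, offers, contexts, and execution parameters is what automated branching of concurrent bandits provides.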

Science and the future of marketing

To some, relying on machines to execute testing and personalization may seem like taking the human touch out of marketing. In fact, the opposite is true. With the combination of closed-loop experimentation and machine learning, you can:
  • Concurrently manage thousands of experiments;
  • Automatically learn what is working and what is not, determine the true causation of outcomes, and have those quantified insights communicated to the marketer;
  • Continuously adjust to ensure ongoing optimization of personalized marketing interactions to drive sustainable results; and
  • Direct your focus to strategy development and new creative ideas to test for impact, rather than operational tasks.
Today, scientific experimentation is being used to tackle some of the most complex problems and actually influence outcomes. It's time for marketers to adopt a similarly scalable approach to achieve true personalization.

Modern marketers aren't saving lives. But by adopting a rigorous experimentation capability and marrying it to today's machine learning technology, they can certainly make a big impact on customer engagement and profitability, and that has significant measurable benefit.

Dr. Olly Downs is the Chief Scientist behind Amplero, a self-learning personalization platform built by Globys.
