Watson:2020 – America’s Self-Driving Presidency


Ken Jennings, the second most successful Jeopardy player of all time, has an IQ of 175. That makes him smarter than 99.9998615605% of everybody. If you put him in a city the size of Indianapolis, he’d probably be the smartest person there. In fact, in all of the US, statistics say there are only 443 people who would be smarter than Mr. Jennings.

And one machine. Let’s not forget IBM’s Watson whupped Jennings’ ass over two days, piling up $77,147 in winnings to Jennings’ $24,000. It wasn’t even close. Watson won by a factor of more than 3 to 1.

That’s why I think Watson should run for president in 2020. Bear with me.

Donald Trump’s IQ is probably in the 119 range (not 156 as he boasts – but then he also boasted that every woman who ever appeared on The Apprentice flirted with him). Of course we’ll never know. Like his tax returns, any actual evidence of his intelligence is unavailable. But let’s go with 119. That makes him smarter than 88.24% of the population, which isn’t bad, but it also isn’t great. According to Wikipedia, if that IQ estimate were correct, he would be the second dumbest president in history, slightly ahead of Gerald Ford. Here’s another way to think about it: if you were standing at a moderately busy bus stop, chances are somebody else waiting with you would be smarter than the President-elect of the United States.
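Those percentile claims are just normal-distribution arithmetic. Here’s a quick sketch – assuming a mean of 100 and, to match the column’s numbers, the older standard deviation of 16 rather than the modern 15:

```python
from math import erf, sqrt

def iq_percentile(iq, mean=100, sd=16):
    """Fraction of the population scoring below `iq`, assuming a normal
    distribution. SD 16 (the old Stanford-Binet convention) is an
    assumption, chosen because it matches the column's figures."""
    z = (iq - mean) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))

print(f"IQ 119 beats {iq_percentile(119):.2%} of the population")
print(f"IQ 175 beats {iq_percentile(175):.8%} of the population")
# Americans above IQ 175, assuming a population of ~320 million:
print(round(320e6 * (1 - iq_percentile(175))))  # within a few of the column's 443
```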

Watson won Jeopardy in 2011. Since then, he’s become smarter, becoming an expert in health, law, real estate, finance, weather – even cooking. And when I say expert, I mean Watson knows more about those things than anyone alive.

Donald Trump, on the other hand, has probably learned little in the last 5 years because, apparently, he doesn’t have time to read. But that’s okay, because he reaches the right decisions

“with very little knowledge other than the knowledge I [already] had, plus the words ‘common sense,’ because I have a lot of common sense and I have a lot of business ability.”

In the President-elect’s mind, that also qualifies him to “wing it” with things like international relations, security risks, emerging world events, domestic crises and the other stuff on his daily to-do list. He has also decided that he doesn’t need his regular intelligence briefing, reiterating:

“You know, I’m, like, a smart person. I don’t have to be told the same thing in the same words every single day for the next eight years. Could be eight years — but eight years. I don’t need that.”

That’s right, the future leader of the free world is, “you know, like, a smart person.”

Now, President Watson could also decide to skip the briefing, but that’s because Watson can process 500 gigabytes – the equivalent of a million books – per second. Any analyst or advisor would be hard pressed to keep up.

Let’s talk about technology. Donald Trump doesn’t appear to know how to use a computer. His technical prowess seems to begin and end with midnight use of Twitter. To be fair, Hillary Clinton was also bamboozled by technology, as one errant email server showed all too clearly. But Watson is technology – and, if you can follow this description from Wikipedia, apparently pretty impressive technology: “a cluster of ninety IBM Power 750 servers, each of which uses a 3.5 GHz POWER7 eight-core processor, with four threads per core. In total, the system has 2,880 POWER7 processor threads and 16 terabytes of RAM.”

In a presidential debate, or, for that matter, a tweet, Watson can simultaneously retrieve from its onboard 16-terabyte memory, process, formulate and fact check. Presumably, unlike Trump, Watson could remember whether or not he said global warming was a hoax, how long ISIS has actually been around and whether he in fact had the world’s greatest memory. At the very least, Watson would know how to spell “unprecedented.”

But let’s get down to the real question: whose digit do you want on the button – Trump’s “long and beautiful” fingers or Watson’s bionic thumb? Watson – who can instantly and rationally process terabytes of information to determine optimum alternatives – or Trump – whose philosophy is that “it really doesn’t matter…as long as you’ve got a young and beautiful piece of *ss.”

I know what you’re thinking – this is us finally surrendering to the machines. But at least it’s intelligence – even if it is artificial.

Note: In writing what I thought was satire, I found once again that fact is stranger than fiction. Somebody already thought of this 4 years ago: http://watson2016.com/

Sorry Folks – Blame it on Ed


Just when you thought it was safe to assume I’d be moving on to another topic, I’m back. Blame it on Ed Papazian, who commented on last week’s column about the Rise of the Audience Marketplace. I’ll respond to his comment in multiple parts. First, he said:

“I think it’s fine to speculate on “audience” based advertising, by which you actually mean using digital, not traditional media, as the basis for the advertising of the future.”

All media is going to be digital. Our concept of “traditional” media is well down its death spiral. We’re less than a decade away from all media being delivered through a digital platform that would allow for real-time targeting of advertising. True, we have to move beyond the current paradigm of mass-distributed, channel-restricted advertising we seem stuck in, but the technology is already there. We (by which I mean the ad industry) just have to catch up. Ed continues in this vein:

“However, in a practical sense, not only is this, as yet, merely a dream for TV, radio and print media, but it is also an oversimplification.”

Is it an oversimplification? Let’s remember that more and more of our media consumption is becoming trackable from both ends. We no longer have to track from the point of distribution. Tracking is also possible at the point of consumption. We are living with devices that increasingly have insight into what we’re doing at any moment of the day. It’s just a matter of us giving permission to be served relevant, well targeted ads based on the context of our lives.

But what would entice us to give this permission? Ed goes on to say that…

“Even if a digital advertiser could actually identify every consumer in the U.S. who is interested—or “in the market” for what his ads are trying to sell and also how they are pitching the product/service—and send only these people “audience targeted ads”, many of the ads will still not be of interest…”

Papazian proposed an acid test of sorts (or, more appropriately – an antacid test):

“Why? Because they are for unpleasant or mundane products—toilet bowl cleansers, upset stomach remedies, etc.—or because the ads are pitching a brand the consumer doesn’t like or has had a bad experience with.”

Okay, let me take up the challenge that Ed has thrown down (or up?). Are ads for stomach remedies always unwanted? Not if I have a history of heartburn, especially when my willpower drops and my diet changes as I’m travelling. Let’s take it one step further. I’ve made a dinner reservation for 7 pm at my favorite Indian food restaurant while I’m in San Francisco. It’s 2 pm. I’ve just polished off a Molinari’s sandwich and I’m heading back to my hotel. As I turn the corner at O’Farrell and Powell, an instant coupon is delivered to my phone with 50% off a new antacid tablet at the Walgreen’s ahead, together with the message: “Prosciutto, pepperoncinis and pakoras in the same day? Look at you go! But just in case…”

The world Ed talks about does have a lot of unwanted advertising. But in the world I’m envisioning, where audiences are precisely targeted, we will hopefully eliminate most of those unwanted ads. Those ads are the by-product of the huge inefficiencies in the current advertising marketplace. And it’s this inefficiency that is rapidly destroying advertising as we know it from both ends. The current market is built on showing largely ineffective ads to mainly disinterested prospects – hoping there is an anomaly in there somewhere – and charging the advertiser to do so. I don’t know about you, but that doesn’t sound like a sustainable plan to me.

When I talk about selecting audiences in a market, it’s this level of specificity that I’m talking about. There is nothing in the above scenario that’s beyond the reach of current Mar-Tech. Perhaps it’s oversimplified. But I did that to make a point. In paid search, we used to have a saying, “buy your best clicks first”. It meant starting with the obviously relevant keywords – the people who were definitely looking for you. The problem was that there just wasn’t enough volume on these “sure-bet” keywords alone. But as digital has matured, the amount of “sure-bet” inventory has increased. We’re still not all the way there – where we can rely on sure-bet inventory alone – but we’re getting closer. The audience marketplace I’m envisioning gets us much of the way there. When technology and data allow us to assemble carefully segmented audiences with a high likelihood of successful engagement on the fly, we eliminate the inefficiencies in the market.

I truly believe that it’s time to discard the jury-rigged, heavily bandaged and limping behemoth that advertising has become and start thinking about this in an entirely new way. Papazian’s last sentence in his comment was…

“You just can’t get around the fact that many ads are going to be unwanted, no matter how they are targeted….”

Do we have to accept that as our future? It’s certainly the present, but I would hate to think we can’t reach any higher. The first step is to stop accepting advertising the way we know it as the status quo. We’ll be unable to imagine tomorrow if we’re still bound by the limitations of today.


The Rise of the Audience Marketplace


Far be it from me to let a theme go before it has been thoroughly beaten to the ground. This column has hosted a lot of speculation on the future of advertising and media buying and today, I’ll continue in that theme.

First, let’s return to a column I wrote almost a month ago about the future of advertising. This was a spin-off on a column penned by Gary Milner – The End of Advertising as We Know It. In it, Gary made a prediction: “I see the rise of a global media hub, like a stock exchange, which will become responsible for transacting all digital programmatic buys.”

Gary talked about the possible reversal of fragmentation of markets by channel and geographic area due to the potential centralization of digital media purchasing. But I see it a little differently than Gary. I don’t see the creation of a media hub – or, at least – that wouldn’t be the end goal. Media would simply be the means to the end. I do see the creation of an audience market based on available data. Actually, even an audience would only be the means to an end. Ultimately, we’re buying one thing – attention. Then it’s our job to create engagement.

The Advertising Research Foundation has been struggling with measuring engagement for a long time now. But it’s because they were trying to measure engagement on a channel-by-channel basis and that’s just not how the world works anymore. Take search, for example. Search is highly effective at advertising, but it’s not engaging. It’s a connecting medium. It enables engagement, but it doesn’t deliver it.

We talk about multi-channel a lot, but we talk about it as if it were the holy grail. The grail in this case is an audience that is likely to give us their attention and, once they do, likely to become engaged with our message. The multi-channel path to this audience is really inconsequential. We only talk about multi-channel now because we’re stopping short of the real goal: connecting with that audience. What advertising needs to do is give us accurate indicators of those two likelihoods: how likely are they to give us their attention, and what is their potential proclivity towards our offer? The future of advertising is in assembling audiences – no matter what the channel – that are at a point where they are interested in the message we have to deliver.

This is where the digitization of media becomes interesting. It’s not because it’s aggregating into a single potential buying point – it’s because it’s allowing us to follow a single prospect along a path of persuasion, gathering important feedback data along the way. In this definition, audience isn’t a static snapshot in time. It becomes an evolving, iterative entity. We have always looked at advertising on an exposure-by-exposure basis. But if we start thinking about persuading an audience, that paradigm needs to shift. We have to think about having the right conversation, regardless of the channel that happens to be in use at the time.

Our concept of media happens to carry a lot of baggage. In our minds, media is inextricably linked to channel. So when we think media, we are really thinking channels. And, if we believe Marshall McLuhan, the medium dictates the message. But while media have undergone intense fragmentation, they’ve also become much more measurable and – thereby – more accountable. We know more than ever about who lies on the other side of a digital medium, thanks to an ever-increasing amount of shared data. That data is what will drive the advertising marketplace of the future. It’s not about media – it’s about audience.

In the market I envision, you would specify your audience requirements. The criteria used would not be so much our typical segmentations – demography or geography, for example. These have always just been proxies for what we really care about: their beliefs about our product and predicted buying behaviors. I believe that, thanks to ever-increasing amounts of data, we’re going to make great strides in understanding the psychology of consumerism. These insights will be foundational in the audience marketplace of the future. Predictive marketing will become more and more accurate and allow for increasingly precise targeting on a number of behavioral criteria.

Individual channels will become as irrelevant as the manufacturer that supplies the shock absorbers and tie rods in your new BMW. They will simply be grist for the mill in the audience marketplace. Mar-tech and ever smarter algorithms will do the channel selection and media buying in the background. All you’ll care about is the audience you’re targeting, the recommended creative (again, based on the mar-tech running in the background) and the resulting behaviors. Once your audience has been targeted and engaged, the predicted path of persuasion is continually updated and new channels are engaged as required. You won’t care what channels they are – you’ll simply monitor the progression of persuasion.


Media Buying is Just the Tip of Advertising’s Disruptive Iceberg


Two weeks ago, Gary Milner wrote a lucid prediction of what advertising might become. He rightly stated that advertising has been in a 40-year period of disruption. Bingo. He went on to say that he sees a consolidation of media buying into a centralized hub. Again, I don’t question the clarity of Milner’s crystal ball. It makes sense to me.

What is missing from Milner’s column, however, is the truly disruptive iceberg that threatens to sink advertising as we know it – the total disruption of the relationship between the advertiser and the marketplace. Milner deals primarily with the media buying aspect of advertising, but there’s a much bigger question to tackle. He touched on it in one sentence: “The fact is that a vast majority of advertising is increasingly being ignored.”

Yes! Exactly. But why?

I’ll tell you why. It’s because of a disagreement about what advertising should be. We (the buyers) believe advertising’s sole purpose is to inform. But the sellers believe advertising is there to influence buyers. And increasingly, we’re rejecting that definition.

I know. That’s a tough pill to swallow. But let’s apply a little logic to the premise. Bear with me.

Advertising was built on a premise of scarcity. Marketplaces can’t exist without scarcity. There needs to be an imbalance to make an exchange of value worthwhile. Advertising exists because there once was a scarcity of information. We (the buyers) lacked information about products and services. This was primarily because of the inefficiencies inherent in a physical market. So, in return for the information, we traded something of value – our attention. We allowed ourselves to be influenced. We tolerated advertising because we needed it. It was the primary way we gained information about the marketplace.

In Milner’s column, he talks about Peter Diamandis’ 6 stages that drive the destruction of industries: digitalization, deception, disruption, demonetization, dematerialization, and democratization. Milner applied it to the digitization of media. But these same forces are also being applied to information and rather than driving advertising from disruption to a renaissance period, as Milner predicts, I believe we’ve barely scratched the surface of disruption. The ride will only get bumpier from here on.

The digitization of information enables completely new types of marketplaces. Consider the emergence of the two-sided markets that both AirBNB and Uber exemplify. Thanks to the digitization of information, entirely new markets have emerged that allow the flow of information between buyers and suppliers. Because AirBNB and Uber have built their business models astride these flows, they can get a cut of the action.

But the premise of the model is important to understand. AirBNB and Uber are built on the twin platforms of information and enablement. There is no attempt to persuade by the providers of the platforms – because they know those attempts will erode the value of the market they’re enabling. We are not receptive to persuasion (in the form of advertising) because we have access to information that we believe to be more reliable – user reviews and ratings.

The basic premise of advertising has changed. Information is no longer scarce. In fact, through digitization, we have the opposite problem. We have too much information and too little attention to allocate to it. We now need to filter information and increasingly, the filters we apply are objectivity and reliability. That turns the historical value exchange of advertising on its head. This has allowed participatory information marketplaces such as Uber, AirBNB and Google to flourish. In these markets, where information flows freely, advertising that attempts to influence feels awkward, forced and disingenuous. Rather than building trust, advertising erodes it.

This disruption has also driven another trend with dire consequences for advertising as we know it – the “Maker” revolution and the atomization of industries. There are some industries where any of us could participate as producers and vendors. The hospitality industry is one of these. The needs of a traveller are pretty minimal – a bed, a roof, a bathroom. Most of us could provide these if we were so inclined. We don’t need to be Conrad Hilton. These are industries susceptible to atomization – breaking the market down to the individual unit. And it’s in these industries where disruptive information marketplaces will emerge first. But I can’t build a refrigerator. Or a car (yet). In these industries, scale is still required. And these will be the last strongholds of mass advertising.

Milner talked about the digitization of media and the impact on advertising. But there’s a bigger change afoot – the digitization of information in marketplaces that previously relied on scarcity of information to prop up business models. As information goes from scarcity to abundance, these business models will inevitably fall.

Where Should Science Live?


Science, like almost every other aspect of our society, is in the midst of disruption. In that disruption, the very nature of science may be changing. And that is raising a number of very pertinent questions.

Two weeks ago I took Malcolm Gladwell to task for oversimplifying science for the sake of a good story. I offered Duncan Watts as a counter example. One reader, Ted Wright, came to Gladwell’s defence and in the process of doing so, took a shot at the reputation of Watts, saying with tongue firmly in cheek, “people who are academically lauded often leave an Ivy League post, in this case at Columbia, to go be a data scientist at Yahoo.”

Mr. Wright (yes, I have finally found Mr. Wright) implies this is a bad thing, a step backwards, or even an academic “selling out.” (Note: Watts is now at Microsoft, where he’s a principal researcher.)

Since Wright offered his comment, I’ve been thinking about it. Where should science live? Is it a sell out when science happens in private companies? Should it be the sole domain of universities? I’m not so sure.

Watts is a sociologist. His area of study is network structures and system behaviors in complex environments. His past studies tend to involve analyzing large data sets to identify patterns of behavior. There are few companies who could provide larger or more representative data sets than Microsoft.



One such company is Google. And there are many renowned scientists working there. One of them is Peter Norvig, Google’s Director of Research. In a blog post a few years ago where he took issue with Chris Anderson’s Wired article signaling the “End of Theory”, Norvig said:

“(Chris Anderson) correctly noted that the methodology for science is evolving; he cites examples like shotgun sequencing of DNA. Having more data, and more ways to process it, means that we can develop different kinds of theories and models. But that does not mean we throw out the scientific method. It is not “The End of Theory.” It is an important change (or addition) in the methodology and set of tools that are used by science, and perhaps a change in our stereotype of scientific discovery.”

Science as we have known it has always been reductionist in nature. It requires simplification down to a controllable set of variables. It has also relied on a rigorous framework that was most at home in the world of academia. But as Norvig notes, that isn’t necessarily the only viable option now. We live in a world of complexity, and the locked-down, reductionist approach to science doesn’t really do this world justice. This is particularly true in areas like sociology, which attempts to understand cultural complexity in context. You can’t really do that in a lab.

But perhaps you can do it at Google. Or Microsoft. Or Facebook. These places have reams of data and all the computing power in the world to crunch it. These places precisely meet Norvig’s definition of the evolving methodology of science: “More data, and more ways to process it.”

If that’s the trade-off Duncan Watts decided to make, one can certainly understand it. Scientists follow the path of greatest promise. And when it comes to science that depends on data and processing power, that promise is increasingly found in places like Microsoft and Google.






Decoupling Our Hunch Making Mechanism


Humans are hunch-making machines. We’re gloriously good at it. In fact, no one and no thing is better at coming up with a hunch. It’s what sets us apart on our planet and, thus far, nothing we’ve invented has proven to be better suited to strike the spark of intuition.

We can seemingly draw speculative guesses out of thin air – literally. From all the noise that surrounds us, we recognize potential patterns and infer significance. Scientists call them hypotheses. Artists call them artistic inspirations. Entrepreneurs call them innovations.

Whatever the label, we’re not exactly sure what happens. Mihaly Csikszentmihalyi (which, in case you’re wondering, is pronounced Me-high Cheek-sent-me-high) explored where these hunches come from in his fascinating book, Creativity: The Psychology of Discovery and Invention. But despite the collective curiosity about the source of human creativity, the jury remains out. The mechanism that turns these very human gears and sparks the required connections between our synapses remains a mystery.

We’re good at making hunches. But we suck at qualifying those hunches. The reason is that we rush a hunch straight into becoming a belief. And that’s where things go off the rails. A hunch is a guess about what might be true. A belief is what we deem to be true. We go straight from what is one of many possible scenarios to the only scenario we execute against. The entire scientific method was created to counteract this very human tendency – forcing rational analysis of the hunches we churn out.

Philip Tetlock’s work on expertise in prediction shows how fragile this tendency to go from hunch to belief can make us. After all, a prediction is nothing more than a hunch about what might be. He referred to Isaiah Berlin’s 1953 essay, “The Hedgehog and the Fox,” in which Berlin quotes the ancient Greek poet Archilochus: “a fox knows many things, but a hedgehog one important thing.” Taking some poetic license, you could say that a hedgehog is more prone to moving straight from hunch to belief, while a fox tends to evaluate her hunches against multiple sources. Tetlock found that when it came to the accuracy of predictions, it was better to be a fox than a hedgehog. In some cases, much better.

But Tetlock also found that when it comes down to “crunching hunches,” machines tend to beat man hands down. It’s because humans have been programmed for thousands of generations to trust our hunches, and no matter how much we fight it, we are born to treat our hunches as fact. Machines bear no such baggage.

This is an example of Moravec’s Paradox – the things that seem simple for humans are amazingly complex for machines. And vice versa. As artificial intelligence pioneer Marvin Minsky once recognized, it’s the things we do unconsciously that represent the biggest challenges for artificial intelligence, “In general, we’re least aware of what our minds do best.” Machines may never be as good as humans at creating a hunch – or, at least – we’re certainly not there yet. But machines have already outstripped humans in the ability to empirically analyze and validate multiple options.

Fellow Online Spin columnist Kaila Colbin posited this in her last column, “When Watson Comes for Your Job, Give it to Him.” As she points out, IBM’s Watson can kick any human ass when it comes to reviewing case law – or plowing through the details required for an accurate medical diagnosis – or helping students prepare for an upcoming exam. But Watson isn’t very good at coming up with hunches. It’s because hunches aren’t rational. They’re inspirational. And machines aren’t fluent in inspiration. Not yet, anyway.

Maybe that’s why – even in something as logical as chess – the current champion isn’t a machine, or a human. It’s a combination of both. As American economist and author (of Average Is Over) Tyler Cowen explained in a blog post, a “striking percentage of the best or most accurate chess games of all time have been played by man-machine pairs.” Cowen shows four ways a man-machine team can outperform, and they all have to do with leveraging the respective strengths of each. Humans use intuition to create hunches, and then harness the power of the machine to analyze relevant options.

Hunches have served humans very well. They will continue to do so. The trick is to decouple those hunches from the belief-making mechanism that has historically accompanied them. That’s where we should let machines take over.



The Wave Form of Complex Strategy


I’ve been thinking about waves a lot lately. As I said to a recent group of marketing technologists, nature doesn’t plan in straight lines. Nature plays out in waves. As soon as you start looking for oscillations, you see them everywhere. Seasons, our brains, the economy – if complexity lurks there, chances are there is a corresponding wave.

So how do waves tie into my recent two columns (Part One and Part Two) about agency relationships? Simply this – like most complex things, our corporate strategy should also plot itself against a wave-like cycle. And in that cycle, there is a place for both external partnerships and internal execution.

Let me give you two examples of the ubiquity of waves.

Remember how I talked about Bayesian Strategy? Again, it’s a wave, or, if you’d prefer, a loop (which is simply a wave plotted in a different form). It is a process of setting a frame, opening that frame to external validation and then updating that frame based on our newly perceived reality. This approach to strategy borrows from the work done on how we make sense of the world, which is also a loop, or a wave.
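That frame-update loop is just Bayes’ rule applied over and over. Here’s a minimal sketch in Python – the two candidate “frames” and all of the numbers are invented purely for illustration:

```python
def bayes_update(priors, likelihoods):
    """One turn of the loop: reweight each candidate frame by how well
    it predicted the external evidence (Bayes' rule), then renormalize."""
    posterior = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Two hypothetical strategic frames, starting with no preference.
beliefs = {"in-house": 0.5, "agency-led": 0.5}
# P(observed market result | frame) from one round of external validation.
evidence = {"in-house": 0.25, "agency-led": 0.75}

beliefs = bayes_update(beliefs, evidence)
print(beliefs)  # {'in-house': 0.25, 'agency-led': 0.75}
```

Each pass through the loop is one oscillation of the wave: set the frame (the priors), open it to outside evidence (the likelihoods), and update.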

Alex “Sandy” Pentland’s “Science of Great Teams” also embodies its own wave:

“Successful teams, especially successful creative teams, oscillate between exploration for discovery and engagement for integration of the ideas gathered from outside sources. At the MIT Media Lab, this pattern accounted for almost half of the differences in creative output of research groups.”



The thing about waves is that they require very different approaches at the peaks and valleys of the wave. The oscillation is caused by this dynamic tension. The act of gathering input is very different from the act of synthesizing and acting on that input. And it’s very difficult to do both at the same time. Again, Pentland found this in his observation of effective teams: “Exploration and engagement, while both good, don’t easily coexist, because they require that the energy of team members be put to two different uses.”

Increasingly, in complex situations, we have to incorporate wave planning into our strategic approach. And when it comes to marketing, this will likely include a wave that winds through working with an external partner to gather the value of their outside perspective, and through creating an internal “sense-making” discipline with an embedded marketing team. This will require a clear understanding of control and authority transference at the appropriate times. Like the Exploration/Engagement cycle of Pentland’s teams, both are necessary, but they shouldn’t necessarily run in parallel.

I’ve found in the past that most of the value that can come from a strong external partnership gets burned off in turf wars and discounting outside information and advice because it doesn’t come from “inside”. Even when this information is accepted, it’s subsumed into internal dialogues and documentation, losing whatever insight it once offered.

Similarly, the partner loses precious cycles trying to keep up to speed with the internal directional course changes that inevitably happen. The problem comes when both these processes try to co-exist and run along the same straight line. The result is a rapidly zig-zagging line that tries to stay the course but loses any energy it might have had in constantly readjusting itself to meet “straight line” strategic objectives.

I believe the right answer to the in-house/agency debate is not an “Either/Or” but rather a wave-aware “And.”