NBC’s Grip on Olympic Gold Slipping

When it comes to benchmarking stuff, nothing holds a candle to the quadrennial sports-statzapooloza we call the Summer Olympics. After three years, 11 months and 13 days of not giving a crap about sports like team pursuit cycling or half-heavyweight judo, we suddenly get into fistfights over three one-hundredths of a second or an unawarded yuko.

But it’s not just sports that are thrown into comparative focus by the Olympic Games. They also provide a chance to take a snapshot of media consumption trends. The Olympics are probably the biggest show on earth. With the possible exception of the World Cup, it’s the time when the highest number of people on the planet are all watching the same thing at the same time. This makes it advertising nirvana.

Or it should.

Over the past few Olympics, the way we watch various events has been changing because of the nature of the Games themselves. There are 306 separate events in 28 sports spread over 16 days of competition. The Olympics play to a global audience, which means that coverage has to span 24 time zones. At any given time, on any given day, there could be six or seven events running simultaneously. In fact, as I’m writing this, diving, volleyball, men’s omnium cycling, Greco-Roman wrestling, badminton, field hockey and boxing are all happening at the same time.

This creates a challenge for network TV coverage. The Olympics are hardly a one-size-fits-all spectacle. So, if you’re NBC and you’ve shelled out $1.6 billion to provide coverage, you have a dilemma: how do you assemble the largest possible audience to show all those really expensive ads to? How do you keep all those advertisers happy?

NBC’s answer, it seems, is to repackage the Olympics as a scripted mini-series. That means throttling down real-time streaming and live broadcast coverage of some of the big events so they can be assembled into packaged stories during primetime. NBC’s chief marketing officer, John Miller, was recently quoted as saying, “The people who watch the Olympics are not particularly sports fans. More women watch the games than men, and for the women, they’re less interested in the result and more interested in the journey. It’s sort of like the ultimate reality show and miniseries wrapped into one.”

So, how is this working out for NBC? Not so well, as it turns out.

Ratings are down, with NBC posting its lowest primetime numbers since 1992. The network has come under heavy fire for what is quite possibly the worst Olympic coverage in the history of the Games. Let’s ignore for a moment its myopic focus on US contestants and a handful of superstars like Usain Bolt (which may not be irritating unless you’re an international viewer like me). The network’s heavy-handed attempt to control and script the fragmented and emergent drama of any Olympic Games has stumbled out of the blocks and fallen flat on its face.

I would categorize this as an “RTU/WTF.” The first three letters stand for “Research tells us…” I think you can figure out the last three. I’m sure NBC did its research to figure out what it thought the audience really wanted in Olympic Games coverage. I’m positive there was a focus group somewhere that told the network what it wanted to hear: “Screw real-time results. What we really want is for you to tell us – with swelling music, extreme close-ups and completely irrelevant vignettes – the human drama that lies behind the medals…” And, in the collective minds of NBC executives, they quickly added, “…with a zillion commercial breaks and sponsorship messages.”

But it appears that this isn’t what we want. It’s not even close. We want to see the sports we’re interested in, on our device of choice and at the time that best suits us.

This, in a nutshell, is the disruption that is broadsiding the advertising industry at full ramming speed. It is exactly what I was talking about in my last column. NBC may have been able to play its game when it was our only source of information and we were held captive by this scarcity. But over the past three Olympic Games, starting in Athens in 2004, technology has essentially erased that scarcity. The reality no longer fits NBC’s strategy. Coverage of the Olympics is now a multi-channel affair. What we’re looking for is a way to filter the coverage based on what is most interesting to us, not to be spoon-fed the coverage that NBC feels has the highest revenue potential.

It’s a different world, NBC. If you’re planning to compete in Tokyo, you’d better change your game plan, because you’re still playing like it’s 1996.


Media Buying is Just the Tip of Advertising’s Disruptive Iceberg

Two weeks ago, Gary Milner wrote a lucid prediction of what advertising might become. He rightly stated that advertising has been in a 40-year period of disruption. Bingo. He went on to say that he sees a consolidation of media buying into a centralized hub. Again, I don’t question the clarity of Milner’s crystal ball. It makes sense to me.

What is missing from Milner’s column, however, is the truly disruptive iceberg that threatens to sink advertising as we know it – the total disruption of the relationship between the advertiser and the marketplace. Milner deals primarily with the media-buying aspect of advertising, but there’s a much bigger question to tackle. He touched on it in one sentence: “The fact is that a vast majority of advertising is increasingly being ignored.”

Yes! Exactly. But why?

I’ll tell you why. It’s because of a disagreement about what advertising should be. We (the buyers) believe advertising’s sole purpose is to inform. But the sellers believe advertising is there to influence buyers. And increasingly, we’re rejecting that definition.

I know. That’s a tough pill to swallow. But let’s apply a little logic to the premise. Bear with me.

Advertising was built on a premise of scarcity. Marketplaces can’t exist without scarcity. There needs to be an imbalance to make an exchange of value worthwhile. Advertising exists because there once was a scarcity of information. We (the buyers) lacked information about products and services. This was primarily because of the inefficiencies inherent in a physical market. So, in return for the information, we traded something of value – our attention. We allowed ourselves to be influenced. We tolerated advertising because we needed it. It was the primary way we gained information about the marketplace.

In Milner’s column, he talks about Peter Diamandis’ six stages that drive the disruption of industries: digitization, deception, disruption, demonetization, dematerialization and democratization. Milner applied them to the digitization of media. But these same forces are also being applied to information, and rather than driving advertising from disruption to a renaissance period, as Milner predicts, I believe we’ve barely scratched the surface of disruption. The ride will only get bumpier from here on.

The digitization of information enables completely new types of marketplaces. Consider the emergence of the two-sided markets that both AirBNB and Uber exemplify. Thanks to the digitization of information, entirely new markets have emerged that allow the flow of information between buyers and suppliers. Because AirBNB and Uber have built their business models astride these flows, they can get a cut of the action.

But the premise of the model is important to understand. AirBNB and Uber are built on the twin platforms of information and enablement. There is no attempt to persuade by the providers of the platforms – because they know those attempts will erode the value of the market they’re enabling. We are not receptive to persuasion (in the form of advertising) because we have access to information that we believe to be more reliable – user reviews and ratings.

The basic premise of advertising has changed. Information is no longer scarce. In fact, through digitization, we have the opposite problem. We have too much information and too little attention to allocate to it. We now need to filter information and increasingly, the filters we apply are objectivity and reliability. That turns the historical value exchange of advertising on its head. This has allowed participatory information marketplaces such as Uber, AirBNB and Google to flourish. In these markets, where information flows freely, advertising that attempts to influence feels awkward, forced and disingenuous. Rather than building trust, advertising erodes it.

This disruption has also driven another trend with dire consequences for advertising as we know it – the “Maker” revolution and the atomization of industries. There are some industries where any of us could participate as producers and vendors. The hospitality industry is one of these. The needs of a traveller are pretty minimal – a bed, a roof, a bathroom. Most of us could provide these if we were so inclined. We don’t need to be Conrad Hilton. These are industries susceptible to atomization – breaking the market down to the individual unit. And it’s in these industries where disruptive information marketplaces will emerge first. But I can’t build a refrigerator. Or a car (yet). In these industries, scale is still required. And these will be the last strongholds of mass advertising.

Milner talked about the digitization of media and the impact on advertising. But there’s a bigger change afoot – the digitization of information in marketplaces that previously relied on scarcity of information to prop up business models. As information goes from scarcity to abundance, these business models will inevitably fall.

Where Context Comes From

Fellow Spinner Cory Treffiletti told you last week that data without context is noise.

Absolutely right.

I want to continue that conversation, because it’s an important one. It’s all about context. So let’s talk a little more about context. And specifically how we decide what makes up that context.

You might have seen or heard the hubbub that emerged around a tweet from Neil deGrasse Tyson a month ago: “Earth needs a virtual country: #Rationalia, with a one-line Constitution: All policy shall be based on the weight of evidence”

Nice thought, but it ignited a social media shit-storm. Which was entirely predictable. Because we don’t want to be rational. We want to be human. Did 79 episodes of Star Trek teach us nothing?

The biggest beef against #Rationalia was that evidence is typically in the eyes of the beholder. It’s all a matter of context. I’m guessing that the policies that come from evidence in the hands of Republicans will not bear much resemblance to policies that come from the evidence of Democrats. The evidence could be the same but the context is different, because Democrats and Republicans think differently.

Like Treffiletti said, data without context is just noise. And our context is only marginally based on evidence. And that’s why #Rationalia – as intellectually attractive as it might be – won’t work.

We as humans understand the world through something called sensemaking. This is the process we use to build context. In 2006, psychologist Gary Klein shed new light on how we make sense of the world. We start with a frame that captures our current understanding of a situation and, depending on the evidence presented to us, we decide whether to elaborate that frame or discard it and create a new one. So, sensemaking is really an iterative loop that constantly uses our current frame as a reference point.
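Klein’s model is qualitative, but the elaborate-or-discard loop he describes can be sketched in a few lines of code. This is purely an illustration, not anything from Klein’s work; the list-of-observations frame and the `fits` test are invented for the example.

```python
def sensemake(frame, evidence_stream, fits):
    """Run an elaborate-or-discard loop over a stream of evidence.

    frame: a list of observations representing our current understanding.
    fits:  a function deciding whether new evidence fits the current frame.
    """
    for evidence in evidence_stream:
        if fits(frame, evidence):
            frame = frame + [evidence]   # elaborate: fold the evidence in
        else:
            frame = [evidence]           # discard: reframe around the anomaly
    return frame

# Toy example: evidence "fits" if the new reading is close to the last one.
fits = lambda frame, e: abs(e - frame[-1]) <= 1
print(sensemake([0], [1, 2, 9, 10], fits))  # [9, 10] - the 9 forced a reframe
```

Note how the final frame keeps no memory of the discarded one, which is exactly why what we accept as evidence depends so heavily on the frame we start with.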

But here’s the thing. What we consider as evidence depends on the frame we already have in place. It’s the filter that determines what data we pay attention to. And much as Neil deGrasse Tyson would like the governments of the world to be totally unbiased in the filtering of evidence, “that dog just won’t hunt.” It can’t – because we can’t consider data without some context to put it in.

Perhaps someday artificial intelligence will advance to the point where it can pull unbiased context out of random data. Maybe computers will be able to do what we’re unable to – make sense of the noise without assuming a pre-existing frame. But we’re not there yet. And even if we were, we would simply look at the conclusions of the computer and decide whether we agree with them or not. As long as humans are in charge, there will always be a biased filter in place.

So back to Cory’s column. If context is so important, think about where that context is coming from. Who is defining the context and what frame are they operating from? That in turn will define what data you consider and how you consider it.

Perhaps the most important decision before considering data is to be totally clear about what the goal is. Goals, together with experience, form the underpinning of beliefs. Frames are then built on those beliefs. Context comes from those frames. And context is the filter we apply to evidence.

Happiness as a Corporate Metric

Costa Rica is the happiest place on earth. The least happy place on earth? That would be Botswana.

At least, those are the results according to the Happy Planet Index. The index is a measure of three factors: life expectancy, experienced well-being and ecological footprint. Western nations tend to do very well on the first two measures, but suck at the third. The index is looking for balance – being happy without raping and pillaging the earth. Here in North America, we still have a ways to go in that department.
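Roughly speaking, the index rewards long, happy lives and penalizes the resource cost of producing them. A simplified sketch follows; the real HPI methodology also applies an inequality adjustment, and the profiles below are illustrative numbers, not actual index data.

```python
def happy_planet_index(life_expectancy, wellbeing, footprint):
    """Simplified HPI: happy life-years per unit of ecological footprint."""
    return (life_expectancy * wellbeing) / footprint

# Hypothetical profiles: similar happiness, very different ecological cost.
modest_footprint = happy_planet_index(79, 7.2, 2.8)   # Costa Rica-like balance
large_footprint = happy_planet_index(79, 7.2, 8.0)    # Western-style consumption
print(modest_footprint > large_footprint)  # True - the footprint drags the score down
```

The division is the whole point: two equally happy countries score very differently once you account for what their happiness costs the planet.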

In another study – the UN’s 2015 World Happiness Report – a different weighting of factors treated the western world a little better. When we tip the balance towards individual happiness and away from the environment and sustainability, Denmark, Switzerland, Iceland, Norway, Finland and Canada top the rankings. Apparently, snow is good for the soul. At the bottom of the list were Benin, Afghanistan, Togo, Syria and Burundi (it’s hard to believe anywhere scored worse than Syria – mental note: strike Burundi off my travel bucket list).


The 4th King of Bhutan: Jigme Singye Wangchuck

In 1971, the 4th Dragon King of Bhutan, Jigme Singye Wangchuck, was so enamored with the idea of happiness as a goal that he introduced a new measure of a nation’s worth: Gross National Happiness. He believed that the western world’s obsession with materialism, represented by Gross National Product, shouldn’t be the sole measure of progress. Things like sustainable development, care for the environment, good governance and preservation of culture deserved to be measured as well. In the 45 years since the idea of Gross National Happiness was first floated by His Royal Dragonship, it’s been slow to take hold, but perhaps its time has come. By the way, in the UN survey, Bhutan was in the middle of the pack for happiness, ranking 84th out of 157 countries.

Happiness should be important to companies as well. There’s even an investment fund that invests exclusively in companies with happy employees. But happiness can be an elusive goal, especially when we try to wrestle it to the ground as a hard performance metric in a corporate environment. What exactly are we measuring when we measure happiness? And whose happiness are we measuring? Our customers’? Our shareholders’? Our employees’? All of the above?

Let’s single out employees. Companies like Zappos and Southwest Airlines have tried to make employee happiness a metric that matters. But what makes an employee happy? Perhaps we can find a clue in a recent survey from Ypulse that asked Millennials which companies they’d most like to work at. The top 10 answers were:

  1. Google
  2. Apple
  3. Disney
  4. Non-profit/charity
  5. School/community/university
  6. Hospital
  7. U.S. government
  8. Myself/my own company
  9. Amazon
  10. FBI/CIA

It’s an interesting list. It’s not the list you’d expect from a generation that simply wants to get rich quick. You don’t work at a hospital or the FBI if you want to make big bucks. This is a list that comes from people who want to make a difference. They want meaning. In the words of Steve Jobs, they “want to put a ding in the universe.”

I get that. I recently discovered just how hard happiness is to pin down. After selling my company, I was fortunate enough to achieve financial independence and retire at 51. I should have been deliriously happy, right? Well, I wasn’t suicidal by any means, but I would say my level of happiness actually decreased after I tried retirement. I was at the other end of my career path from Millennials, but meaning remained just as important to me.

In a study of retirement satisfaction published in the Journal of Financial Counseling and Planning, Sarah Asebedo and Martin Seay found that psychologist Martin Seligman’s positive psychological attributes – referred to as PERMA (Positive emotions, Engagement, [family] Relationships, Meaning and Accomplishment) – don’t go away when we retire. These things are necessary for happiness. For men in particular – and increasingly for women as well – jobs provide many of them. This was certainly true for me.

It’s good we’re paying more attention to happiness. But it’s also important that we understand what we’re talking about when we refer to happiness. It has little to do with monetary measures of success. Whether we’re talking nations, corporations or employees, it turns out that happiness means a sense of interconnectedness, contribution and personal values. It means living beyond ourselves and leaving some footprint that won’t fade when we no longer walk this earth.

Ultimately, it means doing stuff that matters.


A Possibly Premature Post-Mortem on Yahoo

Last Thursday, Yahoo held its annual shareholder meeting. At that meeting, CEO Marissa Mayer dealt the company a doubled-down kiss of death. She stated that the goals of the board are fully aligned with one clear priority: “delivering shareholder value to all of you.” She further mentioned, when dealing with the divestiture of all that once was Yahoo, that she’s “been very heartened by the level of interest in Yahoo. It validates our business processes as well as our achievements to date.”

It’s fancier language, but it’s basically the same as the butcher saying, “This cow is no longer viable as a cow, so I’m looking at it as a collection of rump roasts, T-bones and hamburger. I’m hoping we have more of the former and less of the latter.”

I first encountered Yahoo in 1995, shortly after its brief life as Jerry and David’s Guide to the World Wide Web. I think it was probably still parked on Stanford’s servers at the time. Back then, the Internet was like the world’s biggest second-hand store – a huge collection that was 95% junk and 5% useful stuff, with no overarching order or organization. David Filo and Jerry Yang’s site was one of the very first to try to provide that order.

As an early search marketer in the run up to the dot-com bubble, you couldn’t ignore the Yahoo directory. The Yahooligans walked with typical Valley swagger. Hubris was never in short supply. They were the cocks of the walk and they knew it.

It was a much-humbled, post-bubble Yahoo that I visited in 2004. They had gotten their search asses soundly kicked by Google, which was now powering their non-directory results. The age of the curated directory was gone, replaced by the scalability of algorithmic search.

As a culture, the Yahooligans were struggling with the mixed management signals coming from then-CEO Terry Semel and his team. Sunnyvale was clouded in a purple haze. The Yahooligans didn’t know who the hell they were or what they were supposed to do. Were they a tech company or an entertainment company? The answer, as it turned out, was neither.

I met with the remnants of the once-mighty search team to talk about user behaviors. I didn’t know it at the time, but Yahoo was gearing up to relaunch its search service. A much-vilified paid inclusion program would also debut. It was one of many ill-fated attempts to find the next “Big Thing.”

Marissa Mayer continues to put a brave face on it, but the Yahoo engine ran out of steam at least a decade and a half ago. What amazes me is how long the ride has been. There is a message here for tech-based companies.

If you dig down to the critical incubation period of any tech company, you find a recurring pattern. Some technologically mediated connection allows people to do something they were previously unable to do. This releases pent-up market demand. It’s like a thin sliver trying to poke through a water balloon. If successful, this released market demand creates an immediate and sizable audience for whoever introduced the innovation. Yahoo’s directory, Google’s PageRank, Facebook’s “Facemash”, AirBnB’s accommodation directory, Uber’s ridesharing app – they all share the same modus operandi: a tech step forward creates a new audience and market opportunity.

In hindsight, once you strip away all the hype, it’s amazing how tenuous and unimpressive these technological advances are. Luck and timing typically play a huge part. If the conditions are right, the sliver eases through the balloon’s membrane and for a time, there is a steady stream of opportunity.

The problem is that as easily as these markets form, they can just as easily evaporate. When the technological advantage passes to the next competitor, as it did when Yahoo gave way to Google, all that’s left is the audience. When you consider that Yahoo has been coasting on that audience for close to two decades, it’s rather amazing that Mayer still has any assets at all to sell.


What Would a “Time Well Spent” World Look Like?

I’m worried about us. And it’s not just because we seem bent on death by ultra-conservative parochialism and xenophobia. I’m worried because I believe we’re spending all our time doing the wrong things. We’re fiddling while Rome burns.

Technology is our new drug of choice and we’re hooked. We’re fascinated by the trivial. We’re dumping huge gobs of time down the drain playing virtual games, updating social statuses, clicking on clickbait and watching videos of epic wardrobe malfunctions. Humans should be better than this.

It’s okay to spend some time doing nothing. The brain needs some downtime. But something, somewhere has gone seriously wrong. We are now spending the majority of our lives doing useless things. TV used to be the biggest time suck, but in 2015, for the first time ever, the boob tube was overtaken by time spent with mobile apps. According to a survey conducted by Flurry, in the second quarter of 2015 we spent about 2.8 hours per day watching TV. And we spent 3.3 hours on mobile apps. That’s a grand total of 6.1 hours per day, or one third of the time we spend awake. Yes, both things can happen at the same time, so there is undoubtedly overlap, but still – that’s a scary-assed statistic!

And it’s getting worse. In a previous Flurry poll, conducted in 2013, we spent a total of 298 minutes per day between TV and mobile apps, versus 366 minutes in 2015. That’s a 22.8% increase in just two years. We’re spending way more time doing nothing. And those totals don’t even include things like time spent in front of a gaming console. For kids, tack on an average of another 10 hours per week – and you can double that for hard-core male gamers. Our addiction to gaming has even led to death in extreme cases.
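The arithmetic behind those figures is easy to check. The inputs below are just the Flurry numbers quoted above; the script redoes the math:

```python
# Redo the column's math on Flurry's figures (2.8 h TV + 3.3 h apps per day).
tv_hours, app_hours = 2.8, 3.3
total_2015_min = (tv_hours + app_hours) * 60   # 6.1 hours -> 366 minutes/day
total_2013_min = 298                           # minutes/day, per the 2013 poll
growth_pct = (total_2015_min - total_2013_min) / total_2013_min * 100
print(round(total_2015_min), round(growth_pct, 1))  # 366 22.8
```

So the 22.8% growth claim checks out, provided the 2013 and 2015 figures are read as minutes per day.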

Even in the wildest stretches of imagination, this can’t qualify as “time well spent.”

We’re treading on very dangerous and very thin ice here. And we no longer have history to learn from. It’s the first time we’ve ever encountered this. Technology is now only one small degree of separation from plugging directly into the pleasure center of our brains. And science has proven that a good shot of self-administered dopamine can supersede everything – water, food, sex. True, those experiments were performed on rats – primarily because it’s considered unethical to go too far in replicating them with humans – but are you willing to risk the entire future of mankind on the bet that we’re really that much smarter than rats?

My fear is that technology is becoming a slightly more sophisticated lever we push to get that dopamine rush. And developers know exactly what they’re doing. They are making that lever as addictive as possible. They are pushing us towards the brink of death by technological lobotomization. They’re lulling us into a false sense of security by offering us the distraction of viral videos, infinitely scrolling social notification feeds and mobile game apps. It’s the intellectual equivalent of fast food – quite literally “brain candy.”

Here the hypocrisy of for-profit interest becomes evident. The corporate response typically rests on individual freedom of choice and the consumer’s ability to exercise will power. “We are just giving them what they’re asking for,” touts the stereotypical PR flack. But if you have an entire industry with reams of developers and researchers all aiming to hook you on their addictive product and your only defense is the same faulty neurological defense system that has already fallen victim to fast food, porn, big tobacco, the alcohol industry and the $350 billion illegal drug trade, where would you be placing your bets?

Technology should be our greatest achievement. It should make us better, not turn us into a bunch of lazy, screen-addicted louts. And it certainly could be this way. What would it mean if technology helped us spend our time well? This is the hope behind the Time Well Spent Manifesto. Tristan Harris, a design ethicist and product philosopher at Google, is one of its co-directors. Here is an excerpt from the manifesto:

We believe in a new kind of design, that lets us connect without getting sucked in. And disconnect, without missing something important.

And we believe in a new kind of economy that’s built to help us spend time well, where products compete to help us live by our values.

I believe in the Manifesto. I believe we’re being willingly led down a scary and potentially ruinous path. Worst of all, I believe there is nothing we can – or will – do about it. Problems like this are seldom solved by foresight and good intentions. Things only change after we drive off the cliff.

The problem is that most of us never see it coming. And we never see it coming because we’re too busy watching a video of masturbating monkeys on YouTube.

Where Should Science Live?

Science, like almost every other aspect of our society, is in the midst of disruption. In that disruption, the very nature of science may be changing. And that raises a number of very pertinent questions.

Two weeks ago I took Malcolm Gladwell to task for oversimplifying science for the sake of a good story. I offered Duncan Watts as a counter example. One reader, Ted Wright, came to Gladwell’s defence and in the process of doing so, took a shot at the reputation of Watts, saying with tongue firmly in cheek, “people who are academically lauded often leave an Ivy League post, in this case at Columbia, to go be a data scientist at Yahoo.”

Mr. Wright (yes, I have finally found Mr. Wright) implies this is a bad thing, a step backwards, or even an academic “selling out.” (Note: Watts is now at Microsoft, where he’s a principal researcher.)

Since Wright offered his comment, I’ve been thinking about it. Where should science live? Is it a sell out when science happens in private companies? Should it be the sole domain of universities? I’m not so sure.

Watts is a sociologist. His area of study is network structures and system behaviors in complex environments. His past studies tend to involve analyzing large data sets to identify patterns of behavior. There are few companies who could provide larger or more representative data sets than Microsoft.


Peter Norvig, Director of Research at Google

Another such company is Google. And there are many renowned scientists working there. One of them is Peter Norvig, Google’s Director of Research. In a blog post a few years ago, where he took issue with Chris Anderson’s Wired article signaling the “End of Theory,” Norvig said:

“(Chris Anderson) correctly noted that the methodology for science is evolving; he cites examples like shotgun sequencing of DNA. Having more data, and more ways to process it, means that we can develop different kinds of theories and models. But that does not mean we throw out the scientific method. It is not “The End of Theory.” It is an important change (or addition) in the methodology and set of tools that are used by science, and perhaps a change in our stereotype of scientific discovery.”

Science as we have known it has always been reductionist in nature. It requires simplification down to a controllable set of variables. It has also relied on a rigorous framework that was most at home in the world of academia. But as Norvig notes, that isn’t necessarily the only viable option now. We live in a world of complexity, and a locked-down, reductionist approach that requires a certain amount of simplification doesn’t really do that world justice. This is particularly true in areas like sociology, which attempts to understand cultural complexity in context. You can’t really do that in a lab.

But perhaps you can do it at Google. Or Microsoft. Or Facebook. These places have reams of data and all the computing power in the world to crunch it. These places precisely meet Norvig’s definition of the evolving methodology of science: “More data, and more ways to process it.”

If that’s the trade-off Duncan Watts decided to make, one can certainly understand it. Scientists follow the path of greatest promise. And when it comes to science that depends on data and processing power, that promise is increasingly found in places like Microsoft and Google.


Ex Machina’s Script for Our Future

One of the more interesting movies I’ve watched in the past year has been Ex Machina. Unlike the abysmally disappointing Transcendence (how can you screw up Kurzweil, for God’s sake?), Ex Machina is a tightly directed, frighteningly claustrophobic sci-fi thriller that peels back the moral layers of artificial intelligence one by one.

If you haven’t seen it, do so. But until you do, here’s the basic setup. Caleb Smith (Domhnall Gleeson) is a programmer at a huge Internet search company called Blue Book (think Google). He wins a contest where the prize is a week spent with the CEO, Nathan Bateman (Oscar Isaac), at his private retreat. Bateman’s character is best described as Larry Page meets Steve Jobs meets Larry Ellison meets Charlie Sheen – brilliant as hell but one messed-up dude. It soon becomes apparent that the contest is a ruse and Smith is there to play the human in an elaborate Turing Test to determine if the robot Ava (Alicia Vikander) is capable of consciousness.

About halfway through the movie, Bateman confesses to Smith the source of Ava’s intelligence “software.” It came from Blue Book’s own search data:

“It was the weird thing about search engines. They were like striking oil in a world that hadn’t invented internal combustion. They gave too much raw material. No one knew what to do with it. My competitors were fixated on sucking it up, and trying to monetize via shopping and social media. They thought engines were a map of what people were thinking. But actually, they were a map of how people were thinking. Impulse, response. Fluid, imperfect. Patterned, chaotic.”

As a search behaviour guy, that sounded like more fact than fiction. I’ve always thought search data could reveal much about how we think. That’s why John Motavalli’s recent column, “Google Looks Into Your Brain And Figures You Out,” caught my eye. Here, it seemed, fiction was indeed becoming fact. And that fact is: when we use one source for a significant chunk of our online lives, we give that source the ability to capture a representative view of our related thinking. Google and our searching behaviors, or Facebook and our social behaviors, both come immediately to mind.

Motavalli’s reference to Dan Ariely’s post about micro-moments is just one example of how Google can peek under the hood of our noggins and start to suss out what’s happening in there. What makes this either interesting or scary as hell, depending on your philosophical bent, is that Ariely’s area of study is not our logical, carefully processed thoughts but our subconscious, irrational behaviors. And when we’re talking artificial intelligence, it’s that murky underbelly of cognition that is the toughest nut to crack.

I think Ex Machina’s writer/director Alex Garland may have tapped something fundamental in the little bit of dialogue quoted above. If the data we willingly give up in return for online functionality provides a blueprint for understanding human thought, that’s a big deal. A very big deal. Ariely’s blog post talks about how a better understanding of micro-moments can lead to better ad targeting. To me, that’s kind of like using your new Maserati to drive across the street and visit your neighbor – it seems a total waste of horsepower. I’m sure there are higher things we can aspire to than figuring out a better way to deliver a hotels.com ad. Both Google and Facebook are full of really smart people. I’m pretty sure someone there is capable of connecting the dots between true artificial intelligence and their own brand of world domination.

At the very least, they could probably whip up a really sexy robot.


Why Marketers Love Malcolm Gladwell … and Why They Shouldn’t

Marketers love Malcolm Gladwell. They love his pithy, reductionist approach to popular science – his tendency to sacrifice verity for the sake of a good “Just-so” story. And in doing this, what is Malcolm Gladwell but a marketer at heart? No wonder our industry is ga-ga over him. We love anyone who can oversimplify complexity down to the point where it can be appropriated as yet another marketing “angle”.

Take the entire influencer advertising business, for instance. Earlier this year, I saw an article saying more and more brands are expanding their influencer marketing programs. We are desperately searching for that holy nexus where social media and those super-connected “mavens” meet. While the idea of influencer marketing has been around for a while, it really gained steam with the release of Gladwell’s “The Tipping Point” in 2000, and that head of steam has been building ever since.

As others have pointed out, Gladwell has made a habit of taking one narrow perspective that promises to “play well” with the masses, supporting it with just enough science to make it seem plausible and then enshrining it as a “Law.”

Take “The Law of the Few”, for instance, from The Tipping Point: “The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts.” You could literally hear the millions of ears attached to marketing heads “perk up” when they heard this. “All we have to do,” the reasoning went, “is reach these people, plant a favorable opinion of our product and give them the tools to spread the word. Then we just sit back and wait for the inevitable epidemic to sweep us to new heights of profitability.”

Certainly commercial viral cascades do happen. They happen all the time. And, in hindsight, if you look long and hard enough, you’ll probably find what appears to be a “maven” near ground-zero. From this perspective, Gladwell’s “Law of the Few” seems to hold water. But that’s exactly the type of seductive reasoning that makes “Just So” stories so misleading. You mistakenly believe that because it happened once, you can predict when it’s going to happen again. Gladwell’s indiscriminate use of the term “Law” contributes to this common deceit. A law is something that is universally applicable and constant. When a law governs something, it plays out the same way, every time. And this is certainly not the case in social epidemics.
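To make that point concrete, here’s a toy simulation of mine – not anything from Gladwell or Watts – of a word-of-mouth cascade on a random network. The network size, contact count and pass-along probability are all invented for illustration. Seed the exact same “maven” two hundred times, and the outcomes range from a dud to an epidemic, which is exactly why spotting a maven at ground-zero after the fact tells you almost nothing about the next launch:

```python
import random

def cascade_size(n_nodes, n_neighbors, p_pass, seed_node=0, rng=None):
    """Simulate a rough word-of-mouth 'epidemic' on a random network.

    Each newly reached node passes the message along to each of its
    randomly chosen contacts with probability p_pass.
    """
    rng = rng or random.Random()
    reached = {seed_node}
    frontier = [seed_node]
    while frontier:
        nxt = []
        for _ in frontier:
            for _ in range(n_neighbors):
                contact = rng.randrange(n_nodes)
                if contact not in reached and rng.random() < p_pass:
                    reached.add(contact)
                    nxt.append(contact)
        frontier = nxt
    return len(reached)

# Seed the identical "maven" 200 times: same setup, wildly different results.
sizes = [cascade_size(10_000, 8, 0.12) for _ in range(200)]
print(min(sizes), max(sizes))
```

Run it and some cascades die with an audience of one while others reach thousands – same seed, same network parameters, pure chance in between. That variance, not the maven, is doing most of the work.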

Duncan Watts

If Malcolm Gladwell’s books have become marketing and pop-culture bibles, the same, sadly, cannot be said for Duncan Watts’ books. I’m guessing almost everyone reading this column has heard of Malcolm Gladwell. I further guess that almost none of you have heard of Duncan Watts. And that’s a shame. But it’s completely understandable.

Duncan Watts describes his work as determining the “role that network structure plays in determining or constraining system behavior, focusing on a few broad problem areas in social science such as information contagion, financial risk management, and organizational design.”

You started nodding off halfway through that sentence, didn’t you?

As Watts shows in his books, “Firms spent great effort trying to find ‘connectors’ and ‘mavens’ and to buy the influence of the biggest influencers, even though there was never causal evidence that this would work.” But the work required to get to this point is not trivial. While he certainly aims at a broad audience, Watts does not read like Gladwell. His answers are not self-evident. There is no pithy bon mot that causes our neural tumblers to satisfyingly click into place. Watts’ explanations are complex, counter-intuitive, occasionally ambiguous and often non-conclusive – just like the world around us. As he explains in his book “Everything is Obvious: *Once You Know the Answer”, it’s easy to look backwards and find causality. But it’s not always right.

Marketers love simplicity. We love laws. We love predictability. That’s why we love Gladwell. But in following this path of least resistance, we’re straying further and further from the real world.

Decoupling Our Hunch-Making Mechanism

Humans are hunch-making machines. We’re gloriously good at it. In fact, no one and no thing is better at coming up with a hunch. It’s what sets us apart on this planet and, thus far, nothing we’ve invented has proven better suited to strike the spark of intuition.

We can seemingly draw speculative guesses out of thin air. From all the noise that surrounds us, we recognize potential patterns and infer significance. Scientists call them hypotheses. Artists call them artistic inspirations. Entrepreneurs call them innovations.

Whatever the label, we’re not exactly sure what happens. Mihaly Csikszentmihalyi (which, in case you’re wondering, is pronounced Me-high Cheek-sent-me-high) explored where these hunches come from in his fascinating book, Creativity: The Psychology of Discovery and Invention. But despite the collective curiosity about the source of human creativity, the jury remains out. The mechanism that turns these very human gears and sparks the required connections between our synapses remains a mystery.

We’re good at making hunches. But we suck at qualifying those hunches. The reason is that we rush a hunch straight into becoming a belief. And that’s where things go off the rails. A hunch is a guess about what might be true. A belief is what we deem to be true. We go straight from what is one of many possible scenarios to the only scenario we execute against. The entire scientific method was created to counteract this very human tendency – forcing rational analysis of the hunches we churn out.
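Here’s a minimal sketch of what that forced rational analysis looks like in practice. The Tuesday-versus-Thursday email scenario and all the numbers are hypothetical, and the two-proportion z-score is just the simplest off-the-shelf check for “is this gap real or is it noise?”:

```python
from math import sqrt

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Rough z-score for whether two observed rates really differ."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hunch: Tuesday's email (120 clicks on 1,000 sends, 12%) "clearly"
# beat Thursday's (100 clicks on 1,000 sends, 10%).
z = two_proportion_z(120, 1000, 100, 1000)
print(round(z, 2))  # prints 1.43 -- below the 1.96 significance bar
```

The hunch says 12% beats 10%. The arithmetic says a gap that size, on a thousand sends each, is still within sampling noise. That’s the whole scientific method in miniature: the hunch stays a hunch until the data promotes it.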

Philip Tetlock’s work on expertise in prediction shows how fragile this tendency to go from hunch to belief can make us. After all, a prediction is nothing more than a hunch about what might be. He referred to Isaiah Berlin’s 1953 essay, “The Hedgehog and the Fox.” In the essay, Berlin quotes the ancient Greek poet Archilochus: “a fox knows many things, but a hedgehog one big thing.” Taking some poetic license, you could say that a hedgehog is more prone to moving straight from hunch to belief, while a fox tends to evaluate her hunches against multiple sources. Tetlock found that when it came to the accuracy of predictions, it was better to be a fox than a hedgehog. In some cases, much better.

But Tetlock also found that when it comes to “crunching hunches,” machines tend to beat humans hands down. That’s because humans have been programmed over thousands of generations to trust our hunches; no matter how hard we fight it, we are born to treat our hunches as fact. Machines carry no such baggage.

This is an example of Moravec’s Paradox – the things that seem simple for humans are amazingly complex for machines, and vice versa. As artificial intelligence pioneer Marvin Minsky once observed, it’s the things we do unconsciously that represent the biggest challenges for artificial intelligence: “In general, we’re least aware of what our minds do best.” Machines may never be as good as humans at creating a hunch – or, at least, we’re certainly not there yet. But machines have already outstripped humans in the ability to empirically analyze and validate multiple options.
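That machine advantage is easy to demonstrate. In this hedged sketch – the conversion log is synthetic, invented purely for the example – the machine doesn’t commit to one hunch about when visitors convert. It scores every candidate “peak hour” against the data and lets the evidence do the ranking:

```python
import random

rng = random.Random(7)

# Synthetic visit log: (hour_of_day, converted?), with conversions
# deliberately built to peak between hours 19 and 21.
log = [(h, rng.random() < (0.10 if 19 <= h <= 21 else 0.04))
       for h in (rng.randrange(24) for _ in range(20_000))]

def conversion_rate(hour):
    outcomes = [converted for h, converted in log if h == hour]
    return sum(outcomes) / len(outcomes)

# No single hunch: exhaustively score all 24 hours and rank them.
ranked = sorted(range(24), key=conversion_rate, reverse=True)
print(ranked[:3])  # the engineered peak hours float to the top
```

A human analyst would anchor on one plausible hour and defend it. The machine cheerfully evaluates all twenty-four, with no ego invested in any of them – which is precisely the hunch-qualifying work we’re bad at.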

Fellow Online Spin columnist Kaila Colbin posited this in her last column, “When Watson Comes for Your Job, Give it to Him.” As she points out, IBM’s Watson can kick any human ass when it comes to reviewing case law – or plowing through the details required for an accurate medical diagnosis – or helping students prepare for an upcoming exam. But Watson isn’t very good at coming up with hunches. That’s because hunches aren’t rational. They’re inspirational. And machines aren’t fluent in inspiration. Not yet, anyway.

Maybe that’s why – even in something as logical as chess – the current champion isn’t a machine, or a human. It’s a combination of both. As American economist Tyler Cowen, author of Average Is Over, explained in a blog post, a “striking percentage of the best or most accurate chess games of all time have been played by man-machine pairs.” Cowen identifies four ways a man-machine team can outperform, and they all come down to leveraging the respective strengths of each: humans use intuition to create hunches, then harness the power of the machine to analyze the relevant options.

Hunches have served humans very well, and they will continue to do so. The trick is to decouple those hunches from the belief-making mechanism that has historically accompanied them. That’s where we should let machines take over.