Where Context Comes From

Fellow Spinner Cory Treffiletti told you last week that data without context is noise.

Absolutely right.

I want to continue that conversation, because it’s an important one. It’s all about context. So let’s talk a little more about context. And specifically how we decide what makes up that context.

You might have seen or heard the hubbub that emerged around a tweet from Neil deGrasse Tyson a month ago: “Earth needs a virtual country: #Rationalia, with a one-line Constitution: All policy shall be based on the weight of evidence”

Nice thought, but it ignited a social media shit-storm. Which was entirely predictable. Because we don’t want to be rational. We want to be human. Did 79 episodes of Star Trek teach us nothing?

The biggest beef against #Rationalia was that evidence is typically in the eyes of the beholder. It’s all a matter of context. I’m guessing that the policies that come from evidence in the hands of Republicans will not bear much resemblance to policies that come from the evidence of Democrats. The evidence could be the same but the context is different, because Democrats and Republicans think differently.

Like Treffiletti said – evidence without context is just noise. And our context is only marginally based on evidence. And that’s why #Rationalia – as intellectually attractive as it might be – won’t work.

We as humans understand the world through something called sensemaking. This is the process we use to build context. In 2006, psychologist Gary Klein shed new light on how we make sense of the world. We start with a frame that captures our current understanding of the situation and, depending on the evidence presented to us, we decide whether to elaborate that frame or discard it and create a new one. Sensemaking is really an iterative loop that constantly uses our current frame as a reference point.
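Klein never wrote his data/frame loop as code, but the structure is simple enough to sketch. Here's a minimal Python sketch of the idea; the function names, the toy "frame as a set of beliefs," and the fit test are all my own illustration, not Klein's:

```python
def sensemaking_loop(initial_frame, evidence_stream, fits, elaborate, reframe):
    """Klein's data/frame loop: the current frame is the reference
    point against which each new piece of evidence is judged."""
    frame = initial_frame
    for datum in evidence_stream:
        if fits(frame, datum):
            frame = elaborate(frame, datum)  # evidence fits: refine the frame
        else:
            frame = reframe(frame, datum)    # evidence doesn't fit: rebuild
    return frame

# Toy illustration (mine, not Klein's): a frame is just a set of beliefs.
frame = sensemaking_loop(
    initial_frame={"birds fly"},
    evidence_stream=["robin flies", "penguin swims"],
    fits=lambda f, d: "flies" in d,
    elaborate=lambda f, d: f | {d},
    reframe=lambda f, d: {"some birds fly", d},
)
print(sorted(frame))
```

Note what the sketch makes plain: the `fits` test is applied *by the current frame* — which is exactly the filtering problem discussed below.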

But here’s the thing. What we consider as evidence depends on the frame we already have in place. It’s the filter that determines what data we pay attention to. And much as Neil deGrasse Tyson would like the governments of the world to be totally unbiased in their filtering of evidence, “that dog just won’t hunt.” It can’t – because we can’t consider data without some context to put it in.

Perhaps someday artificial intelligence will advance to the point where it can pull unbiased context out of random data. Maybe computers will be able to do what we’re unable to – make sense of the noise without assuming a pre-existing frame. But we’re not there yet. And even if we were, we would simply look at the conclusions of the computer and decide whether we agree with them or not. As long as humans are in charge, there will always be a biased filter in place.

So back to Cory’s column. If context is so important, think about where that context is coming from. Who is defining the context and what frame are they operating from? That in turn will define what data you consider and how you consider it.

Perhaps the most important decision before considering data is to be totally clear about what the goal is. Goals, together with experience, form the underpinning of beliefs. Frames are then built on those beliefs. Context comes from those frames. And context is the filter we apply to evidence.

Happiness as a Corporate Metric

Costa Rica is the happiest place on earth. The least happy place on earth? That would be Botswana.

At least, those are the results according to the Happy Planet Index. The index is a measure of three factors: life expectancy, experienced well-being and ecological footprint. Western nations tend to do very well on the first two measures, but suck at the third. The index is looking for balance – being happy without raping and pillaging the earth. Here in North America, we still have a ways to go in that department.

In another study – the UN’s 2015 World Happiness Report – a different weighting of factors treated the western world a little better. When we tip the balance towards individual happiness and away from the environment and sustainability, Denmark, Switzerland, Iceland, Norway, Finland and Canada top the rankings. Apparently, snow is good for the soul. At the bottom of the list were Benin, Afghanistan, Togo, Syria and Burundi (it’s hard to believe anywhere scored worse than Syria – mental note: strike Burundi off my travel bucket list).


The 4th King of Bhutan: Jigme Singye Wangchuck

In 1971, the 4th Dragon King of Bhutan, Jigme Singye Wangchuck, was so enamored with the idea of happiness as a goal that he introduced a new measure of a nation’s worth: Gross National Happiness. He believed that the western world’s obsession with the materialism represented by Gross National Product shouldn’t be the sole measure of progress. Things like sustainable development, care for the environment, good governance and preservation of culture deserved to be measured as well. In the 45 years since the idea of Gross National Happiness was first floated by his Royal Dragonship, it’s been slow to take hold, but perhaps its time has come. By the way, in the UN survey, Bhutan was in the middle of the pack for happiness, ranking 84th out of 157 countries.

Happiness should be important to companies as well. There’s even an investment fund that invests exclusively in companies with happy employees. But happiness can be an elusive goal, especially when we try to wrestle it to the ground in the form of a hard performance metric in a corporate environment. What exactly are we measuring when we measure happiness? And whose happiness are we measuring? Our customers? Our shareholders? Our employees? All of the above?

Let’s single out employees. Companies like Zappos and Southwest Airlines have tried to make employee happiness a metric that matters. But what makes an employee happy? Perhaps we can find a clue in a recent survey from Ypulse that asked Millennials which companies they’d most like to work at. The top 10 answers were:

  1. Google
  2. Apple
  3. Disney
  4. Non-profit/charity
  5. School/community/university
  6. Hospital
  7. U.S. government
  8. Myself/my own company
  9. Amazon
  10. FBI/CIA

It’s an interesting list. It’s not the list you’d expect from a generation that simply wants to get rich quick. You don’t work at a hospital or the FBI if you want to make big bucks. This is a list that comes from people who want to make a difference. They want meaning. In the words of Steve Jobs, they “want to put a ding in the universe.”

I get that. I recently discovered just how hard happiness is to pin down. After selling my company, I was fortunate enough to achieve financial independence and retire at 51. I should have been deliriously happy, right? Well, I wasn’t suicidal by any means, but I would say my level of happiness actually decreased after I tried retirement. I was at the other end of my career path from Millennials, but meaning remained just as important to me.

In a study of retirement satisfaction published in the Journal of Financial Counseling and Planning, Sarah Asebedo and Martin Seay found that psychologist Martin Seligman’s positive psychological attributes, referred to as PERMA (Positive emotions, Engagement, [Family] Relationships, Meaning and Accomplishment), don’t go away when we retire. These things are necessary for happiness. For men in particular – and increasingly so for women – we rely on our jobs to provide many of them. This was certainly true for me.

It’s good we’re paying more attention to happiness. But it’s also important that we understand what we’re talking about when we refer to happiness. It has little to do with monetary measures of success. Whether we’re talking nations, corporations or employees, it turns out that happiness means a sense of interconnectedness, contribution and personal values. It means living beyond ourselves and leaving some footprint that won’t fade when we no longer walk this earth.

Ultimately, it means doing stuff that matters.


What Would a “Time Well Spent” World Look Like?

I’m worried about us. And it’s not just because we seem bent on death by ultra-conservative parochialism and xenophobia. I’m worried because I believe we’re spending all our time doing the wrong things. We’re fiddling while Rome burns.

Technology is our new drug of choice and we’re hooked. We’re fascinated by the trivial. We’re dumping huge gobs of time down the drain playing virtual games, updating social statuses, clicking on clickbait and watching videos of epic wardrobe malfunctions. Humans should be better than this.

It’s okay to spend some time doing nothing. The brain needs some downtime. But something, somewhere has gone seriously wrong. We are now spending the majority of our lives doing useless things. TV used to be the biggest time suck, but in 2015, for the first time ever, the boob tube was overtaken by time spent with mobile apps. According to a survey conducted by Flurry, in the second quarter of 2015 we spent about 2.8 hours per day watching TV. And we spent 3.3 hours on mobile apps. That’s a grand total of 6.1 hours per day, or one third of the time we spend awake. Yes, both things can happen at the same time, so there is undoubtedly overlap, but still – that’s a scary-assed statistic!

And it’s getting worse. In a previous Flurry poll conducted in 2013, we spent a total of 298 minutes per day between TV and mobile apps, versus 366 minutes in 2015. That’s a 22.8% increase in just two years. We’re spending way more time doing nothing. And those totals don’t even include things like time spent in front of a gaming console. For kids, tack on an average of another 10 hours per week, and you can double that for hard-core male gamers. Our addiction to gaming has even led to death in extreme cases.
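Those totals only reconcile if you read Flurry's per-day comparison figures in minutes – 6.1 hours is 366 minutes. A quick check of the arithmetic:

```python
# Flurry's Q2 2015 figures, converted to minutes per day:
# 2.8 hours of TV = 168 minutes; 3.3 hours of mobile apps = 198 minutes.
tv_2015, apps_2015 = 168, 198
total_2015 = tv_2015 + apps_2015   # combined daily total in 2015
total_2013 = 298                   # combined daily total from the 2013 poll

growth = (total_2015 - total_2013) / total_2013 * 100
print(total_2015 / 60)             # 6.1 hours per day
print(round(growth, 1))            # 22.8 percent increase in two years
```

So both numbers in the column check out: 366 minutes is one third of an eighteen-hour waking day, and the two-year jump from 298 minutes is 22.8%.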

Even in the wildest stretches of imagination, this can’t qualify as “time well spent.”

We’re treading on very dangerous and very thin ice here. And we no longer have history to learn from. It’s the first time we’ve ever encountered this. Technology is now only one small degree of separation from plugging directly into the pleasure center of our brains. And science has proven that a good shot of self-administered dopamine can supersede everything – water, food, sex. True, these experiments were conducted on rats – primarily because it’s considered unethical to go that far in replicating the experiments with humans – but are you willing to risk the entire future of mankind on the bet that we’re really that much smarter than rats?

My fear is that technology is becoming a slightly more sophisticated lever we push to get that dopamine rush. And developers know exactly what they’re doing. They are making that lever as addictive as possible. They are pushing us towards the brink of death by technological lobotomization. They’re lulling us into a false sense of security by offering us the distraction of viral videos, infinitely scrolling social notification feeds and mobile game apps. It’s the intellectual equivalent of fast food – quite literally “brain candy.”

Here the hypocrisy of for-profit interests becomes evident. The corporate response typically rests on individual freedom of choice and the consumer’s ability to exercise willpower. “We’re just giving them what they’re asking for,” says the stereotypical PR flack. But if you have an entire industry, with reams of developers and researchers, all aiming to hook you on their addictive product, and your only defense is the same faulty neurological defense system that has already fallen victim to fast food, porn, big tobacco, the alcohol industry and the $350 billion illegal drug trade, where would you place your bets?

Technology should be our greatest achievement. It should make us better, not turn us into a bunch of lazy screen-addicted louts. And it certainly could be this way. What would it mean if technology helped us spend our time well? This is the hope behind the Time Well Spent Manifesto. Tristan Harris, a design ethicist and product philosopher at Google, is one of its co-directors. Here is an excerpt from the manifesto:

We believe in a new kind of design, that lets us connect without getting sucked in. And disconnect, without missing something important.

And we believe in a new kind of economy that’s built to help us spend time well, where products compete to help us live by our values.

I believe in the Manifesto. I believe we’re being willingly led down a scary and potentially ruinous path. Worst of all, I believe there is nothing we can – or will – do about it. Problems like this are seldom solved by foresight and good intentions. Things only change after we drive off the cliff.

The problem is that most of us never see it coming. And we never see it coming because we’re too busy watching a video of masturbating monkeys on YouTube.

Can Stories Make Us Better?

In writing this column, I often put ideas on the shelf for a while. Sometimes, world events conspire to make one of these shelved ideas suddenly relevant. This happened this past weekend.

The idea that caught my eye some months ago was an article that explored whether robots could learn morality by reading stories. On the face of it, it was mildly intriguing. But early Sunday morning as the heartbreaking news filtered to me from Orlando, a deeper connection emerged.

When we speak of unintended consequences, as we have before, the media amplification of acts of terror is one of them. The staggeringly sad fact is that shocking casualty numbers have their own media value. And that, said one analyst commenting on ways to deal with terrorism, is a new reality we have to come to terms with. When we in the media business make stories newsworthy, we assign worth not just for news consumers but also to newsmakers – those troubled individuals who have the motivation and the means to blow apart the daily news cycle.

This same analyst, when asked how we deal with terrorism, made the point that you can’t prevent lone acts of terrorism. The only answer is to use that same network of cultural connections we use to amplify catastrophic events to create an environment that dampens rather than intensifies violent impulses. We in the media and advertising industries have to use our considerable skills in setting cultural contexts to create an environment that reduces the odds of a violent outcome. And sadly, this is a game of odds. There are no absolute answers here – there is just a statistical lowering of the curve. Sometimes, despite your best efforts, the unimaginable still happens.

But how do we use the tools at our disposal to amplify morality? Here, perhaps, the story I shelved some months ago can provide some clues.

In the study from Georgia Tech, Mark Riedl and Brent Harrison used stories as models of acceptable morality. For most of human history, popular culture included at least an element of moral code. We encoded the values we held most dear into our stories. It provided a base for acceptable behavior, either through positive reinforcement of commonly understood virtues (prudence, justice, temperance, courage, faith, hope and charity) or warnings about universal vices (lust, gluttony, greed, sloth, wrath, envy and pride). Sometimes these stories had religious foundations, sometimes they were secular morality fables but they all served the same purpose. They taught us what was acceptable behavior.

Stories weren’t originally intended to entertain. They were created to pass along knowledge and cultural wisdom. Entertainment came later, when we discovered that the more entertaining the story, the more effective it was at its primary purpose: education. And this is how the researchers used stories. Robots can’t be entertained, but they can be educated.

At some point in the last century, we focused on the entertainment value of stories over education and, in doing so, rotated our moral compass 180 degrees. If you look at what is most likely to titillate, sin almost always trumps sainthood. Review that list of virtues and vices and you’ll see that the stories of our current popular culture focus on vice – that list could be the programming handbook for any Hollywood producer. I don’t intend this as a sermon – I enjoy Game of Thrones as much as the next person. I simply state it as a fact. Our popular culture – and the amplification that comes from it – is focused almost exclusively on the worst aspects of human nature. If robots were receiving their behavioral instruction through these stories, they would be programmed to be psychopathic moral degenerates.

For most of us, we can absorb this continual stream of anti-social programming and not be affected by it. We still know what is right and what is wrong. But in a world where it’s the “black swan” outliers that grab the news headlines, we have to think about the consequences that reach beyond the mainstream. When we abandon the moral purpose of stories and focus on their entertainment aspect, are we also abandoning a commonly understood value landscape?

If you’re looking for absolute answers here, you won’t find them. That’s just not the world we live in. And am I naïve when I say the stories we chose to tell may have an influence on isolated violent events such as happened in Orlando? Perhaps. Despite all our best intentions, Omar Mateen might still have gone horribly offside.

But all things and all people are, to some extent, products of their environment. And because we in media and advertising are storytellers, we set that cultural environment. That’s our job. Because of this, I believe we have a moral obligation. We have to start paying more attention to the stories we tell.


Where Should Science Live?

Science, like almost every other aspect of our society, is in the midst of disruption. In that disruption, the very nature of science may be changing. And that raises a number of very pertinent questions.

Two weeks ago I took Malcolm Gladwell to task for oversimplifying science for the sake of a good story. I offered Duncan Watts as a counter-example. One reader, Ted Wright, came to Gladwell’s defence and, in the process of doing so, took a shot at the reputation of Watts, saying with tongue firmly in cheek, “people who are academically lauded often leave an Ivy League post, in this case at Columbia, to go be a data scientist at Yahoo.”

Mr. Wright (yes, I have finally found Mr. Wright) implies this is a bad thing, a step backwards, or even an academic “selling out.” (Note: Watts is now at Microsoft, where he’s a principal researcher.)

Since Wright offered his comment, I’ve been thinking about it. Where should science live? Is it a sell-out when science happens at private companies? Should it be the sole domain of universities? I’m not so sure.

Watts is a sociologist. His area of study is network structures and system behaviors in complex environments. His past studies tend to involve analyzing large data sets to identify patterns of behavior. There are few companies that could provide larger or more representative data sets than Microsoft.


Peter Norvig, Director of Research at Google

One such company is Google. And there are many renowned scientists working there. One of them is Peter Norvig, Google’s Director of Research. In a blog post a few years ago where he took issue with Chris Anderson’s Wired article signaling the “End of Theory”, Norvig said:

“(Chris Anderson) correctly noted that the methodology for science is evolving; he cites examples like shotgun sequencing of DNA. Having more data, and more ways to process it, means that we can develop different kinds of theories and models. But that does not mean we throw out the scientific method. It is not “The End of Theory.” It is an important change (or addition) in the methodology and set of tools that are used by science, and perhaps a change in our stereotype of scientific discovery.”

Science as we have known it has always been reductionist in nature. It requires simplification down to a controllable set of variables. It has also relied on a rigorous framework that was most at home in the world of academia. But as Norvig notes, that isn’t necessarily the only viable option now. We live in a world of complexity, and the locked-down, reductionist approach to science doesn’t really do that world justice. This is particularly true in areas like sociology, which attempts to understand cultural complexity in context. You can’t really do that in a lab.

But perhaps you can do it at Google. Or Microsoft. Or Facebook. These places have reams of data and all the computing power in the world to crunch it. These places precisely meet Norvig’s definition of the evolving methodology of science: “More data, and more ways to process it.”

If that’s the trade-off Duncan Watts decided to make, one can certainly understand it. Scientists follow the path of greatest promise. And when it comes to science that depends on data and processing power, that promise is best found in places like Microsoft and Google.


Ex Machina’s Script for Our Future

One of the more interesting movies I’ve watched in the past year has been Ex Machina. Unlike the abysmally disappointing Transcendence (how can you screw up Kurzweil – for God’s sake), Ex Machina is a tightly directed, frighteningly claustrophobic sci-fi thriller that peels back the moral layers of artificial intelligence one by one.

If you haven’t seen it, do so. But until you do, here’s the basic set up. Caleb Smith (Domhnall Gleeson) is a programmer at a huge Internet search company called Blue Book (think Google). He wins a contest where the prize is a week spent with the CEO, Nathan Bateman (Oscar Isaac) at his private retreat. Bateman’s character is best described as Larry Page meets Steve Jobs meets Larry Ellison meets Charlie Sheen – brilliant as hell but one messed up dude. It soon becomes apparent that the contest is a ruse and Smith is there to play the human in an elaborate Turing Test to determine if the robot Ava (Alicia Vikander) is capable of consciousness.

About halfway through the movie, Bateman confesses to Smith the source of Ava’s intelligence “software.” It came from Blue Book’s own search data:

“It was the weird thing about search engines. They were like striking oil in a world that hadn’t invented internal combustion. They gave too much raw material. No one knew what to do with it. My competitors were fixated on sucking it up, and trying to monetize via shopping and social media. They thought engines were a map of what people were thinking. But actually, they were a map of how people were thinking. Impulse, response. Fluid, imperfect. Patterned, chaotic.”

As a search behaviour guy – that sounded like more fact than fiction. I’ve always thought search data could reveal much about how we think. That’s why John Motavalli’s recent column, Google Looks Into Your Brain And Figures You Out, caught my eye. Here, it seemed, fiction was indeed becoming fact. And that fact is, when we use one source for a significant chunk of our online lives, we give that source the ability to capture a representative view of our related thinking. Google and our searching behaviors or Facebook and our social behaviors both come immediately to mind.

Motavalli’s reference to Dan Ariely’s post about micro-moments is just one example of how Google can peek under the hood of our noggins and start to suss out what’s happening in there. What makes this either interesting or scary as hell, depending on your philosophic bent, is that Ariely’s area of study is not our logical, carefully processed thoughts but our subconscious, irrational behaviors. And when we’re talking artificial intelligence, it’s that murky underbelly of cognition that is the toughest nut to crack.

I think Ex Machina’s writer/director Alex Garland may have tapped something fundamental in the little bit of dialogue quoted above. If the data we willingly give up in return for online functionality provides a blueprint for understanding human thought, that’s a big deal. A very big deal. Ariely’s blog post talks about how a better understanding of micro-moments can lead to better ad targeting. To me, that’s kind of like using your new Maserati to drive across the street and visit your neighbor – it seems a total waste of horsepower. I’m sure there are higher things we can aspire to than figuring out a better way to deliver a hotels.com ad. Both Google and Facebook are full of really smart people. I’m pretty sure someone there is capable of connecting the dots between true artificial intelligence and their own brand of world domination.

At the very least, they could probably whip up a really sexy robot.


Why Marketers Love Malcolm Gladwell … and Why They Shouldn’t

Marketers love Malcolm Gladwell. They love his pithy, reductionist approach to popular science – his tendency to sacrifice verity for the sake of a good “Just-so” story. And in doing this, what is Malcolm Gladwell but a marketer at heart? No wonder our industry is ga-ga over him. We love anyone who can oversimplify complexity down to the point where it can be appropriated as yet another marketing “angle”.

Take the entire influencer advertising business, for instance. Earlier this year, I saw an article saying more and more brands are expanding their influencer marketing programs. We are desperately searching for that holy nexus where social media and those super-connected “mavens” meet. While the idea of influencer marketing has been around for a while, it really gained steam with the release of Gladwell’s “The Tipping Point.” And that head of steam seems to have been building since the release of the book in 2000.

As others have pointed out, Gladwell has made a habit of taking one narrow perspective that promises to “play well” with the masses, supporting it with just enough science to make it seem plausible and then enshrining it as a “Law.”

Take “The Law of the Few”, for instance, from The Tipping Point: “The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts.” You could literally hear the millions of ears attached to marketing heads “perk up” when they heard this. “All we have to do,” the reasoning went, “is reach these people, plant a favorable opinion of our product and give them the tools to spread the word. Then we just sit back and wait for the inevitable epidemic to sweep us to new heights of profitability.”

Certainly commercial viral cascades do happen. They happen all the time. And, in hindsight, if you look long and hard enough, you’ll probably find what appears to be a “maven” near ground-zero. From this perspective, Gladwell’s “Law of the Few” seems to hold water. But that’s exactly the type of seductive reasoning that makes “Just So” stories so misleading. You mistakenly believe that because it happened once, you can predict when it’s going to happen again. Gladwell’s indiscriminate use of the term “Law” contributes to this common deceit. A law is something that is universally applicable and constant. When a law governs something, it plays out the same way, every time. And this is certainly not the case in social epidemics.


Duncan Watts

If Malcolm Gladwell’s books have become marketing and pop-culture bibles, the same, sadly, cannot be said for Duncan Watts’ books. I’m guessing almost everyone reading this column has heard of Malcolm Gladwell. I further guess that almost none of you have heard of Duncan Watts. And that’s a shame. But it’s completely understandable.

Duncan Watts describes his work as determining the “role that network structure plays in determining or constraining system behavior, focusing on a few broad problem areas in social science such as information contagion, financial risk management, and organizational design.”

You started nodding off halfway through that sentence, didn’t you?

As Watts shows in his books, “Firms spent great effort trying to find ‘connectors’ and ‘mavens’ and to buy the influence of the biggest influencers, even though there was never causal evidence that this would work.” But the work required to get to this point is not trivial. While he certainly aims at a broad audience, Watts does not read like Gladwell. His answers are not self-evident. There is no pithy “bon mot” that causes our neural tumblers to satisfyingly click into place. Watts’ explanations are complex, counter-intuitive, occasionally ambiguous and often non-conclusive – just like the world around us. As he explains in his book “Everything Is Obvious: *Once You Know the Answer,” it’s easy to look backwards to find causality. But it’s not always right.

Marketers love simplicity. We love laws. We love predictability. That’s why we love Gladwell. But in following this path of least resistance, we’re straying further and further from the real world.

Decoupling Our Hunch-Making Mechanism

Humans are hunch-making machines. We’re gloriously good at it. In fact, no one and no thing is better at coming up with a hunch. It’s what sets us apart on our planet and, thus far, nothing we’ve invented has proven better suited to strike the spark of intuition.

We can seemingly draw speculative guesses out of thin air – literally. From all the noise that surrounds us, we recognize potential patterns and infer significance. Scientists call them hypotheses. Artists call them artistic inspirations. Entrepreneurs call them innovations.

Whatever the label, we’re not exactly sure what happens. Mihaly Csikszentmihalyi (which, in case you’re wondering, is pronounced Me-high Cheek-sent-me-high) explored where these hunches come from in his fascinating book, Creativity: The Psychology of Discovery and Invention. But despite the collective curiosity about the source of human creativity, the jury remains out. The mechanism that turns these very human gears and sparks the required connections between our synapses remains a mystery.

We’re good at making hunches. But we suck at qualifying those hunches. The reason is that we rush a hunch straight into becoming a belief. And that’s where things go off the rails. A hunch is a guess about what might be true. A belief is what we deem to be true. We go straight from what is one of many possible scenarios to the only scenario we execute against. The entire scientific method was created to counteract this very human tendency – forcing rational analysis of the hunches we churn out.

Philip Tetlock’s work on expertise in prediction shows how fragile this tendency to go from hunch to belief can make us. After all, a prediction is nothing more than a hunch about what might be. He referred to Isaiah Berlin’s 1953 essay, “The Hedgehog and the Fox,” in which Berlin quotes the ancient Greek poet Archilochus: “a fox knows many things, but a hedgehog one important thing.” Taking some poetic license, you could say that a hedgehog is more prone to moving straight from hunch to belief, while a fox tends to evaluate her hunches against multiple sources. Tetlock found that when it came to the accuracy of predictions, it was better to be a fox than a hedgehog. In some cases, much better.

But Tetlock also found that when it comes down to “crunching hunches,” machines tend to beat man hands down. It’s because humans have been programmed over thousands of generations to trust our hunches, and no matter how much we fight it, we are born to treat our hunches as fact. Machines bear no such baggage.

This is an example of Moravec’s Paradox – the things that seem simple for humans are amazingly complex for machines. And vice versa. As artificial intelligence pioneer Marvin Minsky once recognized, it’s the things we do unconsciously that represent the biggest challenges for artificial intelligence: “In general, we’re least aware of what our minds do best.” Machines may never be as good as humans at creating a hunch – or, at least, we’re certainly not there yet. But machines have already outstripped humans in the ability to empirically analyze and validate multiple options.

Fellow Online Spin columnist Kaila Colbin posited this in her last column, “When Watson Comes for Your Job, Give it to Him.” As she points out, IBM’s Watson can kick any human ass when it comes to reviewing case law – or plowing through the details required for an accurate medical diagnosis – or helping students prepare for an upcoming exam. But Watson isn’t very good at coming up with hunches. It’s because hunches aren’t rational. They’re inspirational. And machines aren’t fluent in inspiration. Not yet, anyway.

Maybe that’s why – even in something as logical as chess – the current champion isn’t a machine, or a human. It’s a combination of both. As American economist Tyler Cowen, author of Average is Over, explained in a blog post, a “striking percentage of the best or most accurate chess games of all time have been played by man-machine pairs.” Cowen shows four ways a man-machine team can outperform either partner working alone, and they all have to do with leveraging the respective strengths of each. Humans use intuition to create hunches, and then harness the power of the machine to analyze relevant options.

Hunches have served humans very well. They will continue to do so. The trick is to decouple those hunches from the belief-making mechanism that has historically accompanied them. That’s where we should let machines take over.

How We Might Search (On the Go)

As I mentioned in last week’s column – Mediative has just released a new eye-tracking study on mobile devices. And it appears that we’re still conditioned to look for the number one organic result before clicking on our preferred destination.

But…

It appears that things might be in the process of changing. This makes sense. Searching on a mobile device is – and should be – significantly different from searching on a desktop. We have different intents. We are interacting with a different platform. Even the way we search is different.

Searching on a desktop is all about consideration. It’s about filtering and shortlisting multiple options to find the best one. Our search strategies are still carrying a significant amount of baggage from what search was – an often imperfect way to find the best place to get more information about something. That’s why we still look for the top organic listing. For some reason we still subconsciously consider this the gold standard of informational relevancy. We measure all other results against it. That’s why we make sure we reserve one slot from the three to five available in our working memory (I have found that the average person considers about 4 results at a time) for its evaluation.

But searching on a mobile device isn’t about filtering content. For one thing, it’s absolutely the wrong platform to do this with. The real estate is too limited. For another, it’s probably not what we want to spend our time doing. We’re on the go and trying to get stuff done. This is not the time for pausing and reflecting. This is the time to find what we’re looking for and use it to take action.

This all makes sense, but the fact remains that the way we search is a product of habit. It’s a conditioned subconscious strategy that was largely formed on the desktop. Most of us haven’t done enough searching on mobile devices yet to abandon our desktop-derived strategies and create new mobile-specific ones. So, our subconscious starts playing out the desktop script and only varies from it when it looks like it’s not going to deliver acceptable results. That’s why we’re still looking for that number one organic listing to benchmark against.

There were a few findings in the Mediative study that indicate that our desktop habits may be starting to slip on mobile devices. But before we review them, let’s do a quick review of how habits play out. Habits are the brain’s way of cutting down on thinking. If we do something over and over again and get acceptable results, we store that behavior as a habit. The brain goes on autopilot so we don’t have to think our way through a task with predictable outcomes.

If, however, things change, either in the way the task plays out or in the outcomes we get, the brain reluctantly takes control again and starts thinking its way through the task. I believe this is exactly what’s happening with our mobile searches. The brain desperately wants to use its desktop habits, but the results are falling below our threshold of acceptability. That means we’re all somewhere in the process of rebuilding a search strategy more suitable for a mobile device.

Mediative’s study shows me a brain that’s caught between the desktop searches we’ve always done and the mobile searches we’d like to do. We still feel we should scroll to see at least the top organic result. But as mobile search results become more aligned with our intent – which is typically to take action right away – we are being sidetracked from our habitual behaviors, and our brains kick into gear to take control. The result is that when Google shows search elements that are probably more aligned with our intent – local results, knowledge graphs, or even highly relevant ads with logical ad extensions (such as a “call” link) – we lose confidence in our habits. We still scroll down to check out the organic result, but we probably scroll back up and click on the more relevant result.

All this switching back and forth from habitual to engaged interaction with the results ends up exacting a cost in terms of efficiency. We take longer to conduct searches on a mobile device, especially if that search shows other types of results near the top. In the study, participants spent an extra 2 seconds or so scrolling between the presented results (7.15 seconds for varied results vs. 4.95 seconds for organic only results). And even though they spent more time scrolling, more participants ended up clicking on the mobile relevant results they saw right at the top.

The trends I’m describing here are subtle – often playing out in a couple of seconds or less. And you might say that it’s no big deal. But habits are always a big deal. The fact that we’re still relying on desktop habits that were laid down over the past two decades shows how persistent they can be. If I’m right and we’re finally building new habits specific to mobile devices, those habits could dictate our search behaviors for a long time to come.

In Search – Even in Mobile – Organic Still Matters

I told someone recently that I feel like Rick Astley. You know, the guy who had the monster hit “Never Gonna Give You Up” in 1987 and is still trading on it almost 30 years later? He even enjoyed a brief resurgence of viral fame in 2007, when the world discovered what it meant to be “Rickrolled.”

For me, my “Never Gonna Give You Up” is the Golden Triangle eye-tracking study we released in 2005. It’s my one-hit wonder (to be fair to Astley, he did have a couple of other hits, but you get the idea). And yes, I’m still talking about it.

The Golden Triangle as we identified it existed because people were drawn to look at the number one organic listing. That’s an important thing to keep in mind. In today’s world of ad blockers and teeth-gnashing about the future of advertising, there is probably no purer or more controllable environment than the search results page. Creativity is stripped to the bare minimum. Ads have to be highly relevant and non-promotional in nature. Interaction is restricted to the few seconds required to scan and click. If there were anywhere ads might be tolerated, it’s on the search results page.

But…

If we fully trusted ads – especially ones as benign as those that show up on search results – there would have been no Golden Triangle. It only existed because we needed to see that top organic result, and dragging our eyes down to it formed one side of the triangle.

Fast forward almost 10 years. Mediative, which is the current incarnation of my old company, released a follow-up study two years ago. While the Golden Triangle had definitely morphed into a more linear scan, the motivation remained: people wanted to scan down to see at least one organic listing. They didn’t trust ads then. They don’t trust ads now.

Google has used this need to anchor our scanning with the top organic listing to introduce a greater variety of results into the top “hot zone” – where scanning is the greatest. Now, depending on the search, there is likely to be at least a full screen of various results – including ads, local listings, reviews or news items – before your eyes hit that top organic web result. Yet, we seem to be persistent in our need to see it. Most people still make the effort to scroll down, find it and assess its relevance.

It should be noted that all of the above refers to desktop search. But almost a year ago, Google announced that – for the first time ever – more searches happened on a mobile device than on a desktop.

Mediative just released a new eye-tracking study. (Note: I was not involved at all with this one.) This time, they dove into scan patterns on mobile devices. Given the limited real estate, and the fact that for many popular searches you would have to consciously scroll down at least a couple of times to see the first organic result, did users become more accepting of ads?

Nope. They just scanned further down!

The study’s first finding was that the #1 organic listing still captures the most click activity, but it takes users almost twice as long to find it compared to a desktop.

The study’s second finding was that even though organic is still important, position matters more than ever. Users will make the effort to find the top organic result and, once they do, they’ll generally scan the top 4 results, but if they find nothing relevant, they probably won’t scan much further. In the study, 92.6% of the clicks happened above the 4th organic listing. On a desktop, 84% of the clicks happened above the number 4 listing.

The third finding shows an interesting paradox that’s emerging on mobile devices: we’re carrying our search habits from the desktop over with us – especially our need to see at least one organic listing. The average time to scan the top sponsored listing was only 0.36 seconds, meaning that people checked it out immediately after orienting themselves to the mobile results page. But for those who clicked the listing, the average time to click was 5.95 seconds. That’s almost 50% longer than the average time to click on a desktop search. When organic results are pushed down the page because of other content, it’s taking us longer before we feel confident enough to make our choice. We still need to anchor our relevancy assessment with that top organic result, and that’s causing us to be less efficient in our mobile searches than we are on the desktop.

The study also indicated that these behaviors could be in flux. We may be adapting our search strategies for mobile devices, but we’re just not quite there yet. I’ll touch on this in next week’s column.