A Possibly Premature Post-Mortem on Yahoo

Last Thursday, Yahoo held its annual shareholder meeting. At that meeting, CEO Marissa Mayer dealt the company a doubled-down kiss of death. She stated that the goals of the board are fully aligned with one clear priority: “delivering shareholder value to all of you.” She further mentioned, when discussing the divestiture of all that once was Yahoo, that she’s “been very heartened by the level of interest in Yahoo. It validates our business processes as well as our achievements to date.”

It’s fancier language, but it’s basically the same as the butcher saying, “This cow is no longer viable as a cow, so I’m looking at it as a collection of rump roasts, T-bones and hamburger. I’m hoping we have more of the former and less of the latter.”

I first encountered Yahoo in 1995, shortly after its brief life as Jerry and David’s Guide to the World Wide Web. I think it was probably still parked on Stanford’s servers. At the time, the Internet was like the world’s biggest second-hand store – a huge collection that was 95% junk, 5% useful stuff, with no overarching order or organization. David Filo and Jerry Yang’s site was one of the very first to try to provide that order.

As an early search marketer in the run-up to the dot-com bubble, you couldn’t ignore the Yahoo directory. The Yahooligans walked with typical Valley swagger. Hubris was never in short supply. They were the cocks of the walk and they knew it.

It was a much-humbled post-bubble Yahoo that I visited in 2004. They had gotten their search asses soundly kicked by Google, which was by then powering their non-directory results. The age of the curated directory was gone, replaced by the scalability of algorithmic search.

As a culture, the Yahooligans were struggling with the mixed management signals that came from then-CEO Terry Semel and his team. Sunnyvale was clouded in a purple haze. The Yahooligans didn’t know who the hell they were or what they were supposed to do. Were they a tech company or an entertainment company? The answer, as it turned out, was neither.

I met with the remnants of the once mighty search team to talk about user behaviors. I didn’t know it at the time, but Yahoo was gearing up to relaunch their search service. A much vilified paid inclusion program would also be debuted. It was one of many ill-fated attempts to find the next “Big Thing.”

Marissa Mayer continues to put a brave face on it, but the Yahoo engine ran out of steam at least a decade and a half ago. What amazes me is how long the ride has been. There is a message here for tech-based companies.

If you dig down to the critical incubation period of any tech company, you find a recurring pattern. Some technologically mediated connection allows people to do something they were previously unable to do. This releases pent-up market demand. It’s like a thin sliver trying to poke through a water balloon. If successful, this released market demand creates an immediate and sizable audience for whoever introduced the innovation. Yahoo’s directory, Google’s PageRank, Facebook’s “Facemash”, AirBnB’s accommodation directory, Uber’s ridesharing app – they all share the same modus operandi: a technological step forward creates a new audience and market opportunity.

In hindsight, once you strip away all the hype, it’s amazing how tenuous and unimpressive these technological advances are. Luck and timing typically play a huge part. If the conditions are right, the sliver eases through the balloon’s membrane and for a time, there is a steady stream of opportunity.

The problem is that as easily as these markets form, they can just as easily evaporate. When the technological advantage passes to the next competitor, as it did when Yahoo gave way to Google, all that’s left is the audience. When you consider that Yahoo has been coasting on this audience for close to two decades, it’s rather amazing that Mayer still has any assets at all to sell.


What Would a “Time Well Spent” World Look Like?

I’m worried about us. And it’s not just because we seem bent on death by ultra-conservative parochialism and xenophobia. I’m worried because I believe we’re spending all our time doing the wrong things. We’re fiddling while Rome burns.

Technology is our new drug of choice and we’re hooked. We’re fascinated by the trivial. We’re dumping huge gobs of time down the drain playing virtual games, updating social statuses, clicking on clickbait and watching videos of epic wardrobe malfunctions. Humans should be better than this.

It’s okay to spend some time doing nothing. The brain needs some downtime. But something, somewhere has gone seriously wrong. We are now spending the majority of our lives doing useless things. TV used to be the biggest time suck, but in 2015, for the first time ever, the boob tube was overtaken by time spent with mobile apps. According to a survey conducted by Flurry, in the second quarter of 2015 we spent about 2.8 hours per day watching TV. And we spent 3.3 hours on mobile apps. That’s a grand total of 6.1 hours per day, or one third of the time we spend awake. Yes, both things can happen at the same time, so there is undoubtedly overlap, but still – that’s a scary-assed statistic!

And it’s getting worse. In a previous Flurry poll conducted in 2013, we spent a total of 298 hours between TV and mobile apps versus 366 hours in 2015. That’s a 22.8% increase in just two years. We’re spending way more time doing nothing. And those totals don’t even include things like time spent in front of a gaming console. For kids, tack on an average of another 10 hours per week and you can double that for hard-core male gamers. Our addiction to gaming has even led to death in extreme cases.
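For the numerically inclined, the arithmetic behind those Flurry figures checks out. Here’s a minimal sketch that reproduces the numbers quoted above (the ~18 waking hours per day is my assumption, used to arrive at the “one third of the time we spend awake” claim):

```python
# Daily screen time from Flurry's Q2 2015 survey, as quoted above
tv_hours = 2.8                      # hours per day watching TV
app_hours = 3.3                     # hours per day in mobile apps
total_daily = tv_hours + app_hours  # combined daily screen time

# Assumption: roughly 18 waking hours in a day
share_of_waking = total_daily / 18

# Quarterly totals quoted above: 2013 vs. 2015
growth = (366 - 298) / 298          # two-year increase

print(f"Total: {total_daily:.1f} hours/day ({share_of_waking:.0%} of waking hours)")
print(f"Increase from 2013 to 2015: {growth:.1%}")
```

Run it and you get 6.1 hours per day, about a third of waking time, and a 22.8% jump in two years, matching the figures in the text.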

Even in the wildest stretches of imagination, this can’t qualify as “time well spent.”

We’re treading on very dangerous and very thin ice here. And we no longer have history to learn from. It’s the first time we’ve ever encountered this. Technology is now only one small degree of separation from plugging directly into the pleasure center of our brains. And science has proven that a good shot of self-administered dopamine can supersede everything – water, food, sex. True, these experiments were conducted on rats – primarily because it’s considered unethical to replicate them fully with humans – but are you willing to risk the entire future of mankind on the bet that we’re really that much smarter than rats?

My fear is that technology is becoming a slightly more sophisticated lever we push to get that dopamine rush. And developers know exactly what they’re doing. They are making that lever as addictive as possible. They are pushing us towards the brink of death by technological lobotomization. They’re lulling us into a false sense of security by offering us the distraction of viral videos, infinitely scrolling social notification feeds and mobile game apps. It’s the intellectual equivalent of fast food – quite literally “brain candy.”

Here the hypocrisy of for-profit interest becomes evident. The corporate response typically rests on individual freedom of choice and the consumer’s ability to exercise willpower. “We are just giving them what they’re asking for,” touts the stereotypical PR flack. But if you have an entire industry with reams of developers and researchers all aiming to hook you on their addictive product, and your only defense is the same faulty neurological defense system that has already fallen victim to fast food, porn, big tobacco, the alcohol industry and the $350 billion illegal drug trade, where would you be placing your bets?

Technology should be our greatest achievement. It should make us better, not turn us into a bunch of lazy screen-addicted louts. And it certainly could be this way. What would it mean if technology helped us spend our time well? This is the hope behind the Time Well Spent Manifesto. Tristan Harris, a design ethicist and product philosopher at Google, is one of the co-directors. Here is an excerpt from the manifesto:

We believe in a new kind of design, that lets us connect without getting sucked in. And disconnect, without missing something important.

And we believe in a new kind of economy that’s built to help us spend time well, where products compete to help us live by our values.

I believe in the Manifesto. I believe we’re being willingly led down a scary and potentially ruinous path. Worst of all, I believe there is nothing we can – or will – do about it. Problems like this are seldom solved by foresight and good intentions. Things only change after we drive off the cliff.

The problem is that most of us never see it coming. And we never see it coming because we’re too busy watching a video of masturbating monkeys on YouTube.

Dad: Unplugged

I went off the grid last week. It wasn’t intentional. First, we changed ISPs and the connectivity we take for granted had a hiccup. We were soon back online, but it was irritating nonetheless.

As luck would have it, it was a warning of what was to come. The main logic board on my laptop packed it in the next day, and I was once more cut off. I realized how dependent I am on that little 10-by-15-inch slab of brushed aluminum and electronics. My world was unplugged. It felt like a very big deal.

Given that I felt like my right arm was lopped off, you would think this might impact the quality of my Father’s Day. And it did. But it was all for the better. I didn’t have to check emails. There were no task reminders beeping. No Google searches itching to be launched. No Facebook posts to like. I was off the grid. And the day was glorious.

I realized that the things my daughters were thanking me for last Sunday had little to do with the thousands and thousands of hours I have spent online in my life. They seem to appreciate my sense of humor. That predated the Internet by at least three decades. They like that I’m fairly calm and levelheaded. To be honest, being online generally has a negative correlation with my current state of calmness. I’m a pretty good listener, but I’m a much better listener when my attention is not being distracted by a nearby screen. I try to be thoughtful. I’ve previously gone on record as saying that I fear the thoughtfulness of our species is eroding in the world of wired instant gratification. And finally, I try to be a good and ethical person. While being online helps inform those ethics, they are mostly the product of that off-line thoughtfulness I try to set time aside for.

I certainly felt the pain of being off the grid, but I realized that much of the urgency that caused that pain was a by-product of my being online. I think technology is creating its own cloud of noise that continually intrudes on our lives. These things all seem urgent, but are they important? Are we ignoring other, more important things because of the incessant noise of our digital lives?

If we sat down and made a list of the values that we hold to be important to us, how many of these would require being connected? Would being online make us a better parent? A better husband or wife? A better son or daughter? Probably not.

Technology should be a tool we use to help express the person we are and what we hold to be valuable and true. Technology should not define us. It should not be its own truth. It should not create its own values. But when technology becomes as ubiquitous as it has become, I fear the line is becoming permanently blurred. Our being online may be changing who we are. I’m pretty sure none of us intends to be distracted, short-tempered, disconnected or intellectually shallow, but the world is increasingly being filled with such people. I sometimes am one of these people. And I’m usually online when it happens.

This Sunday reminded me that there are things that can wait. This includes about 99% of what we do online.

And there are things that can’t wait. Like children who grow up way too fast. My kids are now 22 and 20. I’m pretty sure neither of them wish their dad had spent more time doing things on his computer.


Can Stories Make Us Better?

In writing this column, I often put ideas on the shelf for a while. Sometimes, world events conspire to make one of these shelved ideas suddenly relevant. This happened this past weekend.

The idea that caught my eye some months ago was an article that explored whether robots could learn morality by reading stories. On the face of it, it was mildly intriguing. But early Sunday morning as the heartbreaking news filtered to me from Orlando, a deeper connection emerged.

When we speak of unintended consequences, as we have before, the media amplification of acts of terror is one of them. The staggeringly sad fact is that shocking casualty numbers have their own media value. And that, said one analyst who was commenting on ways to deal with terrorism, is a new reality we have to come to terms with. When we in the media business make stories newsworthy, we assign worth not just for news consumers but also for newsmakers – those troubled individuals who have the motivation and the means to blow apart the daily news cycle.

This same analyst, when asked how we deal with terrorism, made the point that you can’t prevent lone acts of terrorism. The only answer is to use that same network of cultural connections we use to amplify catastrophic events to create an environment that dampens rather than intensifies violent impulses. We in the media and advertising industries have to use our considerable skills in setting cultural contexts to create an environment that reduces the odds of a violent outcome. And sadly, this is a game of odds. There are no absolute answers here – there is just a statistical lowering of the curve. Sometimes, despite your best efforts, the unimaginable still happens.

But how do we use the tools at our disposal to amplify morality? Here, perhaps the story I shelved some months ago can provide some clues.

In the study from Georgia Tech, Mark Riedl and Brent Harrison used stories as models of acceptable morality. For most of human history, popular culture included at least an element of moral code. We encoded the values we held most dear into our stories. It provided a base for acceptable behavior, either through positive reinforcement of commonly understood virtues (prudence, justice, temperance, courage, faith, hope and charity) or warnings about universal vices (lust, gluttony, greed, sloth, wrath, envy and pride). Sometimes these stories had religious foundations, sometimes they were secular morality fables but they all served the same purpose. They taught us what was acceptable behavior.

Stories were never originally intended to entertain. They were created to pass along knowledge and cultural wisdom. Entertainment came later, when we discovered that the more entertaining the story, the more effective it was at its primary purpose: education. And this is how the researchers used stories. Robots can’t be entertained, but they can be educated.

At some point in the last century, we focused on the entertainment value of stories over education and, in doing so, rotated our moral compass 180 degrees. If you look at what is most likely to titillate, sin almost always trumps sainthood. Review that list of virtues and vices and you’ll see that the stories of our current popular culture focus on vice – that list could be the programming handbook for any Hollywood producer. I don’t intend this as a sermon – I enjoy Game of Thrones as much as the next person. I simply state it as a fact. Our popular culture – and the amplification that comes from it – is focused almost exclusively on the worst aspects of human nature. If robots were receiving their behavioral instruction through these stories, they would be programmed to be psychopathic moral degenerates.

For most of us, we can absorb this continual stream of anti-social programming and not be affected by it. We still know what is right and what is wrong. But in a world where it’s the “black swan” outliers that grab the news headlines, we have to think about the consequences that reach beyond the mainstream. When we abandon the moral purpose of stories and focus on their entertainment aspect, are we also abandoning a commonly understood value landscape?

If you’re looking for absolute answers here, you won’t find them. That’s just not the world we live in. And am I naïve when I say the stories we choose to tell may have an influence on isolated violent events such as what happened in Orlando? Perhaps. Despite all our best intentions, Omar Mateen might still have gone horribly offside.

But all things and all people are, to some extent, products of their environment. And because we in media and advertising are storytellers, we set that cultural environment. That’s our job. Because of this, I believe we have a moral obligation. We have to start paying more attention to the stories we tell.


Where Should Science Live?

Science, like almost every other aspect of our society, is in the midst of disruption. In that disruption, the very nature of science may be changing. And that is raising a number of very pertinent questions.

Two weeks ago I took Malcolm Gladwell to task for oversimplifying science for the sake of a good story. I offered Duncan Watts as a counter example. One reader, Ted Wright, came to Gladwell’s defence and in the process of doing so, took a shot at the reputation of Watts, saying with tongue firmly in cheek, “people who are academically lauded often leave an Ivy League post, in this case at Columbia, to go be a data scientist at Yahoo.”

Mr. Wright (yes, I have finally found Mr. Wright) implies this is a bad thing, a step backwards, or even an academic “selling out.” (Note: Watts is now at Microsoft, where he’s a principal researcher.)

Since Wright offered his comment, I’ve been thinking about it. Where should science live? Is it a sellout when science happens in private companies? Should it be the sole domain of universities? I’m not so sure.

Watts is a sociologist. His area of study is network structures and system behaviors in complex environments. His past studies tend to involve analyzing large data sets to identify patterns of behavior. There are few companies that could provide larger or more representative data sets than Microsoft.


Peter Norvig, Director of Research at Google

One such company is Google. And there are many renowned scientists working there. One of them is Peter Norvig, Google’s Director of Research. In a blog post a few years ago where he took issue with Chris Anderson’s Wired article signaling the “End of Theory”, Norvig said:

“(Chris Anderson) correctly noted that the methodology for science is evolving; he cites examples like shotgun sequencing of DNA. Having more data, and more ways to process it, means that we can develop different kinds of theories and models. But that does not mean we throw out the scientific method. It is not “The End of Theory.” It is an important change (or addition) in the methodology and set of tools that are used by science, and perhaps a change in our stereotype of scientific discovery.”

Science as we have known it has always been reductionist in nature. It requires simplification down to a controllable set of variables. It has also relied on a rigorous framework that was most at home in the world of academia. But as Norvig notes, that isn’t necessarily the only viable option now. We live in a world of complexity, and the locked-down, reductionist approach to science, where a certain amount of simplification is required, doesn’t really do this world justice. This is particularly true in areas like sociology, which attempts to understand cultural complexity in context. You can’t really do that in a lab.

But perhaps you can do it at Google. Or Microsoft. Or Facebook. These places have reams of data and all the computing power in the world to crunch it. These places precisely meet Norvig’s definition of the evolving methodology of science: “More data, and more ways to process it.”

If that’s the trade-off Duncan Watts decided to make, one can certainly understand it. Scientists follow the path of greatest promise. And when it comes to science that depends on data and processing power, both are increasingly found in places like Microsoft and Google.


Ex Machina’s Script for Our Future

One of the more interesting movies I’ve watched in the past year has been Ex Machina. Unlike the abysmally disappointing Transcendence (how can you screw up Kurzweil – for God’s sake), Ex Machina is a tightly directed, frighteningly claustrophobic sci-fi thriller that peels back the moral layers of artificial intelligence one by one.

If you haven’t seen it, do so. But until you do, here’s the basic set up. Caleb Smith (Domhnall Gleeson) is a programmer at a huge Internet search company called Blue Book (think Google). He wins a contest where the prize is a week spent with the CEO, Nathan Bateman (Oscar Isaac) at his private retreat. Bateman’s character is best described as Larry Page meets Steve Jobs meets Larry Ellison meets Charlie Sheen – brilliant as hell but one messed up dude. It soon becomes apparent that the contest is a ruse and Smith is there to play the human in an elaborate Turing Test to determine if the robot Ava (Alicia Vikander) is capable of consciousness.

About halfway through the movie, Bateman confesses to Smith the source of Ava’s intelligence “software.” It came from Blue Book’s own search data:

“It was the weird thing about search engines. They were like striking oil in a world that hadn’t invented internal combustion. They gave too much raw material. No one knew what to do with it. My competitors were fixated on sucking it up, and trying to monetize via shopping and social media. They thought engines were a map of what people were thinking. But actually, they were a map of how people were thinking. Impulse, response. Fluid, imperfect. Patterned, chaotic.”

As a search behaviour guy – that sounded like more fact than fiction. I’ve always thought search data could reveal much about how we think. That’s why John Motavalli’s recent column, Google Looks Into Your Brain And Figures You Out, caught my eye. Here, it seemed, fiction was indeed becoming fact. And that fact is, when we use one source for a significant chunk of our online lives, we give that source the ability to capture a representative view of our related thinking. Google and our searching behaviors or Facebook and our social behaviors both come immediately to mind.

Motavalli’s reference to Dan Ariely’s post about micro-moments is just one example of how Google can peek under the hood of our noggins and start to suss out what’s happening in there. What makes this either interesting or scary as hell, depending on your philosophic bent, is that Ariely’s area of study is not our logical, carefully processed thoughts but our subconscious, irrational behaviors. And when we’re talking artificial intelligence, it’s that murky underbelly of cognition that is the toughest nut to crack.

I think Ex Machina’s writer/director Alex Garland may have tapped something fundamental in the little bit of dialogue quoted above. If the data we willingly give up in return for online functionality provides a blueprint for understanding human thought, that’s a big deal. A very big deal. Ariely’s blog post talks about how a better understanding of micro-moments can lead to better ad targeting. To me, that’s kind of like using your new Maserati to drive across the street and visit your neighbor – it seems a total waste of horsepower. I’m sure there are higher things we can aspire to than figuring out a better way to deliver a hotels.com ad. Both Google and Facebook are full of really smart people. I’m pretty sure someone there is capable of connecting the dots between true artificial intelligence and their own brand of world domination.

At the very least, they could probably whip up a really sexy robot.


Why Marketers Love Malcolm Gladwell … and Why They Shouldn’t

Marketers love Malcolm Gladwell. They love his pithy, reductionist approach to popular science – his tendency to sacrifice verity for the sake of a good “Just-so” story. And in doing this, what is Malcolm Gladwell but a marketer at heart? No wonder our industry is ga-ga over him. We love anyone who can oversimplify complexity down to the point where it can be appropriated as yet another marketing “angle”.

Take the entire influencer advertising business, for instance. Earlier this year, I saw an article saying more and more brands are expanding their influencer marketing programs. We are desperately searching for that holy nexus where social media and those super-connected “mavens” meet. While the idea of influencer marketing has been around for a while, it really gained steam with the release of Gladwell’s “The Tipping Point.” And that head of steam seems to have been building since the release of the book in 2000.

As others have pointed out, Gladwell has made a habit of taking one narrow perspective that promises to “play well” with the masses, supporting it with just enough science to make it seem plausible and then enshrining it as a “Law.”

Take “The Law of the Few”, for instance, from The Tipping Point: “The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts.” You could literally hear the millions of ears attached to marketing heads “perk up” when they heard this. “All we have to do,” the reasoning went, “is reach these people, plant a favorable opinion of our product and give them the tools to spread the word. Then we just sit back and wait for the inevitable epidemic to sweep us to new heights of profitability.”

Certainly commercial viral cascades do happen. They happen all the time. And, in hindsight, if you look long and hard enough, you’ll probably find what appears to be a “maven” near ground-zero. From this perspective, Gladwell’s “Law of the Few” seems to hold water. But that’s exactly the type of seductive reasoning that makes “Just So” stories so misleading. You mistakenly believe that because it happened once, you can predict when it’s going to happen again. Gladwell’s indiscriminate use of the term “Law” contributes to this common deceit. A law is something that is universally applicable and constant. When a law governs something, it plays out the same way, every time. And this is certainly not the case in social epidemics.


Duncan Watts

If Malcolm Gladwell’s books have become marketing and pop-culture bibles, the same, sadly, cannot be said for Duncan Watts’ books. I’m guessing almost everyone reading this column has heard of Malcolm Gladwell. I further guess that almost none of you have heard of Duncan Watts. And that’s a shame. But it’s completely understandable.

Duncan Watts describes his work as determining the “role that network structure plays in determining or constraining system behavior, focusing on a few broad problem areas in social science such as information contagion, financial risk management, and organizational design.”

You started nodding off halfway through that sentence, didn’t you?

As Watts shows in his books, “Firms spent great effort trying to find ‘connectors’ and ‘mavens’ and to buy the influence of the biggest influencers, even though there was never causal evidence that this would work.” But the work required to get to this point is not trivial. While he certainly aims at a broad audience, Watts does not read like Gladwell. His answers are not self-evident. There is no pithy “bon mot” that causes our neural tumblers to satisfyingly click into place. Watts’ explanations are complex, counter-intuitive, occasionally ambiguous and often non-conclusive – just like the world around us. As he explains in his book “Everything is Obvious: *Once You Know the Answer”, it’s easy to look backwards to find causality. But it’s not always right.

Marketers love simplicity. We love laws. We love predictability. That’s why we love Gladwell. But in following this path of least resistance, we’re straying further and further from the real world.

Decoupling Our Hunch-Making Mechanism

Humans are hunch-making machines. We’re gloriously good at it. In fact, no one and no thing is better at coming up with a hunch. It’s what sets us apart on our planet and, thus far, nothing we’ve invented has proven to be better suited to strike the spark of intuition.

We can seemingly draw speculative guesses out of thin air – literally. From all the noise that surrounds us, we recognize potential patterns and infer significance. Scientists call them hypotheses. Artists call them artistic inspirations. Entrepreneurs call them innovations.

Whatever the label, we’re not exactly sure what happens. Mihaly Csikszentmihalyi (which, in case you’re wondering, is pronounced Me-high Cheek-sent-me-high) explored where these hunches come from in his fascinating book, Creativity: The Psychology of Discovery and Invention. But despite the collective curiosity about the source of human creativity, the jury remains out. The mechanism that turns these very human gears and sparks the required connections between our synapses remains a mystery.

We’re good at making hunches. But we suck at qualifying those hunches. The reason is that we rush a hunch straight into becoming a belief. And that’s where things go off the rails. A hunch is a guess about what might be true. A belief is what we deem to be true. We go straight from what is one of many possible scenarios to the only scenario we execute against. The entire scientific method was created to counteract this very human tendency – forcing rational analysis of the hunches we churn out.

Philip Tetlock’s work on expertise in prediction shows how fragile this tendency to go from hunch to belief can make us. After all, a prediction is nothing more than a hunch of what might be. He referred to Isaiah Berlin’s 1953 essay, “The Hedgehog and the Fox.” In the essay, Berlin quotes the ancient Greek poet Archilochus: “a fox knows many things, but a hedgehog one important thing.” Taking some poetic license, you could say that a hedgehog is more prone to moving straight from hunch to belief, where a fox tends to evaluate her hunches against multiple sources. Tetlock found that when it came to the accuracy of predictions, it was better to be a fox than a hedgehog. In some cases, much better.

But Tetlock also found that when it comes down to “crunching hunches”, machines tend to beat man hands down. It’s because humans have been programmed for thousands of generations to trust our hunches, and no matter how much we fight it, we are born to treat our hunches as fact. Machines bear no such baggage.

This is an example of Moravec’s Paradox – the things that seem simple for humans are amazingly complex for machines. And vice versa. As artificial intelligence pioneer Marvin Minsky once recognized, it’s the things we do unconsciously that represent the biggest challenges for artificial intelligence, “In general, we’re least aware of what our minds do best.” Machines may never be as good as humans at creating a hunch – or, at least – we’re certainly not there yet. But machines have already outstripped humans in the ability to empirically analyze and validate multiple options.

Fellow Online Spin columnist Kaila Colbin posited this in her last column, “When Watson Comes for Your Job, Give It to Him.” As she points out, IBM’s Watson can kick any human ass when it comes to reviewing case law – or plowing through the details required for an accurate medical diagnosis – or helping students prepare for an upcoming exam. But Watson isn’t very good at coming up with hunches. It’s because hunches aren’t rational. They’re inspirational. And machines aren’t fluent in inspiration. Not yet, anyway.

Maybe that’s why – even in something as logical as chess – the current champion isn’t a machine or a human. It’s a combination of both. As American economist Tyler Cowen, author of Average Is Over, explained in a blog post, a “striking percentage of the best or most accurate chess games of all time have been played by man-machine pairs.” Cowen lists four ways a man-machine team can outperform either partner alone, and they all have to do with leveraging the respective strengths of each. Humans use intuition to create hunches, then harness the power of the machine to analyze the relevant options.

Hunches have served humans very well, and they will continue to do so. The trick is to decouple those hunches from the belief-making mechanism that has historically accompanied them. That’s where we should let machines take over.

Why I Love New York

I love people watching. I find the passing tableau of human drama endlessly fascinating. Trust me – it’s worth putting the smartphone away and paying attention to what’s happening around you. This past weekend, I hit the trifecta of snooping: subways, airports and shops in New York City.

Scene 1: We’re on a subway, and an elderly Polish lady dressed all in pink (for Mother’s Day, perhaps), heading to Penn Station, randomly asks a hulking young man of decidedly intimidating appearance for help getting her bag out of the car and onto the platform. You could not have picked a more unlikely duo for this particular dialogue.

He’s engrossed in a conversation with his friend and doesn’t hear her at first. When she asks again, he can’t make out what she said because of her thick accent. Finally, a young pregnant girl beside them offers to help. The Polish lady refuses and starts pointing at the man and scolding vigorously. The young man shrugs, the Polish intonations still coming a little too thick and fast for him to understand. The young girl translates: “She wants you to help because you’re big.”

He grins sheepishly and picks up the suitcase. The Polish grandmother toddles off happily.

Scene 2: We’re walking through an airport, and a young man is coming home from college (again, perhaps for Mother’s Day). He’s meandering his way from the gate, weaving back and forth across the concourse and trying to carry on a somewhat agitated phone conversation with said mother. My first instinct is to pass him, but then I decide to hold back and eavesdrop a little. The son obviously has no patience for his mother:

“Mom, I told you, I never asked you to come and get me. It was your idea…”

“Why are you picking me up in departures? I’m arriving. You should be in arrivals…”

“I can’t help it if you have to go all the way around again to get there. You should have thought of that before you pulled into the airport.”

I can only imagine how the rest of this Mother’s Day visit went. Next time, let him catch a cab.

Scene 3: We’re in one of those tacky souvenir shops off Times Square (no, it wasn’t my idea). Two elderly ladies come in and ask to see a T-shirt that says “Help Donald Drumpf Make America Great Again.”

“Why is it spelled ‘Drumpf’?”

The shop owner (in another thick accent – Middle Eastern this time), “It’s wacky spelling.”

“Why?”

“It’s a joke. It’s a jokey T-shirt.”

“Do you have one spelled correctly?”

“You want real T-shirt?”

“Yes.”

“No, we only have the jokey ones.”

Meanwhile, outside, a sidewalk prophet is yelling that Jesus is the only true way and that we are headed straight to hell, all while standing under a 60-foot-high electronic screen advertising “The Book of Mormon.”

New York – you crack me up.

How We Might Search (On the Go)

As I mentioned in last week’s column, Mediative has just released a new eye-tracking study on mobile devices. And it appears that we’re still conditioned to look for the number-one organic result before clicking on our preferred destination.

But…

It appears that things might be in the process of changing. This makes sense. Searching on a mobile device is – and should be – significantly different from searching on a desktop. We have different intents. We are interacting with a different platform. Even the way we search is different.

Searching on a desktop is all about consideration. It’s about filtering and shortlisting multiple options to find the best one. Our search strategies still carry a significant amount of baggage from what search used to be – an often imperfect way to find the best place to get more information about something. That’s why we still look for the top organic listing. For some reason we still subconsciously consider this the gold standard of informational relevancy. We measure all other results against it. That’s why we make sure we reserve one slot of the three to five available in our working memory (I have found that the average person considers about four results at a time) for its evaluation.

But searching on a mobile device isn’t about filtering content. For one thing, it’s absolutely the wrong platform to do this with. The real estate is too limited. For another, it’s probably not what we want to spend our time doing. We’re on the go and trying to get stuff done. This is not the time for pausing and reflecting. This is the time to find what I’m looking for and use it to take action.

This all makes sense, but the fact remains that the way we search is a product of habit. It’s a conditioned subconscious strategy that was largely formed on the desktop. Most of us haven’t done enough searching on mobile devices yet to abandon our desktop-derived strategies and create new mobile-specific ones. So our subconscious starts playing out the desktop script and only varies from it when it looks like it’s not going to deliver acceptable results. That’s why we’re still looking for that number-one organic listing to benchmark against.

There were a few findings in the Mediative study that indicate our desktop habits may be starting to slip on mobile devices. But before we get to them, a quick refresher on how habits play out. Habits are the brain’s way of cutting down on thinking. If we do something over and over again and get acceptable results, we store that behavior as a habit. The brain goes on autopilot so we don’t have to think our way through a task with predictable outcomes.

If, however, things change, either in the way the task plays out or in the outcomes we get, the brain reluctantly takes control again and starts thinking its way through the task. I believe this is exactly what’s happening with our mobile searches. The brain desperately wants to use its desktop habits, but the results are falling below our threshold of acceptability. That means we’re all somewhere in the process of rebuilding a search strategy more suitable for a mobile device.

Mediative’s study shows me a brain that’s caught between the desktop searches we’ve always done and the mobile searches we’d like to do. We still feel we should scroll to see at least the top organic result, but as mobile search results become more aligned with our intent, which is typically to take action right away, we are sidetracked from our habitual behaviors, and our brains kick into gear to take control. The result is that when Google shows search elements that are probably more aligned with our intent – local results, knowledge graphs or even highly relevant ads with logical ad extensions (such as a “call” link) – we lose confidence in our habits. We still scroll down to check out the organic result, but we probably scroll back up and click on the more relevant result.

All this switching back and forth between habitual and engaged interaction with the results exacts a cost in efficiency. We take longer to conduct searches on a mobile device, especially if that search shows other types of results near the top. In the study, participants spent an extra two seconds or so scrolling between the presented results (7.15 seconds for varied results vs. 4.95 seconds for organic-only results). And even though they spent more time scrolling, more participants ended up clicking on the mobile-relevant results they saw right at the top.

The trends I’m describing here are subtle, often playing out in a couple of seconds or less. And you might say that’s no big deal. But habits are always a big deal. The fact that we’re still relying on desktop habits laid down over the past two decades shows how persistent they can be. If I’m right and we’re finally building new habits specific to mobile devices, those habits could dictate our search behaviors for a long time to come.