How We Might Search (On the Go)

As I mentioned in last week’s column, Mediative has just released a new eye-tracking study on mobile devices. And it appears that we’re still conditioned to look for the number one organic result before clicking on our preferred destination.

But…

It appears that things might be in the process of changing. This makes sense. Searching on a mobile device is – and should be – significantly different from searching on a desktop. We have different intents. We are interacting with a different platform. Even the way we search is different.

Searching on a desktop is all about consideration. It’s about filtering and shortlisting multiple options to find the best one. Our search strategies are still carrying a significant amount of baggage from what search was – an often imperfect way to find the best place to get more information about something. That’s why we still look for the top organic listing. For some reason we still subconsciously consider this the gold standard of informational relevancy. We measure all other results against it. That’s why we make sure we reserve one slot from the three to five available in our working memory (I have found that the average person considers about 4 results at a time) for its evaluation.

But searching on a mobile device isn’t about filtering content. For one thing, it’s absolutely the wrong platform to do this with. The real estate is too limited. For another, it’s probably not what we want to spend our time doing. We’re on the go and trying to get stuff done. This is not the time for pausing and reflecting. This is the time to find what I’m looking for and use it to take action.

This all makes sense, but the fact remains that the way we search is a product of habit. It’s a conditioned subconscious strategy that was largely formed on the desktop. Most of us haven’t done enough searching on mobile devices yet to abandon our desktop-derived strategies and create new mobile-specific ones. So, our subconscious starts playing out the desktop script and only varies from it when it looks like it’s not going to deliver acceptable results. That’s why we’re still looking for that number one organic listing to benchmark against.

There were a few findings in the Mediative study that indicate that our desktop habits may be starting to slip on mobile devices. But before we get to them, let’s do a quick review of how habits play out. Habits are the brain’s way of cutting down on thinking. If we do something over and over again and get acceptable results, we store that behavior as a habit. The brain goes on autopilot so we don’t have to think our way through a task with predictable outcomes.

If, however, things change, either in the way the task plays out or in the outcomes we get, the brain reluctantly takes control again and starts thinking its way through the task. I believe this is exactly what’s happening with our mobile searches. The brain desperately wants to use its desktop habits, but the results are falling below our threshold of acceptability. That means we’re all somewhere in the process of rebuilding a search strategy more suitable for a mobile device.

Mediative’s study shows me a brain that’s caught between the desktop searches we’ve always done and the mobile searches we’d like to do. We still feel we should scroll to see at least the top organic result, but as mobile search results become more aligned with our intent, which is typically to take action right away, we are being sidetracked from our habitual behaviors and kicking our brains into gear to take control. The result is that when Google shows search elements that are probably more aligned with our intent – either local results, knowledge graphs or even highly relevant ads with logical ad extensions (such as a “call” link) – we lose confidence in our habits. We still scroll down to check out the organic result, but we probably scroll back up and click on the more relevant result.

All this switching back and forth from habitual to engaged interaction with the results ends up exacting a cost in terms of efficiency. We take longer to conduct searches on a mobile device, especially if that search shows other types of results near the top. In the study, participants spent an extra 2 seconds or so scrolling between the presented results (7.15 seconds for varied results vs. 4.95 seconds for organic-only results). And even though they spent more time scrolling, more participants ended up clicking on the mobile-relevant results they saw right at the top.

The trends I’m describing here are subtle – often playing out in a couple of seconds or less. And you might say that it’s no big deal. But habits are always a big deal. The fact that we’re still relying on desktop habits that were laid down over the past two decades shows how persistent they can be. If I’m right and we’re finally building new habits specific to mobile devices, those habits could dictate our search behaviors for a long time to come.

In Search – Even in Mobile – Organic Still Matters

I told someone recently that I feel like Rick Astley. You know, the guy who had the monster hit “Never Gonna Give You Up” in 1987 and is still trading on it almost 30 years later? He even enjoyed a brief resurgence of viral fame in 2007 when the world discovered what it meant to be “Rickrolled.”

For me, my “Never Gonna Give You Up” is the Golden Triangle eye-tracking study we released in 2005. It’s my one-hit wonder (to be fair to Astley, he did have a couple of other hits, but you get the idea). And yes, I’m still talking about it.

The Golden Triangle as we identified it existed because people were drawn to look at the number one organic listing. That’s an important thing to keep in mind. In today’s world of ad blockers and teeth gnashing about the future of advertising, there is probably no purer or more controllable environment than the search results page. Creativity is stripped to the bare minimum. Ads have to be highly relevant and non-promotional in nature. Interaction is restricted to the few seconds required to scan and click. If there were anywhere ads might be tolerated, it’s on the search results page.

But…

If we fully trusted ads – especially those as benign as the ones that show up on search results – there would have been no Golden Triangle. It only existed because we needed to see that top organic result, and dragging our eyes down to it formed one side of the triangle.

Fast forward almost 10 years. Mediative, which is the current incarnation of my old company, released a follow-up two years ago. While the Golden Triangle had definitely morphed into a more linear scan, the motivation remained – people wanted to scan down to see at least one organic listing. They didn’t trust ads then. They don’t trust ads now.

Google has used this need to anchor our scanning with the top organic listing to introduce a greater variety of results into the top “hot zone” – where scanning is the greatest. Now, depending on the search, there is likely to be at least a full screen of various results – including ads, local listings, reviews or news items – before your eyes hit that top organic web result. Yet, we seem to be persistent in our need to see it. Most people still make the effort to scroll down, find it and assess its relevance.

It should be noted that all of the above refers to desktop search. But almost a year ago, Google announced that – for the first time ever – more searches happened on a mobile device than on a desktop.

Mediative just released a new eye-tracking study (Note: I was not involved at all with this one). This time, they dove into scan patterns on mobile devices. Given the limited real estate and the fact that for many popular searches, you would have to consciously scroll down at least a couple times to see the first organic result, did users become more accepting of ads?

Nope. They just scanned further down!

The study’s first finding was that the #1 organic listing still captures the most click activity, but it takes users almost twice as long to find it compared to a desktop.

The study’s second finding was that even though organic is still important, position matters more than ever. Users will make the effort to find the top organic result and, once they do, they’ll generally scan the top 4 results, but if they find nothing relevant, they probably won’t scan much further. In the study, 92.6% of the clicks happened above the 4th organic listing. On a desktop, 84% of the clicks happened above the number 4 listing.

The third finding shows an interesting paradox that’s emerging on mobile devices: we’re carrying our search habits from the desktop over with us – especially our need to see at least one organic listing. The average time to scan the top sponsored listing was only 0.36 seconds, meaning that people checked it out immediately after orienting themselves to the mobile results page, but for those who clicked the listing, the average time to click was 5.95 seconds. That’s almost 50% longer than the average time to click on a desktop search. When organic results are pushed down the page because of other content, it takes us longer before we feel confident enough to make our choice. We still need to anchor our relevancy assessment with that top organic result, and that’s causing us to be less efficient in our mobile searches than we are on the desktop.

The study also indicated that these behaviors could be in flux. We may be adapting our search strategies for mobile devices, but we’re just not quite there yet. I’ll touch on this in next week’s column.

The World in Bite Sized Pieces

It’s hard to see the big picture when your perspective is limited to 160 characters.

Or when we keep getting distracted from said big picture by that other picture that always seems to be lurking over there on the right side of our screen – the one of Kate Upton tilting forward wearing a wet bikini.

Two things are at work here obscuring our view of the whole: Our preoccupation with the attention economy and a frantic scrambling for a new revenue model. The net result is that we’re being spoon-fed stuff that’s way too easy to digest. We’re being pandered to in the worst possible way. The world is becoming a staircase of really small steps, each of which has a bright shiny object on it urging us to scale just a little bit higher. And we, like idiots, stumble our way up the stairs.

This cannot be good for us. We become better people when we have to chew through some gristle. Or when we’re forced to eat our broccoli. The world should not be the cognitive equivalent of Cap’n Crunch cereal.

It’s here where human nature gets the best of us. We’re wired to prefer scintillation to substance. Our intellectual laziness and willingness to follow whatever herd seems to be heading in our direction have conspired to create a world where Donald Trump can be a viable candidate for president of the United States – where our attention span is measured in fractions of a second – where the content we consume is dictated by a popularity contest.

Our news is increasingly coming to us in smaller and smaller chunks. The exploding complexity of our world, which begs to be understood in depth, is increasingly parceled out to us in pre-digested little tidbits, pushed to our smartphone. We spend scant seconds scanning headlines to stay “up to date.” And an algorithm that is trying to understand where our interests lie usually determines the stories we see.

This algorithmic curation creates both “Filter” and “Agreement” Bubbles. The homogeneity of our social network leads to a homogeneity of content. But if we spend our entire time with others who think like us, we end up with an intellectually polarized society in which the factions that sit at opposite ends of any given spectrum are openly hostile to each other. The gaps between our respective ideas of what is right are simply too big and no one has any interest in building a bridge across them. We’re losing our ideological interface areas, those opportunities to encounter ideas that force us to rethink and reframe, broadening our worldview in the process. We sacrifice empathy and we look for news that “sounds right” to us, no matter what “right” might be.

This is a crying shame, because there is more thought-provoking, intellectually rich content being produced than ever before. But there is also more sugar-coated crap whose sole purpose is to get us to click.

I’ve often talked about the elimination of friction. Usually, I think this is a good thing. Bob Garfield, in a column a few months ago, called for a whoop-ass can of WD-40 to remove all transactional friction. But if we make things too easy to access, will we also remove those cognitive barriers that force us to slow down and think, giving our rationality a chance to catch up with impulse? And it’s not just on the consumption side where a little bit of friction might bring benefits. The upside of production friction was that it slowed down streams of content just long enough to introduce an editorial voice. Someone somewhere had to give some thought as to what might actually be good for us.

In other words, it was someone’s job to make sure we ate our vegetables.

We’re Informed. But Are We Thoughtful?

I’m a bit of a jerk when I write. I lock myself behind closed doors in my home office. In the summer, I retreat to the most remote reaches of the back yard. The reason? I don’t want to be interrupted with human contact. If I am interrupted, I stare daggers through the interrupter and answer in short, clipped sentences. The house has to be silent. If conditions are less than ideal, my irritation is palpable. My family knows this. The warning signal is “Dad is writing.” This can be roughly translated as “Dad is currently an asshole.” The more I try to be thoughtful, the bigger the ass I am.

I suspect Henry David Thoreau was the same. He went even further than my own backyard exile. He camped out alone for two years in a cabin he built on Ralph Waldo Emerson’s land at Walden Pond. He said things like,

“I never found a companion that was so companionable as solitude.”

But Thoreau was also a pretty thoughtful guy, who advised us that,

“As a single footstep will not make a path on the earth, so a single thought will not make a pathway in the mind. To make a deep physical path, we walk again and again. To make a deep mental path, we must think over and over the kind of thoughts we wish to dominate our lives.”

But, I ask, how can we be thoughtful when we are constantly distracted by information? Our mental lives are full of single footsteps. Even if we intend to cover the same path more than once, there are a thousand beeps, alerts, messages, prompts, pokes and flags that are beckoning us to start down a new path, in a different direction. We probably cover more ground, but I suspect we barely disturb the fallen leaves on the paths we take.

I happen to do all my reading on a tablet. I do this for three reasons: first, I always have my entire library with me and I usually have four books on the go at the same time (currently 1491, Reclaiming Conversation, Flash Boys and 50 Places to Bike Before You Die); secondly, I like to read before I go to sleep and I don’t need to keep a light on that keeps my wife awake; and thirdly, I like to highlight passages and make notes. But there’s a trade-off I’ve had to make. I don’t read as thoughtfully as I used to. I can’t “escape” with a book anymore. I am often tempted to check email, play a quick game of 2048 or search for something on Google. Maybe the fact that my attention is always divided amongst four books is part of the problem. Or maybe it’s that I have more of an attention deficit than I used to.

There is a big difference between being informed and being thoughtful. And our connected world definitely puts the bias on the importance of information. Being connected is all about being informed. But being thoughtful requires us to remove distraction. It’s the deep paths that Thoreau was referring to. And it requires a very different mindset. Our brain is a single-purpose engine. We can either be informed or be thoughtful. We can’t be both at the same time.

At the University of California, San Francisco, Mattias Karlsson and Loren Frank found that rats need two very different types of cognitive activity when mastering a maze. First, when they explore a maze, certain parts of their brain are active as they’re being “informed” about their new environment. But they don’t master the maze unless they’re allowed downtime to consolidate the information into new persistent memories. Different parts of the brain are engaged, including the hippocampus. They need time to be thoughtful and create a “deep path.”

In this instance, we’re not all that different than rats. In his research, MIT’s Alex “Sandy” Pentland found that effective teams tend to cycle through two very different phases: First, they explore, gathering new information. Then, just like the thoughtful rats, they engage as a group, taking that information, digesting it and synthesizing it for future execution. Pentland found that while both are necessary, they don’t exist at the same time,

“Exploration and engagement, while both good, don’t easily coexist, because they require that the energy of team members be put to two different uses. Energy is a finite resource.”

Ironically, research is increasingly showing that our previous definitions of cognitive activity may have been off the mark. We always assumed that “mind-wandering” or “day-dreaming” was a non-productive activity. But we’re finding out that it’s an essential part of being thoughtful. We’re actually not “wandering.” It’s just the brain’s way of synthesizing and consolidating information. We’re wearing deeper paths in the by-ways of our mind. But a constant flow of new information, delivered through digital channels, keeps us from synthesizing the information we already have. Our brain is too busy being informed to be able to make the switch to thoughtfulness. We don’t have enough cognitive energy to do both.

What price might we pay for being “informed” at the expense of being “thoughtful?” It appears that it might be significant. Technology distraction in the classroom could lower grades by close to 20 percent. And you don’t even have to be the one using the device. Just having an open screen in the vicinity might distract you enough to drop your report card from a “B” to a “C.”

Having read this, you now have two choices. You could click off to the next bit of information. Or, you could stare into space for a few minutes and be lost in your thoughts.

Choose wisely.

A New Definition of Order

The first time you see the University of Texas at Austin’s AIM traffic management simulator in action, you can’t believe it would work. It shows the intersection of two 12-lane, heavily trafficked roads. There are no traffic lights, no stop signs, none of the traffic control systems we’re familiar with. Yet, traffic zips through with an efficiency that’s astounding. It appears to be total chaos, but no cars have to wait more than a few seconds to get through the intersection and there’s nary a collision in sight. Not even a minor fender bender.

Oh, one more thing. The model depends on there being no humans to screw things up. All the vehicles are driverless. In fact, if just one of the vehicles had a human behind the wheel, the whole system would slow dramatically. The probability of an accident would also soar.

The thing about the simulation is that there is no order – or, at least – there is no order that is apparent to the human eye. The programmers at UT Austin seem to recognize this with a tongue-in-cheek nod to our need for rationality. This particular video clip is called “insanity.” There are other simulation videos available at the project’s website, including ones where humans drive cars at intersections controlled by stoplights. These seem much saner and more controlled. They’re also much less efficient. And likely more dangerous. No simulation that includes a human factor comes even close to matching the efficiency of the 100% autonomous option.

The AIM simulation is complex, but it isn’t complicated. It’s actually quite simple. As cars approach the intersection, they signal to a central “manager” whether they want to turn or go straight ahead. The manager predicts whether the vehicle’s path will intersect another vehicle’s predicted path. If it does, it delays the vehicle slightly until the path is clear. That’s it.
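To make the logic concrete, here’s a minimal sketch of that delay-until-clear rule in Python, assuming a toy model in which the intersection is divided into cells and time into discrete slots. The names and simplifications are mine, not the AIM project’s actual protocol:

```python
# Toy model: the manager reserves (cell, time_slot) pairs for each vehicle
# and delays any vehicle whose requested path conflicts with a reservation.
from itertools import count

class IntersectionManager:
    def __init__(self):
        self.reservations = set()  # (cell, time_slot) pairs already granted

    def request(self, path_cells, arrival_slot):
        """Return the earliest entry slot with a conflict-free path."""
        for delay in count():  # try arrival, arrival + 1, arrival + 2, ...
            slots = {(cell, arrival_slot + delay + i)
                     for i, cell in enumerate(path_cells)}
            if not slots & self.reservations:
                self.reservations |= slots   # grant the reservation
                return arrival_slot + delay  # tell the car when to enter

# Two cars whose paths would cross in cell "C" at the same moment:
manager = IntersectionManager()
print(manager.request(["A", "C", "B"], arrival_slot=0))  # -> 0
print(manager.request(["D", "C", "E"], arrival_slot=0))  # -> 1 (held one slot)
```

That’s the whole trick: no lights, no signs, just a shared ledger of reservations resolved in arrival order.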

The complexity comes in trying to coordinate hundreds of these paths at any given moment. The advantage the automated solution has is that it is in communication with all the vehicles. What appears chaotic to us is actually highly connected and coordinated. It’s fluid and organic. It has a lot in common with things like beehives, ant colonies and even the rhythms of our own bodies. It may not be orderly in our rational sense, but it is natural.

Humans don’t deal very well with complexity. We can’t keep track of more than a dozen or so variables at any one time. We categorize and “chunk” data into easily managed sets that don’t overwhelm our working memory. We always try to simplify things down by imposing order. We use heuristics when things get too complex. We make gut calls and guesses. Most of the time, it works pretty well, but this system gets bogged down quickly. If we pulled the family SUV into the intersection shown in the AIM simulation, we’d probably jam on the brakes and have a minor mental meltdown as driverless cars zipped by us.

Artificial intelligence, on the other hand, loves complexity. It can juggle amounts of disparate data that humans could never dream of managing. This is not to say that computers are more powerful than humans. It’s just that they’re better at different things. It’s referred to as Moravec’s Paradox: It’s relatively easy to program a computer to do what a human finds hard, but it’s really difficult to get it to do what humans find easy. Tracking the trajectories and coordinating the flow of hundreds of autonomous cars would fall into the first category. Understanding emotions would fall into the second category.

This matters because, increasingly, technology is creating a world that is more dynamic, fluid and organic. Order, from our human perspective, will yield to efficiency. And the fact is that – in data-rich environments – machines will be much better at this than humans. Just like our perspectives on driving, our notions of order and efficiency will have to change.

This Message Brought to You by … Nobody

“People talk about the digital revolution. I think it’s an apocalypse.”
George Nimeh – What If There Was No Advertising? – TEDx Vienna 2015

A bigger part of my world is becoming ad-free. My TV viewing is probably 80% ad-free now. Same with my music listening. Together, that costs me about $20 per month. It’s a price I don’t mind paying.

But what if we push that to its logical extreme? What if we made the entire world ad-free? Various publications and ad-tech providers have posited that scenario. It’s actually interesting to see the two very different worlds that are conjectured, depending on what side of the church you happen to be sitting on. When that view comes from those in the ad biz, a WWA (World Without Advertising) is a post-apocalyptic hell with ex-copywriters (of which I’m one) walking around as jobless zombies and the citizens of the world being squeezed penniless by exploding subscription rates. Our very society would crumble around our ears. And, for some reason, a WWA is always colored in various shades of desaturated grey, like Moscow circa 1982 or Apple’s Big Brother ad.

But those from outside our industry take a less alarming view of a WWA. This, they say, might actually work. It could be sustainable. It would probably be a more pleasant place.

Let’s do a smell test of the economics. According to eMarketer, the total ad-spend in the US for this year is $189 billion. That works out to just shy of $600 per year for each American, or $1550 for the average household. If we look at annual expenditures for the typical American family, that would put it somewhere between clothing and vehicle insurance. It would represent 2.8% of their total expenditures. A little steep, perhaps, but not out of the question.

Okay, you say. That’s fine for a rich country like the US. But what about the rest of the world? Glad you asked. The projected advertising spend worldwide – again according to eMarketer – is $592 billion, or about $84 for every single person on the planet. The average global income is about $10,000 per year. So, globally, eliminating advertising would take about 0.84% of your income. In other words, if you worked until January 3rd, you’d get to enjoy the rest of the year ad-free!
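If you want to sanity-check that arithmetic, it only takes a few lines of Python. The population and income figures below are rough assumptions of mine, not numbers from the eMarketer reports:

```python
# Back-of-the-envelope check of the per-person ad-spend figures above.
us_ad_spend = 189e9          # US ad spend, USD (eMarketer, per the column)
world_ad_spend = 592e9       # worldwide ad spend, USD (eMarketer)
us_population = 318e6        # assumed US population, mid-2010s
world_population = 7.1e9     # assumed world population
avg_global_income = 10_000   # assumed average global income, USD/year

print(us_ad_spend / us_population)        # ~594   -> "just shy of $600"
per_capita = world_ad_spend / world_population
print(per_capita)                         # ~83    -> "about $84" per person
share = per_capita / avg_global_income
print(f"{share:.2%}")                     # ~0.83% -> "about 0.84%" of income
print(share * 365)                        # ~3 days -> "work until January 3rd"
```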

So let’s say we agree that this is a price we’re willing to pay. What would an America without advertising look like? How would we support content providers, for example? Paying for a few one-off subscriptions, like Netflix and Spotify, is not that big a deal, but if you multiply that by every potential content outlet, it quickly becomes unmanageable.

This could easily be managed by the converging technologies of personalization engines, digital content delivery, micro-payments and online payment solutions like Apple Pay. Let’s imagine we have a digital wallet where we keep our content consumption budget. The wallet is a smart wallet, in that it knows our personal tastes and preferences. Each time we access content, it automatically pays the producer for it and tracks our budget to ensure we’re staying within preset guidelines. The ecosystem of this content marketplace would be complex, true, but the technology exists. And it can’t be any more complex than the current advertising marketplace.
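As a thought experiment, a toy version of that wallet fits in a few lines. Everything here (the class, the prices, the budget rule) is hypothetical illustration, not a real payment API:

```python
# A toy "smart wallet": pays producers automatically per content access
# and enforces a preset consumption budget.
class SmartWallet:
    def __init__(self, monthly_budget):
        self.budget = monthly_budget
        self.spent = 0.0

    def access(self, producer, price):
        """Pay the producer for one piece of content, within budget."""
        if self.spent + price > self.budget:
            raise RuntimeError(f"Budget exceeded; not paying {producer}")
        self.spent += price
        print(f"Paid ${price:.2f} to {producer} "
              f"(${self.budget - self.spent:.2f} left this month)")

wallet = SmartWallet(monthly_budget=20.00)
wallet.access("news-site", 0.05)       # a nickel per article, say
wallet.access("video-service", 0.50)   # a bigger micro-payment for video
```

The hard part, as the paragraph above suggests, isn’t this bookkeeping; it’s the marketplace around it: personalization, settlement and pricing at scale.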

A WWA would be a less cluttered and interruptive place. But would it also be a better place? Defenders of the ad biz generally say that advertising nets out as a plus for our society. It creates awareness of new products, builds appreciation for creativity and generally adds to our collective well-being.

I’m not so sure. I’ve mentioned before that I suspect advertising may be inherently evil. I know it persuades us to buy stuff we may desire, but certainly don’t need. I have no idea what our society would be like without advertising, but I have a hard time imagining we’d be worse off than we are now.

The biggest problem, I think, is the naiveté of this hypothetically ad-free world. Content will still have to be produced. And if the legitimized ad channel is removed, I suspect things will simply go underground. Content producers will be offered kickbacks to work commercial content into supposedly objective channels. Perhaps I’m just being cynical, but I’d be willing to place a fairly large bet on the bendability of the morals of the marketing community.

Ultimately, it comes down to sustainability. Let’s not forget that about a third of all Americans are using ad blockers, and that percentage is rising rapidly. When I test the ideological waters of the people whose opinions I trust, there is no good news for the current advertising ecosystem. We all agree that advertising is in bad shape. It’s just the severity of the prognosis that differs – ranging from a chronic but gradually debilitating condition to the land of the walking dead. A world without advertising may be tough to imagine, but a world that continues to prop up the existing model is even more unlikely.

Basic Instincts and Attention Economics

We’ve been here before. Something becomes valuable because it’s scarce. The minute society agrees on the newly assigned value, wars begin because of it. Typically these things have been physical. And the battle lines have been drawn geographically. But this time is different. This time, we’re fighting over attention – specifically, our attention – and the battle is between individuals and corporations. Do we, as individuals, have the right to choose what we pay attention to? Or do the creators of content own our attention and can they harvest it at their will? This is the question that is rapidly dismantling the entire advertising industry. It has been debated at length here at Mediapost and pretty much every other publication everywhere.

I won’t join in the debate at this time. The reality here is that we do control our attention and the advertising industry was built on a different premise of scarcity from a different time. It was built on a foundation of access and creation, when both those things were in short supply. By creating content and solving the physical problem of giving us access to that content, the industry gained the right to ask us to watch an ad. No ads, no content. It was a bargain we agreed to because we had no other choice.

The Internet then proceeded to blow that foundation to smithereens.

By removing the physical constraints that restricted both the creation and distribution of content, technology has also erased the scarcity. In fact, the balance has been forever tipped the other way. We now have access to so much content that we don’t have enough attention to digest it all. Viewed in this light, the debate around ad blockers seems hopelessly out of touch. Accusing someone of stealing content is like accusing someone of stealing air. The anti-blocking side is trying to apply the economic rationale of a market that no longer exists.

So let us accept the fact that we are the owners of our own attention, and that it is a scarce commodity. That makes it valuable. My point is that we should pay more attention to how we pay attention. If the new economy is going to be built on attention, we should treat it with more respect.

The problem here is that we have two types of attention, the same as we have two types of thinking: Fast and Slow. Our slow attention is our focused, conscious attention. It is the attention we pay when we’re reading a book, watching a video or talking to someone. We consciously make a choice when we pay this type of attention. Think of it like a spotlight we shine on something for an extended period of time.

It’s the second type of attention, fast attention, that is typically the target of advertising. It plays on the edge of our spotlight, quickly and subconsciously monitoring the environment so it can swing the spotlight of conscious attention if required. Because this type of attention operates below the level of rational thought, it is controlled by base instincts. It’s why sex works in advertising. It’s why Kim Kardashian can repeatedly break the Internet. It’s why Donald Trump is leading the Republican race. And it’s why adorable Asian babies wearing watermelons can go viral.

It’s this type of attention that really determines the value of the attention economy. It’s the gatekeeper that determines how slow attention is focused. And it’s here where we may need some help. I don’t think instincts developed 200,000 years ago are necessarily the best guide for how we should invest something that has become so valuable. We need a better yardstick than simple titillation for determining where our attention should be spent.

I expect the death throes of the previous access economy to go on for some time. The teeth gnashing of the advertising industry will capture a lot of attention. But the end is inevitable. The economic underpinnings are gone, so it’s just a matter of time before the superstructures built on top of them will collapse. In my opinion, we should just move on and think about what the new world will look like. If attention is the new currency, what is the smartest way to spend it?

Consumers in the Wild

Once a Forager, Always a Forager

Your world is a much different place than the African savanna. But the more than 100,000 generations of evolution that began on those plains still dictate a remarkable amount of our modern behavior.

Take foraging, for example. We evolved as hunters and gatherers. It was our primary survival instinct. And even though the first hominids are relatively recent additions to the biological family tree, strategies for foraging have been developing for millions and millions of years. It’s hardwired into the deepest and most inflexible parts of our brain. It makes sense, then, that foraging instincts that were once reserved for food gathering should be applied to a wide range of our activities.

That is, in fact, what Peter Pirolli and Stuart Card discovered two decades ago. When they looked at how we navigated online sources of information, they found that humans used the very same strategy we would have used for berry picking or gathering cassava roots. And one of the critical elements of this was something called Marginal Value.

Bounded Rationality & Foraging

It’s hard work being a forager. Most of your day – and energy – is spent looking for something to eat. The sparser the food sources in your environment, the more time you spend looking for them. It’s not surprising, therefore, that we should have some fairly well-honed calculations for assessing the quality of our food sources. This is what biologist Eric Charnov called Marginal Value in 1976. It’s an instinctual (and therefore largely subconscious) evaluation of food “patches” performed by most types of foragers, humans included. It’s how our brain decides whether we should stay where we are or find another patch. It would have been a very big deal 2 million – or even 100,000 – years ago.
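For the mathematically inclined, Charnov’s result has a compact standard form. What follows is my paraphrase of the 1976 theorem, not a quotation from the paper:

```latex
% Charnov's Marginal Value Theorem (standard form)
% g(t)  : cumulative energy gained after t time units in the current patch
% \tau  : average travel time between patches
% A forager maximizing its long-term rate of gain should leave the patch
% at the residence time t^* where the marginal rate of gain falls to the
% environment's overall average rate, travel time included:
g'(t^*) = \frac{g(t^*)}{t^* + \tau}
```

In plain English: stay while the berries are coming fast, and leave the moment your current rate of return drops to what you could average by moving on. It’s that stay-or-go calculation that, I’d argue, gets borrowed every time we scan a page of search results.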

Today, for most of us, food sources are decidedly less “patchy.” But old instincts die hard. So we did what humans do. We borrowed an old instinct and applied it to new situations. We exapted our foraging strategies and started using them for a wide range of activities where we had to make a rough-and-ready estimate of the return on our energy investment. Increasingly, more and more of these activities asked for an investment of cognitive processing power. And we did all this without knowing we were even doing it.

This brings us to Herbert Simon’s concept of Bounded Rationality. I believe this is tied directly to Charnov’s theorem of Marginal Value. When we calculate how much mental energy we’re going to expend on an information-gathering task, we subconsciously determine the promise of the information “patches” available to us. Then we decide to invest accordingly based on our own “bounded” rationality.

Brands as Proxies for Foraging

It’s just this subconscious calculation that has turned the world of consumerism on its ear in the last two decades. As Itamar Simonson and Emanuel Rosen explain in their book Absolute Value, the explosion of information available has meant that we are making different marginal value calculations than we would have thirty or forty years ago. We have much richer patches available, so we’re more likely to invest the time to explore them. And, once we do, the way we evaluate our consumer choices changes completely. Our modern concept of branding was a direct result of both bounded rationality and sparse information patches. If a patch of objective and reliable information wasn’t apparent, we would rely on brands as a cognitive shortcut, saving our bounded rationality for more promising tasks.

Google, The Ultimate “Patch”

In understanding modern consumer behavior, I think we have to pay much more attention to this idea of marginal value. What is the nature of the subconscious algorithm that decides whether we’re going to forage for more information or rely on our brand beliefs? We evolved foraging strategies that play a huge part in how we behave today.

For example, the way we navigate our physical environment appears to owe much to how we used to search for food. Women determine where they’re going differently than men because women used to search for food differently. Men tend to do this by orientation, maintaining a spatial grid in their minds against which they plot their own location. Women do it by remembering routes. In my own research, I found split-second differences in how men and women navigated websites that seem to go back to those same foundations.

Whether you’re a man or a woman, however, you need to have some type of mental inventory of the information patches available to you in order to assess their marginal value. This is the mental landscape Google plays in. For more and more decisions, our marginal value calculation starts with a quick search on Google to see if any promising patches show up in the results. Our need to keep a mental inventory of patches can now be delegated to Google.

It seems ironic that in our current environment, more and more of our behavior can be traced back millions of years to behaviors that evolved in a world where high-tech meant a sharper rock.

Do We Really Want Virtual Reality?

Facebook bought Oculus. Their goal is to control the world you experience while wearing a pair of modified ski goggles. Mark Zuckerberg is stoked. Netflix is stoked. Marketers the world over are salivating. But, how should you feel about this?

Personally, I’m scared. I may even be terrified.

First of all, I don’t want anyone, especially not Mark Zuckerberg, controlling my sensory world.

Secondly, I’m pretty sure we’re not built to be virtually real.

I understand the human desire to control our environment. It’s part of the human hubris. We think we can do a better job than nature. We believe introducing control and predictability into our world is infinitely better than depending on the caprices of nature. We’ve thought so for many thousands of years. And – Oh Mighty Humans Who Dare to be Gods – just how is that working out for us?

Now that we’ve completely screwed up our physical world, we’re building an artificial version. Actually, it’s not really “we” – it’s “they.” And “they” are for-profit organizations that see an opportunity. “They” are only doing it so “they” control our interface to consciousness.

Personally, I’m totally comfortable giving a profit driven corporation control over my senses. I mean, what could possibly happen? I’m sure anything they may introduce to my virtual world will be entirely for my benefit. I’m sure they would never take the opportunity to use this control to add to their bottom line. If you need proof, look how altruistically media – including the Internet – has evolved under the stewardship of corporations.

Now, their response would be that we can always decide to take the goggles off. We stay in control, because we have an on/off switch. What they don’t talk about is the fact that they will do everything in their power to keep us from switching their VR world off. It’s in their best interest to do so, and by best interest, I mean the more time we spend in their world, as opposed to the real one, the more profitable it is for them. They can hold our senses hostage and demand ransom in any form they choose.

How will they keep us in their world? By making it addictive. And this brings us to my second concern about Virtual Reality – we’re just not built for it.

We have billions of neurons that are dedicated to parsing and understanding a staggeringly complex and dynamic environment. Our brain is built to construct a reality from thousands and thousands of external cues. To manage this, it often takes cognitive shortcuts to bring the amount of processing required down to a manageable level. We prefer pleasant aspects of reality. We are alerted to threats. Things that could make us sick disgust us. The brain manages the balance by a judicious release of neurochemicals that make us happy, sad, disgusted or afraid. Emotions are the brain’s way of effectively guiding us through the real world.

A virtual world, by necessity, will have a tiny fraction of the inputs that we would find in the real world. Our brains will get an infinitesimal slice of the sensory bandwidth they’re used to. Further, the inputs they do get will have the subtlety of a sledgehammer. Ham-fisted programmers will try to push our emotional hot buttons, all in the search for profit. This means a few sections of our brain will be cued far more frequently and violently than they were ever intended to be. Additionally, huge swaths of our environmental processing circuits will remain dormant for extended periods of time. I’m not a neurologist, but I can’t believe that will be a good thing for our cognitive health.

We were built to experience the world fully through all our senses. We have evolved to deal with a dynamic, complex and often unexpected environment. We are supposed to interact with the serendipity of nature. It is what it means to be human. I don’t know about you, but I never, ever want to auction off this incredible gift to a profit-driven corporation in return for a plastic, programmed, 3-dimensional interface.

I know this plea is too late. Pandora’s box is open. The barn door is open. The horse is long gone. But like I said, I’m scared.

Make that terrified.

Talking Back to Technology

The tech world seems to be leaning heavily towards voice-activated devices. Siri – Amazon Echo – Facebook M – “OK Google” – as well as pretty much every vehicle in existence. It should make sense that we would want to speak to our digital assistants. After all, that’s how we communicate with each other. So why – then – do I feel like such a dork when I say “Siri, find me an Indian restaurant”?

I almost never use Siri as my interface to my iPhone. On the very rare occasions when I do, it’s when I’m driving. By myself. With no one to judge me. And even then, I feel unusually self-conscious.

I don’t think I’m alone. No one I know uses Siri, except on the same occasions and in the same way I do. This should be the most natural thing in the world. We’ve been talking to each other for several millennia. It’s so much more elegant than hammering away on a keyboard. But I keep seeing the same scenario play out over and over again. We give voice navigation a try. It sometimes works. When it does, it seems very cool. We try it again. And then, we don’t do it any more. I base this on admittedly anecdotal evidence. I’m sure there are those who continually chat merrily away to the nearest device. But not me. And not anyone I know either. So, given that voice activation seems to be the way devices are going, I have to ask why we’re dragging our heels on adopting it.

In trying to judge the adoption of voice-activated interfaces, we have to account for mismatches in our expected utility. Every time we ask for something – for instance, “Play Bruno Mars” – and get the response, “I’m sorry, I can’t find Brutal Cars,” some frustration would be natural. This is certainly part of it. But that’s an adoption threshold that will eventually yield to sheer processing brute strength. I suspect our reluctance to talk to an object is found in the fact that we’re talking to an object. It doesn’t feel right. It makes us look addle-minded. We make fun of people who speak when there’s no one else in the room.

Our relationship with language is an intimately nuanced one. It’s a relatively newly acquired skill, in evolutionary terms, so it takes up a fair amount of cognitive processing. Granted, no matter what the interface, we currently have to translate desire into language, and speaking is certainly more efficient than typing, so it should be a natural step forward in our relationship with machines. But we also have to remember that verbal communication is the most social of things. In our minds, we have created a well-worn slot for speaking, and it’s something to be done when sitting across from another human.

Mental associations are critical for how we make sense of things. We are natural categorizers. And, if we haven’t found an appropriate category when we encounter something new, we adapt an existing one. I think voice activation may be creating cognitive dissonance in our mental categorization schema. Interaction with devices is a generally solitary endeavor. Talking is a group activity. Something here just doesn’t seem to fit. We’re finding it hard to reconcile our usage of language and our interaction with machines.

I have no idea if I’m right about this. Perhaps I’m just being a Luddite. But given that my entire family, and most of my friends, have had voice activation capable phones for several years now and none of them use that feature except on very rare occasions, I thought it was worth mentioning.

By the way, let’s just keep this between you and me. Don’t tell Siri.