The World in Bite Sized Pieces

It’s hard to see the big picture when your perspective is limited to 160 characters.

Or when we keep getting distracted from said big picture by that other picture that always seems to be lurking over there on the right side of our screen – the one of Kate Upton tilting forward wearing a wet bikini.

Two things are at work here obscuring our view of the whole: Our preoccupation with the attention economy and a frantic scrambling for a new revenue model. The net result is that we’re being spoon-fed stuff that’s way too easy to digest. We’re being pandered to in the worst possible way. The world is becoming a staircase of really small steps, each of which has a bright shiny object on it urging us to scale just a little bit higher. And we, like idiots, stumble our way up the stairs.

This cannot be good for us. We become better people when we have to chew through some gristle. Or when we’re forced to eat our broccoli. The world should not be the cognitive equivalent of Cap’n Crunch cereal.

It’s here where human nature gets the best of us. We’re wired to prefer scintillation to substance. Our intellectual laziness and willingness to follow whatever herd seems to be heading in our direction have conspired to create a world where Donald Trump can be a viable candidate for president of the United States – where our attention span is measured in fractions of a second – where the content we consume is dictated by a popularity contest.

Our news is increasingly coming to us in smaller and smaller chunks. The exploding complexity of our world, which begs to be understood in depth, is increasingly parceled out to us in pre-digested little tidbits, pushed to our smartphones. We spend scant seconds scanning headlines to stay “up to date.” And an algorithm that is trying to understand where our interests lie usually determines the stories we see.

This algorithmic curation creates both “Filter” and “Agreement” Bubbles. The homogeneity of our social network leads to a homogeneity of content. But if we spend all our time with others who think like us, we end up with an intellectually polarized society in which the factions that sit at opposite ends of any given spectrum are openly hostile to each other. The gaps between our respective ideas of what is right are simply too big and no one has any interest in building a bridge across them. We’re losing our ideological interface areas, those opportunities to encounter ideas that force us to rethink and reframe, broadening our worldview in the process. We sacrifice empathy and we look for news that “sounds right” to us, no matter what “right” might be.

This is a crying shame, because there is more thought-provoking, intellectually rich content being produced than ever before. But there is also more sugar-coated crap whose sole purpose is to get us to click.

I’ve often talked about the elimination of friction. Usually, I think this is a good thing. Bob Garfield, in a column a few months ago, called for a whoop-ass can of WD-40 to remove all transactional friction. But if we make things too easy to access, will we also remove those cognitive barriers that force us to slow down and think, giving our rationality a chance to catch up with impulse? And it’s not just on the consumption side where a little bit of friction might bring benefits. The upside of production friction was that it did slow down streams of content just long enough to introduce an editorial voice. Someone somewhere had to give some thought as to what might actually be good for us.

In other words, it was someone’s job to make sure we ate our vegetables.

Is Amazon Creating a Personalized Store?

There was a brief Amazon-related flurry of speculation last week. Apparently, according to a podcast posted by Wharton, Amazon is planning to open 300 to 400 brick-and-mortar stores.

That’s right. Stores – actual buildings – with stuff in them.

What’s more, this has been “on the books” at Amazon for a while. Amazon CEO Jeff Bezos was asked by Charlie Rose in 2012 if they would ever open physical stores. Bezos replied: “We would love to, but only if we can have a truly differentiated idea. We want to do something that is uniquely Amazon. We haven’t found it yet, but if we can find that idea … we would love to open physical stores.”

With that background, the speculation makes sense. If Amazon is pulling the trigger, they must have “found the idea.” So what might that idea be?

Amazon does have a test store in their own backyard of Seattle. What they have chosen to do there, in a footprint about a tenth of the size of the former Barnes and Noble store that was there, is present a “highly curated” store that caters to “local interests.”

Most of the speculation about the new Amazon experiment in “back-to-the-future” retail centers on potential new supply-chain management technology or payment methods. But one quote from Amanda Nicholson, professor of retail practice at Syracuse University’s Whitman School of Management, caught my attention: she said the space represents “a test” to see if Amazon can create “a new kind of experience” using data analytics about customers’ preferences.

This becomes interesting if we spend some time thinking about the purchase journey we typically take. What Amazon has done brilliantly online is remove friction from two steps in that journey: filtering options and conducting the actual transaction. For certain kinds of purchases, this is all we need. If we’re buying a product that doesn’t rely on tactile feedback, like a digital file or a book, Amazon has connected all the dots required to take us from awareness to purchase.

But that certainly doesn’t represent all potential purchases. That could be the reason online purchases represent only 9% of all retail. There are many products that require an “experience” between the filtering of the options available to us and the actual purchase. These things still require the human “touch” – literally. Up to now, Amazon has remained emotionally distant from these types of purchases. But perhaps a new type of retail location could change that.

Let me give you an example. If you’re a cyclist (like me), you probably have a favorite bike shop. Bike shops are not simply retail outlets. They are temples of bike worship. They are usually independent businesses run by people who love to talk about their favorite rides, the latest bikes or pretty much anything to do with cycling. Going to a bike store is an experience.

But Trek, one of the largest bike manufacturers in the world, also recognized the efficiency of the online model. In 2015, they announced the introduction of Trek Connect, their attempt to find a happy middle ground between practical efficiency and emotional experience. Through Trek Connect, you can configure and order your bike online, but pick it up and have it serviced at your local bike shop.

However, what Amazon may be proposing is not simply about the tactile requirements of certain types of purchases. What if Amazon could create a personalized real world shopping experience?

Right now, there is a gap between our online research and filtering activity and our real world experiential activity. Typically, we shortlist our candidates, gather required information, often in the form of a page printed off from a website, and head down to the nearest retail location. There, the handoff typically leaves a lot to be desired. We have to navigate a store layout that was certainly not designed with our immediate needs in mind. We have to explain what we want to a floor clerk who seems to have at least a thousand other things they’d rather be doing. And we are not guaranteed that what we’re looking for will even be in stock.

But what if Amazon could make the transition seamless? What if they could pick up all the signals from our online activity and create a physical “experiential bubble” for us when we visited the nearest Amazon retail outlet?

Let me go back to my bike-purchasing analogy by way of example. Let’s say I need a new bike because I’m taking up triathlons. Amazon knows this because my online activity has flagged me as an aspiring triathlete. They know where I live and they have a rich data set on my other interests, which includes my favored travel destinations. Amazon could take this data and, under the pretext of my picking up my bike, create a personalized in-store experience for me, including a rich selection of potential add-on sales. With Amazon’s inventory and fulfillment prowess, it would be possible to merchandise a store especially for me.

I have no idea if this is what Amazon has “in store” for the future, but the possibility is tantalizing.

It may even make me like shopping.

We’re Informed. But Are We Thoughtful?

I’m a bit of a jerk when I write. I lock myself behind closed doors in my home office. In the summer, I retreat to the most remote reaches of the back yard. The reason? I don’t want to be interrupted with human contact. If I am interrupted, I stare daggers through the interrupter and answer in short, clipped sentences. The house has to be silent. If conditions are less than ideal, my irritation is palpable. My family knows this. The warning signal is “Dad is writing.” This can be roughly translated as “Dad is currently an asshole.” The more I try to be thoughtful, the bigger the ass I am.

I suspect Henry David Thoreau was the same. He went even further than my own backyard exile, spending two years alone in a cabin he built on Ralph Waldo Emerson’s land at Walden Pond. He said things like,

“I never found a companion that was so companionable as solitude.”

But Thoreau was also a pretty thoughtful guy, who advised us that,

“As a single footstep will not make a path on the earth, so a single thought will not make a pathway in the mind. To make a deep physical path, we walk again and again. To make a deep mental path, we must think over and over the kind of thoughts we wish to dominate our lives.”

But, I ask, how can we be thoughtful when we are constantly distracted by information? Our mental lives are full of single footsteps. Even if we intend to cover the same path more than once, there are a thousand beeps, alerts, messages, prompts, pokes and flags that are beckoning us to start down a new path, in a different direction. We probably cover more ground, but I suspect we barely disturb the fallen leaves on the paths we take.

I happen to do all my reading on a tablet. I do this for three reasons: first, I always have my entire library with me, and I usually have four books on the go at the same time (currently 1491, Reclaiming Conversation, Flash Boys and 50 Places to Bike Before You Die); second, I like to read before I go to sleep and I don’t need to keep a light on that keeps my wife awake; and third, I like to highlight passages and make notes. But there’s a trade-off I’ve had to make. I don’t read as thoughtfully as I used to. I can’t “escape” with a book anymore. I am often tempted to check email, play a quick game of 2048 or search for something on Google. Maybe the fact that my attention is always divided amongst four books is part of the problem. Or maybe it’s that I’m more attention-deficient than I used to be.

There is a big difference between being informed and being thoughtful. And our connected world is definitely biased toward information. Being connected is all about being informed. But being thoughtful requires us to remove distraction. It demands the deep paths Thoreau was referring to. And it requires a very different mindset. The brain is a single-purpose engine. We can either be informed or be thoughtful. We can’t be both at the same time.

At the University of California, San Francisco, Mattias Karlsson and Loren Frank found that rats need two very different types of cognitive activity when mastering a maze. First, when they explore a maze, certain parts of their brain are active as they’re being “informed” about their new environment. But they don’t master the maze unless they’re allowed downtime to consolidate the information into new persistent memories. Different parts of the brain are engaged, including the hippocampus. They need time to be thoughtful and create a “deep path.”

In this instance, we’re not all that different from rats. In his research, MIT’s Alex “Sandy” Pentland found that effective teams tend to cycle through two very different phases: First, they explore, gathering new information. Then, just like the thoughtful rats, they engage as a group, taking that information, digesting it and synthesizing it for future execution. Pentland found that while both are necessary, they don’t exist at the same time:

“Exploration and engagement, while both good, don’t easily coexist, because they require that the energy of team members be put to two different uses. Energy is a finite resource.”

Ironically, research is increasingly showing that our previous definitions of cognitive activity may have been off the mark. We always assumed that “mind-wandering” or “day-dreaming” was a non-productive activity. But we’re finding out that it’s an essential part of being thoughtful. We’re actually not “wandering.” It’s just the brain’s way of synthesizing and consolidating information. We’re wearing deeper paths in the by-ways of our mind. But a constant flow of new information, delivered through digital channels, keeps us from synthesizing the information we already have. Our brain is too busy being informed to be able to make the switch to thoughtfulness. We don’t have enough cognitive energy to do both.

What price might we pay for being “informed” at the expense of being “thoughtful?” It appears that it might be significant. Technology distraction in the classroom could lower grades by close to 20 percent. And you don’t even have to be the one using the device. Just having an open screen in the vicinity might distract you enough to drop your report card from a “B” to a “C.”

Having read this, you now have two choices. You could click off to the next bit of information. Or, you could stare into space for a few minutes and be lost in your thoughts.

Choose wisely.

Luddites Unite…

Throw off the shackles of technology. Rediscover the true zen of analog pleasures!

The Hotchkisses had a tech-free Christmas holiday – mostly. The most popular activity around our home this year was adult coloring. Whodathunkit?

There were no electronic gadgets, wired home entertainment devices or addictive apps exchanged. No personal tech, no connected platforms, no internet of things (with one exception). There were small appliances, real books printed on real paper, various articles of clothing – including designer socks – and board games.

As I mentioned, I did give one techie gift, but with a totally practical intention. I gave everyone Tiles to keep track of the crap we keep losing with irritating regularity. Other than that, we were surprisingly low tech this year.

Look, I’m the last person in the world who could be considered a digital counter-revolutionary. I love tech. I eat, breathe and revel in stuff that causes my wife’s eyes to repeatedly roll. But this year – nada. Not once did I sit down with a Chinglish manual that told me, “When the unit not work, press ‘C’ and hold on until you hear (you should loose your hands after you hear each sound).”

This wasn’t part of any pre-ordained plan. We didn’t get together and decide to boycott tech this holiday. We were just technology fatigued.

Maybe it’s because technology is ceasing to be fun. Sometimes, it’s a real pain in the ass. It nags us. It causes us to fixate on stupid things. It beeps and blinks and points out our shortcomings. It can lull us into catatonic states for hours on end. And this year, we just said “Enough!” If I’m going to be catatonic, it’s going to be at the working end of a pencil crayon, trying to stay within the lines.

Even our holiday movie choice was anti-tech, in a weird kind of way. We, along with the rest of the world, went to see Star Wars: The Force Awakens. Yes, it’s a sci-fi movie, but no one is going to see this movie for its special effects or CGI gimcrackery. Like the best space opera entries, we want to get reacquainted with the people in the story. The Force Awakens’ appeal is that it is a long-awaited (32 years!) family reunion. We want to see if Luke Skywalker got bald and fat, despite the force stirring within him.

I doubt that this is part of any sustained move away from tech. We are tech-dependent. But maybe that’s the point. It used to be that tech gadgets separated us from the herd. They made us look coolly nerdish and cutting edge. But when the whole world is wearing an Apple Watch, the way to assert your independence is to use a pocket watch. Or maybe a sundial.

And you know what else we discovered? Turning away from tech usually means you turn towards people. We played board games together – actual board games, with cards and dice and boards that were made of pasteboard, not integrated circuits. We were in the same room together. We actually talked to each other. It was a form of communication that – for once – didn’t involve keyboards, emojis or hashtags.

I know this was a fleeting anomaly. We’re already back to our regular tech-dependent habits, our hands nervously seeking the nearest connected device whenever we have a millisecond to spare.

But for a brief, disconnected moment, it was nice.

A New Definition of Order

The first time you see the University of Texas at Austin’s AIM traffic management simulator in action, you can’t believe it would work. It shows the intersection of two 12-lane, heavily trafficked roads. There are no traffic lights, no stop signs, none of the traffic control systems we’re familiar with. Yet traffic zips through with an efficiency that’s astounding. It appears to be total chaos, but no cars have to wait more than a few seconds to get through the intersection and there’s nary a collision in sight. Not even a minor fender bender.

Oh, one more thing. The model depends on there being no humans to screw things up. All the vehicles are driverless. In fact, if just one of the vehicles had a human behind the wheel, the whole system would slow dramatically. The probability of an accident would also soar.

The thing about the simulation is that there is no order – or, at least, there is no order that is apparent to the human eye. The programmers at UT Austin seem to recognize this with a tongue-in-cheek nod to our need for rationality. This particular video clip is called “insanity.” There are other simulation videos available at the project’s website, including ones where humans drive cars at intersections controlled by stoplights. These seem much saner and more controlled. They’re also much less efficient. And likely more dangerous. No simulation that includes a human factor comes even close to matching the efficiency of the 100% autonomous option.

The AIM simulation is complex, but it isn’t complicated. It’s actually quite simple. As cars approach the intersection, they signal to a central “manager” if they want to turn or go straight ahead. The manager predicts whether the vehicle’s path will intersect another vehicle’s predicted path. If it does, it delays the vehicle slightly until the path is clear. That’s it.
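That manager logic is simple enough to sketch in a few lines of Python. This is a toy model, not the actual AIM code – the real system reserves space-time “tiles” across the intersection, and every name below is illustrative:

```python
# Toy sketch of a reservation-based intersection manager. A "path"
# is the list of grid cells a vehicle would occupy on consecutive
# ticks; the manager grants the earliest conflict-free start time.

class IntersectionManager:
    def __init__(self):
        self.reserved = set()  # (cell, tick) pairs already granted

    def request(self, path, arrival_tick):
        """Return the smallest delay at which `path` fits, and reserve it."""
        delay = 0
        while True:
            tiles = {(cell, arrival_tick + delay + i)
                     for i, cell in enumerate(path)}
            if not (tiles & self.reserved):  # no overlap with prior grants?
                self.reserved |= tiles       # lock in the reservation
                return delay
            delay += 1                       # otherwise try one tick later

mgr = IntersectionManager()
# Two vehicles whose straight-through paths cross at cell (2, 2):
print(mgr.request([(2, 0), (2, 1), (2, 2), (2, 3)], arrival_tick=0))  # 0
print(mgr.request([(0, 2), (1, 2), (2, 2), (3, 2)], arrival_tick=0))  # 1
```

Because every vehicle reports its intended path in advance, the manager needs no lights or signs; it only has to keep crossing reservations from overlapping in time, which is why the second car here is delayed a single tick rather than stopped.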

The complexity comes in trying to coordinate hundreds of these paths at any given moment. The advantage the automated solution has is that it is in communication with all the vehicles. What appears chaotic to us is actually highly connected and coordinated. It’s fluid and organic. It has a lot in common with things like beehives, ant colonies and even the rhythms of our own bodies. It may not be orderly in our rational sense, but it is natural.

Humans don’t deal very well with complexity. We can’t keep track of more than a dozen or so variables at any one time. We categorize and “chunk” data into easily managed sets that don’t overwhelm our working memory. We always try to simplify things down by imposing order. We use heuristics when things get too complex. We make gut calls and guesses. Most of the time, it works pretty well, but this system gets bogged down quickly. If we pulled the family SUV into the intersection shown in the AIM simulation, we’d probably jam on the brakes and have a minor mental meltdown as driverless cars zipped by us.

Artificial intelligence, on the other hand, loves complexity. It can juggle amounts of disparate data that humans could never dream of managing. This is not to say that computers are more powerful than humans. It’s just that they’re better at different things. It’s referred to as Moravec’s Paradox: It’s relatively easy to program a computer to do what a human finds hard, but it’s really difficult to get it to do what humans find easy. Tracking the trajectories and coordinating the flow of hundreds of autonomous cars would fall into the first category. Understanding emotions would fall into the second category.

This matters because, increasingly, technology is creating a world that is more dynamic, fluid and organic. Order, from our human perspective, will yield to efficiency. And the fact is that – in data rich environments – machines will be much better at this than humans. Just like our perspectives on driving, our notions of order and efficiency will have to change.

This Message Brought to You by … Nobody

“People talk about the digital revolution. I think it’s an apocalypse.”
George Nimeh – What If There Was No Advertising? – TEDx Vienna 2015

A bigger part of my world is becoming ad-free. My TV viewing is probably 80% ad-free now. Same with my music listening. Together, that costs me about $20 per month. It’s a price I don’t mind paying.

But what if we push that to its logical extreme? What if we made the entire world ad-free? Various publications and ad-tech providers have posited that scenario. It’s actually interesting to see the two very different worlds that are conjectured, depending on what side of the church you happen to be sitting on. When that view comes from those in the ad biz, a WWA (World Without Advertising) is a post-apocalyptic hell with ex-copywriters (of which I’m one) walking around as jobless zombies and the citizens of the world being squeezed penniless by exploding subscription rates. Our very society would crumble around our ears. And, for some reason, a WWA is always colored in various shades of desaturated grey, like Moscow circa 1982 or Apple’s Big Brother ad.

But those from outside our industry take a less alarming view of a WWA. This, they say, might actually work. It could be sustainable. It would probably be a more pleasant place.

Let’s do a smell test of the economics. According to eMarketer, the total ad-spend in the US for this year is $189 billion. That works out to just shy of $600 per year for each American, or $1550 for the average household. If we look at annual expenditures for the typical American family, that would put it somewhere between clothing and vehicle insurance. It would represent 2.8% of their total expenditures. A little steep, perhaps, but not out of the question.

Okay, you say. That’s fine for a rich country like the US. But what about the rest of the world? Glad you asked. The projected advertising spend worldwide – again according to eMarketer – is $592 billion, or about $84 for every single person on the planet. The average global income is about $10,000 per year. So, globally, eliminating advertising would take about 0.84% of your income. In other words, if you worked until January 3rd, you’d get to enjoy the rest of the year ad-free!
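The arithmetic is easy to sanity-check. The spend totals below are eMarketer’s; the population and household counts are rough 2015 estimates I’ve supplied for the check, which is why the worldwide figure lands at $81 rather than $84:

```python
# Back-of-the-envelope check of the per-capita ad-spend figures.
# Spend totals are eMarketer's; populations are rough 2015 estimates.
us_spend, us_pop, us_households = 189e9, 319e6, 124e6
world_spend, world_pop = 592e9, 7.3e9
avg_global_income = 10_000  # dollars per year

print(round(us_spend / us_pop))         # 592 -- "just shy of $600" per American
print(round(us_spend / us_households))  # 1524 -- roughly $1,550 per household

per_capita = world_spend / world_pop
print(round(per_capita))                # 81 -- "about $84" per person worldwide
print(round(per_capita / avg_global_income * 365, 1))  # 3.0 -- days of work
```

The small gap between $81 and $84 just reflects the assumed world population; either way, the “work until January 3rd” framing holds.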

So let’s say we agree that this is a price we’re willing to pay. What would an America without advertising look like? How would we support content providers, for example? Paying a few one-off subscriptions, like Netflix and Spotify, is not that big a deal, but if you multiply that by every potential content outlet, it quickly becomes unmanageable.

This could easily be managed by the converging technologies of personalization engines, digital content delivery, micro-payments and online payment solutions like Apple Pay. Let’s imagine we have a digital wallet where we keep our content consumption budget. The wallet is a smart wallet, in that it knows our personal tastes and preferences. Each time we access content, it automatically pays the producer for it and tracks our budget to ensure we’re staying within preset guidelines. The ecosystem of this content marketplace would be complex, true, but the technology exists. And it can’t be any more complex than the current advertising marketplace.

A WWA would be a less cluttered and less interruptive place. But would it also be a better place? Defenders of the ad biz generally say that advertising nets out as a plus for our society. It creates awareness of new products, builds appreciation for creativity and generally adds to our collective well-being.

I’m not so sure. I’ve mentioned before that I suspect advertising may be inherently evil. I know it persuades us to buy stuff we may desire, but certainly don’t need. I have no idea what our society would be like without advertising, but I have a hard time imagining we’d be worse off than we are now.

The biggest problem, I think, is the naiveté of this hypothetically ad-free world. Content will still have to be produced. And if the legitimized ad channel is removed, I suspect things will simply go underground. Content producers will be offered kickbacks to work commercial content into supposedly objective channels. Perhaps I’m just being cynical, but I’d be willing to place a fairly large bet on the bendability of the morals of the marketing community.

Ultimately, it comes down to sustainability. Let’s not forget that about a third of all Americans are using ad blockers, and that percentage is rising rapidly. When I test the ideological waters of the people whose opinions I trust, there is no good news for the current advertising ecosystem. We all agree that advertising is in bad shape. It’s just the severity of the prognosis that differs – ranging from a chronic but gradually debilitating condition to the land of the walking dead. A world without advertising may be tough to imagine, but a world that continues to prop up the existing model is even more unlikely.

Giving Thanks for The Law of Accelerating Returns

For the past few months, I’ve been diving into the world of show programming again, helping MediaPost put together the upcoming Email Insider Summit up in Park City. One of the keynotes for the Summit, delivered by Charles W. Swift, VP of Strategy and Marketing Operations for Hearst Magazines, is going to tackle a big question, “How do companies keep up with the ever accelerating rate of change of our culture?”

After an initial call with Swift, I did some homework and reacquainted myself with Ray Kurzweil’s Law of Accelerating Returns. Shortly after, I had to stop because my brain hurt. Now, I would like to pass that unique experience along to you.

In an interview that is now 12 years old, Kurzweil explained the concept using biological evolution as an analogy. I’ll try to make this fast. Earth is about 4.6 billion years old. The very first life appeared about 3.8 billion years ago. It took another 1.7 billion years for multicellular life to appear. Then, about 1.6 billion years after that, we had something called the Cambrian Explosion. This was really when the diversity of life we recognize today started. If you’ve been keeping track, you know that it took the earth 4.1 of its 4.6 billion years of history, or about 90% of the time since the earth was formed, to produce complex life forms of any kind.

Things started to move much more quickly at that point. Amphibians and reptiles appeared about 350 million years ago, dinosaurs appeared 225 million years ago, mammals 200 million years ago, dinosaurs disappeared about 70 million years ago, the first great apes appeared about 15 million years ago and we Homo sapiens have only been around for 200,000 years or so. And, as a species, we really have only made much of a dent in the world in the last 10,000 years of our history. In the entire history of the world, that represents a very tiny 0.00022% slice. But consider how much the world has changed in that 10,000 years.
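Those slivers of time check out, using the ages quoted above:

```python
# Verifying the timeline arithmetic from the paragraph above.
earth_age = 4.6e9          # years since the earth formed
complex_life_wait = 4.1e9  # years before complex life appeared
human_impact = 10_000      # years since humans "made a dent"

print(f"{complex_life_wait / earth_age:.0%}")  # 89% -- "about 90%"
print(f"{human_impact / earth_age:.5%}")       # 0.00022% -- the tiny slice
```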

Kurzweil’s Law says that, like biology, technology also evolves exponentially. It took us a very long time to do much of anything at all. The wheel, stone tools and fire took us tens of thousands of years to figure out. But now, technological paradigm shifts happen in decades or less. And the pace keeps accelerating. The Law of Accelerating Returns states that in the first 20 years of the 21st century, we’ll have progressed as much as we did during the entire 20th century. Then we’ll double that progress again by 2034, and double it once more by 2041.

Let me put this in perspective. At this rate, if my youngest daughter – born in 1995 – lives to be 100 (not an unlikely forecast), she will see more technological change in her life than occurred in the previous 20,000 years of human history!
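To see why, take the milestones above at face value – one 20th-century’s worth of progress by 2020, another by 2034 (14 years), another by 2041 (7 years) – and naively extrapolate, halving the doubling interval each time. This is a crude sketch, not Kurzweil’s actual model:

```python
# Naive extrapolation: progress keeps doubling, and each doubling
# interval is half the previous one (14 yrs, 7 yrs, 3.5 yrs, ...).
year, interval, centuries = 2020.0, 14.0, 1
while centuries < 200:   # 20,000 years = 200 "20th centuries" of progress
    year += interval
    interval /= 2
    centuries *= 2
print(round(year, 1), centuries)  # 2047.9 256
```

Under this toy extrapolation, 20,000 years’ worth of change accrues before 2050 – comfortably within a lifetime that runs to 2095.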

This is one of those things we probably don’t think about because, frankly, it’s really hard to wrap your head around. The math shows why predictability is flying out the window and why we have to get comfortable reacting to the unexpected. It would also be easy to dismiss, but Kurzweil’s concepts are sound. Evolution does accelerate exponentially, as has our rate of technological advancement. Unless the latter shows a dramatic reversal or slowdown, the future will move much, much faster than we can possibly imagine.

The reason change accelerates is that the technology we develop today builds the foundations required for the technological leaps that will happen tomorrow. Agriculture set the stage for industry. Industry enabled electricity. Electricity made digital technology possible. Digital technology enables nanotechnology. And so on. Each advancement sets the stage for the next, and we progress from stage to stage more rapidly each time.

So, for your extended long weekend, if you’re sitting in a turkey-induced tryptophan daze and there’s no game on, try wrapping your head around The Law of Accelerating Returns.

Happy Thanksgiving. You’re welcome.

Basic Instincts and Attention Economics

We’ve been here before. Something becomes valuable because it’s scarce. The minute society agrees on the newly assigned value, wars begin because of it. Typically these things have been physical. And the battle lines have been drawn geographically. But this time is different. This time, we’re fighting over attention – specifically, our attention – and the battle is between individuals and corporations. Do we, as individuals, have the right to choose what we pay attention to? Or do the creators of content own our attention, free to harvest it at will? This is the question that is rapidly dismantling the entire advertising industry. It has been debated at length here at MediaPost and pretty much every other publication everywhere.

I won’t join in the debate at this time. The reality here is that we do control our attention and the advertising industry was built on a different premise of scarcity from a different time. It was built on a foundation of access and creation, when both those things were in short supply. By creating content and solving the physical problem of giving us access to that content, the industry gained the right to ask us to watch an ad. No ads, no content. It was a bargain we agreed to because we had no other choice.

The Internet then proceeded to blow that foundation to smithereens.

By removing the physical constraints that restricted both the creation and distribution of content, technology has also erased the scarcity. In fact, the balance has been forever tipped the other way. We now have access to so much content that we don’t have enough attention to digest it all. Viewed in this light, the debate around ad blockers seems hopelessly out of touch. Accusing someone of stealing content is like accusing someone of stealing air. The anti-blocking side is trying to apply the economic rationale of a market that no longer exists.

So let us accept the fact that we are the owners of our own attention, and that it is a scarce commodity. That makes it valuable. My point is that we should pay more attention to how we pay attention. If the new economy is going to be built on attention, we should treat it with more respect.

The problem here is that we have two types of attention, the same as we have two types of thinking: Fast and Slow. Our slow attention is our focused, conscious attention. It is the attention we pay when we’re reading a book, watching a video or talking to someone. We consciously make a choice when we pay this type of attention. Think of it like a spotlight we shine on something for an extended period of time.

It’s the second type of attention, fast attention, which is typically the target of advertising. It plays on the edge of our spotlight, quickly and subconsciously monitoring the environment so it can swing the spotlight of conscious attention if required. Because this type of attention operates below the level of rational thought, it is controlled by base instincts. It’s why sex works in advertising. It’s why Kim Kardashian can repeatedly break the Internet. It’s why Donald Trump is leading the Republican race. And it’s why adorable Asian babies wearing watermelons can go viral.

It’s this type of attention that really determines the value of the attention economy. It’s the gatekeeper that determines how slow attention is focused. And it’s here where we may need some help. I don’t think instincts developed 200,000 years ago are necessarily the best guide for how we should invest something that has become so valuable. We need a better yardstick than simple titillation for determining where our attention should be spent.

I expect the death throes of the previous access economy to go on for some time. The teeth-gnashing of the advertising industry will capture a lot of attention. But the end is inevitable. The economic underpinnings are gone, so it’s just a matter of time before the superstructures built on top of them collapse. In my opinion, we should just move on and think about what the new world will look like. If attention is the new currency, what is the smartest way to spend it?

Consumers in the Wild

Once a Forager, Always a Forager

Your world is a much different place than the African savanna. But the more than 100,000 generations of evolution that started on those plains still dictate a remarkable degree of our modern behavior.

Take foraging, for example. We evolved as hunters and gatherers. It was our primary survival instinct. And even though the first hominids are relatively recent additions to the biological family tree, strategies for foraging have been developing for millions and millions of years. It’s hardwired into the deepest and most inflexible parts of our brain. It makes sense, then, that foraging instincts that were once reserved for food gathering should be applied to a wide range of our activities.

That is, in fact, what Peter Pirolli and Stuart Card discovered two decades ago. When they looked at how we navigated online sources of information, they found that humans used the very same strategy we would have used for berry picking or gathering cassava roots. And one of the critical elements of this was something called Marginal Value.

Bounded Rationality & Foraging

It’s hard work being a forager. Most of your day – and energy – is spent looking for something to eat. The sparser the food sources in your environment, the more time you spend looking for them. It’s not surprising, therefore, that we should have some fairly well-honed calculations for assessing the quality of our food sources. This is what the biologist Eric Charnov formalized in 1976 as the Marginal Value Theorem. It describes an instinctual (and therefore largely subconscious) evaluation of food “patches” by most types of foragers, humans included. It’s how our brain decides whether we should stay where we are or move on to another patch. It would have been a very big deal 2 million – or even 100,000 – years ago.
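Charnov’s idea can be made concrete with a little arithmetic. The sketch below is purely illustrative – the diminishing-returns gain curve, its parameters, and the brute-force search are my own assumptions, not anything from Charnov’s paper – but it captures the core rule: leave a patch when the rate of return there drops to the average rate you could get elsewhere, travel time included.

```python
import math

def gain(t, g_max=10.0, rate=0.5):
    """Food extracted after t minutes in a patch (assumed
    diminishing-returns curve; parameters are illustrative)."""
    return g_max * (1 - math.exp(-rate * t))

def optimal_leave_time(travel_time, step=0.01, horizon=30.0):
    """Charnov's rule, by brute force: pick the in-patch time t
    that maximizes overall intake rate gain(t) / (travel + t)."""
    best_t, best_rate = step, 0.0
    t = step
    while t <= horizon:
        r = gain(t) / (travel_time + t)
        if r > best_rate:
            best_rate, best_t = r, t
        t += step
    return best_t

# The classic prediction: the farther apart the patches,
# the longer a forager should stay in each one.
print(optimal_leave_time(1.0))  # shorter stay
print(optimal_leave_time(5.0))  # longer stay
```

The interesting part is the prediction, not the numbers: as travel cost between patches rises, the optimal time spent in each patch rises too – exactly the trade-off a web user makes, subconsciously, when deciding whether to keep reading a page or click away to another one.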

Today, for most of us, food sources are decidedly less “patchy.” But old instincts die hard. So we did what humans do: we borrowed an old instinct and applied it to new situations. We exapted our foraging strategies and started using them for a wide range of activities where we needed a rough-and-ready estimate of the return on our energy investment. More and more of these activities asked for an investment of cognitive processing power. And we did all this without knowing we were doing it.

This brings us to Herbert Simon’s concept of Bounded Rationality. I believe this is tied directly to Charnov’s Marginal Value Theorem. When we calculate how much mental energy we’re going to expend on an information-gathering task, we subconsciously weigh the promise of the information “patches” available to us. Then we decide to invest accordingly, based on our own “bounded” rationality.

Brands as Proxies for Foraging

It’s just this subconscious calculation that has turned the world of consumerism on its ear in the last two decades. As Itamar Simonson and Emanuel Rosen explain in their book Absolute Value, the explosion of information available has meant that we are making different marginal value calculations than we would have thirty or forty years ago. We have much richer patches available, so we’re more likely to invest the time to explore them. And, once we do, the way we evaluate our consumer choices changes completely. Our modern concept of branding was a direct result of both bounded rationality and sparse information patches. If a patch of objective and reliable information wasn’t apparent, we would rely on brands as a cognitive shortcut, saving our bounded rationality for more promising tasks.

Google, The Ultimate “Patch”

In understanding modern consumer behavior, I think we have to pay much more attention to this idea of marginal value. What is the nature of the subconscious algorithm that decides whether we’re going to forage for more information or rely on our brand beliefs? We evolved foraging strategies that play a huge part in how we behave today.

For example, the way we navigate our physical environment appears to owe much to how we used to search for food. Women determine where they’re going differently than men do, because men and women once searched for food differently. Men tend to navigate by orientation, maintaining a mental spatial grid against which they plot their own location. Women do it by remembering routes. In my own research, I found split-second differences in how men and women navigated websites that seem to trace back to those same foundations.

Whether you’re a man or a woman, however, you need some type of mental inventory of the information patches available to you in order to assess their marginal value. This is the mental landscape Google plays in. For more and more decisions, our marginal value calculation starts with a quick Google search to see if any promising patches show up in the results. Even our need to keep a mental inventory of patches can be offloaded to Google.

It seems ironic that in our current environment, more and more of our behavior can be traced back millions of years to behaviors that evolved in a world where high-tech meant a sharper rock.

Do We Really Want Virtual Reality?

Facebook bought Oculus. Their goal is to control the world you experience while wearing a pair of modified ski goggles. Mark Zuckerberg is stoked. Netflix is stoked. Marketers the world over are salivating. But, how should you feel about this?

Personally, I’m scared. I may even be terrified.

First of all, I don’t want anyone, especially not Mark Zuckerberg, controlling my sensory world.

Secondly, I’m pretty sure we’re not built to be virtually real.

I understand the human desire to control our environment. It’s part of the human hubris. We think we can do a better job than nature. We believe introducing control and predictability into our world is infinitely better than depending on the caprices of nature. We’ve thought so for many thousands of years. And – Oh Mighty Humans Who Dare to be Gods – just how is that working out for us?

Now that we’ve completely screwed up our physical world, we’re building an artificial version. Actually, it’s not really “we” – it’s “they.” And “they” are for-profit organizations that see an opportunity. “They” are only doing it so “they” can control our interface to consciousness.

Personally, I’m totally comfortable giving a profit-driven corporation control over my senses. I mean, what could possibly happen? I’m sure anything they might introduce to my virtual world will be entirely for my benefit. I’m sure they would never take the opportunity to use this control to add to their bottom line. If you need proof, look at how altruistically media – including the Internet – have evolved under the stewardship of corporations.

Now, their response would be that we can always decide to take the goggles off. We stay in control, because we have an on/off switch. What they don’t talk about is the fact that they will do everything in their power to keep us from switching their VR world off. It’s in their best interest to do so – and by best interest, I mean that the more time we spend in their world, as opposed to the real one, the more profitable it is for them. They can hold our senses hostage and demand ransom in any form they choose.

How will they keep us in their world? By making it addictive. And this brings us to my second concern about Virtual Reality – we’re just not built for it.

We have billions of neurons that are dedicated to parsing and understanding a staggeringly complex and dynamic environment. Our brain is built to construct a reality from thousands and thousands of external cues. To manage this, it often takes cognitive shortcuts to bring the amount of processing required down to a manageable level. We prefer pleasant aspects of reality. We are alerted to threats. Things that could make us sick disgust us. The brain manages the balance by a judicious release of neurochemicals that make us happy, sad, disgusted or afraid. Emotions are the brain’s way of effectively guiding us through the real world.

A virtual world, by necessity, will have a tiny fraction of the inputs we find in the real world. Our brains will get an infinitesimal slice of the sensory bandwidth they’re used to. Further, what inputs they do get will have the subtlety of a sledgehammer. Ham-fisted programmers will try to push our emotional hot buttons, all in the search for profit. This means a few sections of our brain will be cued far more frequently and violently than they were ever intended to be, while huge swaths of our environmental-processing circuits remain dormant for extended periods of time. I’m not a neurologist, but I can’t believe that will be good for our cognitive health.

We were built to experience the world fully, through all our senses. We have evolved to deal with a dynamic, complex and often unexpected environment. We are supposed to interact with the serendipity of nature. It is what it means to be human. I don’t know about you, but I never, ever want to auction off this incredible gift to a profit-driven corporation in return for a plastic, programmed, three-dimensional interface.

I know this plea is too late. Pandora’s box has been opened. The barn door is open. The horse is long gone. But like I said, I’m scared.

Make that terrified.