This Message Brought to You by … Nobody

“People talk about the digital revolution. I think it’s an apocalypse.”
George Nimeh – What If There Was No Advertising? – TEDx Vienna 2015

 

A bigger part of my world is becoming ad-free. My TV viewing is probably 80% ad-free now. Same with my music listening. Together, that costs me about $20 per month. It’s a price I don’t mind paying.

But what if we push that to its logical extreme? What if we made the entire world ad-free? Various publications and ad-tech providers have posited that scenario. It’s actually interesting to see the two very different worlds that are conjectured, depending on what side of the church you happen to be sitting on. When that view comes from those in the ad biz, a WWA (World Without Advertising) is a post-apocalyptic hell with ex-copywriters (of which I’m one) walking around as jobless zombies and the citizens of the world being squeezed penniless by exploding subscription rates. Our very society would crumble around our ears. And, for some reason, a WWA is always colored in various shades of desaturated grey, like Moscow circa 1982 or Apple’s Big Brother ad.

But those from outside our industry take a less alarming view of a WWA. This, they say, might actually work. It could be sustainable. It would probably be a more pleasant place.

Let’s do a smell test of the economics. According to eMarketer, the total ad-spend in the US for this year is $189 billion. That works out to just shy of $600 per year for each American, or $1550 for the average household. If we look at annual expenditures for the typical American family, that would put it somewhere between clothing and vehicle insurance. It would represent 2.8% of their total expenditures. A little steep, perhaps, but not out of the question.
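
The per-person and per-household figures above are easy to sanity-check. This is a back-of-envelope sketch; the population, household count and household-spending numbers are my rough 2015 estimates, not eMarketer’s:

```python
# Back-of-envelope check of the US ad-spend figures.
# Population, household count and household spending are
# approximate 2015 estimates (assumptions, not from eMarketer).
ad_spend = 189e9          # total US ad spend, USD
population = 318e6        # approx. US population
households = 122e6        # approx. US households
household_spend = 55_000  # approx. annual household expenditures, USD

per_person = ad_spend / population       # just shy of $600
per_household = ad_spend / households    # about $1,550
share = per_household / household_spend  # about 2.8% of expenditures

print(f"${per_person:,.0f} per person, ${per_household:,.0f} per household, {share:.1%}")
```

The numbers come out very close to those cited in the column, which suggests the eMarketer-derived figures hold together.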

Okay, you say. That’s fine for a rich country like the US. But what about the rest of the world? Glad you asked. The projected advertising spend worldwide – again according to eMarketer – is $592 billion, or about $84 for every single person on the planet. The average global income is about $10,000 per year. So, globally, eliminating advertising would take about 0.84% of your income. In other words, if you worked until January 3rd, you’d get to enjoy the rest of the year ad free!
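
The same arithmetic scales to the global figures. The world population used here (roughly 7.05 billion, circa 2015) is my assumption:

```python
# Worldwide version of the same back-of-envelope check.
global_spend = 592e9   # projected worldwide ad spend, USD (eMarketer)
world_pop = 7.05e9     # approx. world population (assumption)
avg_income = 10_000    # average global income, USD/year

per_capita = global_spend / world_pop   # about $84 per person
income_share = per_capita / avg_income  # about 0.84% of income
days_worked = income_share * 365        # about 3 days: January 3rd
```

Three days of work a year to buy out the entire global ad industry is the whole argument in one line.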

So let’s say we agree that this is a price we’re willing to pay. What would an America without advertising look like? How would we support content providers, for example? Paying a few one-off subscriptions, like Netflix and Spotify, is not that big a deal, but if you multiply that by every potential content outlet, it quickly becomes unmanageable.

This could easily be managed by the converging technologies of personalization engines, digital content delivery, micro-payments and online payment solutions like Apple Pay. Let’s imagine we have a digital wallet where we keep our content consumption budget. The wallet is a smart wallet, in that it knows our personal tastes and preferences. Each time we access content, it automatically pays the producer for it and tracks our budget to ensure we’re staying within preset guidelines. The ecosystem of this content marketplace would be complex, true, but the technology exists. And it can’t be any more complex than the current advertising marketplace.
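
The core of the smart wallet described above is simple enough to sketch. This is a minimal, hypothetical model (all names and prices are invented), showing just the pay-per-access and budget-enforcement behavior:

```python
# A minimal sketch of the "smart wallet": it micro-pays content
# producers per access and enforces a preset monthly budget.
# Producer names and per-item prices are purely illustrative.
class ContentWallet:
    def __init__(self, monthly_budget: float):
        self.monthly_budget = monthly_budget
        self.spent = 0.0
        self.history = []  # list of (producer, price) payments made

    def access(self, producer: str, price: float) -> bool:
        """Pay `producer` for one piece of content if the budget allows."""
        if self.spent + price > self.monthly_budget:
            return False  # over budget: decline, or prompt the user
        self.spent += price
        self.history.append((producer, price))
        return True

wallet = ContentWallet(monthly_budget=20.00)
wallet.access("video-episode", 0.25)
wallet.access("news-article", 0.05)
```

A real system would layer personalization and preference-learning on top, but the accounting at the center is this simple.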

A WWA would be a less cluttered and interruptive place. But would it also be a better place? Defenders of the ad biz generally say that advertising nets out as a plus for our society. It creates awareness of new products, builds appreciation for creativity and generally adds to our collective well-being.

I’m not so sure. I’ve mentioned before that I suspect advertising may be inherently evil. I know it persuades us to buy stuff we may desire, but certainly don’t need. I have no idea what our society would be like without advertising, but I have a hard time imagining we’d be worse off than we are now.

The biggest problem, I think, is the naiveté of this hypothetically ad-free world. Content will still have to be produced. And if the legitimized ad channel is removed, I suspect things will simply go underground. Content producers will be offered kickbacks to work commercial content into supposedly objective channels. Perhaps I’m just being cynical, but I’d be willing to place a fairly large bet on the bendability of the morals of the marketing community.

Ultimately, it comes down to sustainability. Let’s not forget that about a third of all Americans are using ad blockers, and that percentage is rising rapidly. When I test the ideological waters of the people whose opinions I trust, there is no good news for the current advertising ecosystem. We all agree that advertising is in bad shape. It’s just the severity of the prognosis that differs – ranging from a chronic but gradually debilitating condition to the land of the walking dead. A world without advertising may be tough to imagine, but a world that continues to prop up the existing model is even more unlikely.

 

Giving Thanks for The Law of Accelerating Returns

For the past few months, I’ve been diving into the world of show programming again, helping MediaPost put together the upcoming Email Insider Summit up in Park City. One of the keynotes for the Summit, delivered by Charles W. Swift, VP of Strategy and Marketing Operations for Hearst Magazines, is going to tackle a big question: “How do companies keep up with the ever-accelerating rate of change of our culture?”

After an initial call with Swift, I did some homework and reacquainted myself with Ray Kurzweil’s Law of Accelerating Returns. Shortly after, I had to stop because my brain hurt. Now, I would like to pass that unique experience along to you.

In an interview that is now 12 years old, Kurzweil explained the concept, using biological evolution as an analogy. I’ll try to make this fast. Earth is about 4.6 billion years old. The very first life appeared about 3.8 billion years ago. It took another 1.7 billion years for multicellular life to appear. Then, about 1.2 billion years later, we had something called the Cambrian Explosion. This was really when the diversity of life we recognize today started. If you’ve been keeping track, you know that it took the earth 4.1 of its 4.6-billion-year history, or about 90% of the time since the earth was formed, to produce complex life forms of any kind.

Things started to move much quicker at that point. Amphibians and reptiles appeared about 350 million years ago, dinosaurs appeared 225 million years ago, mammals 200 million years ago, dinosaurs disappeared about 70 million years ago, the first great apes appeared about 15 million years ago and we homo sapiens have only been around for 200,000 years or so. And, as a species, we really have only made much of a dent in the world in the last 10,000 years of our history. In the entire history of the world, that represents a very tiny 0.00022% slice. But consider how much the world has changed in that 10,000 years.

Accelerating Returns

Kurzweil’s Law says that, like biology, technology also evolves exponentially. It took us a very long time to do much of anything at all. The wheel, stone tools and fire took us tens of thousands of years to figure out. But now, technological paradigm shifts happen in decades or less. And the pace keeps accelerating. The Law of Accelerating Returns states that in the first 20 years of the 21st century, we’ll have progressed as much as we did during the entire 20th century. Then we’ll double that progress again by 2034, and double it once more by 2041.
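
The doubling schedule above can be sketched in a few lines. Kurzweil’s two stated milestones (2034, 2041) imply a doubling interval that halves each time; extending that halving indefinitely is my extrapolation, not his exact claim:

```python
# Progress measured in units of "one 20th century's worth".
# One unit by 2020, doubling by 2034, again by 2041 – and the
# doubling interval halving each time (an extrapolation).
year, interval = 2020.0, 14.0
progress = 1.0
milestones = [(2020, progress)]
while interval >= 1.0:
    year += interval
    progress *= 2.0
    milestones.append((round(year), progress))
    interval /= 2.0
```

The intervals collapse quickly: a century’s worth of progress in 14 years, then 7, then 3.5, and so on, which is why Kurzweil’s followers talk about a singularity.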

Let me put this in perspective. At this rate, if my youngest daughter – born in 1995 – lives to be 100 (not an unlikely forecast), she will see more technological change in her life than in the previous 20,000 years of human history!

This is one of those things we probably don’t think about because, frankly, it’s really hard to wrap your head around. The math shows why predictability is flying out the window and why we have to get comfortable reacting to the unexpected. It would also be easy to dismiss, but Kurzweil’s concepts are sound. Evolution does accelerate exponentially, as has our rate of technological advancement. Unless the latter shows a dramatic reversal or slowdown, the future will move much, much faster than we can possibly imagine.

The reason change accelerates is that the technology we develop today builds the foundations required for the technological leaps that will happen tomorrow. Agriculture set the stage for industry. Industry enabled electricity. Electricity made digital technology possible. Digital technology enables nanotechnology. And so on. Each advancement sets the stage for the next, and we progress from stage to stage more rapidly each time.

So, for your extended long weekend, if you’re sitting in a turkey-induced tryptophan daze and there’s no game on, try wrapping your head around The Law of Accelerating Returns.

Happy Thanksgiving. You’re welcome.

Step One. React.

We’re learning what it means to be forced to be reactive. Sometimes, as on Friday, November 13 at 9:20 pm CET in Paris, France, we react with horror. But that’s the new reality. Plan as we might, we can’t predict everything. Sometimes we can’t predict anything. We just have to make ourselves – in the words of Nassim Taleb – antifragile.

Why is the world less predictable? One reason could be that it’s more connected. Things happen faster. Actions and reactions are connected by wired milliseconds. The world has become one huge Rube Goldberg machine and anyone can put it in motion at any time.

I suspect the world is also more organic. The temporary stasis of human effort that used to hold nature at bay for a bit is giving way to a more natural ebb and flow. Artificial barriers and constraints, like national borders, have little meaning any more. We flow back and forth across geography. Hierarchies have become networks. Centralized planning yields to spontaneous emerging events. We are afloat in an ocean of unpredictability. It’s hard to steer a straight path in such an environment.

Because of these two things, the world is definitely more amplified. Small things become big things much faster. Implications can grow thunderous in mere seconds. Ripple effects become tsunamis.

We want predictability. We want control. We hate that our world can be thrown into a tailspin by 8 people who hate us and what we stand for. We want intelligence to be foolproof. We want detection to be flawless. But while we wish for these things with all our hearts, the reality is that we will be forced to react. This is the world we have built. The technology that makes it wonderful is the same technology that, in a span of 40 minutes (the time it took for all the attacks in Paris), can make it heart-achingly painful.

In a world where structures give way to flow – where straight lines blur into oscillating waves – what can we do?

First of all, we can continually improve our ability to react. We have to make sense of new events as quickly as possible. We have to adapt more rapidly. Our world has to be more sensitive, more flexible, more nimble. Again, with a head nod to Nassim Taleb, we have to know how to minimize the downside and maximize the upside.

Secondly, we have to rethink how our institutions work. They have to evolve for a new world. And this evolution will happen faster in the areas of greatest unpredictability. For an enlightening read, try Team of Teams by General Stanley McChrystal. As leader of the Joint Special Operations Task Force in Afghanistan and Iraq, he was at the eye of the storm of unpredictability. His lessons have gained a terrifying new relevance after the events of last Friday.

Finally, we have to hold fast to our values. They are the things that cannot – must not – change. They are the one constant that helps us set our bearings when we react and adapt. While plans are a constant “work in progress,” values must be rock solid.

For most of our recorded history, we have tried to understand the world and gain some sense of control over it. We have tried to push back chaos with order – replace jagged or fluid curves of nature with artificially straight lines. Ironically, the more we have imposed our rational will, the more our environment has become dynamic, networked, organic, reactive and complex – all the things the world has always been. The harder we try to set our own beat, the more we find ourselves moving to the timeless rhythm of nature.

And in that world, adaptation is the whole ball of wax.

 

Basic Instincts and Attention Economics

We’ve been here before. Something becomes valuable because it’s scarce. The minute society agrees on the newly assigned value, wars begin because of it. Typically these things have been physical. And the battle lines have been drawn geographically. But this time is different. This time, we’re fighting over attention – specifically, our attention – and the battle is between individuals and corporations. Do we, as individuals, have the right to choose what we pay attention to? Or do the creators of content own our attention and can they harvest it at will? This is the question that is rapidly dismantling the entire advertising industry. It has been debated at length here at MediaPost and pretty much every other publication everywhere.

I won’t join in the debate at this time. The reality here is that we do control our attention and the advertising industry was built on a different premise of scarcity from a different time. It was built on a foundation of access and creation, when both those things were in short supply. By creating content and solving the physical problem of giving us access to that content, the industry gained the right to ask us to watch an ad. No ads, no content. It was a bargain we agreed to because we had no other choice.

The Internet then proceeded to blow that foundation to smithereens.

By removing the physical constraints that restricted both the creation and distribution of content, technology has also erased the scarcity. In fact, the balance has been forever tipped the other way. We now have access to so much content that we don’t have enough attention to digest it all. Viewed in this light, the debate around ad blockers seems hopelessly out of touch. Accusing someone of stealing content is like accusing someone of stealing air. The anti-blocking side is trying to apply the economic rationale of a market that no longer exists.

So let us accept the fact that we are the owners of our own attention, and that it is a scarce commodity. That makes it valuable. My point is that we should pay more attention to how we pay attention. If the new economy is going to be built on attention, we should treat it with more respect.

The problem here is that we have two types of attention, the same as we have two types of thinking: Fast and Slow. Our slow attention is our focused, conscious attention. It is the attention we pay when we’re reading a book, watching a video or talking to someone. We consciously make a choice when we pay this type of attention. Think of it like a spotlight we shine on something for an extended period of time.

It’s the second type of attention, fast attention, that is typically the target of advertising. It plays on the edge of our spotlight, quickly and subconsciously monitoring the environment so it can swing the spotlight of conscious attention if required. Because this type of attention operates below the level of rational thought, it is controlled by base instincts. It’s why sex works in advertising. It’s why Kim Kardashian can repeatedly break the Internet. It’s why Donald Trump is leading the Republican race. And it’s why adorable Asian babies wearing watermelons can go viral.

It’s this type of attention that really determines the value of the attention economy. It’s the gatekeeper that determines how slow attention is focused. And it’s here where we may need some help. I don’t think instincts developed 200,000 years ago are necessarily the best guide for how we should invest something that has become so valuable. We need a better yardstick than simple titillation for determining where our attention should be spent.

I expect the death throes of the previous access economy to go on for some time. The teeth gnashing of the advertising industry will capture a lot of attention. But the end is inevitable. The economic underpinnings are gone, so it’s just a matter of time before the superstructures built on top of them will collapse. In my opinion, we should just move on and think about what the new world will look like. If attention is the new currency, what is the smartest way to spend it?

The Required Conditions for Innovation

Statistically speaking, it appears that there’s a correlation between atheism and innovation. But my point in last week’s column was not to show that atheists are more innovative. My goal was to try to hypothesize what the underlying causation might be. I don’t really care if atheists, or Buddhists – or Seventh Day Adventists for that matter – are more innovative. What does interest me, however, is what is unique about an environment in which both atheism and innovation can flourish.

Why am I so focused on innovation? Because innovation drives economic growth. It is the force that unleashes Schumpeterian Gales of Creative Destruction. In any formula measuring economic performance, Innovation always equals N. It’s a very big deal. The biggest deal.

That means the conditions that lead to innovation are worth noting. And I started on the national scale for a reason. Sometimes, it helps to change our perspective if we’re exploring the “why” of a question. Either pulling back to the macro or zooming in to the micro allows us to see things we may not see when we remain stuck in our current context. So, what can we learn about the conditions of innovation from the world’s most innovative countries?

Atheism = Innovation?

Let’s look at the atheist factor. How might a lack of religion lead to a surfeit of innovation? I think it may have something to do with belief – or rather – the lack of belief. When we believe something, we usually don’t go out of our way to prove it true. That also means we never find out it’s false. But nations that have a lot of atheists are not a very trusting lot, at least when it comes to things like government and other institutions. There was a moderately negative correlation (r=-0.4425). They are skeptics. And when it comes to innovation, skepticism is very healthy.

If we look back to Alex Pentland’s ideas about Social Physics, we need two types of social interactions: exploration and engagement. The first of these is where innovation comes from. And skeptics are more exploratory than the trustful. They probe the unknown rather than rely on their beliefs. As Pentland says in his book Social Physics, “If you can find many such independent thinkers and discover that there is a consensus among a large subset of them, then a really, really good trading strategy is to follow the contrarian consensus.”

Ideological Diversity

It’s not just skepticism, however, that drives innovation. It’s also ideological diversity. You need a social network that encompasses a lot of different experiences and points of view. The broader the spectrum of ideas, the more likely that you’ve captured something that approximates the truth somewhere in that spectrum. If you can trade monolithic beliefs for a healthy respect for ideas that may not mirror your own in your own organization, you’ve probably laid the groundwork for innovation.

Going Rogue

Not so many years ago, an organization I was part of called for a rather rushed retreat of all the executive management. We gathered in a posh ski resort for a brainstorming session. All the top managerial talent was present. The CEO took the stage and called on us to be innovative. But he, like most managers, did not encourage diversity. Rather, he believed unity was the way to innovate.

“No one can go rogue!” he preached from the corporate pulpit. In other words, “No one can disagree with me.”

If this CEO (who has since stepped down) had looked at countries like Sweden, or Japan, or South Korea, he might have realized that sometimes, “going rogue” is exactly what you need to come up with a new idea.

Are Atheists More Innovative?

A few columns back, I talked about the most innovative countries in the world, according to INSEAD, Johnson School of Management and WIPO. Switzerland, of all places, topped the list. At the time, I mentioned diversity as possibly being one of the factors. But for some reason, I just couldn’t let it lie there.

Last Friday afternoon, it being pretty miserable outside, I dusted off my Stats 101 prowess and decided to look for correlations. The next thing I knew, 3 hours had passed and I was earlobe deep in data tables and spreadsheets.

Yeah… that’s how I roll. That’s wassup.

But I digress. What initially sent me down this path was a new study out of the University of Kansas by Tien-Tsung Lee and co-authors Masahiro Yamamoto and Weina Ran. Working with data from Japan, they found that the amount of trust you have in media depends on the diversity of the community you live in. The more diverse the population, the lower the degree of trust in media.

This caught my attention – a negative correlation between trust and diversity. I wondered how those two things might triangulate with innovation. Was there a three-way link here?

So, I started compiling the data. First, I wanted to broaden the definition of innovation. Originally, I had cited the INSEAD Global Innovation Index. Bloomberg also has a ranking of innovation by country that uses a few different criteria. I decided to take an average, normalized score of the two together. In case you’re wondering, Switzerland scored much lower in the Bloomberg ranking, which had South Korea, Japan and Germany in the top three spots.

With my new innovation ranking, I then started to look for correlations. What part, for example, did trust play? According to Edelman, the global marketing giant, which publishes an annual trust barometer, it plays a massive role: “Building trust is essential to successfully bringing new products and services to market.” Their trust barometer measures trust in the infrastructural institutions of the respective countries. So I added Edelman’s indexed trust scores to my spreadsheet and used a quick and dirty Pearson r-value test to look for significant correlations. For those as rusty as I am when it comes to stats, a perfect correlation would be 1.0. Strong relationships show up in the 0.6 and above range. Moderate relationships are in the 0.3 to 0.6 range. Weak relationships are 0.3 and below. Zero values indicate no relationship. Inverse relationships follow the same scale but with negative values.
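
For anyone who wants to replicate the exercise, the Pearson r calculation and the strength bands described above look like this. The country data isn’t published with this column, so the two score lists here are invented stand-ins:

```python
# Pearson correlation coefficient, plus the strength bands
# described in the text. The trust/innovation scores below are
# hypothetical placeholders, not the real country data.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def strength(r):
    a = abs(r)
    if a >= 0.6:
        return "strong"
    if a >= 0.3:
        return "moderate"
    return "weak"

trust      = [80, 75, 60, 55, 40, 35]  # hypothetical indexed trust scores
innovation = [30, 40, 45, 60, 70, 85]  # hypothetical innovation scores
r = pearson_r(trust, innovation)       # negative, as found in the column
```

(Python 3.10+ also ships `statistics.correlation`, which does the same calculation.)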

The result? Not only was there no positive correlation, there was actually a moderately significant negative correlation! For those interested, the r-value was -0.4224. Based on this admittedly amateur analysis, trust in national institutions and innovation do not seem to go hand-in-hand. Some of the most innovative countries are the least trusting and vice-versa. It certainly wasn’t the neat linear relationship that Edelman implied in the press releases for its barometer.

Next, I turned to the obvious – the wealth of the respective nations. I added GDP per capita as a data point. Predictably, there was a strong positive correlation here – I came up with an r-value of 0.793. Rich countries are more innovative. Duh.

Now comes the really interesting part. What was the relationship between cultural diversity and innovation? If my original hypothesis was correct, there should be at least a moderate correlation here. The problem was trying to find an accurate measure of cultural diversity. I ended up using three measures from Alesina et al: Ethnic Fractionalization, Linguistic Fractionalization and Religious Fractionalization. I averaged these out and indexed them to give me a single score of cultural diversity. To my surprise, my hypothesis appeared to be significantly flawed – my r value was -0.2488.
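
The “average, then index” step described above is straightforward to reproduce. The fractionalization values here are invented placeholders (the real figures come from Alesina et al.), and min-max normalization is my assumption about how the indexing was done:

```python
# Average three fractionalization measures per country, then
# min-max normalize the averages into a single indexed diversity
# score. All values are invented placeholders.
measures = {
    # country: (ethnic, linguistic, religious) fractionalization, 0..1
    "A": (0.10, 0.20, 0.30),
    "B": (0.50, 0.40, 0.60),
    "C": (0.30, 0.10, 0.80),
}

def index_scores(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

avg = {c: sum(v) / len(v) for c, v in measures.items()}
indexed = dict(zip(avg, index_scores(list(avg.values()))))
```

One caveat with averaging first: it can mask exactly the kind of divergence between the three measures that the next paragraphs turn out to hinge on.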

But then I started analyzing the individual measures of diversity. Ethnic Diversity and Innovation showed a moderate negative correlation: -0.5738. Linguistic Diversity and Innovation showed a less significant negative correlation: -0.3886. But Religious Diversity and Innovation came up as a moderate positive correlation: 0.4129! Of the three, religion is the only measure of diversity that’s directly ideological, at least to some extent.

This seemed promising, so I pushed it to the extreme. If religious diversity is correlated with innovation, how would the prevalence of atheists relate? After all, this should be the ultimate measure of religious ideological freedom. So, using a combination of results from a worldwide Gallup survey and a study from Phil Zuckerman, I added an indexed “atheism” score. Sure enough, the r-value was 0.7461! This was almost as significant as the correlation between national wealth and innovation! Based on my combined innovation scores, some of the least religious countries in the world (Japan, Sweden and Switzerland) are the most innovative.

So – ignoring for a moment the barn-door sized holes in my impromptu methodology and a whack of confounding factors – what might this hypothetically mean? I’ll come back to this intriguing question in next week’s Online Spin.

Consumers in the Wild

Once a Forager, Always a Forager

Your world is a much different place than the African savanna. But over 100,000 generations of evolution that started on those plains still dictate a remarkable degree of our modern behavior.

Take foraging, for example. We evolved as hunters and gatherers. It was our primary survival instinct. And even though the first hominids are relatively recent additions to the biological family tree, strategies for foraging have been developing for millions and millions of years. It’s hardwired into the deepest and most inflexible parts of our brain. It makes sense, then, that foraging instincts that were once reserved for food gathering should be applied to a wide range of our activities.

That is, in fact, what Peter Pirolli and Stuart Card discovered two decades ago. When they looked at how we navigated online sources of information, they found that humans used the very same strategy we would have used for berry picking or gathering cassava roots. And one of the critical elements of this was something called Marginal Value.

Bounded Rationality & Foraging

It’s hard work being a forager. Most of your day – and energy – is spent looking for something to eat. The sparser the food sources in your environment, the more time you spend looking for them. It’s not surprising, therefore, that we should have some fairly well-honed calculations for assessing the quality of our food sources. This is what biologist Eric Charnov called Marginal Value in 1976. It’s an instinctual (and therefore largely subconscious) evaluation of food “patches” by most types of foragers, humans included. It’s how our brain decides whether we should stay where we are or find another patch. It would have been a very big deal 2 million – or even 100,000 – years ago.
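
Charnov’s idea can be made concrete with a toy model: food gained in a patch shows diminishing returns, and the best time to leave is the one that maximizes overall intake rate, travel time between patches included. The gain curve and all parameters here are illustrative, not Charnov’s:

```python
# A toy marginal-value model: leave a patch at the residence time
# that maximizes overall intake rate, gain(t) / (travel_time + t).
# The exponential gain curve and its parameters are illustrative.
from math import exp

def gain(t, total=100.0, k=0.5):
    """Cumulative food gained after t time units in one patch."""
    return total * (1.0 - exp(-k * t))

def optimal_leave_time(travel_time, dt=0.01, horizon=50.0):
    """Residence time that maximizes gain(t) / (travel_time + t)."""
    best_t, best_rate = 0.0, 0.0
    for i in range(1, int(horizon / dt)):
        t = i * dt
        rate = gain(t) / (travel_time + t)
        if rate > best_rate:
            best_t, best_rate = t, rate
    return best_t

# Classic prediction: the sparser the environment (longer travel
# between patches), the longer a forager should stay in each patch.
stay_rich = optimal_leave_time(travel_time=1.0)
stay_sparse = optimal_leave_time(travel_time=5.0)
```

That last prediction – stay longer when patches are farther apart – is exactly the behavior Pirolli and Card later observed in people foraging for information.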

Today, for most of us, food sources are decidedly less “patchy.” But old instincts die hard. So we did what humans do. We borrowed an old instinct and applied it to new situations. We exapted our foraging strategies and started using them for a wide range of activities where we had to have a rough and ready estimation of our return on our energy investment. Increasingly, more and more of these activities asked for an investment of cognitive processing power. And we did all this without knowing we were even doing it.

This brings us to Herbert Simon’s concept of Bounded Rationality. I believe this is tied directly to Charnov’s theorem of Marginal Value. When we calculate how much mental energy we’re going to expend on an information-gathering task, we subconsciously determine the promise of the information “patches” available to us. Then we decide to invest accordingly, based on our own “bounded” rationality.

Brands as Proxies for Foraging

It’s just this subconscious calculation that has turned the world of consumerism on its ear in the last two decades. As Itamar Simonson and Emanuel Rosen explain in their book Absolute Value, the explosion of information available has meant that we are making different marginal value calculations than we would have thirty or forty years ago. We have much richer patches available, so we’re more likely to invest the time to explore them. And, once we do, the way we evaluate our consumer choices changes completely. Our modern concept of branding was a direct result of both bounded rationality and sparse information patches. If a patch of objective and reliable information wasn’t apparent, we would rely on brands as a cognitive shortcut, saving our bounded rationality for more promising tasks.

Google, The Ultimate “Patch”

In understanding modern consumer behavior, I think we have to pay much more attention to this idea of marginal value. What is the nature of the subconscious algorithm that decides whether we’re going to forage for more information or rely on our brand beliefs? We evolved foraging strategies that play a huge part in how we behave today.

For example, the way we navigate our physical environment appears to owe much to how we used to search for food. Women determine where they’re going differently than men because women used to search for food differently. Men tend to do this by orientation, mentally maintaining a spatial grid in their minds against which they plot their own location. Women do it by remembering routes. In my own research, I found split-second differences in how men and women navigated websites that seem to go back to those same foundations.

Whether you’re a man or a woman, however, you need some type of mental inventory of the information patches available to you in order to assess their marginal value. This is the mental landscape Google plays in. For more and more decisions, our marginal value calculation starts with a quick search on Google to see if any promising patches show up in the results. Our need to keep a mental inventory of patches can be offloaded to Google.

It seems ironic that in our current environment, more and more of our behavior can be traced back millions of years to behaviors that evolved in a world where high-tech meant a sharper rock.

Do We Really Want Virtual Reality?

Facebook bought Oculus. Their goal is to control the world you experience while wearing a pair of modified ski goggles. Mark Zuckerberg is stoked. Netflix is stoked. Marketers the world over are salivating. But, how should you feel about this?

Personally, I’m scared. I may even be terrified.

First of all, I don’t want anyone, especially not Mark Zuckerberg, controlling my sensory world.

Secondly, I’m pretty sure we’re not built to be virtually real.

I understand the human desire to control our environment. It’s part of the human hubris. We think we can do a better job than nature. We believe introducing control and predictability into our world is infinitely better than depending on the caprices of nature. We’ve thought so for many thousands of years. And – Oh Mighty Humans Who Dare to be Gods – just how is that working out for us?

Now that we’ve completely screwed up our physical world, we’re building an artificial version. Actually, it’s not really “we” – it’s “they.” And “they” are for-profit organizations that see an opportunity. “They” are only doing it so “they” control our interface to consciousness.

Personally, I’m totally comfortable giving a profit driven corporation control over my senses. I mean, what could possibly happen? I’m sure anything they may introduce to my virtual world will be entirely for my benefit. I’m sure they would never take the opportunity to use this control to add to their bottom line. If you need proof, look how altruistically media – including the Internet – has evolved under the stewardship of corporations.

Now, their response would be that we can always decide to take the goggles off. We stay in control, because we have an on/off switch. What they don’t talk about is the fact that they will do everything in their power to keep us from switching their VR world off. It’s in their best interest to do so, and by best interest I mean that the more time we spend in their world, as opposed to the real one, the more profitable it is for them. They can hold our senses hostage and demand ransom in any form they choose.

How will they keep us in their world? By making it addictive. And this brings us to my second concern about Virtual Reality – we’re just not built for it.

We have billions of neurons that are dedicated to parsing and understanding a staggeringly complex and dynamic environment. Our brain is built to construct a reality from thousands and thousands of external cues. To manage this, it often takes cognitive shortcuts to bring the amount of processing required down to a manageable level. We prefer pleasant aspects of reality. We are alerted to threats. Things that could make us sick disgust us. The brain manages the balance by a judicious release of neurochemicals that make us happy, sad, disgusted or afraid. Emotions are the brain’s way of effectively guiding us through the real world.

A virtual world, by necessity, will have a tiny fraction of the inputs that we would find in the real world. Our brains will get an infinitesimal slice of the sensory bandwidth they’re used to. Further, the inputs they do get will have the subtlety of a sledgehammer. Ham-fisted programmers will try to push our emotional hot buttons, all in the search for profit. This means a few sections of our brain will be cued far more frequently and violently than they were ever intended to be. Meanwhile, huge swaths of our environmental processing circuits will remain dormant for extended periods of time. I’m not a neurologist, but I can’t believe that will be good for our cognitive health.

We were built to experience the world fully through all our senses. We have evolved to deal with a dynamic, complex and often unexpected environment. We are supposed to interact with the serendipity of nature. It is what it means to be human. I don’t know about you, but I never, ever want to auction off this incredible gift to a profit-driven corporation in return for a plastic, programmed, three-dimensional interface.

I know this plea is too late. Pandora’s Box is opened. The barn door is open. The horse is long gone. But like I said, I’m scared.

Make that terrified.

Talking Back to Technology

The tech world seems to be leaning heavily towards voice-activated devices: Siri, Amazon Echo, Facebook M, “OK Google” – as well as pretty much every vehicle in existence. It should make sense that we would want to speak to our digital assistants. After all, that’s how we communicate with each other. So why, then, do I feel like such a dork when I say “Siri, find me an Indian restaurant”?

I almost never use Siri as my interface to my iPhone. On the very rare occasions when I do, it’s when I’m driving. By myself. With no one to judge me. And even then, I feel unusually self-conscious.

I don’t think I’m alone. No one I know uses Siri, except on the same occasions and in the same way I do. This should be the most natural thing in the world. We’ve been talking to each other for several millennia. It’s so much more elegant than hammering away on a keyboard. But I keep seeing the same scenario play out over and over again. We give voice navigation a try. It sometimes works. When it does, it seems very cool. We try it again. And then, we don’t do it anymore. I base this on admittedly anecdotal evidence. I’m sure there are those who chat merrily away to the nearest device. But not me. And not anyone I know either. So, given that voice activation seems to be the way devices are going, I have to ask: why are we dragging our heels?

In trying to judge the adoption of voice-activated interfaces, we have to account for mismatches in our expected utility. Every time we ask for something – say, “Play Bruno Mars” – and get the response, “I’m sorry, I can’t find Brutal Cars,” some frustration is natural. This is certainly part of it. But that’s an adoption threshold that will eventually yield to sheer processing brute force. I suspect our reluctance to talk to an object lies in the fact that we’re talking to an object. It doesn’t feel right. It makes us look addle-minded. We make fun of people who speak when there’s no one else in the room.

Our relationship with language is an intimately nuanced one. Speech is a relatively recent acquisition, in evolutionary terms, so it takes up a fair amount of cognitive processing. Granted, no matter what the interface, we currently have to translate desire into language, and speaking is certainly more efficient than typing, so it should be a natural step forward in our relationship with machines. But we also have to remember that verbal communication is the most social of things. In our minds, we have created a well-worn slot for speaking, and it’s something to be done when sitting across from another human.

Mental associations are critical to how we make sense of things. We are natural categorizers. And if we haven’t found an appropriate category when we encounter something new, we adapt an existing one. I think voice activation may be creating cognitive dissonance in our mental categorization schema. Interaction with devices is a generally solitary endeavor. Talking is a group activity. Something here just doesn’t fit. We’re finding it hard to reconcile our use of language with our interaction with machines.

I have no idea if I’m right about this. Perhaps I’m just being a Luddite. But given that my entire family, and most of my friends, have had voice activation capable phones for several years now and none of them use that feature except on very rare occasions, I thought it was worth mentioning.

By the way, let’s just keep this between you and me. Don’t tell Siri.

Who’s Who on the Adoption Curve

For me, the Adoption Curve of the Internet of Things is fascinating to observe. Take the PoloTech shirt from Ralph Lauren, for example. It’s a “smart shirt”. The skintight shirt measures your heart rate, how deeply you’re breathing, how stable you are and a host of other key biometrics. All this is sent to your smartphone. One will set you back a cool 300 bucks. But it’s probably not the price that will separate the adopters from the laggards here. With the PoloTech shirt, as with many new pieces of wearable tech, it’s likely to be your level of fitness that determines which slope of the adoption curve you’ll end up on.

If you look at the advertising for the PoloTech, it’s clear who the target is: dudes with 0.3% body fat and ridiculously sculpted torsos who live on protein drinks and 4-hour workouts. Me? Not so much. The same is true, I suspect, for the vast majority of us. Unless we’re looking for a high-tech girdle to both hold back and monitor the rate of expansion of our guts, I don’t think this particular smart shirt is in my immediate future.

As I said, much of the current generation of wearable technology is designed to tell us just how fit we are. Logic predicts that these devices should offer the greatest benefits to those who are the least fit. They, after all, have the most to gain. But that’s not who’s at the front of the adoption curve. In my world, which is recreational cycling, the ones religiously tracking a zillion metrics are the ones already at the top of the statistical heap. The reason? Technology has created an open market for bragging rights. Humans are naturally competitive. We like to know how we stack up against others. But we don’t bother keeping track until we’re reasonably sure we’re well above average. So, if you log onto Strava, where many cyclists upload their tech-tracked rides, you can find out just who is the “King of the Mountain” on your local version of the Alpe d’Huez.

This brings about an interesting variation on Rogers’ Technology Adoption Curve. Wearable technology often means the generation of personal data. Therefore, an appetite for that data will accelerate the adoption of the technologies that generate it. We don’t mind being quantified, as long as that quantification paints us in a good light. We want to live in Lake Wobegon, where all the women are strong, all the men are good-looking and all the children are above average.

Adoption of new technologies, according to Rogers, depends on five factors: Relative Advantage, Compatibility, Complexity, Trialability and Observability. To these, Rogers added a sixth – the status-conferring potential of a new innovation. Physical fitness, by its nature, begs to be quantified. Athletic ability and rankings go hand in hand. Status is literally the name of the game. There is, therefore, a natural affinity between fitness and wearable technologies that track physical performance.

This introduces some interesting patterns of adoption for new additions to the Internet of Things. Adoption will rapidly saturate certain niches of the population, but may take much longer to cross the chasm to the general masses. And the defining characteristics of the early adopters could be completely different in each case. As more and more things become “smart,” the factors of adoption will become more fragmented and diverse. Early adopters of Coke’s Freestyle vending machine will have little in common with early adopters of the PoloTech shirt.

The absorption rate of technology into our lives has been increasing exponentially, seemingly in lockstep with Moore’s Law. Every day, we are introduced to more and more things that have technology embedded in them. The advantages that technology offers will depend on who is judging it. For some, a given technology will be a perfect fit. For others, it will be like trying to squeeze into a high-tech shirt that makes us look like an overstuffed sausage.