Beware Confirmation Bias

First published September 5, 2013 in Mediapost’s Search Insider

Most testing of marketing is disproportionately biased towards the positive. We test to find winners. But in the process, we often cut losers off without a second glance. And this can be dangerously myopic.

I’ve talked in the past about taking a Bayesian approach to strategy. The more I explore this idea, the better I like it. But it comes with some challenges – the biggest being that we’re not Bayesian by nature. In fact, there’s a cognitive bias the size of a good-sized cow barn that often leaves us blind to the true state of affairs. In psychological circles, it’s called Confirmation Bias, and in a comprehensive academic review in 1998, Raymond Nickerson described its potential negative impact: “If one were to attempt to identify a single problematic aspect of human reasoning that deserves attention above all others, the confirmation bias would have to be among the candidates for consideration.”

Here’s the thing. We love to be right. We hate to be wrong. So we will go to extraordinary lengths to make sure that we’re proven correct. And we won’t even know we’re doing it. Our brain, working surreptitiously in the background, doesn’t alert us to how biased we actually are. The many tricks that go along with Confirmation Bias usually play out subconsciously.

If we try to be good little Bayesians, we have to embrace alternative ideas of all shapes and sizes, whether or not they agree with our current view of things. In fact, we should be prepared to rip our current view apart, as it’s in the disproving and rebuilding of hypotheses that the truth is eventually found.

Here’s where things go wrong in most market testing. We usually test to prove our hunches right. We go in with a favored option and try to build a case for it. We may deny it, but we all do it. That means that the less favored alternatives usually get short shrift. And it’s often in one of these alternatives that the optimal choice may be found. The more there is at stake in the test, the more susceptible we are to Confirmation Bias.

Here is the rogue’s gallery of typical Confirmation Bias tricks:

Favored Hypothesis Information Seeking and Interpretation – As I said, we tend to seek information that supports our favored hypothesis, and avoid information that would contradict it. In the Bayesian view, this is equivalent to ignoring likelihood ratios.

Preferential Treatment of Evidence Supporting Existing Beliefs – Even if we somehow collect unbiased information, we will tend to focus on the information that supports our favored view. It gets “over-weighted” in analysis.

Looking for Positive Cases – This is the classic trap of testing only for winners and ignoring the losers. Often, the losers can tell us more about the true state of affairs.

The Primacy Effect – We tend to pay more attention to the first information we look at, which can bias analysis of any subsequent information.

Belief Persistence – Even when the evidence mounts that our original hunch is wrong, we can be incredibly inventive in twisting evidentiary frameworks to provide continuing support. Along with this is another bias called the “Sunk Cost Fallacy.” The more we have invested in our original hunch (e.g., a major multimillion-dollar campaign launched based on it), the more tenacious we are in holding on to it.
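
The “ignoring likelihood ratios” point can be made concrete with a toy Bayesian sketch. The 60/40 likelihoods and the evidence stream below are illustrative assumptions, not data. An analyst who only “tests for winners,” skipping every contradicting result, ends up more confident in the hunch, while honest updating over the same evidence says the hunch is probably wrong.

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """One step of Bayes' rule for a binary hypothesis H."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# Each test result either supports the favored hypothesis
# (likelihoods 0.6 under H vs. 0.4 under not-H) or contradicts it
# (0.4 vs. 0.6). This illustrative stream is mostly contradictory.
evidence = [True, False, True, False, False, False]

def run(prior, ignore_contradictions=False):
    p = prior
    for supports in evidence:
        if supports:
            p = update(p, 0.6, 0.4)
        elif not ignore_contradictions:
            p = update(p, 0.4, 0.6)
    return p

honest = run(0.5)                              # weighs every likelihood ratio
biased = run(0.5, ignore_contradictions=True)  # "tests for winners" only
```

In this toy run, honest updating lands at a posterior of about 0.31 while the biased version climbs to about 0.69: the same evidence, opposite conclusions.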

Going back a few columns to Philip Tetlock’s Hedgehogs and Foxes: Tetlock found that Foxes make much better natural Bayesians. They are more open to updating their beliefs. The big takeaway here? Keep an open mind.

Google Glass and the Sixth Dimension of Diffusion

First published August 29, 2013 in Mediapost’s Search Insider

Tech stock analyst and blogger Henry Blodget has declared Google Glass dead on arrival. I’m not going to spend any time talking about whether or not I agree with Mr. Blodget (for the record, I do – Google Glass isn’t an adoptable product as it sits – and I don’t – wearable technology is the next great paradigm shifter) but rather dig into the reason he feels Google Glass is stillborn.

They make you look stupid.

The input for Google Glass is your voice, which means you have to walk around saying things like, “Glass, take a video” or “Glass, what is the temperature?” The fact is, to use Google Glass, you have to accept that you’ll look like either a moron or the biggest jerk in the world. Either way, the vast majority of us aren’t ready to step into that particular spotlight.

Last week, I talked about Everett Rogers’ Diffusion of Technology and shared five variables that determine the rate of adoption. There is actually an additional factor that Rogers also mentioned: “the status-conferring aspects of innovations emerged as the sixth dimension predicting rate of adoption.”

If you look at Rogers’ Diffusion curve, you’ll find the segmentation of the adoption population is as follows: Innovators (2.5% of the population), Early Adopters (13.5%), Early Majority (34%), Late Majority (34%) and Laggards (16%). But there’s another breed that probably hides out somewhere between Innovators and Early Adopters. I call them the PAs (for Pompous Asses). They love gadgets, they love spending way too much for gadgets, and they love being seen in public sporting gadgets that scream “PA.” Previously, they were the ones seen guffawing loudly into Bluetooth headsets while sitting next to you on an airplane, carrying on their conversation long after the flight attendant told them to wrap it up. Today, they’d be the ones wearing Google Glass.

This sixth dimension is critical to consider when the balance between the other five is still a little out of whack. Essentially, the first dimension, Relative Advantage, has to overcome the friction of #2, Compatibility, and #3, Complexity (#4, Trialability, and #5, Observability, are more factors of the actual mechanics of diffusion, rather than individual decision criteria). If the advantage of an innovation does not outweigh its complexity or compatibility, it will probably die somewhere on the far left slopes of Rogers’ bell curve. The deciding factor will be the Sixth Dimension.

This is the territory that Google Glass currently finds itself in. While I have no doubt that the advantages of wearable technology (as determined by the user) will eventually far outweigh the corresponding “friction” of adoption, we’re not there yet. And so Google Glass depends on the Sixth Dimension. Does adoption make you look innovative, securely balanced on the leading edge? Or does it make you look like a dork? Does it confer social status or strip it away? After the initial buzz about Glass, social opinion seems to be falling into the second camp.

This brings us to another important factor to consider when trying to cash in on a social adoption wave: timing. Google is falling into the classic Microsoft trap of playing its hand too soon through beta release. New is cool among the early adopter set, which makes timing critical. If you can get strategic distribution and build up required critical mass fast enough, you can lessen the “pariah” factor. It’s one thing to be among a select clique of technological PAs, but you don’t want to be the only idiot in the room. Right now, with only 8,000 pairs distributed, if you’re wearing a pair, you’re probably the one that everyone else is whispering about.

Of course, you might not be able to hear them over the sound of your own voice, as you stand in front of the mirror and ask Google Glass to “take a picture.”

The Open and Shut Mind

First published June 13, 2013 in Mediapost’s Search Insider

A few years ago I was invited to a conference on advertising at a major university. The attendees were a fairly illustrious group of advertising professionals, including several senior executives from major agencies. There was also a healthy sprinkling of academics with impeccable credentials. I was in privileged company.

The organizer of the conference asked me to come up with a “dinner topic.” She explained that she wanted to generate a lively discussion at the various tables as we dug in and broke bread. It was okay if it was a “little” controversial. I must have ignored the qualifier, because my suggestion was, “Is advertising evil?” I have never been one for half measures.

As the ad illuminati settled at their tables, I set the stage by providing two opposing points of view:

First, the positive side of advertising. It can be a way to touch the very core of what makes us human, sometimes moving us to greatness. It can unify communities, create bonds and motivate us en masse. Not only can it be a social “lubricant” but, at its best, advertising can be a powerful change agent as well.

Now, the “evil” side: Does advertising take all this power and fritter it away to drive pure avarice?  Does it short-circuit our Darwinian behavioral wiring, chaining us to a hedonistic treadmill where we constantly want something we don’t have? Regular readers will detect a theme here.

It wasn’t difficult to read the mood of the room as I was wrapping up. My dad has a saying that, despite its off-color nature, sums up the atmosphere of this particular gathering better than anything else I can think of: “It went over like a fart in the house of worship.” I cautiously headed back to my table to take part in the planned “lively discussion.”

My tablemates didn’t know where to start. It seemed that it had never crossed their minds that advertising could be anything but the highest of callings. To have a debate, you need to at least have an abstract understanding of the opposing viewpoint, even if you don’t agree with it. At my table the most common question was, “What do you mean, ‘Is advertising evil?’” I had apparently introduced an entirely foreign concept.

I swallowed and forged ahead, sketching out the basis of my hypothesis. I tried to stay in the abstract, hoping to generate a philosophical debate and avoid getting caught in an emotional catfight. It seemed, though, that I had not only hit a hot button, but had taken a sledgehammer and smashed it to smithereens. Advertisers, at least based on this particular sample, seemed unwilling to discuss the philosophical pros and cons (or at least the cons) of their profession. I just wanted the whole evening to end as soon as possible.

My purpose here is not to reopen the debate. I use this story to illustrate an unfortunate human tendency. We live in a world of grays, but we like to think in black and white. I doubt that advertising is totally evil, but I also doubt that advertising is totally good. The truth lies between the two extremes; advertising is most likely a rather dirty gray.  If we’re willing to consider alternatives to our beliefs, perhaps it will move us a little closer to reality. I think advertising would do nothing but benefit from a deeper evaluation of its moral standing.

But we often forego a search for the truth, content to stick with our beliefs, which often bear little resemblance to reality. If those beliefs are attacked, we defend them vociferously, turning a deaf ear to counter-arguments. We don’t listen, because open minds require the burning of a lot of energy.

In a simpler evolutionary environment, beliefs were a heuristic shortcut for survival.  But today, they often polarize us at either end of a moral spectrum, with no middle ground left for discussion. Case in point, the current American political landscape.

I have spent most of my adult life trying to fight this natural tendency. I have tried to keep an open mind and not let my beliefs blind me to an opposing viewpoint — at least, not when it comes to those things I believe to be truly important. Morality, religion and politics are just three arenas where open minds are much harder to find than staunchly held beliefs.

And, apparently, you can add advertising to that list as well.

Climbing the Slippery Slope of Advertising

First published June 6, 2013 in Mediapost’s Search Insider

Google’s Matt Cutts is warning advertisers not to try passing off “native ads” – or advertorials – as legit content. Apparently, the line between advertising and content continues to get blurrier. The reason is that advertisers are still trying to find an ad that works. And they have been for over 300 years.

The first newspaper ads, which seem to mark the dawn of advertising, appeared very early in the 18th century. Because they looked just like the articles surrounding them, they had to be labeled as an “Advertisement.” Sound familiar?

Now, wouldn’t you think that if you’ve been doing something for over 300 years, you would have figured it out? So why does most advertising still suck? Why are we still trying to find some magic formula that works?

We could attribute it to changing technologies, saying that advertising continues to evolve because the marketplace it operates in is in constant flux, along with the delivery channels it uses and the creative possibilities it offers. That would be what an “advertiser” would tell you.

The answer, I think, is a bit simpler than that. It comes down to a three-century disconnect between the market and the marketers: marketers want advertising, the market doesn’t. At least, we don’t want advertising in the form it usually takes. Advertisers have been tinkering for all that time to find something the public doesn’t reject outright.

Perhaps, as we often do in the Thursday Search Insider, we can find some clues in the etymology of the word. “Advertisement” comes from the French verb “avertir” – which means to give notice or, more ominously, warning. Ironically, the very word we use to label our industry came from roots that carry a negative connotation. To move it to a more positive light, we could say that the purpose of advertising is to make us aware of something we weren’t previously aware of. That seems rather benign. Helpful, even. And it would be accurate to say that the earliest ads aspired to this purpose.

But somewhere along the way, ads stepped over the line and became something we learned to hate. How did this happen?

Like many of the social issues that plague us today, the roots can be traced back to the Industrial Revolution. Technology enabled scale. Mass production became reality. And, to keep pace, advertising showed us its less benign side.

Prior to mass production, the output of a product was limited to the resources of a producer. Increasing quantity usually had an inverse impact on quality, which relied on the skills of a single craftsperson. One person could only produce so much. The first brands were introduced by these craftspeople to identify their products, differentiating them from inferior competitors.

But with mechanization and the introduction of the assembly line, suddenly scale became virtually unlimited. Uniform products could be produced by the trainload. Profits became tied to scale, and greed became tied to profit. From that point forward, the three moved in lockstep.

It was at that point that advertising moved from being a helpful notice to an annoying plea to buy crap we don’t need. And that’s when advertisers had to learn to start pushing the public’s buttons, whether we wanted them pushed or not. Everything started to go off the rails early in the 20th century, and the wreckage really piled up with the introduction of mass communication. Suddenly, unlimited greed had an unlimited capacity to annoy us. Advertising couldn’t stop at informing. It had to start selling.

The twist in all this came right at the end of the “Century of Annoyance.”  In 1998, Goto.com introduced paid search (no, it wasn’t Google). It was an ad with one purpose, to make someone aware of something they weren’t previously aware of. And it was delivered in the perfect context. The market, in the form of a searcher, was looking to become more aware about something by seeking out new information. It gets even better. The searcher could decide whether or not to take the advertiser up on their offer by choosing to click or not.

Of course, with time, we advertisers will figure out a way to screw that up too. The good news, if you’re Matt Cutts, is that it means you’ll have a job for the foreseeable future.

The Stress of Hyper-Success

Last week, I talked about the inflation of expectations. In that case, it was the vendors we deal with that were the victims of that inflation. But we don’t only have inflated expectations about others. Increasingly, we measure ourselves against our own expectations. And that is leading us down a dangerous path.

The problem is that success is a relative thing. We can only judge it by looking at others. This creates a problem, because increasingly, we’re looking at extreme outliers as our baseline for expectations.

Take social media, for instance. Women feel more stressed than satisfied after spending time on Pinterest, according to a recent survey. “Pinterest stress” is the official label for feelings of inadequacy in trying to measure up against the unrealistic examples of domestic perfection shared on the female-dominated social network.

But it’s not just women and Pinterest. One-third of Facebook users feel worse after visiting the site. Why? Because we feel envious after going through the pictures of someone else’s dream vacation. Social media invites comparison. We try to measure ourselves against the achievements of others in our social circle. There are two problems with that: we are naturally jealous of our neighbors, and our neighbors tend to lie (or at least embellish) when they post about their own accomplishments.

Added to this is the unnatural effect of the Power Law curve. Not all online posts about accomplishments are equally popular. We tend to focus on those that are outstanding — those that are set apart from the average. These online examples, representing the extreme upper limits of success and achievement, take their place at the head of the Power Law curve, drawing a dramatically bigger audience. We ignore the commonplace, which lives somewhere in the Long Tail. Our own quest for the remarkable (humans never gossip about average, everyday topics) leads us to focus on the unrealistic.
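
A quick sketch shows how lopsided that attention curve gets. Assume, purely for illustration, a Zipf-like distribution in which the post ranked r draws attention proportional to 1/r; the pool size and the exponent are assumptions, not measurements of any real network.

```python
# Hypothetical Zipf-style attention curve: the post at rank r draws
# weight proportional to 1/r. N and the 1/r shape are assumptions.
N = 10_000
weights = [1 / r for r in range(1, N + 1)]
total = sum(weights)

# Share of all attention captured by the top 1% of posts...
top_share = sum(weights[:N // 100]) / total

# ...versus the share of a single "commonplace" post in the Long Tail.
median_share = weights[N // 2 - 1] / total
```

Under this toy distribution, the top 1% of posts capture roughly half of all attention, while the median post gets only a small fraction of even an equal 1/N share. The outliers at the head of the curve become the baseline we compare ourselves against.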

So the more access we have to the achievements of others, the more skewed our idea of success becomes. What we don’t realize, however, is that we’re measuring ourselves against the very highest percentile of the human population.

Take salaries, for example. What yearly amount would make you happy? Economists Angus Deaton and Daniel Kahneman asked that very question — and it turns out that $75,000 a year is the magic number. Below that number, the day-to-day stress of just getting by leads to chronic unhappiness. Above it, people seem to feel more fulfilled and are generally in a more positive frame of mind. But past that general threshold, more money doesn’t reliably equate to increased happiness. Millionaires and billionaires are not that much happier than the rest of us.

Yet if I asked you how much you wanted to make, I suspect the number would be higher than $75,000. And I doubt that it would have much to do with happiness. It would be because we know of people making more than us — much more. We have no idea if those high wage earners are happy or not, but we do know they pull down a much bigger paycheck than we do. So we believe we should aspire to that standard, whether it’s realistic or not, in the mistaken belief that it will make us happier. It won’t, by the way. We humans are notoriously bad at forecasting our own happiness.

This is one of those strange Darwinian detours that evolution has saddled us with. In our original adaptive environment, doing better than our neighbors was a pretty sure bet for superior gene propagation. We’re hardwired to not just be envious but to strive to compete. That made sense when our target was the person we were competing against for food, shelter or sexual access.  It doesn’t make sense when our competition is a far removed, sometimes fictitious ideal propagated by the media and the viral force of social sharing.

Somewhere, a resetting of expectations is required before we self-destruct from hyper-competitiveness in chasing an unreachable goal. To end on a gratuitous pop culture quote, courtesy of Sheryl Crow: “It’s not having what you want, it’s wanting what you’ve got.”

The Straw that Broke the Market’s Back

First published May 9, 2013 in Mediapost’s Search Insider

Customers are fickle — and I suspect they’re getting more fickle. Perhaps they’re even feeling a little entitled. A recent survey shows that customers tend to bail on a company not because of a big-time screw-up, but because of the accumulation of a lot of little annoyances. Soon, their frustration reaches a tipping point and they look elsewhere.

It would be easy to point the finger at the companies and demand that they get their collective acts together. But I suspect there’s more at play here. It would be my guess that customers are getting harder to please. And I would further guess that the Web is largely to blame. I think it comes down to a constant rise in our collective expectations, while the reality of our experiences falls behind.

The balance between our expectations and the actual experience determines our loyalty to any course of action. If we have low expectations and a poor experience, we aren’t really surprised, which dampens our subsequent disappointment and leaves us more willing to forgive and forget.  If we have low expectations but a good experience, we’re pleasantly surprised, making us more apt to return. If we have high expectations and a good experience, we get a double hit of happiness. First, we enjoy the anticipation, then we appreciate that the experience actually lives up to our expectations. For a vendor, the scariest scenario is the last of the four: high expectations but a poor experience. In this case, we walk away disappointed and frustrated.
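
The four combinations above can be summarized in a minimal sketch. The outcome labels simply restate the scenarios in the paragraph; they are not an empirical model of loyalty.

```python
# The four expectation/experience combinations described above,
# encoded directly as a lookup over the 2x2.
def loyalty_outcome(expectation, experience):
    if expectation == "low" and experience == "poor":
        return "unsurprised, willing to forgive and forget"
    if expectation == "low" and experience == "good":
        return "pleasantly surprised, more apt to return"
    if expectation == "high" and experience == "good":
        return "double hit of happiness"
    # high expectations, poor experience
    return "disappointed and frustrated, the scariest scenario for a vendor"
```

Laid out this way, the asymmetry is easy to see: a vendor's risk is concentrated entirely in the high-expectation quadrant, which is exactly where inflated expectations push every customer.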

Now, balancing expectations and experience wouldn’t be that difficult for any moderately competent company if those expectations were realistic. But I suspect that more and more of us are entering into our respective experiences with unrealistic expectations. We’re setting our vendors up to fail.

Expectations are set partly based on our past experiences, but they’re also set by the experiences of others. We create our expectation set points based, in part, on what we hear from others.

The Web has created an open, accessible market of experiences and hearsay. We hear about the bad, in a feedback loop that is increasingly calling out poor customer service. But we also hear about the good. Correction – we hear about the exceptional. The “good” is not remarkable. It generally falls within our expectations and so goes without comment. But the very good or the very bad is exceptional, and we are more apt to comment on it online. Not only do we comment, we also embellish, accentuating the pluses and minuses to make a better story. Therefore, what we hear from others sets either a very low or a very high bar. We steer clear of the low bars, but the high bars stick with us, contributing to the setting of future expectations.

The other thing the Web has done is create expectations that overlap domains.  Previously, when our expectations were set based on our own experiences, they tended to stay domain-specific. We had an expectation of what it would be like to buy a car, stay at a hotel, eat at a restaurant or purchase a new pair of shoes. With the Web, cross-pollination between domains is increasingly common. A head marketer for a well-known industrial manufacturer once said to me, “When it comes to online experience, my competitors are not the traditional ones. I’m competing against Amazon and eBay. That type of experience is what people expect.”

This “nudging up” of expectations is done without much rational consideration. We don’t care much for the reality of operational logistics in any particular domain. We just want our expectations to be met, no matter where those expectations might come from. And when they’re not, we pull the plug on that particular vendor, assuming another vendor can do better in meeting our inflated expectations. The Web has also engendered a virulent “grass is always greener” view of the world. We know a competitor is just a click away (whether or not that vendor is any better than the incumbent).

I’ll be the first to call out a bad customer experience, but when it comes to the increasing fickleness of customers, we should remember that there are two sides to this particular story.

Anchoring and Search

First published in Mediapost’s Search Insider – April 25, 2013

A few columns back, I talked about psychological priming and how it could play out in a search environment. In today’s column, I’d like to talk about a related concept: value anchoring.

For almost every product category, with the exception of those things we buy very frequently (in my case, chocolate bars, beer and books), we don’t really know what the current going price is. Either we don’t buy them frequently enough, or the price is subject to market volatility. We may have a rough idea of prices, but we need to adjust this price estimate to current market conditions.

We need a pricing framework because, as consumers, we need to establish in our own minds what a “fair” price would be. This concept of fairness taps into some pretty deep emotional triggers — ones that vendors should be aware of. I’ll explain in a minute how these concepts of fairness can play out in a typical purchase journey.

Remember, our determination of what price is fair is totally arbitrary. It’s not as if we know objectively what the “fair” price for a carton of eggs, a big-screen TV, or a hotel room in San Francisco is. We make these pricing decisions based on comparisons to available information. And it just so happens that the first piece of information that is available to us tends to play a significantly bigger role than any of the subsequent information that we may come upon. That first price we’re exposed to anchors our heuristic comparisons and tends to linger in our subconscious, triggering emotions that drive our perceptions of fairness.

If we have to adjust our pricing expectations upwards because the first price baseline was too low, we feel frustrated and taken advantage of. Our brain’s warning signals go off and we suddenly feel anxious and go on the defensive. Our mood takes a turn for the worse.

If, on the other hand, we are able to adjust our pricing expectations downward because we’re finding prices substantially lower than the first price encountered, we’re almost euphoric. The reward center of our brain is telling us we’re getting a great deal and the resulting dopamine hit gives us a buying high.

Once again, these feelings are based on nothing more than us grasping at the first number we see, and then judging all subsequent pricing information against it. But the fact that this is nothing more than a gut call is exactly the point; its lack of rationality does nothing to diminish its emotional punch.
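
As a minimal sketch of that mechanism: the first price seen becomes the reference point, and every later price is judged against it. The 10% tolerance band and the reaction labels below are purely illustrative assumptions, not findings.

```python
# Toy model of value anchoring: judge each later price against the
# first price encountered. The 10% band and labels are illustrative.
def reaction_to_price(anchor_price, later_price):
    delta = (later_price - anchor_price) / anchor_price
    if delta > 0.10:
        return "frustrated, adjusting the fair-price estimate upward"
    if delta < -0.10:
        return "euphoric, a perceived great deal"
    return "neutral, within the fair-price ballpark"
```

Note that the function never consults an objective “fair” price; everything is relative to the arbitrary anchor, which is exactly the point of the paragraph above.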

Now, let’s look at how this might play out in search. Remember, there’s a pretty good likelihood that many consumer journeys may start with a search engine. It’s also likely that many search advertisers might advertise the lowest price possible in order to capture the click. Given this, it wouldn’t be surprising to see that the initial benchmark price could be a very low one. There’s nothing wrong with this, as long as the price the buyer eventually pays lands in the same ballpark.

But, as is often the case, if prices start rising quickly because of the inevitable “fine print” exclusions, conditions and lack of availability, the advertiser is going to trigger all the wrong emotional reactions in the prospect. Rather than “hooking” prospects, the unobtainable low price dangled as bait instead unleashes a wave of negative emotions. Even if the advertiser still ends up capturing the sale (because the competition can’t beat even the inflated price), it will not be engendering any brand “love.”

This is yet another example of focusing on the end result without thinking about the journey. If we become myopically focused on conversion rates, for example, to the exclusion of all else, we might be ignorant of the long-term brand damage we might be causing by capturing those clicks through a digital version of the classic bait-and-switch con.


Psychological Priming and the Path to Purchase

First published March 27, 2013 in Mediapost’s Search Insider

In marketing, I suspect we pay too much attention to the destination, and not enough to the journey. We don’t take into account the cumulative effect of the dozens of subconscious cues we encounter on the path to our ultimate purchase. We certainly don’t understand the subtle changes of direction that can result from these cues.

Search is a perfect example of this.

As search marketers, we believe that our goal is to drive a prospect to a landing page. Some of us worry about the conversion rates once a prospect gets to the landing page. But almost none of us think about the frame of mind of prospects once they reach the landing page.

“Frame” is the appropriate metaphor here, because the entire interaction will play out inside this frame. It will impact all the subsequent “downstream” behaviors. The power of priming should not be taken lightly.

Here’s just one example of how priming can wield significant unconscious power over our thoughts and actions. Participants primed by exposure to a stereotypical representation of a “professor” did better on a knowledge test than those primed with a representation of a “supermodel.”

A simple exposure to a word can do the trick. It can frame an entire consumer decision path. So, if many of those paths start with a search engine, consider the influence that a simple search listing may have.

We could be primed by the position of a listing (higher listings = higher quality alternatives).  We could be primed (either negatively or positively) by an organization that dominates the listing real estate. We could be primed by words in the listing. We could be primed by an image. A lot can happen on that seemingly innocuous results page.

Of course, the results page is just one potential “priming” platform. Priming could happen on the landing page, a third-party site or the website itself. Every single touch point, whether we’re consciously interacting with it or not, has the potential to frame, or even sidetrack, our decision process.

If the path to purchase is littered with all these potential landmines (or, to take a more positive approach, “opportunities to persuade”), how do we use this knowledge to become better marketers? This does not fall into the typical purview of the average search marketer.

Personally, I’m a big fan of the qualitative approach (I know — big surprise) in helping to lay down the most persuasive path possible. Actually talking to customers, observing them as they navigate typical online paths in a usability testing session, and creating some robust scenarios to use in your own walk-throughs will yield far better results than quantitative number-crunching. Excel is not particularly good at being empathetic.

Jakob Nielsen has said that online, branding is all about experience, not exposure. As search marketers, it’s our responsibility to ensure that we’re creating the most positive experience possible, as our prospects make their way to the final purchase.

The devil, as always, is in the details — whether we’re paying conscious attention to them or not.

Viewing the World through Google Colored Glass

First published March 7, 2013 in Mediapost’s Search Insider

Let’s play “What If” for a moment. For the last few columns, I’ve been pondering how we might more efficiently connect with digital information. Essentially, I see the stripping away of the awkward and inefficient interfaces that have been interposed between that information and us. Let’s imagine, 15 years from now, that Google Glass and other wearable technology provides a much more efficient connection, streaming real-time information to us that augments our physical world. In the blink of an eye, we can retrieve any required piece of information, expanding the capabilities of our own limited memories beyond belief. We have perfect recall, perfect information — we become omniscient.

To facilitate this, we need to move our cognitive abilities to increasingly subterranean levels of processing – taking advantage of the “fast and dirty” capabilities of our subliminal mind. As we do this, we actually rewire our brains to depend on these technological extensions. Strategies that play out with conscious guidance become stored procedures that follow scripts written by constant repetition. Eventually, overtraining ingrains these procedures as habits, and we stop thinking and just do. Once this happens, we surrender much of our ability to consciously change our behaviors.

Along the way, we build a “meta” profile of ourselves, which acts as both a filter and a key to the accumulated potential of the “cloud.” It retrieves relevant information based on our current context and a deep understanding of our needs, it unlocks required functionality, and it archives our extended network of connections. It’s the “Big Data” representation of us, condensed into a virtual representation that can be parsed and manipulated by the technology we use to connect with the virtual world.

In my last column, Rob Schmultz and Randy Kirk wondered what a world full of technologically enhanced Homo sapiens would look like. Would we all become the annoying guy in the airport who can’t stop talking on his Bluetooth headset? Would we become so enmeshed in our digital connections that we ignore the physical ones that lie in front of our own noses? Would Google Glass truly augment our understanding of the world, or would it make us blind to its charms? And what about the privacy implications of a world where our every move could instantly be captured and shared online — a world full of digital voyeurs?

I have no doubt that technology can take us to this not-too-distant future as I’ve envisioned it. Much of what’s required already exists. Implantable hardware, heads-up displays, sub-vocalization, bio-feedback — it’s all very doable. What I wonder about is not the technology, but rather us. We move at a much slower pace. And we may not recognize any damage that’s done until it’s too late.

The Darwinian Brain

At an individual level, our brains have a remarkable ability to absorb technology. This is especially true if we’re exposed to that technology from birth. The brain represents a microcosm of evolutionary adaptation, through a process called synaptic pruning. Essentially, the brain builds and strengthens neural pathways that are used often, and “prunes” away those that aren’t. In this way, the brain literally wires itself to be in sync with our environment.

The majority of this neural wiring happens when we’re still children. So, if our childhood environment happens to include technologies such as heads-up displays, implantable chips and other direct interfaces to digital information, our brains will quickly adapt to maximize the use of those technologies. Adults will also adapt to these new technologies, but because our brains are less “plastic” than those of children, the adaptation won’t be as quick or complete.

The Absorption of Technology by Society

I don’t worry about our brain’s ability to adapt. I worry about the eventual impact on our society. With changes this portentous, there is generally a social cost. To consider what might come, it may be beneficial to look at what has been. Take television, for example.

If a technology is ubiquitous and effective enough to spread globally, like TV did, there is the issue of absorption. Not all sectors of society will have access to the technology at the same time. As the technology is absorbed at different rates, it can create imbalances and disruption. Think about the societal divide caused by the absorption of TV, which resulted in a completely different information distribution paradigm. One can’t help thinking that TV played a significant role in much of the political change we saw sweep over the world in the past three decades.

And even if our brains quickly adapt to technology, that doesn’t mean our social mores and values will move as quickly. As our brains rewire to adapt to new technologies, our cultural frameworks also need to shift. With different generations and segments of society at different places on the absorption curve, this can create further tensions. If you take the timeline of societal changes documented by Robert Putnam in “Bowling Alone” and overlay the timing of the adoption of TV, the correlation is striking — and not a little frightening.

Even if our brains have the ability to adapt to technology, it isn’t always a positive change. For example, there is compelling evidence that early exposure to TV has contributed to the recent explosion of diagnosed ADHD and possibly even autism.

Knowing Isn’t Always the Same as Understanding

Finally, we have the greatest fear of Nicholas Carr: maybe this immediate connection to information will have the “net” effect of making us stupid — or, at least, shallower thinkers. If we’re spoon-fed information on demand, do we grow intellectually lazy? Do we start to lose the ability to reason and think critically? Will we swap quality for quantity?

Personally, I’m not sure Carr’s fears are founded on this front. It may be that our brains adapt and become even more profound and capable. Perhaps when we offload the simple journeyman tasks of retrieving information and compiling it for consideration to technology, our brains will be freed up to handle deeper and more abstract tasks. The simple fact is, we won’t know until it happens. It could be another “Great Leap Forward,” or it may mark the beginning of the decline of our species.

The point is, we’ve already started down the path, and it’s highly unlikely we’ll retreat at this point. I suppose we have no option but to wait and see.

Why I – And Mark Zuckerberg – are Bullish on Google Glass

First published February 28, 2013 in Mediapost’s Search Insider

Call it a Tipping Point. Call it an Inflection Point. Call it Epochal (whatever that means). The gist is, things are going to change — and they’re going to change in a big, big way!

First, with due deference to the brilliant Kevin Kelly, let’s look at how technology moves. In his book “What Technology Wants,” Kelly shows that technology is not dependent on a single invention or inventor. Rather, it’s the sum of multiple, incremental discoveries that move technology to a point where it can breach any resistance in its way and move into a new era of possibility. So, even if Edison had never lived, we’d still have electric lights in our homes. If he weren’t there, somebody else would have discovered it (or, more correctly, perfected it). The momentum of technology would not have been denied.

Several recent developments indicate that we’re on the cusp of another technological wave of advancement. These developments have little to do with online technologies or capabilities. They’re centered on how humans and hardware connect — and it’s impossible to overstate their importance.

The Bottleneck of Our Brains

Over the past two decades, there has been a massive build-up of online capabilities. In this case, what technology has wanted is the digitization of all information. That was Step One. Step Two is to render all that information functional. Step Three will be to make all the functionality personalized. And we’re progressing quite nicely down that path, thank you very much. The rapidly expanding capabilities of online far surpass what we are able to assimilate and use at any one time. All this functionality is still fragmented and is in the process of being developed (one of the reasons I think Facebook is in danger of becoming irrelevant) but it’s there. It’s just a pain in the butt for us to utilize it.

The problem is one of cognition. The brain has two ways to process information, one fast and one slow. The slow way (using the conscious parts of our brain) is tremendously flexible but inefficient. This is the system we’ve largely used to connect online. Everything has to be processed in the form of text, both in terms of output and input, generally through a keyboard and a screen display. It’s the easiest way for us to connect with information, but it’s far from the most efficient way.

The second way is much, much faster. It’s the subconscious processing of our environment that we do every day. It’s what causes us to duck when a ball is thrown at our head, jump out of the way of an oncoming bus, fiercely protect our children and judge the trustworthiness of a complete stranger. If our brains were icebergs, this would be the 90% hidden beneath the water. But we’ve been unable to access most of this inherent efficiency and apply it to our online interactions — until now.

The Importance of Siri and Glass

Say what you want about Mark Zuckerberg, he’s damned smart. That’s why he knew immediately that Google Glass is important.

I don’t know if Google Glass will be a home run for Google. I also don’t know if Siri will ever pay back Apple’s investment in it. But I do know that 30 years from now, they’ll both be considered important milestones. And they’ll be important because they were representative of a sea change in how we connect with information. Both have the potential to unlock the efficiency of the subconscious brain. Siri does it by utilizing our inherent communication abilities and breaking the inefficient link that requires us not only to process our thoughts as language, but also to laboriously translate them into keystrokes. In neural terms, this is one of the most inefficient paths imaginable.

But if Siri teases us with a potentially more efficient path, Google Glass introduces a new, mind-blowing scenario of what might be possible. To parse environment cues and stream information directly into our visual cortex in real time, creating a direct link with all that pent-up functionality that lives “in the cloud,” wipes away most of the inefficiency of our current connection paradigm.

Don’t think of the current implementation that Google is publicizing. Think beyond that to a much more elegant link between the vast capabilities of a digitized world and our own inner consciousness. Whatever Glass and Siri (and their competitors) eventually evolve into in the next decade or so, they will be far beyond what we’re considering today.

With the humanization of these interfaces, a potentially dark side effect will take place. These interfaces will become hardwired into our behavior strategies. Now, because our online interactions are largely processed at a conscious level, the brain tends to maintain maximum flexibility regarding the routines it uses. But as we access subconscious levels of processing with new interface opportunities, the brain will embed these at a similarly subconscious level. They will become habitual, playing out without conscious intervention. It’s the only way the brain can maximize its efficiency. When this happens, we will become dependent on these technological interfaces. It’s the price we’ll pay for the increased efficiency.