Revisiting Entertainment vs Usefulness

Some time ago, I did an extensive series of posts on the psychology of entertainment. My original goal, however, was to compare entertainment and usefulness: how effective is each at engendering long-term loyalty? How do our brains process both? And, to return to the question I posed in that first post almost four years ago, how does this affect digital trends and their staying power?

My goal is to find out why some types of entertainment have more staying power than others. And then, once we discover the psychological underpinnings of entertainment, let’s look at how that applies to some of the digital trends I disparaged: things like social networks, micro-blogging, mobile apps and online video. What role does entertainment play in online loyalty? How does it overlap with usefulness? How can digital entertainment fads survive the novelty curse and cross the chasm to become mainstream trends with legs?

In the previous set of posts, I explored the psychology of entertainment extensively, ending up with a discussion of the evolutionary purpose of entertainment. My conclusion was that entertainment lived more in the phenotype than the genotype. To save you going back to that post, I’ll quickly summarize here: the genotype refers to traits actually encoded in our genes through evolution – the hardwired blueprint of our DNA. The phenotype is the “shadow” of these genes – behaviors caused by our genetic blueprints. Genotypes are directly honed by evolution for adaptability and gene survival. Phenotypes are by-products of this process and may confer no evolutionary advantage. Our taste for high-fat foods lives in the genotype – the explosion of obesity in our society lives in the phenotype.

This brings us to the difference between entertainment and usefulness – usefulness relies on mechanisms that predominantly live in the genotype. In the most general terms, it’s the stuff we have to do to get through the day. And to understand how we approach the things on our to-do list, it’s important to understand the difference between autotelic and exotelic activities.

Autotelic activities are the things we do for the sheer pleasure of it. The activity is its own reward. The word autotelic comes from the Greek for “self” + “goal” – “having a purpose in and not apart from itself.” We look forward to doing autotelic things. All things we find entertaining are autotelic by nature.

Exotelic activities are simply a necessary means to an end. They have no value in and of themselves. They’re simply tasks – stuff on our to-do list.

The brain treats these two types of activities very differently. Autotelic activities fire our reward center – the nucleus accumbens. They come with a corresponding hit of dopamine, building repetitive patterns. We look forward to them because of the anticipation of the reward. They typically also engage the medial prefrontal cortex, which orchestrates complex cognitive behaviors and helps define our sense of self. When we engage in an autotelic activity, there’s a lot happening in our skulls.

Exotelic activities tend to flip the brain into its energy-saving mode. Because there is little or no neurological reward in these types of activities (other than a sense of relief once they’re done), they tend to rely on the brain’s ability to store and retrieve procedures. With enough repetition, they often become habits, skipping the brain’s rational loop altogether.

In the next post, we’ll look at how the brain tends to process exotelic activities, as it provides some clues about the loyalty building abilities of useful sites or tools. We’ll also look at what happens when something is both exotelic and autotelic.

Our Brain on Books

Here’s another neuroimaging study out of Emory University showing the power of a story.

Lead researcher Gregory Berns and his team wanted to “understand how stories get into your brain, and what they do to it.” Their findings seem to indicate that stories – in this case a historical novel about Pompeii – caused a number of changes in the participants’ brains, at least in the short term. Over time, some of these changes decayed, but more research is required to determine how long-lasting they are.

One would expect reading to alter related parts of the brain, and this was true in the Emory study. The left temporal cortex, a section of the brain that handles language reception and interpretation, showed signs of heightened connectivity for a period of time after reading the novel. It’s almost like the residual effect of exercise on a muscle, which responds favorably to use.

What was interesting, however, was that the team also saw increased connectivity in the areas of the brain that control representations of sensation for the body. This relates to Antonio Damasio’s “embodied semantics” theory, in which reading metaphors – especially those relating to tactile images – activates the same parts of the brain that control the corresponding physical activity. The Emory study (and Damasio’s work) seems to show that if you read a novel that depicts physical activity, such as running through the streets of Pompeii as Vesuvius erupts, your brain fires many of the same neurons it would if you were actually doing it!

There are a number of interesting aspects to consider here, but what struck me is the multi-pronged impact a story has on us. Let’s run through them:

Narratives have been shown to be tremendously influential frameworks for learning and updating our sense of the world, including our own belief networks. Books have been a tremendously effective agent of meme transference and propagation. The structure of a story allows us to grasp concepts quickly, but it also reinforces those concepts because it engages our brain in a way that a simple recital of facts cannot. We relate to protagonists and see the world through their eyes. All our socially tuned, empathetic abilities kick into action when we read a story, helping to embed new information more fully. Reading a story helps shape our worldview.

Reading exercises the language centers of our brain, heightening neural connectivity and improving their effectiveness. Neurologists call this “shadow activity” – a concept similar to muscle memory.

Reading about physical activity fires the same neurons we would use to do the actual activity. So, if you read an action thriller, even though you’re lying flat on a sofa, your brain thinks you’re the one racing a motorcycle through the streets of Istanbul and battling your arch nemesis on the rooftops of Rome. While it might not do much to improve muscle tone, it does begin to create neural pathways. It’s the same concept as the visualization used by Olympic athletes.

For Future Consideration

As we learn more about the underlying neural activity of story reading, I wonder how we can use this to our benefit. The biggest question I have is this: if a story in written form can impact us at all the aforementioned levels, what would more sense-engaged media like television or video games do? If reading about a physical activity tricks the brain into firing the corresponding sensory-controlling neurons, what would happen if we were simulating that activity on a motion-controlled gaming system like Microsoft’s Xbox? My guess is that the sensorimotor connections would be much more active (because we’re physically active). Unfortunately, research in the area of embodied semantics is still at an early stage, so many of these questions have yet to be answered.

However, if our stories are conveyed through a more engaging sensory experience, with full visuals and sound, do we lose some opportunity for abstract analysis? The parts of our brain we use to read depend on relatively slow processing loops. I believe much of the power of reading lies in the demands it places on our imagination to fill in the sensory blanks. When we read about a scene in Pompeii, we have to create the visuals, the soundtrack and the tactile responses. Does all this required rendering more fully engage our sense-making capabilities, giving us more time to interpret and absorb?

The Death and Rebirth of Google+

Google Executive Chairman Eric Schmidt has come out with his predictions for 2014 on Bloomberg TV. Don’t expect any earth-shaking revelations here. Schmidt plays it pretty safe with his prognostications:

Mobile has won – Schmidt says everyone will have a smartphone. “The trend has been mobile was winning… it’s now won.” Less a prediction than a statement of the obvious.

Big Data and Machine Intelligence will be the Biggest Disruptor – Again, hardly a leap of intuitive insight. Schmidt foresees the evolution of an entirely new data marketplace and corresponding value chain. Agreed.

Gene Sequencing Has Promise in Cancer Treatments – While a little fuzzier than his other predictions, Schmidt again pounces on the obvious. If you’re looking for someone willing to bet the house on gene sequencing, try LA billionaire Patrick Soon-Shiong.

You can see Schmidt’s full clip on Bloomberg TV.

The one thing that was interesting to me was an admission of failure with Google+:

The biggest mistake that I made was not anticipating the rise of the social networking phenomenon.  Not a mistake we’re going to make again. I guess in our defense we were busy working on many other things, but we should have been in that area and I take responsibility for that.

I always called Google+ a non-starter, despite a deceptively encouraging start. But I think it’s important to point out that we tend to judge Google+ against Facebook or other social destinations. As Google+ Vice President of Product Bradley Horowitz made clear in an interview last year with Dailytech.com, Google never saw this as a “Facebook killer.”

“I think in the early going there was a lot of looking for an alternative [to Facebook, Twitter, etc.],” said Horowitz. “But I think increasingly the people who are using Google+ are the people using Google. They’re not looking for an alternative to anything, they’re looking for a better experience on Google.”

And this highlights a fundamental change in how we think about online social activity – one that I think is more indicative of what the future holds. Social is not a destination; social is a paradigm. It’s a layer of connectedness and shared values that acts as a filter, a lens – a way we view reality. That’s what social is in our physical world: it shapes how we view that world. And Horowitz is telling us that that’s how Google looks at social too. By layering social signals into our online experience, Google+ gives us an enhanced version of that experience. It’s not about a single destination, no matter how big that destination might be. It’s about adding richness to everything we do online.

Because humans are social animals, our connections and our perception of ourselves as part of an extended network literally shape every decision we make and everything we do, whether we’re conscious of the fact or not. We are, by design, part of a greater whole. But because social originated online as a set of distinct destinations, it was unable to touch our entire online experience. Facebook or Pinterest acts as a social gathering place – a type of virtual town square – but social is more than that. Google+ is closer to this more holistic definition of “social.”

I’m not sure Google+ will succeed in becoming our virtual social lens, but I do agree that as our virtual sense of social evolves, it will become less about distinct destinations and more about a dynamic paradigm that stays with us constantly, helping to shape, sharpen, enhance and define what we do online. As such, it becomes part of a new way of thinking about being online – not going to a destination, but being plugged into a network.

360 Degrees of Separation

First published December 5, 2013 in Mediapost’s Search Insider

In the past two decades or so, a lot of marketers have talked about gaining a 360-degree view of their customers. I wasn’t exactly sure what it meant, so I looked it up. Apparently, for most marketers, it means having a comprehensive record of every touch point a customer has had with a company. Originally, it was the promise of CRM vendors: anyone in an organization, at any time, could pull up a complete customer history.

So far, so good.

But like many phrases, it’s been appropriated by marketers and its meaning has become blurred. Today, it’s bandied about in marketing meetings, where everyone nods knowingly, confident that they are firmly ensconced in the customer’s cranium and have everything completely under control. “We have a 360-degree view of our customers,” the marketing manager beams, and woe to anyone who dares question it.

But there are no standard criteria you have to meet before you use the term. There is no rubber-meets-the-road threshold you have to cross. No one knows exactly what the hell it means. It sure sounds good, though!

If a company is truly striving to build as complete a picture of its customers as possible, it probably defines 360 degrees as the total scope of a customer’s interaction with the company. This would follow the original CRM definition. In marketing terms, it would mean every marketing touch point, and it would hopefully extend through the customer’s entire relationship with that company. This would be 360 degrees as defined by Big Data.

But is it actually 360 degrees? If we envision this as a Venn diagram, we have one 360-degree sphere representing the mental model of customers, including all the things they care about. We have another 360-degree sphere representing the footprint of the company and all the things they do. What we’re actually looking at then, even in an ideal world, is where those two spheres intersect. At best, we’re looking at a relatively small chunk of each sphere.

So let’s flip this idea on its head. What if we redefine 360 degrees as understanding the customer’s decision space? I call this the Buyersphere. The traditional view of 360 degrees is from the inside looking out, from the company’s perspective. The Buyersphere moves the perspective to that of the customer, looking from the outside in. It expands the scope to include the events that lead to consideration, the competitive comparisons, the balancing of buying factors, interactions with all potential candidates and the branches of the buying path itself.  What if you decide to become the best at mapping that mental space?  I still wouldn’t call it a 360-degree view, but it would be a view that very few of your competitors would have.

One of the things I believe is holding Big Data back is that we don’t have a frame within which to use it. Peter Norvig, Google’s director of research, outlined 17 warning signs in experimental design and interpretation. Two of them were the lack of a specific hypothesis and the lack of a theory. You need a conceptual frame from which to construct a theory, and from that theory you can derive a specific hypothesis to validate. It’s this construct that helps you separate signal from noise. Without it, you’re relying on serendipity to identify meaningful patterns, and we humans have a nasty tendency to mistake noise for patterns.
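To make that last point concrete, here’s a minimal sketch (in Python; the names and numbers are mine, purely illustrative) of how readily pure noise produces convincing-looking “patterns”: generate a few dozen unrelated random walks and you will almost always find a pair that appear strongly correlated.

```python
# Toy illustration: random noise routinely produces "patterns."
# Generate unrelated random walks and look for spuriously high correlations.
import numpy as np

rng = np.random.default_rng(seed=42)
n_series, n_points = 50, 100

# 50 independent random walks -- no real relationship between any of them.
walks = rng.normal(size=(n_series, n_points)).cumsum(axis=1)

best = (0.0, None)
for i in range(n_series):
    for j in range(i + 1, n_series):
        r = np.corrcoef(walks[i], walks[j])[0, 1]
        if abs(r) > abs(best[0]):
            best = (r, (i, j))

print(f"Strongest 'pattern' found: r = {best[0]:.2f} between series {best[1]}")
# Without a prior theory or hypothesis, a correlation like this looks like signal.
# With one, you would know to treat it as the statistical accident it is.
```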

If we look at opportunities for establishing a competitive advantage, redefining what we mean by understanding our customers is a pretty compelling one. This construct provides a robust and testable space within which to use Big Data and other, more qualitative, approaches. It’s relatively doable for any organization to consolidate its data into a fairly comprehensive “inside-out” view of customers’ touch points. Essentially, it’s a logistical exercise. I won’t say it’s easy, but it is doable. But if we set our goal a little differently – working toward a true “outside-in” view of our company – that sets the bar substantially higher.

360 degrees? Maybe not. But it’s a much broader view than most marketers have.

Google’s Etymological Dream Come True

First published November 14, 2013 in Mediapost’s Search Insider

Yesterday’s Search Insider column caught my eye. Aaron Goldman explained how search ads were the original native ads. He also explained why native ads work. This is backed up by research we did about 5 years ago, showing how contextual relevance substantially boosted ad effectiveness (but not, ironically, ad awareness). I did a fairly long blog post on the concept of “aligned” intent, if you really want to roll up your sleeves and dive in.

The funny thing was, I was struck by the use of the word “native” itself. For some reason, the use of the term in today’s more politically charged world struck a note of immediate uneasiness. On a gut level, it reminded me of the insensitivity of Daniel Snyder, owner of the Washington Redskins. There’s nothing immoral about the term itself, but it is currently tied to an emotionally charged issue.

As I often do, I decided to check the etymological roots of “native,” and I immediately noticed something different on the Google search page. There, at the top, was an etymological timeline showing that the root of “native” is the Latin “nasci” – meaning “to be born.” So it was entirely appropriate, given Aaron’s assertion that native advertising was “born” on the search page. But it was at the bottom, where a downward arrow promised “more,” that I hit etymological pay dirt.

Google showed me the typical dictionary entries, but at the bottom, it gave me a chart from its Ngram Viewer showing usage of “native” in books and publications over the past 200 years. Interestingly, the term has been in slow decline over that period, with a bit of a resurgence over the last 25 years. When I clicked on the graph, it broke the results down further, showing that small-n “native” has been used less and less, but big-N “Native” took a jump in popularity in the mid-’80s, accounting for the mild bump.

Google’s Ngram Viewer isn’t new, but its capabilities have recently been beefed up, providing a fascinating visual tool for us “wordies” out there. With it, you can plot the popularity of words over 500 years in a body of over 5 million books. For example, a blog post at Informationisbeautiful.net shows several fascinating word trend charts in the English corpus, including drug trends (cocaine was a popular topic in Victorian times, slowed down in the ’20s and exploded again in the ’80s), the battle of religion vs. science (the popularity crossover was in 1930, but the trend has reversed and we’re heading for another one) and interest in sex vs. marriage (sex was barely mentioned prior to 1800, stayed relatively constant until 1910 and grew dramatically in the ’70s, but lately it’s dropped off a cliff; marriage has had a spikier history but has remained fairly constant over the last 200 years).

I tried a few charts of my own. Since 1885, “evolution” has beaten “creation,” though it took a noticeable drop during the ’30s; since 1960, both have been on the rise. In 1980, Apple got off to an initial head start, but Microsoft passed it in 1992, never to look back (although it’s had a precipitous decline since 2000). Perhaps the most interesting chart compares “radio,” “television” and “internet” since 1900. Radio started growing in the ’20s and hit its popularity peak around 1945, but the crossover with television would take another 40 years (about 1982). Television would enjoy only a brief period of dominance. In 1990, the meteoric rise of the Internet began, and it surpassed both radio and television around 1997.

[Chart: “radio” vs. “television” vs. “internet”]
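If you want to recreate a chart like the one above outside the browser, a minimal sketch along these lines would do it (Python with matplotlib; the “ngrams.csv” file name and column layout are my own assumption – the idea is simply that you’ve exported yearly frequencies for each word from the Google Books data):

```python
# Sketch: plot word-frequency curves like the Ngram Viewer chart above.
# Assumes yearly frequencies have been saved to "ngrams.csv" with columns:
# year, radio, television, internet  (hypothetical file and layout, not a Google API)
import csv
import matplotlib.pyplot as plt

years, series = [], {"radio": [], "television": [], "internet": []}
with open("ngrams.csv", newline="") as f:
    for row in csv.DictReader(f):
        years.append(int(row["year"]))
        for word in series:
            series[word].append(float(row[word]))

for word, freqs in series.items():
    plt.plot(years, freqs, label=word)

plt.xlabel("Year")
plt.ylabel("Relative frequency in the corpus")
plt.legend()
plt.title('"radio" vs. "television" vs. "internet"')
plt.show()
```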

My final chart was to see how Google fared in its own tool. Not surprisingly, Google has dominated the search space since 2001, and done so quite handily. Currently, it’s six times more popular than its rivals, Yahoo and Bing. One caveat, though: Bing’s popularity started to climb in 1830, so I suspect those mentions refer to either the cherry, Chinese people named Bing or a German company that used to make kitchen utensils. Either that, or Microsoft has had its search engine in development a lot longer than anyone guessed.

[Chart: “Google” vs. “Yahoo” vs. “Bing”]

Whom Would You Trust: A Human or an Algorithm?

First published October 31, 2013 in Mediapost’s Search Insider

I’ve been struggling with a dilemma.

Almost a year ago, I wrote a column asking whether Big Data would replace strategy. That started a months-long journey for me, looking for a more informed answer to that question. It’s a massively important question that’s playing out in many arenas today, including medicine, education, government and, of course, finance.

In marketing, we’re well into the era of Big Data. Of course, it’s not just data we’re talking about. We’re talking about algorithms that use that data to make automated decisions and take action. Some time ago, MediaPost’s Steve Smith introduced us to a company called Persado, which takes an algorithmic approach to copy testing and optimization. As an ex-copywriter turned performance marketer, I wasn’t sure how I felt about that. I understand the science of continuous testing, but I have an emotional stake in the art of crafting an effective message. And therein lies the dilemma. Our comfort with algorithms seems to depend on the context in which we encounter them and the degree of automation involved.

Let me give you an example from Ian Ayres’ book “Super Crunchers.” There’s a company called Epagogix that uses an algorithm to predict the box-office appeal of unproduced movie scripts. Producers can retain the service to help them decide which projects to fund. Epagogix will also help producers optimize their chosen scripts to improve box-office performance. The question here is: do we want an algorithm controlling the creative output of the movie industry? Would we be comfortable taking humans out of the loop completely and seeing where the algorithm eventually takes us?

Now, you may counter that we could include feedback from audience responses. We could use social signals to continually improve the algorithm, a collaborative filtering approach that uses the power of Big Data to guide the film industry’s creative process. Humans are still in the loop in this approach, but only as an aggregated sounding board. We have removed the essentially human elements of creativity, emotion and intuition. Even with the most robust system imaginable, are you comfortable with us humans taking our hands off the wheel?

Here’s another example from Ayres’ book. There is substantial empirical evidence that algorithms are better at diagnosing medical conditions than clinical practitioners. In a 1989 study by Dawes, Faust and Meehl, an algorithmic diagnostic rule set was consistently more reliable than actual clinicians. The researchers then tried a combination, in which doctors were shown the algorithm’s output but remained the final judges. Again, the doctors would have been better off going with the algorithm’s results: their second-guessing increased their margin of error significantly.
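To see why the second-guessing hurts, here’s a toy simulation (Python; every accuracy figure is invented for illustration and is not taken from the Dawes, Faust and Meehl study). A fixed rule is right a set percentage of the time; a simulated “clinician” sees the rule’s call but occasionally overrides it with a judgment no better than a coin flip. The combination comes out reliably worse than the rule alone.

```python
# Toy simulation: a fixed diagnostic rule vs. the same rule with human overrides.
# All accuracy figures below are invented for illustration only.
import random

random.seed(1)
N = 100_000
RULE_ACCURACY = 0.80      # the algorithm is right 80% of the time (assumed)
OVERRIDE_RATE = 0.25      # the "clinician" overrides the rule 25% of the time (assumed)
OVERRIDE_ACCURACY = 0.50  # overrides are no better than a coin flip (assumed)

rule_correct = 0
human_correct = 0
for _ in range(N):
    rule_is_right = random.random() < RULE_ACCURACY
    rule_correct += rule_is_right

    if random.random() < OVERRIDE_RATE:
        # Clinician ignores the rule and goes with their own judgment.
        human_correct += random.random() < OVERRIDE_ACCURACY
    else:
        # Clinician defers to the rule.
        human_correct += rule_is_right

print(f"Rule alone:       {rule_correct / N:.1%} correct")
print(f"Rule + overrides: {human_correct / N:.1%} correct")
```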

But, even knowing this, would you be willing to rely completely on an automated algorithm the next time you need medical attention? What if there was no doctor involved at all, and you were diagnosed and treated by an algo-driven robot?

There is also mounting (albeit highly controversial) evidence showing that direct instruction produces better learning outcomes than traditional exploratory teaching methods. In direct instruction, scripted automatons could easily replace the teacher’s role. Test scores could provide self-optimizing feedback loops. Learning could be driven by algorithms and delivered at a distance. Classrooms, along with teachers, could disappear completely. Is this a school you’d sign your kid up for?

Let’s stoke the fires of this dilemma a little. In a frightening TED talk, Kevin Slavin talks about how algorithms rule the world and offers a few examples of how they’ve gotten it wrong in the past. Pricing algorithms on Amazon priced an out-of-print book called “The Making of a Fly” at a whopping $23.6 million. Surprisingly, there were no sales. And in financial markets, where we’ve largely abdicated control to algorithms, those same algorithms spun out of control in 2012 no fewer than 18,000 times. So far, these instances have been identified and corrected in milliseconds, but there’s always a Black Swan chance that one day they’ll crash the economy just for the hell of it.
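The mechanism behind that $23.6 million price tag is worth sketching because it’s so mundane: two third-party sellers’ repricing bots keying off each other, one reportedly pricing at roughly 0.998 times its rival’s price and the other at roughly 1.27 times, with no ceiling anywhere. A few lines of Python (the multipliers are approximately those reported at the time; the starting prices are invented) show how quickly the loop compounds:

```python
# Sketch of the runaway repricing loop behind the $23.6 million textbook.
# Multipliers are roughly those reported at the time; starting prices are invented.
UNDERCUT = 0.998   # seller A always prices just below seller B
MARKUP = 1.27      # seller B always prices well above seller A

price_a, price_b = 35.00, 40.00  # hypothetical starting prices

for day in range(1, 61):
    price_a = UNDERCUT * price_b   # A re-prices against B
    price_b = MARKUP * price_a     # B re-prices against A
    if day % 10 == 0:
        print(f"Day {day}: A = ${price_a:,.2f}   B = ${price_b:,.2f}")

# With no sanity check, the combined factor (~1.27 per cycle) compounds daily,
# pushing both prices into the millions within a couple of months.
```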

But before we humans feel too smug, let’s remember this sobering fact: 20% of all fatal diseases are misdiagnosed. In fact, misdiagnosis accounts for about one-third of all medical error. And we have no one to blame but ourselves for that.

As I said – it’s a dilemma.

What Does Being “Online” Mean?

First published October 24, 2013 in Mediapost’s Search Insider

If readers’ responses to my few columns about Google’s Glass can be considered a representative sample (which, for many reasons, they can’t, but let’s put that aside for the moment), it appears we’re circling the concept warily. There’s good reason for this. Privacy concerns aside, we’re in virgin territory here, territory that may shift what it means to be online.

Up until now, the concept of online had a lot in common with our understanding of physical travel and acquisition. As Peter Pirolli and Stuart Card discovered, our virtual travels tapped into our evolved strategies for hunting and gathering. The analogy, which holds up in most instances, is that we traveled to a destination. We “went” online, to “go” to a website, where we “got” information. It was, in our minds, much like a virtual shopping trip. Our vehicle just happened to be whatever piece of technology we were using to navigate the virtual landscape of “online.”

As long as we framed our online experiences in this way, we had the comfort of knowing we were somewhat separate from whatever “online” was. Yes, it was morphing faster than we could keep up with, but it was under our control, subject to our intent. We chose when we stepped from our real lives into our virtual ones, and the boundaries between the two were fairly distinct.

There’s a certain peace of mind in this. We don’t mind the idea of online as long as it’s a resource subject to our whims. Ultimately, it’s been our choice whether we “go” online or not, just as it’s our choice to “go” to the grocery store, or the library, or our cousin’s wedding. The sphere of our lives, as defined by our consciousness, and the sphere of “online” only intersected when we decided to open the door.

As I said last week, even the act of “going” online required a number of deliberate steps on our part. We had to choose a connected device, frame our intent and set a navigation path (often through a search engine). Each of these steps reinforced our sense that we were at the wheel in this particular journey. Consider it our security blanket against a technological loss of control.

But, as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being “online” will cease to be about “going” and will become more about “being.”  As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.

Being “online” will mean being “plugged in.” The lines between “online” and “ourselves” will become blurred, perhaps invisible, as technology moves at the speed of unconscious thought. We won’t be rationally choosing destinations, applications or devices. We won’t be keying in commands or queries. We won’t even be clicking on links. All the comforting steps that currently reinforce our sense of movement through a virtual space at our pace and according to our intent will fade away. Just as a light bulb doesn’t “go” to electricity, we won’t “go” online.  We will just be plugged in.

Now, I’m not suggesting a Matrix-like loss of control. I really don’t believe we’ll become feed sacs plugged into the mother of all networks. What I am suggesting is a switch from a rather slow, deliberate interface that operates at the speed of conscious thought to a much faster one that taps into the speed of our subconscious cognitive processing. The impulses that control the gateway of information, communication and functionality will still come from us, but they will operate below the threshold of our conscious awareness. The Internet will be constantly reading our minds and serving up stuff before we even “know” we want it.

That may seem like neurological semantics, but it’s a vital point to consider. Humans have been struggling for centuries with the idea that we may not be as rational as we think we are. Unless you’re a neuroscientist, psychologist or philosopher, you may not have spent a lot of time pondering the nature of consciousness, but whether we actively think about it or not, it does provide a mental underpinning to our concept of who we are.  We need to believe that we’re in constant control of our circumstances.

The newly emerging definition of what it means to be “online” may force us to explore the nature of our control at a level many of us may not be comfortable with.

Losing My Google Glass Virginity

Originally published October 17, 2013 in Mediapost’s Search Insider

Rob, I took your advice.

A few columns back, when I said Google’s Glass might not be ready for mass adoption, fellow Search Insider Rob Garner gave me this advice: “Don’t knock it until you try it.” So, when a fellow presenter at a conference last week brought along his Glass and offered me a chance to try them (or “it”? Does anyone else find Google’s messing around with plural forms confusing and irritating?), I took him up on it. To say I jumped at it may be overstating the case – let’s just say I enthusiastically ambled to it.

I get Google Glass. I truly do. To be honest, the actual experience of using them came up a little short of my expectations, but not much. It’s impressive technology.

But here’s the problem. I’m a classic early adopter. I always look at what things will be, overlooking the limitations of what currently “is.” I can see the dots of potential extending toward a horizon of unlimited possibility, and don’t sweat the fact that those dots still have to be connected.

On that level, Google Glass is tremendously exciting, for two reasons that I’ll get to in a second. For many technologies, I’ll even connect a few dots myself, willing to trade off pain for gain. That’s what early adopters do. But not everyone is an early adopter. Even given my proclivity for nerdiness, I felt a bit like a jerk standing in a hotel lobby, wearing Glass, staring into space, my hand cupped over the built-in mike, repeating instructions until Glass understood me. I learned there’s a new label for this; for a few minutes I became a “Glasshole.”

Sorry Rob, I still can’t see the mainstream going down this road in the near future.

But there are two massive reasons why I’m still tremendously bullish on wearable technology as a concept. One, it leverages the importance of use case in a way no previous technology has ever done. And two, it has the potential to overcome what I’ll call “rational lag time.”

The importance of use case in technology can be summed up in one word: iPad. There is absolutely no technological reason why tablets, and iPads in particular, should be as popular as they are. There is nothing in an iPad that did not exist in another form before. It’s a big iPhone, without the phone. The magic of an iPad lies in the fact that it’s a brilliant compromise: the functionality of a smartphone in a form factor that makes it just a little bit more user-friendly. And because of that, it introduced a new use case and became the “lounge” device. Unlike a smartphone, where size limits the user experience in some critical ways (primarily in input and output), tablets offer acceptable functionality in a more enjoyable form. And that is why almost 120 million tablets were sold last year, a number projected (by Gartner) to triple by 2016.

The use case of wearable technology still needs to be refined by the market, but the potential to create an addictive user experience is exceptional. Even with Glass’s current quirks, it’s a very cool interface. Use case alone leads me to think the recent estimate of a $19 billion wearable technology market by 2018 is, if anything, a bit on the conservative side.

But it’s the “rational lag time” factor that truly makes wearable technology a game changer. Currently, none of our connected technologies can keep up with our brains. When we decide to do something, our brains register subconscious activity in about 100 milliseconds, or about one-tenth of a second. However, it takes another 500 milliseconds (half a second) before our conscious brain catches up and we become aware of our decision to act. In more complex actions, a further lag happens when we rationalize our decision and think through possible alternatives. Finally, there’s the action lag, where we have to physically do something to act on our intention. At each stage, our brains can shut down impulses that feel like they require too much effort. Humans are, neurologically speaking, rather lazy (or energy-efficient, depending on how you look at it).

So we have a sequence of potential lags before we act on our intent: Unconscious Stimulation > Conscious Awareness > Rational Deliberation > Possible Action. Our current interactions with technology live at the end of this chain. Even if we have a smartphone in our pocket, it takes several seconds before we’re actively engaging with it. While that might not seem like much, when the brain measures action in split seconds, that’s an eternity of time.

But technology has the potential to work backward along this chain. Let’s move just one step back, to rational deliberation. If we had an “always on” link where we could engage in less than one second, we could utilize technology to help us deliberate. We still have to go through the messiness of framing a request and interpreting results, but it’s a quantum step forward from where we currently are.

The greatest potential (and the greatest fear) lies one step further back – at conscious awareness. Now we’re moving from wearable technology to implantable technology. Imagine if technology could be activated ahead of conscious thought, so the unconscious stimulation is detected and parsed, and by the time our conscious brain kicks into gear, relevant information and potential actions are already gathered and waiting for us. At this point, any artifice of the interface is gone, and technology has eliminated the rational lag. This is the beginning of Kurzweil’s Singularity: the destination on a path that devices like Google Glass are starting down.

As I said, I like to look at the dots. Someone else can worry about how to connect them.

Bounded Rationality in a World of Information

First published October 11, 2013 in Mediapost’s Search Insider.  

Humans are not good data crunchers. In fact, we pretty much suck at it. There are variations to this rule, of course. We all fall somewhere on a bell curve when it comes to our sheer rational processing power. But, in general, we would all fall to the far left of even an underpowered laptop.


Herbert Simon recognized this more than a half century ago, when he coined the term “bounded rationality.” In a nutshell, we can only process so much information before we become overloaded, at which point we fall back on much more human approaches, typically known as emotion and gut instinct.

Even when we think we’re being rational, logic-driven beings, our decision frameworks are built on the foundations of emotion and intuition. This is not bad. Intuition tends to be a masterful way to synthesize inputs quickly and efficiently, allowing us generally to make remarkably good decisions with a minimum of deliberation. Emotion acts to amplify this process, inserting caution where required and accelerating when necessary. Add to this the finely honed pattern recognition instincts we humans have, and it turns out the cogs of our evolutionary machinery work pretty well, allowing us to adequately function in very demanding, often overwhelming environments.

We’re pretty efficient; we’re just not that rational. There is a limit to how much information we can “crunch.”

So when information explodes around us, it raises a question: if we’re not very good at processing data, what happens when we’re inundated with the stuff? Yes, Google is doing its part by helpfully “organizing the world’s information,” allowing us to narrow our searches to the most relevant sources, but still, how much time are we willing to devote to wading through mounds of data? It’s as if we were all born to be dancers, and now we’re stuck being insurance actuaries. Unlike Heisenberg (sorry, couldn’t resist the “Breaking Bad” reference), we don’t like it, we’re not very good at it, and it doesn’t make us feel alive.

To make things worse, we feel guilty if we don’t use the data. Now, thanks to the Web, we know it’s there. It used to be much easier to feign ignorance and trust our guts. There are few excuses now. For every decision we have to make, we know that there is information which, carefully analyzed, should lead us to a rational, logical conclusion. Or, we could just throw a dart and then go grab a beer. Life is too short as it is.

When Simon coined the term “bounded rationality,” he knew that the “bounds” were not just the limits on the information available but also the limits of our own cognitive processing power and the limits on our available time. Even if you removed the boundaries on the information available (as is now happening) those limits to cognition and time would remain.

I suspect we humans are developing the ability to fool ourselves that we are highly rational. For the decisions that count, we do the research, but often we filter that information through a very irrational web of biases, beliefs and emotions. We cherry-pick information that confirms our views, ignore contradictory data and blunder our way to what we believe is an informed decision.

But, even if we are stuck with the same brain and the same limitations, I have to admit that the explosion of available information has moved us all a couple of notches to the right on Simon’s “satisficing” curve. We may not crunch all the information available, but we are crunching more than we used to, simply because it’s available.  I guess this is a good thing, even if we’re a little delusional about our own logical abilities.

What is this “Online” You Speak Of?

First published September 12, 2013 in Mediapost’s Search Insider.

I was in an airport yesterday, and I was eavesdropping. That’s what I do in airports. It’s much more entertaining than watching the monitors. In this particular case, I was listening to a conversation between a well-dressed elderly gentleman, probably in his late ’80s, and what appeared to be his son. They were waiting for pre-boarding. The son was making that awkward small talk — you know, the conversation you have when you don’t really know your parent well enough anymore to be able to talk about what they’re really interested in, but you still feel the need to fill the silence. In this case, the son was talking to his dad about a magazine: “I used to get a copy every time I flew to London,” he said. “But they don’t publish it anymore. It’s all done online.”

The father, who had the look and appearance of a retired university professor, looked at his son quizzically for a few moments. It was as if the son had suddenly switched from English to Swahili mid-conversation.

“What’s ‘online’?”

“Online — on the Internet. It’s published electronically. There’s no print version anymore?”

The father grappled with the impact of this statement, then shook his head slowly and sadly. “That’s very sad. I suppose the mail service’s days are numbered too.”

The son replied, “Oh yes, I’m sure. No one mails things anymore.”

“But what will I do? I still buy things from catalogs.” It was as if the entire weight of the last two-and-a-half decades had suddenly settled on the frail gentleman’s shoulders.

At first, I couldn’t believe that anyone still alive didn’t know what “online” was. Isn’t that pretty much equivalent to oxygen or gravity now? Hasn’t it reached the point of ubiquity at which we all just take it for granted, no longer needing to think about it?

But then, because in the big countdown of life, I’m also on the downhill slope, closer to the end than to the beginning, I started thinking about how wrenching technological change has become. If you don’t keep up, the world you know is swept away, to be replaced with a world where your mail carrier’s days are numbered, the catalogs you depend on are within a few years of disappearing, and everything seems to be headed for the mysterious destination known as “online.”

As luck would have it, my seat on the airplane was close enough to this gentleman’s that I was able to continue my eavesdropping (if you see me at an airport, I advise you to move well out of earshot). You might have thought, as I first did, that he was in danger of losing his marbles. I assure you, nothing could be further from the truth. For over four hours, he carried on intelligent, informed conversations on multiple topics, made some amazing sketches in pencil, and generally showed every sign of being the man I hope to be when I’m approaching 90. This was not a man who had lost touch with reality; this was a man who is continually surprised (and, I would assume, somewhat frustrated) to find that reality seems to be a moving target.

We, the innovatively smug, may currently feel secure in our own technophilia, but our ability to keep up with the times may slip a little in the coming years. It’s human to feel secure with the world we grew up and functioned in. Our evolutionary environment was substantially more stable than the one we know today. As we step back from the hectic pace, don’t be surprised if we lose a little ground. Someday, when our children speak to us of the realities of their world, don’t be surprised if some of the terms they use sound a little foreign to our ears.