Losing My Google Glass Virginity

Originally published October 17, 2013 in Mediapost’s Search Insider

Rob, I took your advice.

A few columns back, when I said Google’s Glass might not be ready for mass adoption, fellow Search Insider Rob Garner gave me this advice: “Don’t knock it until you try it.” So when a fellow presenter at a conference last week brought along his Glass and offered me a chance to try them (or “it”? Does anyone else find Google’s messing around with plural forms confusing and irritating?), I took him up on it. To say I jumped at it may be overstating the case – let’s just say I enthusiastically ambled toward it.

I get Google Glass. I truly do. To be honest, the actual experience of using them came up a little short of my expectations, but not much. It’s impressive technology.

But here’s the problem. I’m a classic early adopter. I always look at what things will be, overlooking the limitations of what currently “is.” I can see the dots of potential extending toward a horizon of unlimited possibility, and don’t sweat the fact that those dots still have to be connected.

On that level, Google Glass is tremendously exciting, for two reasons that I’ll get to in a second. For many technologies, I’ll even connect a few dots myself, willing to trade off pain for gain. That’s what early adopters do. But not everyone is an early adopter. Even given my proclivity for nerdiness, I felt a bit like a jerk standing in a hotel lobby, wearing Glass, staring into space, my hand cupped over the built-in mike, repeating instructions until Glass understood me. I learned there’s a new label for this; for a few minutes I became a “Glasshole.”

Sorry, Rob, I still can’t see the mainstream going down this road in the near future.

But there are two massive reasons why I’m still tremendously bullish on wearable technology as a concept. One, it leverages the importance of use case in a way no previous technology has ever done. And two, it has the potential to overcome what I’ll call “rational lag time.”

The importance of use case in technology can be summed up in one word: iPad. There is absolutely no technological reason why tablets, and iPads in particular, should be as popular as they are. There is nothing in an iPad that did not exist in another form before. It’s a big iPhone, without the phone. The magic of an iPad lies in the fact that it’s a brilliant compromise: the functionality of a smartphone in a form factor that makes it just a little bit more user-friendly. And because of that, it introduced a new use case and became the “lounge” device. Unlike a smartphone, where size limits the user experience in some critical ways (primarily in input and output), tablets offer acceptable functionality in a more enjoyable form. And that is why almost 120 million tablets were sold last year, a number projected (by Gartner) to triple by 2016.

The use case of wearable technology still needs to be refined by the market, but the potential to create an addictive user experience is exceptional. Even with Glass’s current quirks, it’s a very cool interface. Use case alone leads me to think the recent estimate that the wearable technology market will hit $19 billion by 2018 is, if anything, a bit on the conservative side.

But it’s the “rational lag time” factor that truly makes wearable technology a game changer. Currently, our connected technologies can’t keep up with our brains. When we decide to do something, our brains register subconscious activity in about 100 milliseconds, or about one tenth of a second. However, it takes another 500 milliseconds (half a second) before our conscious brain catches up and we become aware of our decision to act. In more complex actions, a further lag happens when we rationalize our decision and think through our possible alternatives. Finally, there’s the action lag, where we have to physically do something to act on our intention. At each stage, our brain can shut down impulses if it feels they require too much effort. Humans are, neurologically speaking, rather lazy (or energy-efficient, depending on how you look at it).

So we have a sequence of potential lags before we act on our intent: Unconscious Stimulation > Conscious Awareness > Rational Deliberation > Possible Action. Our current interactions with technology live at the end of this chain. Even if we have a smartphone in our pocket, it takes several seconds before we’re actively engaging with it. While that might not seem like much, to a brain that measures action in split seconds, it’s an eternity.

But technology has the potential to work backward along this chain. Let’s move just one step back, to rational deliberation. If we had an “always on” link where we could engage in less than one second, we could utilize technology to help us deliberate. We still have to go through the messiness of framing a request and interpreting results, but it’s a quantum step forward from where we currently are.
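
To put rough numbers on that, here’s a minimal sketch (in Python, purely for illustration). The 100-millisecond and 600-millisecond brain figures come from the timeline above; the device engagement times are round-number assumptions for the sake of comparison, not measurements.

```python
# Illustrative only: how long after an impulse before technology is helping us.
# The 0.1 s and 0.6 s figures come from the timeline above; the device
# engagement lags are assumed round numbers, not measurements.

UNCONSCIOUS_STIMULATION = 0.1  # seconds until subconscious activity registers
CONSCIOUS_AWARENESS = 0.6      # seconds until we're aware of our intent to act

assumed_engagement_lag = {
    "smartphone in your pocket": 5.0,  # pull it out, unlock, find the app (assumed)
    "always-on wearable link": 0.8,    # engage in under a second (assumed)
}

for device, lag in assumed_engagement_lag.items():
    total = CONSCIOUS_AWARENESS + lag
    ratio = total / UNCONSCIOUS_STIMULATION
    print(f"{device}: ~{total:.1f} s from impulse to engagement, "
          f"roughly {ratio:.0f}x the brain's first flicker of activity")
```

Shaving that gap from several seconds to under one doesn’t just feel faster; it moves technology from the tail end of the chain into the deliberation itself.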

The greatest potential (and the greatest fear) lies one step further back – at conscious awareness. Now we’re moving from wearable technology to implantable technology. Imagine if technology could be activated at the speed of conscious thought, so the unconscious stimulation is detected and parsed, and by the time our conscious brain kicks into gear, relevant information and potential actions are already gathered and waiting for us. At this point, any artifice of the interface is gone, and technology has eliminated the rational lag. This is the beginning of Kurzweil’s Singularity: the destination on a path that devices like Google Glass are starting down.

As I said, I like to look at the dots. Someone else can worry about how to connect them.

Bounded Rationality in a World of Information

First published October 11, 2013 in Mediapost’s Search Insider.  

Humans are not good data crunchers. In fact, we pretty much suck at it. There are variations to this rule, of course. We all fall somewhere on a bell curve when it comes to our sheer rational processing power. But, in general, we would all fall to the far left of even an underpowered laptop.

Herbert Simon recognized this more than a half century ago, when he coined the term “bounded rationality.” In a nutshell, we can only process so much information before we become overloaded, at which point we fall back on much more human approaches, typically known as emotion and gut instinct.

Even when we think we’re being rational, logic-driven beings, our decision frameworks are built on the foundations of emotion and intuition. This is not a bad thing. Intuition tends to be a masterful way to synthesize inputs quickly and efficiently, generally allowing us to make remarkably good decisions with a minimum of deliberation. Emotion acts to amplify this process, inserting caution where required and accelerating when necessary. Add to this the finely honed pattern recognition instincts we humans have, and it turns out the cogs of our evolutionary machinery work pretty well, allowing us to function adequately in very demanding, often overwhelming environments.

We’re pretty efficient; we’re just not that rational. There is a limit to how much information we can “crunch.”

So when information explodes around us, it raises a question – if we’re not very good at processing data, what happens when we’re inundated with the stuff? Yes, Google is doing its part by helpfully “organizing the world’s information,” allowing us to narrow down our search to the most relevant sources, but still, how much time are we willing to devote to wading through mounds of data? It’s as if we were all born to be dancers, and now we’re stuck being insurance actuaries. Unlike Heisenberg (sorry, couldn’t resist the “Breaking Bad” reference) – we don’t like it, we’re not very good at it, and it doesn’t make us feel alive.

To make things worse, we feel guilty if we don’t use the data. Now, thanks to the Web, we know it’s there. It used to be much easier to feign ignorance and trust our guts. There are few excuses now. For every decision we have to make, we know that there is information which, carefully analyzed, should lead us to a rational, logical conclusion. Or, we could just throw a dart and then go grab a beer. Life is too short as it is.

When Simon coined the term “bounded rationality,” he knew that the “bounds” were not just the limits on the information available but also the limits of our own cognitive processing power and the limits on our available time. Even if you removed the boundaries on the information available (as is now happening), those limits to cognition and time would remain.

I suspect we humans are developing the ability to fool ourselves that we are highly rational. For the decisions that count, we do the research, but often we filter that information through a very irrational web of biases, beliefs and emotions. We cherry-pick information that confirms our views, ignore contradictory data and blunder our way to what we believe is an informed decision.

But, even if we are stuck with the same brain and the same limitations, I have to admit that the explosion of available information has moved us all a couple of notches to the right on Simon’s “satisficing” curve. We may not crunch all the information available, but we are crunching more than we used to, simply because it’s available.  I guess this is a good thing, even if we’re a little delusional about our own logical abilities.

What is this “Online” You Speak Of?

First published September 12, 2013 in Mediapost’s Search Insider.

I was in an airport yesterday, and I was eavesdropping. That’s what I do in airports. It’s much more entertaining than watching the monitors. In this particular case, I was listening to a conversation between a well-dressed elderly gentleman, probably in his late 80s, and what appeared to be his son. They were waiting for pre-boarding. The son was making that awkward small talk — you know, the conversation you have when you don’t really know your parent well enough anymore to be able to talk about what they’re really interested in, but you still feel the need to fill the silence. In this case, the son was talking to his dad about a magazine: “I used to get a copy every time I flew to London,” he said. “But they don’t publish it anymore. It’s all done online.”

The father, who had the look and appearance of a retired university professor, looked at his son quizzically for a few moments. It was as if the son had suddenly switched from English to Swahili in mid-conversation.

“What’s ‘online’?”

“Online — on the Internet. It’s published electronically. There’s no print version anymore.”

The father grappled with the impact of this statement, then shook his head slowly and sadly. “That’s very sad. I suppose the mail service’s days are numbered too.”

The son replied, “Oh yes, I’m sure. No one mails things anymore.”

“But what will I do? I still buy things from catalogs.” It was as if the entire weight of the last two-and-a-half decades had suddenly settled on the frail gentleman’s shoulders.

At first, I couldn’t believe that anyone still alive didn’t know what “online” was. Isn’t that pretty much equivalent to oxygen or gravity now? Hasn’t it reached the point of ubiquity at which we all just take it for granted, no longer needing to think about it?

But then, because in the big countdown of life, I’m also on the downhill slope, closer to the end than to the beginning, I started thinking about how wrenching technological change has become. If you don’t keep up, the world you know is swept away, to be replaced with a world where your mail carrier’s days are numbered, the catalogs you depend on are within a few years of disappearing, and everything seems to be headed for the mysterious destination known as “online.”

As luck would have it, my seat on the airplane was close enough to this gentleman’s that I was able to continue my eavesdropping (if you see me at an airport, I advise you to move well out of earshot). You might have thought, as I first did, that he was in danger of losing his marbles. I assure you, nothing could be further from the truth. For over four hours, he carried on intelligent, informed conversations on multiple topics, made some amazing sketches in pencil, and generally showed every sign of being the man I hope to be when I’m approaching 90. This was not a man who had lost touch with reality; this was a man who was continually surprised (and, I would assume, somewhat frustrated) to find that reality seems to be a moving target.

We, the innovatively smug, may currently feel secure in our own technophilia, but our ability to keep up with the times may slip a little in the coming years. It’s human to feel secure in the world we grew up and functioned in. Our evolutionary environment was substantially more stable than the one we know today. As we step back from the hectic pace, we shouldn’t be surprised if we lose a little ground. And someday, when our children speak to us of the realities of their world, don’t be surprised if some of the terms they use sound a little foreign to our ears.

Google Glass and the Sixth Dimension of Diffusion

First published August 29, 2013 in Mediapost’s Search Insider

Tech stock analyst and blogger Henry Blodget has declared Google Glass dead on arrival. I’m not going to spend any time talking about whether or not I agree with Mr. Blodget (for the record, I both do – Google Glass isn’t an adoptable product as it sits – and don’t, because wearable technology is the next great paradigm shifter) but rather dig into the reason he feels Google Glass is stillborn.

They make you look stupid.

The input for Google Glass is your voice, which means you have to walk around saying things like, “Glass, take a video” or “Glass, what is the temperature?” The fact is, to use Google Glass, you have to accept that you’ll look like either a moron or the biggest jerk in the world. Either way, the vast majority of us aren’t ready to step into that particular spotlight.

Last week, I talked about Everett Rogers’ Diffusion of Innovations and shared five variables that determine the rate of adoption. There is actually an additional factor that Rogers also mentioned: “the status-conferring aspects of innovations emerged as the sixth dimension predicting rate of adoption.”

If you look at Rogers’ diffusion curve, you’ll find the segmentation of the adoption population is as follows: Innovators (2.5% of the population), Early Adopters (13.5%), Early Majority (34%), Late Majority (34%) and Laggards (16%). But there’s another breed that probably hides out somewhere between Innovators and Early Adopters. I call them the PAs (for Pompous Asses). They love gadgets, they love spending way too much for gadgets, and they love being seen in public sporting gadgets that scream “PA.” Previously, they were the ones seen guffawing loudly into Bluetooth headsets while sitting next to you on an airplane, carrying on their conversation long after the flight attendant told them to wrap it up. Today, they’d be the ones wearing Google Glass.
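
As an aside, those percentages aren’t arbitrary: Rogers carved his adopter categories out of a normal distribution, splitting the population at the mean time of adoption plus or minus one and two standard deviations. A quick sketch (Python, purely for illustration) recovers the familiar split:

```python
from math import erf, sqrt

def phi(z):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Rogers segments adopters by how many standard deviations their adoption
# time falls before or after the mean time of adoption.
segments = {
    "Innovators":     phi(-2.0),              # earlier than mean - 2 SD
    "Early Adopters": phi(-1.0) - phi(-2.0),  # between -2 SD and -1 SD
    "Early Majority": phi(0.0) - phi(-1.0),   # between -1 SD and the mean
    "Late Majority":  phi(1.0) - phi(0.0),    # between the mean and +1 SD
    "Laggards":       1.0 - phi(1.0),         # later than mean + 1 SD
}

for name, share in segments.items():
    print(f"{name}: {share * 100:.1f}%")
# Prints roughly 2.3, 13.6, 34.1, 34.1 and 15.9 -- which Rogers rounds to
# the familiar 2.5 / 13.5 / 34 / 34 / 16 split. (The PAs, alas, don't get
# their own standard deviation.)
```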

This sixth dimension is critical to consider when the balance between the other five is still a little out of whack. Essentially, the first dimension, Relative Advantage, has to overcome the friction of #2, Compatibility, and #3, Complexity (#4, Trialability, and #5, Observability, are more factors of the actual mechanics of diffusion, rather than individual decision criteria). If the advantage of an innovation does not outweigh the friction of complexity and compatibility, it will probably die somewhere on the far left slopes of Rogers’ bell curve. The deciding factor will be the Sixth Dimension.

This is the territory that Google Glass currently finds itself in. While I have no doubt that the advantages of wearable technology (as determined by the user) will eventually far outweigh the corresponding “friction” of adoption, we’re not there yet. And so Google Glass depends on the Sixth Dimension. Does adoption make you look innovative, securely balanced on the leading edge? Or does it make you look like a dork? Does it confer social status or strip it away? After the initial buzz about Glass, social opinion seems to be falling into the second camp.

This brings us to another important factor to consider when trying to cash in on a social adoption wave: timing. Google is falling into the classic Microsoft trap of playing its hand too soon through beta release. New is cool among the early adopter set, which makes timing critical. If you can get strategic distribution and build up required critical mass fast enough, you can lessen the “pariah” factor. It’s one thing to be among a select clique of technological PAs, but you don’t want to be the only idiot in the room. Right now, with only 8,000 pairs distributed, if you’re wearing a pair, you’re probably the one that everyone else is whispering about.

Of course, you might not be able to hear them over the sound of your own voice, as you stand in front of the mirror and ask Google Glass to “take a picture.”

Psychological Priming and the Path to Purchase

First published March 27, 2013 in Mediapost’s Search Insider

In marketing, I suspect we pay too much attention to the destination, and not enough to the journey. We don’t take into account the cumulative effect of the dozens of subconscious cues we encounter on the path to our ultimate purchase. We certainly don’t understand the subtle changes of direction that can result from these cues.

Search is a perfect example of this.

As search marketers, we believe that our goal is to drive a prospect to a landing page. Some of us worry about the conversion rates once a prospect gets to the landing page. But almost none of us think about the frame of mind of prospects once they reach the landing page.

“Frame” is the appropriate metaphor here, because the entire interaction will play out inside this frame. It will impact all the subsequent “downstream” behaviors. The power of priming should not be taken lightly.

Here’s just one example of how priming can wield significant unconscious power over our thoughts and actions. Participants primed by exposure to a stereotypical representation of a “professor” did better on a knowledge test than those primed with a representation of a “supermodel.”

A simple exposure to a word can do the trick. It can frame an entire consumer decision path. So, if many of those paths start with a search engine, consider the influence that a simple search listing may have.

We could be primed by the position of a listing (higher listings = higher quality alternatives).  We could be primed (either negatively or positively) by an organization that dominates the listing real estate. We could be primed by words in the listing. We could be primed by an image. A lot can happen on that seemingly innocuous results page.

Of course, the results page is just one potential “priming” platform. Priming could happen on the landing page, a third-party site or the website itself. Every single touch point, whether we’re consciously interacting with it or not, has the potential to frame, or even sidetrack, our decision process.

If the path to purchase is littered with all these potential landmines (or, to take a more positive approach, “opportunities to persuade”), how do we use this knowledge to become better marketers? This does not fall into the typical purview of the average search marketer.

Personally, I’m a big fan of the qualitative approach (I know — big surprise) in helping to lay down the most persuasive path possible. Actually talking to customers, observing them as they navigate typical online paths in a usability testing session, and creating some robust scenarios to use in your own walk-throughs will yield far better results than quantitative number-crunching. Excel is not particularly good at being empathetic.

Jakob Nielsen has said that online, branding is all about experience, not exposure. As search marketers, it’s our responsibility to ensure that we’re creating the most positive experience possible, as our prospects make their way to the final purchase.

The devil, as always, is in the details — whether we’re paying conscious attention to them or not.

Why I – and Mark Zuckerberg – Are Bullish on Google Glass

First published February 28, 2013 in Mediapost’s Search Insider

Call it a Tipping Point. Call it an Inflection Point. Call it Epochal (whatever that means). The gist is, things are going to change — and they’re going to change in a big, big way!

First, with due deference to the brilliant Kevin Kelly, let’s look at how technology moves. In his book “What Technology Wants,” Kelly shows that technology is not dependent on a single invention or inventor. Rather, it’s the sum of multiple, incremental discoveries that move technology to a point where it can breach any resistance in its way and move into a new era of possibility. So, even if Edison had never lived, we’d still have electric lights in our homes. If he weren’t there, somebody else would have discovered it (or more correctly, perfected it). The momentum of technology would not have been denied.

Several recent developments indicate that we’re on the cusp of another technological wave of advancement. These developments have little to do with online technologies or capabilities. They’re centered on how humans and hardware connect — and it’s impossible to overstate their importance.

The Bottleneck of Our Brains

Over the past two decades, there has been a massive build-up of online capabilities. In this case, what technology has wanted is the digitization of all information. That was Step One. Step Two is to render all that information functional. Step Three will be to make all the functionality personalized. And we’re progressing quite nicely down that path, thank you very much. The rapidly expanding capabilities of online far surpass what we are able to assimilate and use at any one time. All this functionality is still fragmented and is in the process of being developed (one of the reasons I think Facebook is in danger of becoming irrelevant), but it’s there. It’s just a pain in the butt for us to utilize it.

The problem is one of cognition. The brain has two ways to process information, one fast and one slow. The slow way (using the conscious parts of our brain) is tremendously flexible but inefficient. This is the system we’ve largely used to connect online. Everything has to be processed in the form of text, both in terms of output and input, generally through a keyboard and a screen display. It’s the easiest way for us to connect with information, but it’s far from the most efficient way.

The second way is much, much faster. It’s the subconscious processing of our environment that we do every day. It’s what causes us to duck when a ball is thrown at our head, jump out of the way of an oncoming bus, fiercely protect our children and judge the trustworthiness of a complete stranger. If our brains were icebergs, this would be the 90% hidden beneath the water. But we’ve been unable to access most of this inherent efficiency and apply it to our online interactions — until now.

The Importance of Siri and Glass

Say what you want about Mark Zuckerberg, he’s damned smart. That’s why he knew immediately that Google Glass is important.

I don’t know if Google Glass will be a home run for Google. I also don’t know if Siri will ever pay back Apple’s investment in it. But I do know that 30 years from now, they’ll both be considered important milestones. And they’ll be important because they represented a sea change in how we connect with information. Both have the potential to unlock the efficiency of the subconscious brain. Siri does it by utilizing our inherent communication abilities and breaking the inefficient link that requires us not only to process our thoughts as language, but also to laboriously translate them into keystrokes. In neural terms, this is one of the most inefficient paths imaginable.

But if Siri teases us with a potentially more efficient path, Google Glass introduces a new, mind-blowing scenario of what might be possible. To parse environmental cues and stream information directly into our visual cortex in real time, creating a direct link with all that pent-up functionality that lives “in the cloud,” wipes away most of the inefficiency of our current connection paradigm.

Don’t think of the current implementation that Google is publicizing. Think beyond that to a much more elegant link between the vast capabilities of a digitized world and our own inner consciousness. Whatever Glass and Siri (and their competitors) eventually evolve into in the next decade or so, they will be far beyond what we’re considering today.

With the humanization of these interfaces, a potentially dark side effect will take place. These interfaces will become hardwired into our behavior strategies. Now, because our online interactions are largely processed at a conscious level, the brain tends to maintain maximum flexibility regarding the routines it uses. But as we access subconscious levels of processing with new interface opportunities, the brain will embed these at a similarly subconscious level. They will become habitual, playing out without conscious intervention. It’s the only way the brain can maximize its efficiency. When this happens, we will become dependent on these technological interfaces. It’s the price we’ll pay for the increased efficiency.

Building a Better Meta-Me

First published February 14, 2013 in Mediapost’s Search Insider

Last week I forecast that Facebook would become irrelevant. Some of you disagreed. Ron Stitt called Facebook the “public square” or “crossroads” of social connection.

Andre Szykier pointed out a very real challenge with the successful socialization of online: “The problem is connecting the content from my social walled gardens into a virtual cloud point. Google+ is going about it a different way. They keep expanding their walled garden with search, mail, video, chat services along with social and app services that they provide, hoping you eventually will find their garden big and rich enough so everybody will migrate. While it helps them be the CyBorg of data, it makes people more uneasier (sic) to have all of that in one garden than spread across many. Time will tell which model will thrive.”

Thank you, SI readers. As you so often do, you challenged me to give this idea a little more thought. I still inherently believe that Facebook is being marginalized on the social periphery, but both Ron and Andre have nailed a fundamental concept here that I believe merits further discussion. What does the connection point between ourselves and online (I extend this beyond social alone) evolve into?

The problem, I believe, comes with control. Who controls the connection? Understandably, Facebook, Google, and a host of others want to control this critical territory. It’s an online land grab; they offer us destinations, and we go to them. In return, because the connection happens on their turf, they get to monetize that turf. It’s like an online Monopoly game, with everyone scrambling to own Park Place so they can put more hotels on it.

The problem is that to effectively monetize, all these destinations ask us to invest in letting them know who we are. This creates the problem of profiles – so many profiles to maintain, so little time. If I move to another square, I have to start all over again.

All this profile information is used to create a “meta” representation of us. It’s the online data handshake that enables successful connection.  The issue is that Facebook, Google and all the others want us to build the profile, but for them to own it. This means we have to build multiple “meta” profiles of ourselves. It’s terribly inefficient and requires us to do most of the heavy lifting. Also, as Andre points out, it raises an important question – why should Google (or anyone else) own the meta version of me? I think that’s something I should own.

This dynamic introduces another problem: In order to reduce the heavy lifting, these destinations use our own activity to help build the profile. The more we do, the more they can learn about us. This is fine, as long as the best way to do any of these things is the option offered by the destination that’s trying to build the profile. But even with the vast resources available to a Google or Facebook, it’s almost impossible for them to stay ahead of the constant evolution of online innovation. Sooner or later, there will be a better way to do something somewhere else. At this point, we’re faced with a dilemma: Do we stick with the original destination, where we’ve invested in building a rich meta version of ourselves, or do we trade that for the better functionality offered by the new alternative, knowing that we’ll have to start building yet another meta-me?

Google and Facebook, as Ron and Andre point out, have both gone down the road of building a support platform for other innovators, hoping to at least share a significant slice of the territory with new alternatives. This allows us to use that version of our profile in more ways. But it’s still a territorial analogy, and ultimately, that creates a lasting vulnerability in an environment as dynamic as online. It’s very difficult to successfully hold territory in our ever-expanding online world.

To me, there’s only one eventual answer. We have to own our own meta-selves. Our online profile must be rich and completely portable. When we choose a new destination, our meta-me immediately unlocks the full potential of the destination, tailored specifically for us. There are challenges to be overcome — primarily around issues of privacy — but this is the only sustainable path.
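
To make that a little less abstract, here’s one hypothetical sketch of what a user-owned, portable profile could look like: a single record we hold ourselves, with scoped, revocable grants to whatever destination we happen to be visiting. Every name and field here is invented for illustration; it describes no actual product or protocol.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a user-owned "meta-me": one profile we control,
# with scoped, revocable grants to each destination we choose to use.
# All names and fields are invented for illustration.

@dataclass
class MetaMe:
    owner: str
    attributes: dict = field(default_factory=dict)  # interests, history, preferences
    grants: dict = field(default_factory=dict)      # destination -> attribute keys it may see

    def grant(self, destination: str, keys: list) -> None:
        """Let a destination see only the listed slices of the profile."""
        self.grants[destination] = set(keys)

    def revoke(self, destination: str) -> None:
        """Walk away without leaving the profile behind."""
        self.grants.pop(destination, None)

    def view_for(self, destination: str) -> dict:
        """The subset of the profile a given destination is allowed to see."""
        allowed = self.grants.get(destination, set())
        return {k: v for k, v in self.attributes.items() if k in allowed}

# The profile travels with the user, not with the destination.
me = MetaMe(owner="me", attributes={"interests": ["wearables", "search"],
                                    "hometown": "anywhere"})
me.grant("new-photo-service", ["interests"])  # unlock only what the destination needs
print(me.view_for("new-photo-service"))       # {'interests': ['wearables', 'search']}
me.revoke("new-photo-service")                # and take it back when we move on
```

The point isn’t the particular fields; it’s that the ownership and the revocation live with us, not with the destination.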

Up to now, the Internet has been all about who owns what territory. This is not surprising — it’s a natural extension of our existing worldview, one formed in a physical environment. Our minds need time to grapple with and assimilate abstract concepts. Up to now, we’ve “gone” to places online. But the evolved functionality of the Internet has expanded beyond this parochial mental scaffolding. It’s time to reimagine the possibilities, using our own concepts of consciousness as a new framework. We will live at the center, defining who we are and what we want — and the Internet will be a vast extension of our mental potential that we can call on at will, without ever having to “go” anywhere. We’ve seen hints of this in search already, conceptually fleshing out Wegner’s transactive memory.

Daunting? Yes. Kurzweilian (with all the negative and positive connotations that implies)? Probably.  Inevitable? I believe so.

Breaking Out of Facebook’s Walled Garden

First published February 7, 2013 in Mediapost’s Search Insider

According to Pew, 27% of us are looking to wean ourselves off the Facebook habit.

This is not particularly surprising. While Facebook can be incredibly distracting, it’s not really relevant to our lives. It has never been woven into the fabric of our day-to-day activities. It’s more like an awkward, albeit entertaining, interlude jammed into the long list of stuff we have to do today. That list represents our life. Facebook represents the stuff that lies on the periphery.

Here’s one way to think about it. What if Facebook went down today? Would it really matter? Sure, it might be a disappointment, but would it make us substantially change our plans?

Now consider if Google went down for the day. How many times in a day would you go to use it, then curse because it wasn’t there?

The problem is that our online social interactions are outgrowing the walled garden that is Facebook. It has failed to become essential in the way that Google has. I can go entire months without logging into my Facebook account. I have trouble going an hour without using Google. And when I need Google, I need it now.

Again, I turn to how we use language as a clue as to how we feel about things. To “search” is a verb. It’s an action that connects intents with outcomes. It’s something we have to do. And, if you’re loyal to Google as your search engine, it’s pretty easy to swap “googling” for “searching” and for everyone to know exactly what you mean.

But what, I ask, is social? It’s not a verb. It’s not even a noun. It’s an adjective, to describe someone or something.  If I told you I “Facebooked” someone, you probably wouldn’t know what I meant. And that’s an important distinction. “Social” is tied to who we are. It isn’t tied to any single destination. Social travels with us.

When Facebook came on the scene, it did do a good job of showing us how online could be used to keep better track of our extended social networks. But now there are other ways to do that. An informal poll by Macquarie Securities also found that Instagram is a quickly growing way to connect, especially among Facebook’s core market of 18- to 25-year-olds.

Facebook can’t own social in the same way Google can own search. We own social, because we are social. And we will use multiple tools to allow us to be social.

Facebook envisioned a social ecosystem that could then be monetized with targeted advertising. But as the Pew study points out, Facebook just couldn’t contain all our social activity. Many of us are thinking that we should probably spend less time on Facebook, as we find other ways to connect online. While Facebook has never been essential, it now also risks becoming irrelevant.

Weighing Positive and Negative Impacts on Users

First published January 31, 2013 in Mediapost’s Search Insider

We humans hate loss. In fact, we seem to weigh losing something about twice as heavily as gaining the same thing. For example, imagine I gave you a coffee cup and then offered to buy it back from you. That’s scenario 1. In scenario 2, I ask you to buy the same coffee cup from me. The price you assign to the coffee cup in the first scenario will be, on average, about twice as much as in the second. And yes, there’s research to back this up.

When it comes to winning and losing, it’s been proven that “losses loom larger than gains.” It’s just one of the weird glitches in our logical circuitry. We tend to be hardwired to look at glasses as half empty.

Recently, I was reviewing an academic study done in 2008, with this scintillating title: “Procedural Priming and Consumer Judgment: Effects on the Impact of Positively and Negatively Valenced Information” by Shen and Wyer. If you can get beyond the rather dry title, you find a treasure trove of tidbits to consider when crafting your online user experience.

For example, when we evaluate a product for potential purchase, we may run across both positive and negative information. The order in which we run into this information can have a dramatic impact on what we do downstream from that interaction. To use psychological terms, it “primes” our mental framework. And, because we tend to focus on negatives, less favorable information has a greater impact on our decision than positive information.

But it’s not just that we pay more attention to bad news than good news. It’s that bad news can hijack the entire consideration process. According to Shen and Wyer, if we run into negative information, it can change our information-seeking strategies, leading us down further negatively biased channels to confirm the initial information we saw. Bad news tends to lead to more bad news.

Also, we can get “bad news” hangovers. If we compare negatives in one decision process, that negative mental framework can carry over to an entirely different decision that has nothing to do with the first, giving us a heightened awareness of negative information in the new situation.

Here’s another interesting finding. If we’re rushed for time, this preoccupation with the negatives will dramatically affect the decision we make. But, if we have all the time in the world, the impact is relatively insignificant. Given time, we seem to cancel out our inherently negative biases.

All this news is not bad for marketers, however. It seems that simply getting users to state their preference for one feature over another, even though they’re not actively considering purchase at that time, leads to a much greater likelihood of purchase in the future. If you can get users to compare alternatives — and, more importantly, to commit to saying they prefer one alternative over another — they clear the mental hurdle of deciding “will I buy?” and instead start considering “what will I buy?”

Finally, there is also a recency effect, especially if prospects had ample time to consider all their alternatives. Shen and Wyer found that the last information considered seemed to have the greatest effect on the buyer.  So, if information was both positive and negative, it was good to get the least favorable information in front of the prospect early, and then move to the most favorable information. Again, this is true only if the user had plenty of time to weigh the options. If they were rushed, the opposite was true.

All in all, these are intriguing concepts to consider when crafting an ideal online user experience. They also underscore the importance of first impressions, especially negative ones.

McLuhan 50 Years Later

First published December 20, 2012 in Mediapost’s Search Insider

My daughter, who is in her senior year of high school, recently wrote an essay on Marshall McLuhan. She asked me to give my thoughts on McLuhan’s theories of media. To be honest, I hadn’t given McLuhan much thought since my college days, when I had packed away “Understanding Media: The Extensions of Man” for what I thought would likely be forever. I always found the title ironic. This book does many things, but promoting “understanding” is not one of them. It’s one of the more incomprehensible texts I’ve ever encountered.

My daughter’s essay caused me to dig up my half-formed understanding of what McLuhan was trying to say. I also tried to update that understanding from the early ‘60s, when it was written, to a half-century later, in the world we currently live in.

Consider this passage from McLuhan, written exactly 50 years ago: “The next medium, whatever it is—it may be the extension of consciousness—will include television as its content, not as its environment, and will transform television into an art form. A computer as a research and communication instrument could enhance retrieval, obsolesce mass library organization, retrieve the individual’s encyclopedic function and flip into a private line to speedily tailored data of a saleable kind.”

(See, I told you it was incomprehensible!)

The key thing to understand here is that McLuhan foretold something that I believe is unfolding before our eyes: The media we interact with are changing our patterns of cognition – not the message, but the medium itself. We are changing how we think. And that, in turn, is changing our society. While we focus on the messages we receive, we fail to notice that the ways we receive those messages are changing everything we know, forever. Twitter, Facebook, Google, the Xbox and YouTube – all are co-conspirators in a wholesale rewiring of our world.

Now, to borrow from McLuhan’s own terminology, no one in our Global Village could ignore the horrific unfolding of events in Connecticut last week. But the channels we received the content through also affected our intellectual and visceral connection with that content. Watching parents search desperately for their children on television was a very different experience from catching the latest CNN update delivered via my iPhone.

When we watched through “hot” media, we connected at an immediate and emotional level. When the message was delivered through “cool” media, we stood somewhat apart, framing the messaging and interpreting it, abstracted at some length from the sights and sounds of what was unfolding. Because of the emotional connection afforded by the “hot” media, the terror of Newtown was also our own.

McLuhan foretold this as well: “Unless aware of this dynamic, we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed co-existence. […] Terror is the normal state of any oral society, for in it everything affects everything all the time.”

My daughter is graduating next June. The world she will inherit will bear little resemblance to the one I stepped into, fresh from my own graduation in 1979. It is smaller, faster, more connected and, in many ways, more terrifying. But, has the world changed as much as it seems, or is it just the way we perceive that world? And, in that perception, are we the ones unleashing the change?