What a Social Media “Like” Should Really Mean

Originally posted in Mediapost’s Search Insider on October 3, 2013

Italy’s Agriturismo program has been a success by any measure you might want to use. Since the initial legislation was passed in 1985, thousands of small farms across Italy, teetering on the edge of extinction, have been thrown a financial lifeline by letting operators supplement their income by welcoming tourists to “stay on the farm.” The program includes one-time renovation grants and an ongoing marketing program. Today, there are almost 3,500 agriturismos throughout Italy. Many of these have sprung up just in the past decade. The program brings the market directly to the farm, allowing onsite sales of products to guests and showcasing the homegrown produce in the agriturismo’s restaurant.

The program’s success, however, has superheated the competition for tourists among the operators. In Tuscany, where I stayed at one such farm, there are 1,000 agriturismos, almost one third of the total number in Italy. You literally can’t throw a Tuscan stone without hitting some type of tourist-targeted operation. This competitive environment is made even more feverish when you consider that almost every restaurant in Italy is also an independent operation. There are no big chains. All these businesses are literally mom and pop (sorry, Momma and Poppa) operations. They run on a shoestring. There is little to no money for advertising. If ever there was a test bed for guerrilla marketing, this is it.

Here, online ratings are the currency of choice. A top spot in an online directory is the difference between life and death for these businesses. In this almost perfect but unflinchingly brutal adaptive environment, if you’re terrible, you die quickly. If you’re mediocre, you die slowly. If you’re good, you stumble along. And for a very fortunate few, if you’re excellent, you may do OK and even prosper, relatively speaking. I would put Fausto and Susanna in this last category. They run a small agriturismo just outside San Gimignano.

When it comes to the directories that matter, one towers above the rest. TripAdvisor wields the same power in this market that Google wields in our world of search. It is the ultimate arbiter of life and death. And the smartest of the operators have taken this to heart. They “get” social media at a level that is humbling to this particular North American online marketing “expert.” It’s not just asking for a “like” or a good review. They know that the best way to get a glowing review is to utterly, undeniably, completely deserve it. There’s no faint praise here; you have to blow your customer’s socks off.

It’s this intimate, person-to-person exchange that makes this market the most efficient one possible. No money or marketing effort is wasted on inefficient channels. There are no middlemen. It all takes place directly between the host and the guest. It’s completely genuine. How many marketing campaigns can you say that about? They give you the experience of a lifetime, and you say a heartfelt thank you. TripAdvisor (and Facebook, and Yelp, etc.) is just there to make sure the world hears about it.

If Fausto and Susanna understand the power of social media, Marina Pasquino is teaching a master class in it. In all my years of staying in hotels and consulting for businesses, I’m not sure I’ve ever seen a better-run business than Signora Pasquino’s small hotel on the Adriatic coast. My jaw dropped during check-in, and didn’t manage to snap back into place until we left seven awestruck days later.

The Hotel Belvedere, a tiny hotel in Riccione with fewer than 50 rooms, has blown TripAdvisor’s review algorithm to smithereens. It doesn’t just top the ratings for hotels in its area – it’s TripAdvisor’s number-one hotel in all of Italy, and one of the top 25 hotels in the world! Of the over 800 reviews it’s collected, 97% are effusive, over-the-top odes to the hotel, its staff and the complete Belvedere experience. The feedback is so overwhelmingly positive that posts sometimes get flagged for manual review to ensure they’re not fraudulent. They’re not, by the way. I mean, how many hotel staff actually hug you when you check in? Seriously.

Business is almost completely generated by word of mouth (both traditional and digital). Guests come back every single year. And they bring their friends. During our week, several groups (many from Canada, where I’m from) were at the hotel. And all this is fueled by warm contact through social media after you leave. With the Belvedere, when you talk about friending and liking, you don’t have to put quotes around the words. In this case, those labels match your intention.

I’ve talked before about how rugged adaptive environments drive the evolution of new breeds of marketers. I can’t think of any environment more rugged than the tourism industry in today’s Italy. And here, the Faustos, the Susannas and the Marinas are showing that if you work your ass off to be amazing, we’ll return the favor by letting people know. I’m not sure what you would call this particular species, but I hope it prospers. We could certainly use more of them in the world.

What is this “Online” You Speak Of?

First published September 12, 2013 in Mediapost’s Search Insider.

I was in an airport yesterday, and I was eavesdropping. That’s what I do in airports. It’s much more entertaining than watching the monitors. In this particular case, I was listening to a conversation between a well-dressed elderly gentleman, probably in his late 80s, and what appeared to be his son. They were waiting for pre-boarding. The son was making that awkward small talk — you know, the conversation you have when you don’t really know your parent well enough anymore to be able to talk about what they’re really interested in, but you still feel the need to fill the silence. In this case, the son was talking to his dad about a magazine: “I used to get a copy every time I flew to London,” he said. “But they don’t publish it anymore. It’s all done online.”

The father, who had the look and appearance of a retired university professor, looked at his son quizzically. It was as if the son had suddenly switched from English to Swahili midstream.

“What’s ‘online’?”

“Online — on the Internet. It’s published electronically. There’s no print version anymore.”

The father grappled with the impact of this statement, then shook his head slowly and sadly. “That’s very sad. I suppose the mail service’s days are numbered too.”

The son replied, “Oh yes, I’m sure. No one mails things anymore.”

“But what will I do? I still buy things from catalogs.” It was as if the entire weight of the last two-and-a-half decades had suddenly settled on the frail gentleman’s shoulders.

At first, I couldn’t believe that anyone still alive didn’t know what “online” was. Isn’t that pretty much equivalent to oxygen or gravity now? Hasn’t it reached the point of ubiquity at which we all just take it for granted, no longer needing to think about it?

But then, because in the big countdown of life, I’m also on the downhill slope, closer to the end than to the beginning, I started thinking about how wrenching technological change has become. If you don’t keep up, the world you know is swept away, to be replaced with a world where your mail carrier’s days are numbered, the catalogs you depend on are within a few years of disappearing, and everything seems to be headed for the mysterious destination known as “online.”

As luck would have it, my seat on the airplane was close enough to this gentleman’s that I was able to continue my eavesdropping (if you see me at an airport, I advise you to move well out of earshot). You might have thought, as I first did, that he was in danger of losing his marbles. I assure you, nothing could be further from the truth. For over four hours, he carried on intelligent, informed conversations on multiple topics, made some amazing sketches in pencil, and generally showed every sign of being the man I hope to be when I’m approaching 90. This was not a man who had lost touch with reality; this was a man continually surprised (and, I would assume, somewhat frustrated) to find that reality is a moving target.

We, the innovatively smug, may currently feel secure in our own technophilia, but our ability to keep up with the times may slip a little in the coming years. It’s human to feel secure in the world we grew up and functioned in. Our evolutionary environment was substantially more stable than the one we know today. As we step back from the hectic pace, we shouldn’t be surprised if we lose a little ground. Someday, when our children speak to us of the realities of their world, some of the terms they use may sound a little foreign to our ears.

Google Glass and the Sixth Dimension of Diffusion

First published August 29, 2013 in Mediapost’s Search Insider

Tech stock analyst and blogger Henry Blodget has declared Google Glass dead on arrival. I’m not going to spend any time on whether I agree with Mr. Blodget (for the record, I both do and don’t: Google Glass isn’t an adoptable product as it sits, but wearable technology is the next great paradigm shifter), but rather dig into the reason he feels Google Glass is stillborn.

They make you look stupid.

The input for Google Glass is your voice, which means you have to walk around saying things like, “Glass, take a video” or “Glass, what is the temperature?” The fact is, to use Google Glass, you either have to accept the fact that you’ll look like a moron or the biggest jerk in the world. Either way, the vast majority of us aren’t ready to step into that particular spotlight.

Last week, I talked about Everett Rogers’ Diffusion of Innovations and shared five variables that determine the rate of adoption. There is actually an additional factor that Rogers also mentioned: “the status-conferring aspects of innovations emerged as the sixth dimension predicting rate of adoption.”

If you look at Rogers’ diffusion curve, you’ll find the segmentation of the adoption population is as follows: Innovators (2.5% of the population), Early Adopters (13.5%), Early Majority (34%), Late Majority (34%) and Laggards (16%). But there’s another breed that probably hides out somewhere between Innovators and Early Adopters. I call them the PAs (for Pompous Asses). They love gadgets, they love spending way too much for gadgets, and they love being seen in public sporting gadgets that scream “PA.” Previously, they were the ones seen guffawing loudly into Bluetooth headsets while sitting next to you on an airplane, carrying on their conversation long after the flight attendant told them to wrap it up. Today, they’d be the ones wearing Google Glass.
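
Those percentages aren’t arbitrary, by the way. Rogers carved his adopter categories out of a normal distribution of adoption times, cut at one and two standard deviations from the mean. Here’s a minimal sketch (Python, standard library only) that recovers the familiar numbers:

```python
# Rogers' adopter segments as slices of a standard normal distribution,
# cut at -2, -1, 0 and +1 standard deviations from the mean adoption time.

from math import erf, sqrt

def norm_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

segments = [
    ("Innovators",     float("-inf"), -2),
    ("Early Adopters", -2, -1),
    ("Early Majority", -1,  0),
    ("Late Majority",   0,  1),
    ("Laggards",        1,  float("inf")),
]

for name, lo, hi in segments:
    share = norm_cdf(hi) - norm_cdf(lo)
    print(f"{name:15s} {share:5.1%}")

# Prints roughly 2.3%, 13.6%, 34.1%, 34.1% and 15.9% -- which Rogers
# rounds to the familiar 2.5 / 13.5 / 34 / 34 / 16 split.
```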

This sixth dimension is critical to consider when the balance between the other five is still a little out of whack. Essentially, the first dimension, Relative Advantage, has to overcome the friction of #2, Compatibility, and #3, Complexity (#4, Trialability, and #5, Observability, are more factors of the actual mechanics of diffusion, rather than individual decision criteria). If the advantage of an innovation does not outweigh the friction of its complexity or incompatibility, it will probably die somewhere on the far left slopes of Rogers’ bell curve. The deciding factor will be the Sixth Dimension.

This is the territory that Google Glass currently finds itself in. While I have no doubt that the advantages of wearable technology (as determined by the user) will eventually far outweigh the corresponding “friction” of adoption, we’re not there yet. And so Google Glass depends on the Sixth Dimension. Does adoption make you look innovative, securely balanced on the leading edge? Or does it make you look like a dork? Does it confer social status or strip it away? After the initial buzz about Glass, social opinion seems to be falling into the second camp.

This brings us to another important factor to consider when trying to cash in on a social adoption wave: timing. Google is falling into the classic Microsoft trap of playing its hand too soon through beta release. New is cool among the early adopter set, which makes timing critical. If you can get strategic distribution and build up required critical mass fast enough, you can lessen the “pariah” factor. It’s one thing to be among a select clique of technological PAs, but you don’t want to be the only idiot in the room. Right now, with only 8,000 pairs distributed, if you’re wearing a pair, you’re probably the one that everyone else is whispering about.

Of course, you might not be able to hear them over the sound of your own voice, as you stand in front of the mirror and ask Google Glass to “take a picture.”

Comparing and Contrasting the Classes of ’79 and ’13

First published July 2, 2013 in Mediapost’s Search Insider

My youngest daughter just graduated from high school. I graduated from my high school a third of a century ago. The things you read about every day here at MediaPost have made the world a much different place for her than it was for me.

Or have they?

I was actually struck these past few months with how her grad experience didn’t seem all that much different from mine. The biggest difference, it seemed, was in how she connected with her friends. But the “why” – the topics of those connections – seemed very familiar.

She graduated from a small school, with a grad class of just over 50. I graduated from a small-town high school in Alberta in a class of 70. Like me, she went to school with most of her class from kindergarten right through to grade 12 – so the social dynamics in both cases were fairly tightly woven.

Both classes, the class of ’13 and the class of ’79, were under the temporary euphoria of youthful confidence. All things seem possible when you’re 18. The world is not a grinding gristmill of monthly mortgage payments, day-to-day job-related drudgery, vague yet persistent aches and pains and innumerable other nagging details that suck the life out of you. It’s a lion waiting to be tamed, a journey begging to be taken or an adventure still to be had. Is there any more optimistic time in your life than graduation? I wish that it could last forever, but I know better.

Both classes had their inevitable run-ins with authority that seemed unreasonable and inflexible. In both cases, said “run-ins” arose from social “traditions” that ran afoul of scheduled class time. Both times, the phrases “can’t condone” and “set a precedent” were used a lot by the school administration. Of course, such nuances don’t mean much to you when you’re 18. “Party” is a word with much more meaning.

Speaking of parties, both classes had their share. The biggest difference between ’79 and ’13 was in how word of these parties propagated through the grad social network. In 1979, “viral” meant hanging out at the main intersection of town (I told you I grew up in a small town) waiting for familiar trucks (I told you I grew up in Alberta) to go by, so you could ask where the party was. Today’s approach seems much more efficient.

Style also played a major role in both events. In many cases, it’s our first experience with formal wear, which means a lot of time is devoted to dress and/or suit shopping. My daughter has been wearing high heels in the house for the past week, hoping to master the trick of locomotion without severe injury. Of course, in my case it was a very stylish dark brown velvet tuxedo with matching bowtie. Hey, it was ’79, and my fashion influences were “The Love Boat” and Jack Tripper from “Three’s Company.” Cut me some slack! There were people who went in blue jeans (remember – rural Alberta).

Another major theme was, and is, “Who’s going with who (sic)” to graduation. For those of us who were less precocious in our experience with the opposite sex, a lot of pressure came with graduation. We had to get a date, or be labeled as “the guy who went stag.” This meant you had a lot of socially inept teenagers going through the trauma of a first date at the same time, in the same place. All the technology in the world can’t improve person-to-person communication in this scenario.

It seems to me that though the way the class of ’13 negotiated their grad experience may have changed since 1979, the actual things that make up that experience seem remarkably familiar. It’s still about transition: whether it be in relationships, opportunities, routines or responsibility. It’s that awesome experience of sitting on the cusp, when all things seem possible. It’s believing that you own the world – and that the world is an essentially good place. Whether you express that on Facebook, Instagram or while leaning on the side of a Chevy pick-up at the “Four Corners” in Sundre, Alberta — the “how” may have changed, but the “why” has remained the same.

The Stress of Hyper-Success

Last week, I talked about the inflation of expectations. In that case, it was the vendors we deal with that were the victims of that inflation. But we don’t only have inflated expectations about others. Increasingly, we measure ourselves against our own expectations. And that is leading us down a dangerous path.

The problem is that success is a relative thing. We can only judge it by looking at others. This creates a problem, because increasingly, we’re looking at extreme outliers as our baseline for expectations.

Take social media, for instance. According to a recent survey, women feel more stressed than satisfied after spending time on Pinterest. “Pinterest stress” is the label given to feelings of inadequacy that come from trying to measure up to the unrealistic examples of domestic perfection shared on the female-dominated social network.

But it’s not just women and Pinterest. One-third of Facebook users feel worse after visiting the site. Why? Because we feel envious after going through the pictures of someone else’s dream vacation. Social media invites comparison. We try to measure ourselves against the achievements of others in our social circle. There are two problems with that: we are naturally jealous of our neighbors, and our neighbors tend to lie (or at least embellish) when they post about their own accomplishments.

Added to this is the unnatural effect of the Power Law curve. Not all online posts about accomplishments are equally popular. We tend to focus on those that are outstanding — those that are set apart from the average. These online examples, representing the extreme upper limits of success and achievement, take their place at the head of the Power Law curve, drawing a dramatically bigger audience. We ignore the commonplace, which lives somewhere in the Long Tail. Our own quest for the remarkable (humans never gossip about average, everyday topics) leads us to focus on the unrealistic.
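
For the technically inclined, here is a minimal sketch of that dynamic. The exponent and post counts are hypothetical, but the shape of the result is the point: under a Zipf-style power law, a tiny head of outlier posts soaks up almost all the attention.

```python
# Illustrative only: assume post popularity follows a Zipf-style power
# law, a common model for attention on social feeds. The exponent and
# the number of posts are made up for the sketch.

alpha = 1.5          # hypothetical power-law exponent
num_posts = 10_000   # hypothetical number of posts

# The k-th most popular post draws attention proportional to 1 / k^alpha.
attention = [1 / rank**alpha for rank in range(1, num_posts + 1)]
total = sum(attention)

head_share = sum(attention[:100]) / total   # the top 1% of posts
print(f"Top 100 posts capture {head_share:.0%} of all attention")
print(f"The remaining 9,900 posts split the other {1 - head_share:.0%}")
```

Run it, and the top 1% of posts end up with roughly nine-tenths of the audience. That head of the curve is what we compare ourselves to; the Long Tail, where most of us actually live, goes unseen.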

So the more access we have to the achievements of others, the more skewed our idea of success becomes. What we don’t realize, however, is that we’re measuring ourselves against the very highest percentile of the human population.

Take salaries, for example. What yearly amount would make you happy? Economists Angus Deaton and Daniel Kahneman asked that very question — and it turns out that $75,000 a year is the magic number. Below that number, the day-to-day stress of just getting by leads to chronic unhappiness. Above it, people seem to feel more fulfilled and are generally in a more positive frame of mind. But past that general threshold, more money doesn’t seem to equate to increased happiness. Millionaires and billionaires are not that much happier than the rest of us.

Yet if I asked you how much you wanted to make, I suspect the number would be higher than $75,000. And I doubt that it would have much to do with happiness. It would be because we know of people making more than us — much more. We have no idea if those high wage earners are happy or not, but we do know they pull down a much bigger paycheck than we do. So we believe we should aspire to that standard, whether it’s realistic or not, in the mistaken belief that it will make us happier. It won’t, by the way. We humans are notoriously bad at forecasting our own happiness.

This is one of those strange Darwinian detours that evolution has saddled us with. In our original adaptive environment, doing better than our neighbors was a pretty sure bet for superior gene propagation. We’re hardwired not just to be envious but to strive to compete. That made sense when our target was the person we were competing against for food, shelter or sexual access. It doesn’t make sense when our competition is a far-removed, sometimes fictitious ideal propagated by the media and the viral force of social sharing.

Somewhere, a resetting of expectations is required before we self-destruct from hyper-competitiveness, chasing an unreachable goal. To end on a gratuitous pop culture quote, courtesy of Sheryl Crow: “It’s not having what you want, it’s wanting what you’ve got.”

The Straw that Broke the Market’s Back

First published May 9, 2013 in Mediapost’s Search Insider

Customers are fickle — and I suspect they’re getting more fickle. Perhaps they’re even feeling a little entitled. A recent survey shows that customers tend to bail on a company not because of a big-time screw-up, but because of the accumulation of a lot of little annoyances. Soon, their frustration reaches a tipping point and they look elsewhere.

It would be easy to point the finger at the companies and demand that they get their collective acts together. But I suspect there’s more at play here. My guess is that customers are getting harder to please. And I would further guess that the Web is largely to blame. I think it comes down to a constant rise in our collective expectations, while the reality of our experiences falls behind.

The balance between our expectations and the actual experience determines our loyalty to any course of action. If we have low expectations and a poor experience, we aren’t really surprised, which dampens our subsequent disappointment and leaves us more willing to forgive and forget. If we have low expectations but a good experience, we’re pleasantly surprised, making us more apt to return. If we have high expectations and a good experience, we get a double hit of happiness. First, we enjoy the anticipation, then we appreciate that the experience actually lives up to our expectations. For a vendor, the scariest scenario is the last of the four: high expectations but a poor experience. In this case, we walk away disappointed and frustrated.
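
Those four quadrants reduce to a simple lookup table. A sketch (the quadrant labels are mine, the outcomes paraphrase the paragraph above):

```python
# The expectation/experience quadrants described above, restated as a
# lookup table. Wording of the outcomes paraphrases the column.

outcomes = {
    ("low",  "poor"): "no surprise: disappointment dampened, willing to forgive",
    ("low",  "good"): "pleasantly surprised: more apt to return",
    ("high", "good"): "double hit of happiness: anticipation, then confirmation",
    ("high", "poor"): "the vendor's nightmare: disappointed and frustrated",
}

expectations, experience = "high", "poor"
print(outcomes[(expectations, experience)])
```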

Now, balancing expectations and experience wouldn’t be that difficult for any moderately competent company if those expectations were realistic. But I suspect that more and more of us are entering into our respective experiences with unrealistic expectations. We’re setting our vendors up to fail.

Expectations are set partly based on our past experiences, but they’re also set by the experiences of others. We create our expectation set points based, in part, on what we hear from others.

The Web has created an open, accessible market of experiences and hearsay. We hear about the bad, through a feedback loop that increasingly calls out poor customer service. But we also hear about the good. Correction – we hear about the exceptional. The “good” is not remarkable. It generally falls within our expectations and so goes without comment. But the very good and the very bad are exceptional, and we are more apt to comment on them online. Not only do we comment, we also embellish, accentuating the pluses and minuses to make a better story. Therefore, what we hear from others sets either a very low or a very high bar. We steer clear of the low bars, but the high bars stick with us, contributing to the setting of future expectations.

The other thing the Web has done is create expectations that overlap domains. Previously, when our expectations were set based on our own experiences, they tended to stay domain-specific. We had an expectation of what it would be like to buy a car, stay at a hotel, eat at a restaurant or purchase a new pair of shoes. With the Web, cross-pollination between domains is increasingly common. The head marketer for a well-known industrial manufacturer once said to me, “When it comes to online experience, my competitors are not the traditional ones. I’m competing against Amazon and eBay. That type of experience is what people expect.”

This “nudging up” of expectations is done without much rational consideration. We don’t care much for the reality of operational logistics in any particular domain. We just want our expectations to be met, no matter where those expectations might come from. And when they’re not, we pull the plug on that particular vendor, assuming another vendor can do better in meeting our inflated expectations. The Web has also engendered a virulent “grass is always greener” view of the world. We know a competitor is just a click away (whether or not that vendor is any better than the incumbent).

I’ll be the first to call out a bad customer experience, but when it comes to the increasing fickleness of customers, we should remember that there are two sides to this particular story.

Viewing the World through Google-Colored Glass

First published March 7, 2013 in Mediapost’s Search Insider

Let’s play “What If” for a moment. For the last few columns, I’ve been pondering how we might more efficiently connect with digital information. Essentially, I see the stripping away of the awkward and inefficient interfaces that have been interposed between that information and us. Let’s imagine, 15 years from now, that Google Glass and other wearable technology provides a much more efficient connection, streaming real-time information to us that augments our physical world. In the blink of an eye, we can retrieve any required piece of information, expanding the capabilities of our own limited memories beyond belief. We have perfect recall, perfect information — we become omniscient.

To facilitate this, we need to move our cognitive abilities to increasingly subterranean levels of processing – taking advantage of the “fast and dirty” capabilities of our subliminal mind. As we do this, we actually rewire our brains to depend on these technological extensions. Strategies that play out with conscious guidance become stored procedures that follow scripts written by constant repetition. Eventually, overtraining ingrains these procedures as habits, and we stop thinking and just do. Once this happens, we surrender much of our ability to consciously change our behaviors.

Along the way, we build a “meta” profile of ourselves, which acts as both a filter and a key to the accumulated potential of the “cloud.” It retrieves relevant information based on our current context and a deep understanding of our needs, it unlocks required functionality, and it archives our extended network of connections. It’s the “Big Data” representation of us, condensed into a virtual representation that can be parsed and manipulated by the technology we use to connect with the virtual world.

In my last column, Rob Schmultz and Randy Kirk wondered what a world full of technologically enhanced Homo sapiens would look like. Would we all become the annoying guy in the airport who can’t stop talking on his Bluetooth headset? Would we become so enmeshed in our digital connections that we ignore the physical ones that lie in front of our own noses? Would Google Glass truly augment our understanding of the world, or would it make us blind to its charms? And what about the privacy implications of a world where our every move could instantly be captured and shared online — a world full of digital voyeurs?

I have no doubt that technology can take us to this not-too-distant future as I’ve envisioned it. Much of what’s required already exists. Implantable hardware, heads-up displays, sub-vocalization, biofeedback — it’s all very doable. What I wonder about is not the technology, but us. We move at a much slower pace. And we may not recognize any damage that’s done until it’s too late.

The Darwinian Brain

At an individual level, our brains have a remarkable ability to absorb technology. This is especially true if we’re exposed to that technology from birth. The brain represents a microcosm of evolutionary adaptation, through a process called synaptic pruning. Essentially, the brain builds and strengthens neural pathways that are used often, and “prunes” away those that aren’t. In this way, the brain literally wires itself to be in sync with our environment.

The majority of this neural wiring happens when we’re still children. So, if our childhood environment happens to include technologies such as heads-up displays, implantable chips and other direct interfaces to digital information, our brains will quickly adapt to maximize the use of those technologies. Adults will also adapt to these new technologies, but because our brains are less “plastic” than those of children, the adaptation won’t be as quick or complete.

The Absorption of Technology by Society

I don’t worry about our brain’s ability to adapt. I worry about the eventual impact on our society. With changes this portentous, there is generally a social cost. To consider what might come, it may be beneficial to look at what has been. Take television, for example.

If a technology is ubiquitous and effective enough to spread globally, as TV did, there is the issue of absorption. Not all sectors of society will have access to the technology at the same time. As the technology is absorbed at different rates, it can create imbalances and disruption. Think about the societal divide caused by the absorption of TV, which resulted in a completely different information-distribution paradigm. One can’t help thinking that TV played a significant role in much of the political change we saw sweep over the world in the past three decades.

And even if our brains quickly adapt to technology, that doesn’t mean our social mores and values will move as quickly. As our brains rewire to adapt to new technologies, our cultural frameworks also need to shift. With different generations and segments of society at different places on the absorption curve, this can create further tensions. If you take the timeline of societal changes documented by Robert Putnam in “Bowling Alone” and overlay the timing of the adoption of TV, the correlation is striking and not a little frightening.

Even if our brains have the ability to adapt to a technology, the adaptation isn’t always positive. For example, there is compelling evidence that early exposure to TV has contributed to the recent explosion of diagnosed ADHD and possibly even autism.

Knowing Isn’t Always the Same as Understanding

Finally, we have the greatest fear of Nicholas Carr: maybe this immediate connection to information will have the “net” effect of making us stupid — or, at least, more shallow thinkers. If we’re spoon-fed information on demand, do we grow intellectually lazy? Do we start to lose the ability to reason and think critically? Will we swap quality for quantity?

Personally, I’m not sure Carr’s fears are founded on this front. It may be that our brains adapt and become even more profound and capable. Perhaps when we offload the simple journeyman tasks of retrieving information and compiling it for consideration to technology, our brains will be freed up to handle deeper and more abstract tasks. The simple fact is, we won’t know until it happens. It could be another “Great Leap Forward,” or it may mark the beginning of the decline of our species.

The point is, we’ve already started down the path, and it’s highly unlikely we’ll retreat at this point. I suppose we have no option but to wait and see.

Why I – and Mark Zuckerberg – Are Bullish on Google Glass

First published February 28, 2013 in Mediapost’s Search Insider

Call it a Tipping Point. Call it an Inflection Point. Call it Epochal (whatever that means). The gist is, things are going to change — and they’re going to change in a big, big way!

First, with due deference to the brilliant Kevin Kelly, let’s look at how technology moves. In his book “What Technology Wants,” Kelly shows that technology is not dependent on a single invention or inventor. Rather, it’s the sum of multiple, incremental discoveries that move technology to a point where it can breach any resistance in its way and move into a new era of possibility. So, even if Edison had never lived, we’d still have electric lights in our homes. If he weren’t there, somebody else would have discovered it (or, more correctly, perfected it). The momentum of technology would not have been denied.

Several recent developments indicate that we’re on the cusp of another technological wave of advancement. These developments have little to do with online technologies or capabilities. They’re centered on how humans and hardware connect — and it’s impossible to overstate their importance.

The Bottleneck of Our Brains

Over the past two decades, there has been a massive build-up of online capabilities. In this case, what technology has wanted is the digitization of all information. That was Step One. Step Two is to render all that information functional. Step Three will be to make all that functionality personalized. And we’re progressing quite nicely down that path, thank you very much. The rapidly expanding capabilities of online far surpass what we are able to assimilate and use at any one time. All this functionality is still fragmented and in the process of being developed (one of the reasons I think Facebook is in danger of becoming irrelevant), but it’s there. It’s just a pain in the butt to utilize.

The problem is one of cognition. The brain has two ways to process information, one fast and one slow. The slow way (using the conscious parts of the brain) is tremendously flexible but inefficient. This is the system we’ve largely used to connect online. Everything has to be processed in the form of text, in terms of both output and input, generally through a keyboard and a screen display. It’s the easiest way for us to connect with information, but it’s far from the most efficient.

The second way is much, much faster. It’s the subconscious processing of our environment that we do every day. It’s what causes us to duck when a ball is thrown at our head, jump out of the way of an oncoming bus, fiercely protect our children and judge the trustworthiness of a complete stranger. If our brains were icebergs, this would be the 90% hidden beneath the water. But we’ve been unable to access most of this inherent efficiency and apply it to our online interactions — until now.

The Importance of Siri and Glass

Say what you want about Mark Zuckerberg, he’s damned smart. That’s why he knew immediately that Google Glass is important.

I don’t know if Google Glass will be a home run for Google. I also don’t know if Siri will ever pay back Apple’s investment in it. But I do know that 30 years from now, they’ll both be considered important milestones, because they represent a sea change in how we connect with information. Both have the potential to unlock the efficiency of the subconscious brain. Siri does it by utilizing our inherent communication abilities and breaking the inefficient link that requires us not only to process our thoughts as language, but also to laboriously translate them into keystrokes. In neural terms, this is one of the most inefficient paths imaginable.

But if Siri teases us with a potentially more efficient path, Google Glass introduces a new, mind-blowing scenario of what might be possible. Parsing environmental cues and streaming information directly into our visual cortex in real time, creating a direct link with all that pent-up functionality that lives “in the cloud,” wipes away most of the inefficiency of our current connection paradigm.

Don’t think of the current implementation that Google is publicizing. Think beyond that to a much more elegant link between the vast capabilities of a digitized world and our own inner consciousness. Whatever Glass and Siri (and their competitors) eventually evolve into in the next decade or so, they will be far beyond what we’re considering today.

With the humanization of these interfaces, a potentially dark side effect will take place. These interfaces will become hardwired into our behavior strategies. Now, because our online interactions are largely processed at a conscious level, the brain tends to maintain maximum flexibility regarding the routines it uses. But as we access subconscious levels of processing with new interface opportunities, the brain will embed these at a similarly subconscious level. They will become habitual, playing out without conscious intervention. It’s the only way the brain can maximize its efficiency. When this happens, we will become dependent on these technological interfaces. It’s the price we’ll pay for the increased efficiency.

Building a Better Meta-Me

First published February 14, 2013 in Mediapost’s Search Insider

Last week I forecast that Facebook would become irrelevant. Some of you disagreed. Ron Stitt called Facebook the “public square” or “crossroads” of social connection.

Andre Szykier pointed out a very real challenge with the successful socialization of online: “The problem is connecting the content from my social walled gardens into a virtual cloud point. Google+ is going about it a different way. They keep expanding their walled garden with search, mail, video, chat services along with social and app services that they provide, hoping you eventually will find their garden big and rich enough so everybody will migrate. While it helps them be the CyBorg of data, it makes people more uneasier (sic) to have all of that in one garden than spread across many. Time will tell which model will thrive.”

Thank you, SI readers. As you so often do, you challenged me to give this idea a little more thought. I still instinctively believe that Facebook is being marginalized on the social periphery, but both Ron and Andre have nailed a fundamental concept that merits further discussion: what does the connection point between us and the online world (I extend this beyond social alone) evolve into?

The problem, I believe, comes with control. Who controls the connection? Understandably, Facebook, Google, and a host of others want to control this critical territory. It’s an online land grab; they offer us destinations, and we go to them. In return, because the connection happens on their turf, they get to monetize that turf. It’s like an online Monopoly game, with everyone scrambling to own Park Place so they can put more hotels on it.

The problem is that to effectively monetize, all these destinations ask us to invest in letting them know who we are. This creates the problem of profiles – so many profiles to maintain, so little time. If I move to another square, I have to start all over again.

All this profile information is used to create a “meta” representation of us. It’s the online data handshake that enables successful connection. The issue is that Facebook, Google and all the others want us to build the profile, but for them to own it. This means we have to build multiple “meta” profiles of ourselves. It’s terribly inefficient and requires us to do most of the heavy lifting. Also, as Andre points out, it raises an important question – why should Google (or anyone else) own the meta version of me? I think that’s something I should own.

This dynamic introduces another problem: In order to reduce the heavy lifting, these destinations use our own activity to help build the profile. The more we do, the more they can learn about us. This is fine, as long as the best way to do any of these things is the option offered by the destination that’s trying to build the profile. But even with the vast resources available to a Google or Facebook, it’s almost impossible for them to stay ahead of the constant evolution of online innovation. Sooner or later, there will be a better way to do something somewhere else. At this point, we’re faced with a dilemma: Do we stick with the original destination, where we’ve invested in building a rich meta version of ourselves, or do we trade that investment for the better functionality offered by the new alternative, knowing that we’ll have to start building yet another meta-me?

Google and Facebook, as Ron and Andre point out, have both gone down the road of building a support platform for other innovators, hoping to at least share a significant slice of the territory with new alternatives. This allows us to use that version of our profile in more ways. But it’s still a territorial analogy, and ultimately that creates a persistent vulnerability in an environment as dynamic as online. It’s very difficult to successfully hold territory in our ever-expanding online world.

To me, there’s only one eventual answer. We have to own our own meta-selves. Our online profile must be rich and completely portable. When we choose a new destination, our meta-me immediately unlocks the full potential of the destination, tailored specifically for us. There are challenges to be overcome — primarily around issues of privacy — but this is the only sustainable path.
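
To make the idea concrete, here is a minimal sketch of what a user-owned, portable profile might look like. Everything in it (the MetaProfile class, its fields, the disclose method) is a hypothetical illustration of the concept, not an existing API.

```python
# A hypothetical sketch of the "portable meta-me": a profile document the
# user owns and carries between destinations, disclosing only the slices
# each destination needs. All names and fields are invented illustrations.

from dataclasses import dataclass, field

@dataclass
class MetaProfile:
    owner: str
    interests: list = field(default_factory=list)
    connections: list = field(default_factory=list)

    def disclose(self, requested_fields):
        # The user, not the destination, decides what gets shared. This is
        # where the privacy challenges mentioned above would be negotiated.
        return {name: getattr(self, name) for name in requested_fields}

# A new destination personalizes itself from the disclosed slice instead
# of asking the user to rebuild yet another profile from scratch.
me = MetaProfile(owner="me", interests=["travel", "photography"])
print(me.disclose(["interests"]))
```

The design point is the direction of the handshake: the destination asks, the user grants. That inversion is what makes the profile portable rather than territorial.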

Up to now, the Internet has been all about who owns what territory. This is not surprising — it’s a natural extension of our existing worldview, one formed in a physical environment. Our minds need time to grapple with and assimilate abstract concepts. Up to now, we’ve “gone” to places online. But the evolved functionality of the Internet has expanded beyond this parochial mental scaffolding. It’s time to reimagine the possibilities, using our own concepts of consciousness as a new framework. We will live at the center, we will define who we are and what we want — and the Internet will be a vast extension of our mental potential that we can call on at will, without having to “go” anywhere. We’ve seen hints of this in search already, conceptually fleshing out Wegner’s transactive memory.

Daunting? Yes. Kurzweilian (with all the negative and positive connotations that implies)? Probably.  Inevitable? I believe so.

Breaking Out of Facebook’s Walled Garden

First published February 7, 2013 in Mediapost’s Search Insider

According to Pew, 27% of us are looking to wean ourselves off the Facebook habit.

This is not particularly surprising. While Facebook can be incredibly distracting, it’s not really relevant to our lives. It has never been woven into the fabric of our day-to-day activities. It’s more like an awkward, albeit entertaining, interlude jammed into the long list of stuff we have to do today. That list represents our life. Facebook represents the stuff that lies on the periphery.

Here’s one way to think about it. What if Facebook went down today? Would it really matter? Sure, it might be a disappointment, but would it make us substantially change our plans?

Now consider if Google went down for the day. How many times in a day would you go to use it, then curse because it wasn’t there?

The problem is that our online social interactions are outgrowing the walled garden that is Facebook. It has failed to become essential in the way that Google has. I can go entire months without logging into my Facebook account. I have trouble going an hour without using Google. And when I need Google, I need it now.

Again, I turn to how we use language as a clue as to how we feel about things. To “search” is a verb. It’s an action that connects intents with outcomes. It’s something we have to do. And, if you’re loyal to Google as your search engine, it’s pretty easy to swap “googling” for “searching” and for everyone to know exactly what you mean.

But what, I ask, is social? It’s not a verb. It’s not even a noun. It’s an adjective, to describe someone or something. If I told you I “Facebooked” someone, you probably wouldn’t know what I meant. And that’s an important distinction. “Social” is tied to who we are. It isn’t tied to any single destination. Social travels with us.

When Facebook came on the scene, it did do a good job of showing us how online could be used to keep better track of our extended social networks. But now there are other ways to do that. An informal poll by Macquarie Securities also found that Instagram is a quickly growing way to connect, especially among Facebook’s core market of 18- to 25-year-olds.

Facebook can’t own social in the same way Google can own search. We own social, because we are social. And we will use multiple tools to allow us to be social.

Facebook envisioned a social ecosystem that could then be monetized with targeted advertising. But as the Pew study points out, Facebook just couldn’t contain all our social activity. Many of us are thinking that we should probably spend less time on Facebook, as we find other ways to connect online. While Facebook has never been essential, it now also risks becoming irrelevant.