Losing My Google Glass Virginity

Originally published October 17, 2013 in Mediapost’s Search Insider

Rob, I took your advice.

A few columns back, when I said Google’s Glass might not be ready for mass adoption, fellow Search Insider Rob Garner gave me this advice: “Don’t knock it until you try it.” So, when a fellow presenter at a conference last week brought along his Glass and offered me a chance to try them (or “it”? Does anyone else find Google’s messing around with plural forms confusing and irritating?), I took him up on it. To say I jumped at it may be overstating the case – let’s just say I enthusiastically ambled to it.

I get Google Glass. I truly do. To be honest, the actual experience of using them came up a little short of my expectations, but not much. It’s impressive technology.

But here’s the problem. I’m a classic early adopter. I always look at what things will be, overlooking the limitations of what currently “is.” I can see the dots of potential extending toward a horizon of unlimited possibility, and don’t sweat the fact that those dots still have to be connected.

On that level, Google Glass is tremendously exciting, for two reasons that I’ll get to in a second. For many technologies, I’ll even connect a few dots myself, willing to trade off pain for gain. That’s what early adopters do. But not everyone is an early adopter. Even given my proclivity for nerdiness, I felt a bit like a jerk standing in a hotel lobby, wearing Glass, staring into space, my hand cupped over the built-in mike, repeating instructions until Glass understood me. I learned there’s a new label for this; for a few minutes I became a “Glasshole.”

Sorry Rob, I still can’t see the mainstream going down this road in the near future.

But there are two massive reasons why I’m still tremendously bullish on wearable technology as a concept. One, it leverages the importance of use case in a way no previous technology has ever done. And two, it has the potential to overcome what I’ll call “rational lag time.”

The importance of use case in technology can be summed up in one word: iPad. There is absolutely no technological reason why tablets, and iPads in particular, should be as popular as they are. There is nothing in an iPad that did not exist in another form before. It’s a big iPhone, without the phone. The magic of an iPad lies in the fact that it’s a brilliant compromise: the functionality of a smartphone in a form factor that makes it just a little bit more user-friendly. And because of that, it introduced a new use case and became the “lounge” device. Unlike a smartphone, where size limits the user experience in some critical ways (primarily in input and output), tablets offer acceptable functionality in a more enjoyable form. And that is why almost 120 million tablets were sold last year, a number projected (by Gartner) to triple by 2016.

The use case of wearable technology still needs to be refined by the market, but the potential to create an addictive user experience is exceptional. Even with Glass’ current quirks, it’s a very cool interface. Use case alone leads me to think the recent $19 billion by 2018 estimate of the size of the wearable technology market is, if anything, a bit on the conservative side.

But it’s the “rational lag time” factor that truly makes wearable technology a game changer. Currently, all our connected technologies can’t keep up with our brains. When we decide to do something, our brains register subconscious activity in about 100 milliseconds, or about one tenth of a second. However, it takes another 500 milliseconds (half a second) before our conscious brain catches up and we become aware of our decision to act. In more complex actions, a further lag happens when we rationalize our decision and think through our possible alternatives. Finally, there’s the action lag, where we have to physically do something to act on our intention. At each stage, our brain can shut down an impulse if it feels like the impulse requires too much effort. Humans are, neurologically speaking, rather lazy (or energy-efficient, depending on how you look at it).

So we have a sequence of potential lags before we act on our intent: Unconscious Stimulation > Conscious Awareness > Rational Deliberation > Possible Action. Our current interactions with technology live at the end of this chain. Even if we have a smartphone in our pocket, it takes several seconds before we’re actively engaging with it. While that might not seem like much, when the brain measures action in split seconds, that’s an eternity of time.
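To make the arithmetic of that lag concrete, here is a toy sketch in Python. The 100-millisecond and 500-millisecond figures come from the column above; the deliberation and action-lag numbers are purely illustrative assumptions, chosen only to contrast pulling out a smartphone with an always-on wearable.

```python
# Toy back-of-the-envelope model of the "rational lag" chain described above.
# The first two figures come from the column; the rest are placeholder
# assumptions used only for comparison.

LAG_CHAIN_MS = {
    "unconscious stimulation": 100,   # subconscious activity registers (~0.1 s)
    "conscious awareness": 500,       # conscious brain catches up (~0.5 s)
    "rational deliberation": 1500,    # assumed: weighing alternatives
}

# Assumed action lags for two interface scenarios, in milliseconds.
ACTION_LAG_MS = {
    "pull out a smartphone": 5000,    # "several seconds," as noted above
    "always-on wearable": 800,        # sub-second engagement
}

base = sum(LAG_CHAIN_MS.values())
for interface, action_ms in ACTION_LAG_MS.items():
    total = base + action_ms
    print(f"{interface}: ~{total / 1000:.1f} s from stimulus to action")
```

Even with generous assumptions, the always-on scenario cuts the stimulus-to-action time by more than half, which is the whole point of working backward along the chain.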

But technology has the potential to work backward along this chain. Let’s move just one step back, to rational deliberation. If we had an “always on” link where we could engage in less than one second, we could utilize technology to help us deliberate. We still have to go through the messiness of framing a request and interpreting results, but it’s a quantum step forward from where we currently are.

The greatest potential (and the greatest fear) lies one step further back – at conscious awareness. Now we’re moving from wearable technology to implantable technology. Imagine if technology could be activated at the speed of conscious thought, so that the unconscious stimulation is detected and parsed, and by the time our conscious brain kicks into gear, relevant information and potential actions are already gathered and waiting for us. At this point, any artifice of the interface is gone, and technology has eliminated the rational lag. This is the beginning of Kurzweil’s Singularity: the destination on a path that devices like Google Glass are starting down.

As I said, I like to look at the dots. Someone else can worry about how to connect them.

Google Glass and the Sixth Dimension of Diffusion

First published August 29, 2013 in Mediapost’s Search Insider

Tech stock analyst and blogger Henry Blodget has declared Google Glass dead on arrival. I’m not going to spend any time talking about whether or not I agree with Mr. Blodget (for the record, I do – Google Glass isn’t an adoptable product as it sits – and I don’t – wearable technology is the next great paradigm shifter) but rather dig into the reason he feels Google Glass is stillborn.

They make you look stupid.

The input for Google Glass is your voice, which means you have to walk around saying things like, “Glass, take a video” or “Glass, what is the temperature?” The fact is, to use Google Glass, you either have to accept the fact that you’ll look like a moron or the biggest jerk in the world. Either way, the vast majority of us aren’t ready to step into that particular spotlight.

Last week, I talked about Everett Rogers’ Diffusion of Innovations and shared five variables that determine the rate of adoption. There is actually an additional factor that Rogers also mentioned: “the status-conferring aspects of innovations emerged as the sixth dimension predicting rate of adoption.”

If you look at Rogers’ diffusion curve, you’ll find the segmentation of the adoption population is as follows: Innovators (2.5% of the population), Early Adopters (13.5%), Early Majority (34%), Late Majority (34%) and Laggards (16%). But there’s another breed that probably hides out somewhere between Innovators and Early Adopters. I call them the PAs (for Pompous Asses). They love gadgets, they love spending way too much for gadgets, and they love being seen in public sporting gadgets that scream “PA.” Previously, they were the ones seen guffawing loudly into Bluetooth headsets while sitting next to you on an airplane, carrying on their conversation long after the flight attendant told them to wrap it up. Today, they’d be the ones wearing Google Glass.
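Those percentages aren’t arbitrary: Rogers carved his categories out of a normal distribution of adoption times, cut at one and two standard deviations from the mean. A quick sketch (assuming Python with SciPy installed) recovers the familiar figures from that definition.

```python
# Rogers defined his adopter categories by slicing a normal distribution of
# adoption times at one and two standard deviations from the mean. This
# sketch recovers the quoted percentages from that definition.
from scipy.stats import norm

categories = {
    "Innovators":     (float("-inf"), -2),  # earlier than mean - 2 sd
    "Early Adopters": (-2, -1),
    "Early Majority": (-1, 0),
    "Late Majority":  (0, 1),
    "Laggards":       (1, float("inf")),    # later than mean + 1 sd
}

for name, (lo, hi) in categories.items():
    share = norm.cdf(hi) - norm.cdf(lo)
    print(f"{name:14s} {share:.1%}")
# Prints roughly 2.3%, 13.6%, 34.1%, 34.1%, 15.9% -- matching the rounded
# 2.5 / 13.5 / 34 / 34 / 16 figures quoted above.
```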

 

This sixth dimension is critical to consider when the balance between the other five is still a little out of whack. Essentially, the first dimension, Relative Advantage, has to overcome the friction of #2, Compatibility, and #3, Complexity (#4, Trialability, and #5, Observability, are more factors of the actual mechanics of diffusion, rather than individual decision criteria). If the advantage of an innovation does not outweigh its complexity or compatibility, it will probably die somewhere on the far left slopes of Rogers’ bell curve. The deciding factor will be the Sixth Dimension.

This is the territory that Google Glass currently finds itself in. While I have no doubt that the advantages of wearable technology (as determined by the user) will eventually far outweigh the corresponding “friction” of adoption, we’re not there yet. And so Google Glass depends on the Sixth Dimension. Does adoption make you look innovative, securely balanced on the leading edge? Or does it make you look like a dork? Does it confer social status or strip it away? After the initial buzz about Glass, social opinion seems to be falling into the second camp.

This brings us to another important factor to consider when trying to cash in on a social adoption wave: timing. Google is falling into the classic Microsoft trap of playing its hand too soon through a beta release. New is cool among the early adopter set, which makes timing critical. If you can get strategic distribution and build up the required critical mass fast enough, you can lessen the “pariah” factor. It’s one thing to be among a select clique of technological PAs, but you don’t want to be the only idiot in the room. Right now, with only 8,000 pairs distributed, if you’re wearing a pair, you’re probably the one everyone else is whispering about.

Of course, you might not be able to hear them over the sound of your own voice, as you stand in front of the mirror and ask Google Glass to “take a picture.”

 

Viewing the World through Google Colored Glass

First published March 7, 2013 in Mediapost’s Search Insider

Let’s play “What If” for a moment. For the last few columns, I’ve been pondering how we might more efficiently connect with digital information. Essentially, I see the stripping away of the awkward and inefficient interfaces that have been interposed between that information and us. Let’s imagine, 15 years from now, that Google Glass and other wearable technologies provide a much more efficient connection, streaming real-time information to us that augments our physical world. In the blink of an eye, we can retrieve any required piece of information, expanding the capabilities of our own limited memories beyond belief. We have perfect recall, perfect information — we become omniscient.

To facilitate this, we need to move our cognitive abilities to increasingly subterranean levels of processing – taking advantage of the “fast and dirty” capabilities of our subliminal mind. As we do this, we actually rewire our brains to depend on these technological extensions. Strategies that play out with conscious guidance become stored procedures that follow scripts written by constant repetition. Eventually, overtraining ingrains these procedures as habits, and we stop thinking and just do. Once this happens, we surrender much of our ability to consciously change our behaviors.

Along the way, we build a “meta” profile of ourselves, which acts as both a filter and a key to the accumulated potential of the “cloud.” It retrieves relevant information based on our current context and a deep understanding of our needs, it unlocks required functionality, and it archives our extended network of connections. It’s the “Big Data” representation of us, condensed into a virtual representation that can be parsed and manipulated by the technology we use to connect with the virtual world.

In my last column, Rob Schmultz and Randy Kirk wondered what a world full of technologically enhanced Homo sapiens would look like. Would we all become the annoying guy in the airport who can’t stop talking on his Bluetooth headset? Would we become so enmeshed in our digital connections that we ignore the physical ones that lie in front of our own noses? Would Google Glass truly augment our understanding of the world, or would it make us blind to its charms? And what about the privacy implications of a world where our every move could instantly be captured and shared online — a world full of digital voyeurs?

I have no doubt that technology can take us to this not-too-distant future as I’ve envisioned it. Much of what’s required already exists. Implantable hardware, heads-up displays, sub-vocalization, biofeedback — it’s all very doable. What I wonder about is not the technology, but rather us. We move at a much slower pace. And we may not recognize any damage that’s done until it’s too late.

The Darwinian Brain

At an individual level, our brains have a remarkable ability to absorb technology. This is especially true if we’re exposed to that technology from birth. The brain represents a microcosm of evolutionary adaptation, through a process called synaptic pruning. Essentially, the brain builds and strengthens neural pathways that are used often, and “prunes” away those that aren’t. In this way, the brain literally wires itself to be in sync with our environment.

The majority of this neural wiring happens when we’re still children. So, if our childhood environment happens to include technologies such as heads-up displays, implantable chips and other direct interfaces to digital information, our brains will quickly adapt to maximize the use of those technologies. Adults will also adapt to these new technologies, but because our brains are less “plastic” than those of children, the adaptation won’t be as quick or complete.

The Absorption of Technology by Society

I don’t worry about our brain’s ability to adapt. I worry about the eventual impact on our society. With changes this portentous, there is generally a social cost. To consider what might come, it may be beneficial to look at what has been. Take television, for example.

If a technology is ubiquitous and effective enough to spread globally, like TV did, there is the issue of absorption. Not all sectors of society will have access to the technology at the same time. As the technology is absorbed at different rates, it can create imbalances and disruption. Think about the societal divide caused by the absorption of TV, which resulted in a completely different information distribution paradigm. One can’t help thinking that TV played a significant role in much of the political change we’ve seen sweep over the world in the past three decades.

And even if our brains quickly adapt to technology, that doesn’t mean our social mores and values will move as quickly. As our brains rewire to adapt to new technologies, our cultural frameworks also need to shift. With different generations and segments of society at different places on the absorption curve, this can create further tensions. If you take the timeline of societal changes documented by Robert Putnam in “Bowling Alone” and overlay the timing of the adoption of TV, the correlation is striking and not a little frightening.

Even if our brains have the ability to adapt to technology, it isn’t always a positive change. For example, there is compelling evidence that early exposure to TV has contributed to the recent explosion of diagnosed ADHD and possibly even autism.

Knowing Isn’t Always the Same as Understanding

Finally, we have the greatest fear of Nicholas Carr:  maybe this immediate connection to information will have the “net” effect of making us stupid — or, at least, more shallow thinkers. If we’re spoon-fed information on demand, do we grow intellectually lazy? Do we start to lose the ability to reason and think critically? Will we swap quality for quantity?

Personally, I’m not sure Carr’s fears are well founded on this front. It may be that our brains adapt and become even more profound and capable. Perhaps when we offload to technology the simple journeyman tasks of retrieving information and compiling it for consideration, our brains will be freed up to handle deeper and more abstract tasks. The simple fact is, we won’t know until it happens. It could be another “Great Leap Forward,” or it may mark the beginning of the decline of our species.

The point is, we’ve already started down the path, and it’s highly unlikely we’ll retreat at this point. I suppose we have no option but to wait and see.

Why I – and Mark Zuckerberg – Are Bullish on Google Glass

First published February 28, 2013 in Mediapost’s Search Insider

Call it a Tipping Point. Call it an Inflection Point. Call it Epochal (whatever that means). The gist is, things are going to change — and they’re going to change in a big, big way!

First, with due deference to the brilliant Kevin Kelly, let’s look at how technology moves. In his book “What Technology Wants,” Kelly shows that technology is not dependent on a single invention or inventor. Rather, it’s the sum of multiple, incremental discoveries that move technology to a point where it can breach any resistance in its way and move into a new era of possibility. So, even if Edison had never lived, we’d still have electric lights in our homes. If he weren’t there, somebody else would have discovered it (or, more correctly, perfected it). The momentum of technology would not have been denied.

Several recent developments indicate that we’re on the cusp of another technological wave of advancement. These developments have little to do with online technologies or capabilities. They’re centered on how humans and hardware connect — and it’s impossible to overstate their importance.

The Bottleneck of Our Brains

Over the past two decades, there has been a massive build-up of online capabilities. In this case, what technology has wanted is the digitization of all information. That was Step One. Step Two is to render all that information functional. Step Three will be to make all that functionality personalized. And we’re progressing quite nicely down that path, thank you very much. The rapidly expanding capabilities of the online world far surpass what we are able to assimilate and use at any one time. All this functionality is still fragmented and is in the process of being developed (one of the reasons I think Facebook is in danger of becoming irrelevant), but it’s there. It’s just a pain in the butt for us to utilize it.

The problem is one of cognition. The brain has two ways to process information, one fast and one slow. The slow way (using the conscious parts of the brain) is tremendously flexible but inefficient. This is the system we’ve largely used to connect online. Everything has to be processed in the form of text, both in terms of output and input, generally through a keyboard and a screen display. It’s the easiest way for us to connect with information, but it’s far from the most efficient way.

The second way is much, much faster. It’s the subconscious processing of our environment that we do every day. It’s what causes us to duck when a ball is thrown at our head, jump out of the way of an oncoming bus, fiercely protect our children and judge the trustworthiness of a complete stranger. If our brains were icebergs, this would be the 90% hidden beneath the water. But we’ve been unable to access most of this inherent efficiency and apply it to our online interactions — until now.

The Importance of Siri and Glass

Say what you want about Mark Zuckerberg, he’s damned smart. That’s why he knew immediately that Google Glass is important.

I don’t know if Google Glass will be a home run for Google. I also don’t know if Siri will ever pay back Apple’s investment in it. But I do know that 30 years from now, they’ll both be considered important milestones. And they’ll be important because they were representative of a sea change in how we connect with information. Both have the potential to unlock the efficiency of the subconscious brain. Siri does it by utilizing our inherent communication abilities and breaking the inefficient link that requires us not only to process our thoughts as language, but also to laboriously translate them into keystrokes. In neural terms, this is one of the most inefficient paths imaginable.

But if Siri teases us with a potentially more efficient path, Google Glass introduces a new, mind-blowing scenario of what might be possible. Parsing environmental cues and streaming information directly into our visual cortex in real time, creating a direct link with all that pent-up functionality that lives “in the cloud,” wipes away most of the inefficiency of our current connection paradigm.

Don’t think of the current implementation that Google is publicizing. Think beyond that to a much more elegant link between the vast capabilities of a digitized world and our own inner consciousness. Whatever Glass and Siri (and their competitors) eventually evolve into in the next decade or so, they will be far beyond what we’re considering today.

With the humanization of these interfaces, a potentially dark side effect will take place. These interfaces will become hardwired into our behavior strategies. Now, because our online interactions are largely processed at a conscious level, the brain tends to maintain maximum flexibility regarding the routines it uses. But as we access subconscious levels of processing with new interface opportunities, the brain will embed these at a similarly subconscious level. They will become habitual, playing out without conscious intervention. It’s the only way the brain can maximize its efficiency. When this happens, we will become dependent on these technological interfaces. It’s the price we’ll pay for the increased efficiency.

A Look at the Future through Google Glasses?

First published June 7, 2012 in Mediapost’s Search Insider

“A wealth of information creates a poverty of attention.” — Herbert Simon

Last week, I explored the dark recesses of the hyper-secret Google X project. Two X projects in particular seem poised to change our world in very fundamental ways: Google’s Project Glass and the “Web of Things.”

Let’s start with Project Glass. In a video entitled “One Day…,” the future seen through the rose-colored hue of Google Glasses seems utopian, to say the least. In the video, we step into the starring role, strolling through our lives while our connected Google Glasses feed us a steady stream of information and communication — a real-time connection between our physical world and the virtual one.

In theory, this seems amazing. Who wouldn’t want to have the world’s sum total of information available instantly, just a flick of the eye away?

Couple this with the “Web of Things,” another project said to be in the Google X portfolio.  In the Web of Things, everything is connected digitally. Wearable technology, smart appliances, instantly findable objects — our world becomes a completely inventoried, categorized and communicative environment.

Information architecture expert Peter Morville explored this in his book “Ambient Findability.” But he cautions that things may not be as rosy as you might think after drinking the Google X Kool-Aid. This excerpt is from a post he wrote on Ambient Findability: “As information becomes increasingly disembodied and pervasive, we run the risk of losing our sense of wonder at the richness of human communication.”

And this brings us back to the Herbert Simon quote — knowing and thinking are not the same thing. Our brains were not built on the assumption that all the information we need is instantly accessible. And, if that does become the case through advances in technology, it’s not at all clear what the impact on our ability to think might be. Nicholas Carr, for one, believes that the Internet may have the long-term effect of actually making us less intelligent. And there’s empirical evidence he might be right.

In his book “Thinking, Fast and Slow,” Nobel laureate Daniel Kahneman says that while we have the ability to make intuitive decisions in milliseconds (Malcolm Gladwell explored this in “Blink”), humans also have a nasty habit of using these “fast” mental shortcuts too often, relying on gut calls that are often wrong (or, at the very least, biased) when we should be using the more effortful “slow” and rational capabilities that tend to live in the frontal part of our brain. We rely on beliefs, instincts and habits, at the expense of thinking. Call it informational instant gratification.

Kahneman recounts a seminal study in psychology, where four-year-old children were given a choice: they could have one Oreo immediately, or wait 15 minutes (in a room with the offered Oreo in front of them, with no other distractions) and have two Oreos. About half of the children managed to wait the 15 minutes. But it was the follow-up study, where the researchers followed what happened to the children 10 to 15 years later, that yielded the fascinating finding:

“A large gap had opened between those who had resisted temptation and those who had not. The resisters had higher measures of executive control in cognitive tasks, and especially the ability to reallocate their attention effectively. As young adults, they were less likely to take drugs. A significant difference in intellectual aptitude emerged: the children who had shown more self-control as four year olds had substantially higher scores on tests of intelligence.”

If this is true for Oreos, might it also be true for information? If we become a society that expects to have all things at our fingertips, will we lose the “executive control” required to actually think about things? Wouldn’t it be ironic if Google, in fulfilling its mission to “organize the world’s information,” inadvertently transgressed against its other mission, “don’t be evil,” by making us all attention-deficit, intellectually diminished, morally bankrupt dough heads?

A Search History of TED

First published March 10, 2011 in Mediapost’s Search Insider

I always find it interesting to look at a cultural phenomenon through the lens of search. Search provides a fascinating and quantitative look at the growth of interest in a particular topic. Having spent all of last week immersed in the cult that is TED (I was at TEDActive in Palm Springs, Calif.), I thought that this was as good a subject as any to analyze.

TED’s Back Story

The TED story, for those of you not familiar with it, is pretty amazing. TED was originally held in Monterey, Calif. in 1984, the brainchild of Richard Saul Wurman and Harry Marks. Some of the content on that first TED stage? The unveiling of the Mac, a rep from Sony demonstrating the compact disc, Benoit Mandelbrot talking about fractals and Marvin Minsky speculating on the possibilities of artificial intelligence. Due to its proximity to Silicon Valley, the conference had a decidedly tech-heavy focus. The first one lost money, and Wurman didn’t attempt another one until 1990. It was then held annually in Monterey.

In 2001, Chris Anderson took over the show and broadened the focus, adopting a more philanthropic approach. Technology still figured prominently on the TED stage, but the conference became an intellectual smorgasbord of content, with a single session known to veer from musicians to world adventurers, scientists to CEOs.

Probably the biggest change in the fortunes of TED, however, came in 2006 when the world was invited to share what happened on the TED stage. The talks were videotaped and made freely available online. And it’s here where our search story begins.

TED:TSI (TED Search Investigation)

If you use Google Insights (as I did), you see something interesting begin to happen in the search activity surrounding TED. Through 2004, 2005 and 2006, most of the search activity for TED was about the conference. There were peaks every February when the conference took place, but other than this, the volumes were pretty consistent. There was little year-over-year growth. TED remained an exclusive club for the intellectually elite. The rest of the world had never heard of it.

In 2006, when the videos were launched, a new trend began. By the end of the year, more people were using search to find the TED talks themselves than to find out about the conference. The gap continued to widen; by 2011, the search popularity of the talks themselves was almost three times the query volume for the conference. But volumes for both have seen impressive growth. The conference rode the wave of the videos’ popularity, with query volumes over 10 times the levels seen in 2006. The videos fueled the growth of TED, making it the must-see conference of the year.
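Google Insights for Search has since been folded into Google Trends, but the same comparison is easy to reproduce. Here is a minimal sketch using the unofficial pytrends library; the exact terms and timeframe are my assumptions, chosen to mirror the analysis above rather than to reproduce the queries I actually ran.

```python
# Rough reproduction of the TED query-volume comparison using pytrends
# (pip install pytrends), an unofficial client for Google Trends.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(
    kw_list=["TED conference", "TED talks", "TEDx"],  # assumed search terms
    timeframe="2004-01-01 2011-03-01",
)

# Relative query volume, indexed 0-100, one row per week.
interest = pytrends.interest_over_time()
yearly = interest.drop(columns="isPartial").groupby(interest.index.year).mean()
print(yearly)  # yearly averages show the talks overtaking the conference

# Country-level breakdown of where the queries come from.
by_region = pytrends.interest_by_region()
print(by_region.sort_values("TED talks", ascending=False).head(10))
```

Because Trends reports relative rather than absolute volumes, the interesting signal is the crossover and the ratio between terms, not the raw numbers.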

The Global Mapping of TED

Another interesting trend is how TED has become a global phenomenon. TED talks are most popular in Canada, followed by New Zealand, the U.S. and South Africa. They’ve also shown impressive growth in South Africa, Singapore, Australia and India. And it’s this global popularity that led TED to announce TEDx in 2009. These are independently organized shows held around the world, with some mentorship and guidance from the TED mother ship. They have been tremendously popular — and now search volumes for TEDx have surpassed queries for the main conference. Epicenters of the TEDx tidal wave include the Netherlands, Portugal, Finland, India and Argentina.

If we drill down to the U.S., we find the greatest concentration of TEDsters (the official moniker of members of the TED community) in Oregon, Washington and Vermont. Surprisingly, California, where the conference is held, doesn’t even make the list of top TED states. Massachusetts, New York and Hawaii all beat it out. The top 10 TED states are all solidly blue (based on the last presidential election) — except for Montana.

And because Canada is such a TED hotbed (TED has an office in Vancouver), I’m proud to say that my home province of B.C. has perhaps the greatest concentration of TED fans in the world, followed by Manitoba, Alberta (which would be the Canadian equivalent of Montana) and Saskatchewan. According to Google, the TED world capital should be Victoria, B.C., which has the highest concentration of TED-related searches of any city, anywhere. The U.S. capital? Portland, Ore. For some reason, TEDmania is very much alive and well here in the Pacific Northwest.

TED has legs!

Finally, you may ask whether the wave of TED popularity is sustainable. I had this very conversation last week with another TEDster in Palm Springs. If you look at the growth of all search volumes so far in 2011, I would say the TED wave has barely begun. Volumes have skyrocketed this year in every category I looked at. If you compare the query volume graphs to a typical S-shaped adoption curve, you would conclude that TED is just beginning a massive growth spurt. Get used to hearing about TED, because you’ll be hearing a lot more of it in the future — especially if you’re visiting Victoria or Portland.

The Nobler Side of Social Media: Voices in a Choir

First published March 3, 2011 in Mediapost’s Search Insider

Last week, I took social media to task for making us less social. This week, I’m in Palm Springs for TEDActive — and on day one, I saw three very real examples of how the Internet is also connecting us in ways we never imagined before. They provided a compelling counterpoint to my original argument.

Eric Whitacre is a composer and conductor. In “Lux Aurumque (Light and Gold)” he conducts a choir singing his original composition. The choir, 185 strong, never sang together. They never met each other. They live in 12 different countries. Whitacre posted a video of himself conducting the piece, and every one of those 185 members of the choir submitted their individual parts through YouTube. The 247 separate tracks were combined into a rather amazing work that has been seen almost 2 million times. One of the contributors lived in a cabin in the remote Alaskan wilderness, 400 miles from the nearest town. Her satellite link was her only connection to the world.

The Johnny Cash Project is an equally amazing collaborative effort. Aaron Koblin and Chris Milk took archival film footage of Johnny Cash, dissected it frame by frame, and asked artists from around the world to redraw each frame. The contributions were stitched back together with Cash’s song, “Ain’t No Grave” as the soundtrack. The result is mesmerizing.

But perhaps the most stunning example of digital collaboration came not from art, but from the very real world of the Middle East. Wadah Khanfar, the chief of Al-Jazeera, told us how the voices of many, amplified through technology, are bringing democracy and new hope to Egypt, Tunisia and Libya.

These examples speak of something much broader and more powerful than the typical applications of social media. And, like social media’s less attractive side, the impact of these new connections on society is yet to be determined. There is a social experiment being conducted in real time — but the results will only be fully realized through the lens of hindsight. Can true democracy be established in a place like Libya, even with the power of connection? Time alone will tell.

The new technology of connection releases things that are deeply human: the need to be part of the greater whole (for example, the choir member from Alaska); the need to contribute something of ourselves to the world (for example, the Johnny Cash Project); and the need for fairness and justice (as in the protests in the Middle East). In the last example, these connections illuminate the human condition in the darkest corners of the world and force accountability. Since the beginning of time, unfairness in the tribe has been punished. The difference now is that our human tribe extends around the world.

Khanfar told an amazing story that unfolded during the height of one of the protests. The demonstrators pleaded with Al-Jazeera to keep the cameras rolling through the night: “If you stop, we’re lost. But as long as you keep showing what’s happening, we have hope.”

Perhaps the true paradox of social media is not that we’re becoming less social, but that we’re becoming social in different ways. As we spend less time in our flesh and blood engagements, we spend more time establishing connections that were impossible before. In the ’70s, Mark Granovetter found that our social networks are composed of two distinct types of linkages, which he called strong and weak ties. The strong ties are the family and friends bonds that generally require both proximity and significant time together. The weak ties are the extended bonds that we might call acquaintances. As Granovetter found, it’s the weak ties that carry the surprising power of a community, especially when they’re mobilized for a common purpose. We rely on weak ties for referrals, favors and job offers. They extend beyond our immediate circle and provide important social capital when required.

Perhaps social media has had a negative impact on our strong ties, as I alluded to in my last column. But, as I was reminded today, it has dramatically increased our ability to form weak ties that align to concepts, interests and causes. And don’t let the name “weak ties” fool you. When they’re synchronized, they can be tremendously powerful. You might call them the harmonized voices of a global choir.

 

The World Out of Context

First published January 20, 2011 in Mediapost’s Search Insider

Did you see Ricky Gervais hosting the Golden Globes? No? Neither did I. Neither did about 98% of the population of North America, according to the ratings numbers. Yet I would bet that in the past week we all knew about it, and we all talked about it. But we’re basing our judgments, opinions and conversations on something we’ve, at best, read online, heard about through the network (virtual or otherwise) or seen on YouTube. We’re experiencing the simultaneous pleasure and pain of Ricky’s Golden Globe roast through hearsay and sound bites.

This isn’t an isolated incident. More and more, our view of the world comes after the fact, often filtered through fragments found somewhere online. Most of our experience of the world is out of its original context. This phenomenon isn’t new. Gossip is as old as language. We all love to talk about what’s happened. But the prevalence of digital footprints throws a new spin on this inherently human tendency. The impact of that spin, I’m afraid, is still to be determined.

The World as I Remember It

Memories are funny things. We like to think of them as snapshots of the past, accurately recording where we’ve been. The truth is, memories aren’t all that reliable. We tend to remember high points and low points, removing much of the distracting noise in the middle that makes up the stuff of our everyday lives. It’s like a Reader’s Digest condensed version of our past, except we tend to rewrite the actual content to match our view of the world. And once we rewrite our memories to match our beliefs, we believe them to be true (see Danny Kahneman’s TED talk on remembered happiness). It’s a self-perpetuating cycle that helps maintain the consistency of our worldview, but it’s a far stretch from what actually happened. Even more disturbing, if you’re a fan of the truth, is that we can’t seem to resist tweaking the story to make it more interesting. We love to build memes that take on a life of their own, spreading virally across the social landscape.

I always maintain that technology doesn’t change behaviors; it allows behaviors to change. Technology can’t force us down a road we don’t want to go. This drive to tweak little tidbits of the past is something baked into the human psyche. But the vast tableau we now have available to share it on is something quite new. “Going viral” now raises gossip to a whole new level. Just ask the dorky little kid who goes by BeenerKeeKee19952 online. His strangely compelling lip syncs to popular songs have turned him into an instant celebrity. His cover of Katy Perry’s “Teenage Dream” has garnered close to 30 million views on YouTube, closing in on the popularity of the original video. He’s become so popular that 50 Cent recently popped into his bedroom to do a cameo. But we know nothing of the kid behind the webcam. We don’t know the context of his life. We don’t know if he is bullied at school, has a life outside his bedroom or is good at baseball. All we know is what we can see in three minutes and 48 seconds.

Fool’s Gold

One recent example of this problem of context is Ted Williams, the homeless man with the golden voice who was plucked from the streets of Columbus, Ohio and placed on a world stage. The world judged the situation based on a YouTube clip lasting one minute and 14 seconds. We saw what appeared to be injustice, and rushed to right the wrong. Job offers poured in. Williams became a celebrity. But it all happened without the context of the 53 years of an undeniably checkered past that preceded the fateful video clip. As it turns out, as we gain that context, the real story is not nearly as simple or straightforward as we would like. Williams is already in rehab.

Acting on hearsay and secondhand information is nothing new. But as our communication abilities and our capacity to archive history continue to expand, we get further and further from the true context of things. Online, word of mouth flows farther and faster, and is more compelling, than ever. More and more, we will act on little bits of information that are far removed from their true origins. We will pass judgment without the benefit of context. This will create more instant celebrities, basking in their 15 minutes of fame. And it will also create more viral sensations with self-destructive tendencies. There’s one thing about context – it may not lead to the instant gratification we crave, but it does tend to keep the egg off one’s face.

Everyone’s a Critic: The Splinters of our Discontent

First published January 14, 2010 in Mediapost’s Search Insider

I had a bout of inbox convergence today. Just as I was speculating what this week’s Search Insider might cover, two separate emails surrounded a juicy little topic and delivered it to me on a platter. First, a post from Ad Age about how marketers are reluctant to use online conversations as a source of customer feedback: “‘Listening’ ostensibly has become the rage in consumer research, but the Advertising Research Foundation is finding that many marketers view what would seem one of the digital age’s biggest gifts to marketers — the torrent of unsolicited consumer opinion — as more of an added expense item than a blessing.”

And then, a small blog post on Echouser got me thinking: “It’s a concept for what an iPhone app designed to measure experiences (any experiences, from surfing a website to hopping on BART) could look like… Can you imagine if we were able to rate experiences on the fly, all day every day?”

Customers are Talking…

There’s been a lot of talk about consumer empowerment and the shift of control to the consumer. As 2009 drew to a close, I talked about the shape of marketing to come. One of the key foundations I identified was participation — actively engaging in an ongoing conversation with customers. The two posts in my inbox start to get at the potential of this conversation.

In the first post, ARF laments advertisers’ reluctance to tap into ongoing online conversations as a source of customer feedback. Valid point, but I can understand their reluctance. This is unstructured content, making it qualitative, anecdotal and messy. Marketers balk at the heavy lifting required to mine and measure the collective mood. Some tools, such as Collective Intellect, are starting to take on the hard task of migrating online sentiment into a dashboard for marketers. The easier it gets, the more likely it will be for marketers to actually do it. Until then, we’re stuck with consumer surveys and comment cards.
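To see why that heavy lifting scares marketers off, consider even the crudest version of the job. The sketch below is a naive lexicon-based tally in Python, nothing like the natural-language processing a tool such as Collective Intellect actually does, but it shows the basic shape of turning unstructured comments into a sentiment dashboard.

```python
# Toy illustration of mining unstructured chatter for sentiment.
# The word lists and comments are made up; a real tool would handle
# negation, sarcasm, spelling and context far better than this.
from collections import Counter

POSITIVE = {"love", "great", "awesome", "recommend", "fast"}
NEGATIVE = {"hate", "terrible", "broken", "slow", "refund"}

def score(comment: str) -> int:
    """Return +1, -1 or 0 for a single comment based on word hits."""
    words = set(comment.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    return (pos > neg) - (neg > pos)

comments = [
    "Love the new release, great support",
    "Checkout is broken and support is terrible",
    "It works, I guess",
]

tally = Counter(score(c) for c in comments)
print(f"positive: {tally[1]}  negative: {tally[-1]}  neutral: {tally[0]}")
```

Even this toy version hints at the messiness: most real comments land in the ambiguous middle, which is exactly the part marketers don’t want to wade through by hand.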

…Anytime, Anywhere…

But it’s the second post that really got me thinking. Always-on connections have already given a voice to consumers, one that’s heard loud and clear. But what if we did indeed have a convenient and commonly structured way to provide feedback on every single interaction in our lives through mobile connections? What if marketers could know in real time what every single customer thought of them, based on the experience he or she just had? Some cringe at the thought. Others are eager for it. The second group will inevitably prevail.

Given the level of investment required on the part of the user, I suspect this channel would only be used in extremely negative and extremely positive circumstances. We don’t tend to take the time to comment on things that come reasonably close to meeting our expectations. But even so, it’s a powerful feedback channel to contemplate, giving the truly user-centric company everything it could ever wish for.

…So Listen!

Last week, I talked about the mother lode of consumer intent that exists in search query logs and how we’ve been slow to leverage it. This week, we have an equally valuable asset rapidly coming down the pipe — a real-time view of our customers’ sentiment.  That’s a one-two punch that could knock the competition out cold.

Grandma Via YouTube

First published June 25, 2009 in Mediapost’s Search Insider

This week we had a webinar on Digital Immigrants and Digital Natives. We featured brain-scan images, survey results and the work of Marc Prensky, Gary Small and other researchers, showing how technology has created a generational divide between our kids and us. For me, though, it all came into sharper focus when I walked past our computer at home and saw my youngest daughter, Lauren, sitting there with crochet hooks in hand.

“What are you doing?” I asked.

“Learning to crochet.”

“On the computer?”

“Yes, there’s a video showing how on YouTube.”

“Really?”

“Yes, Dad, YouTube has now replaced Grandma.” (Smart mouth on that kid — not sure where she gets it from.)

Adapting With Our Plastic Brains

Prensky and Small have written extensively on how exposure to technology can literally change the way our brains are wired. Our brains are remarkably malleable in nature, continually changing to adapt to our environment. The impressive label for it is “neuroplasticity” — but we know it better simply as “learning.” We now know that our brains continually adapt throughout our lives. But there are two phases where the brain literally reforms itself in a massive restructuring: right around age two, and again in our teenage years. During these two periods, billions of new synaptic connections are formed, and billions are also “pruned” out of the way. All this happens as a direct response to our environments, helping us develop the capabilities to deal with the world.

These two spurts of neuroplasticity are essential developmental stages, but what happens when there are rapid and dramatic shifts in our environment from one generation to the next? What happens when our children’s brains develop to handle something we never had to deal with as children? Quite literally, their brains function differently than ours. This becomes particularly significant when the rate of adoption is very rapid, making a technology ubiquitous in a generation or less. The other factor is how much the technology becomes part of our daily lives. The more important it is, the more significant the generational divide.

Our Lives: As Seen on TV

The last adoption that met both conditions was the advent of television. The years 1960 to 1965 marked the divide, when the first generation raised on television started to come of age. And the result was a massive social shift. In his book “Bowling Alone,” Robert Putnam shows example after example of how our society took a U-turn in the ’60s, reversing a trend in building social capital. We became more aware and ideologically tolerant, but we also spent less time with each other. This trend played out in everything from volunteering and voting to having dinner parties and joining bowling leagues. The single biggest cause identified by Putnam? Television. We are only now beginning to assess the impact of this technology on our society, a half-century after its introduction. It took that long for the ripples to be felt through the generations.

You Ain’t Seen Nuthin Yet.

That’s a sobering thought when we consider what’s happening today. The adoption rate of the Internet has been similar to that of television, but the impact on our daily lives is even more significant. Everything we touch now is different than it was when we were growing up. If TV caused a seismic shift of such proportions that it took us 50 years to catalog the fallout, what will happen 50 years from now?

Who will be teaching my great-grandchildren how to crochet?