Google’s Golden Triangle – Nine Years Later

Last week, I reviewed why the Golden Triangle existed in the first place. This week, we’ll look at how the scanning patterns of Google users have evolved in the past 9 years.

The reason I wanted to talk about Information Foraging last week is that it really sets the stage for understanding how the patterns have changed with the present Google layout. In particular, one thing was true for Google in 2005 that is no longer true in 2014 – back then, all result sets looked pretty much the same.

Consistency and Conditioning

If we do the same thing over and over again and usually achieve the same outcome, we stop thinking about what we’re doing and simply do it by habit. It’s called conditioning. But habitual conditioning requires consistency.

In 2005, the Google results page was a remarkably consistent environment. There were always 10 blue organic links, and usually there were up to three sponsored results at the top of the page. There may also have been a few sponsored results along the right side of the page. Also, Google would put what it determined to be the most relevant results, both sponsored and organic, at the top of the page. This meant that for any given search, no matter the user intent, the top 4 results should presumably include the most relevant one or two organic results and a few hopefully relevant sponsored options for the user. If Google did its job well, there should be no reason to go beyond these top 4 results, at least in terms of a first click. And our original study showed that Google generally did do a pretty good job – over 80% of first clicks came from the top 4 results.

In 2014, however, we have a much different story. The 2005 Google was a one-size-fits-all solution. All results were links to a website. Now, not only do we have a variety of results, but even the results page layout varies from search to search. Google has become better at anticipating user intent and dynamically changes the layout on each search to be a better match for intent.

[Heat map: scanning pattern on a 2014 Google results page]

What this means, however, is that we need to think a little more whenever we interact with a search page. Because the Google results page is no longer the same for every single search we do, we have exchanged consistency for relevancy. This means that conditioning isn’t as important a factor as it was in 2005. Now, we must adopt a two-stage foraging strategy, as shown in the heat map above. Our first foraging step is to determine what categories – or “chunks” of results – Google has decided to show on this particular results page. This is done with a vertical scan down the left side of the results set. In this scan, we’re looking for cues on what each chunk offers – typically in category headings or other quickly scanned labels – to determine which chunks are most promising in terms of information scent. Then, in the second step, we go back to the most relevant chunks and start scanning in a more deliberate fashion. Here, scanning behaviors revert to the “F”-shaped scan we saw in 2005, creating a series of smaller “Golden Triangles.”

What is interesting about this is that although Google’s “chunking” of the results page forces us to scan in two separate steps, it’s actually more efficient for us. The time spent scanning each result is roughly half of what it was in 2005 – 1.2 seconds vs. 2.5 seconds. Once we find the right “chunk” of results, the results shown tend to be more relevant, increasing our confidence in choosing them. You’ll also see that the “mini” Golden Triangles have less lateral scanning than the original. We’re picking up enough scent on the left side of each result to push our “click confidence” over the required threshold.

A Richer Visual Environment

Google also offers a much more visually appealing results page than it did 9 years ago. Then, the entire results set was text-based. There were no images shown. Now, depending on the search, the page can include several images, as the example below (a search for “New Orleans art galleries”) shows.

[Heat map: image-rich results page for “New Orleans art galleries”]

The presence of images has a dramatic impact on our foraging strategies. First of all, images can be parsed much more quickly than text. We can determine the content of an image in fractions of a second, where text requires a much slower and more deliberate type of mental processing. This means that our eyes are naturally drawn to images. You’ll notice that the above heat map has a light green haze over all the images shown. This is typical of the quick scan we do immediately upon page entry to determine what the images are about. Heat in an eye tracking heat map is produced by duration of foveal focus. This can be misleading when we’re dealing with images. The fovea centralis is, predictably, in the center of our eye, where our focus is the sharpest. We use it extensively when reading, but it’s not as important when we’re glancing at an image. We can make a coarse judgment about what a picture is without focusing on it. We don’t need our fovea to know it’s a picture of a building, or a person, or a map. It’s only when we need to determine the details of a picture that we’ll recruit the fine-grained resolution of our fovea.

Our ability to quickly parse images makes it likely that they will play an important role in our initial orientation scan of the results page. We’ll quickly scan the available images looking for information scent. If an image does offer scent, it will also act as a natural entry point for further scanning. Typically, when we see a relevant image, we look in the immediate vicinity to find more reinforcing scent. We often see scanning hot spots on titles or other text adjacent to relevant images.

We Cover More Territory – But We’re Also More Efficient

So, to sum up, it appears that with our new two-step foraging strategy, we’re covering more of the page, at least on our first scan, but Google is offering richer information scent, allowing us to zero in on the most promising “chunks” of information on the page. Once we find them, we are quicker to click on a promising result.

Next week, I’ll look at the implications of this new behavior on organic optimization strategies.

The Evolution of Google’s Golden Triangle

In search marketing circles, most everyone has heard of Google’s Golden Triangle. It even has its own Wikipedia entry (which is more than I can say). The “Triangle” is rapidly coming up to its 10th birthday (it was March of 2005 when Did It and Enquiro – now Mediative – first released the study). This year, Mediative conducted a new study to see if what we found a decade ago still holds true. A second study, from the Institute of Communication and Media Research in Cologne, Germany, looked at the evolution of search user behaviors. I’ll run through the findings of both studies to see if the Golden Triangle still exists. But before we dive in, let’s look back at the original study.

Why We Had a Golden Triangle in the First Place

To understand why the Golden Triangle appeared in the first place, you have to understand how humans look for relevant information. For this, I’m borrowing heavily from Peter Pirolli and Stuart Card at PARC and their Information Foraging Theory (by the way, absolutely every online marketer, web designer and usability consultant should be intimately familiar with this theory).

Foraging for Information

Humans “forage” for information. In doing so, they are very judicious about the amount of effort they expend to find the available information. This is largely a subconscious activity, with our eyes rapidly scanning for cues of relevancy. Pirolli and Card refer to these cues as “information scent.” Picture a field mouse scrambling across a table looking for morsels to eat and you’ll have an appropriate mental context in which to understand the concept of information foraging. In most online contexts, our initial evaluation of the amount of scent on a page takes no more than a second or two. In that time, we also find the areas that promise the greatest scent and go directly to them. To use our mouse analogy, the first thing she does is scurry quickly across the table to see where the scent of possible food is the greatest.

The Area of Greatest Promise

Now, imagine that same mouse comes back day after day to the same table, and every time she returns, she finds the greatest amount of food is always in the same corner. After a week or so, she learns that she doesn’t have to scurry across the entire table. All she has to do is go directly to that corner and start there. If, by some fluke, there is no food there, the mouse can again check out the rest of the table to see if there are better offerings elsewhere. The mouse has been conditioned to go directly to the “Area of Greatest Promise” first.

[Heat map: the original 2005 Golden Triangle]

F Shaped Scanning

This was exactly the case when we did the first eye tracking study in 2005. Google had set a table of available information, but it always put the best information in the upper left corner. We became conditioned to go directly to that area of greatest promise. The triangle shape came about because of the conventions of how we read in the Western world. We read top to bottom, left to right. So, to pick up information scent, we would first scan down the beginning of each of the top 4 or 5 listings. If we saw something that seemed to be a good match, we would scan across the title of the listing. If it was still a good match, we would quickly scan the description and the URL. If Google was doing its job right, there would be more of this lateral scanning on the top listing than on the subsequent listings. This F-shaped scanning strategy would naturally produce the Golden Triangle pattern we saw.

Working Memory and Chunking

There was another behavior we saw that helped explain the heat maps that emerged. Our ability to actively compare options requires us to hold in our minds information about each of the options. This means that the number of options we can compare at any one time is restricted by the limits of our working memory. George Miller, in a famous 1956 paper, determined this to be seven pieces of information, plus or minus two. The actual number depends on the type of information to be retained and the dimension of variability. In search foraging, the dimension is relevancy, and the inputs to the calculation are quick judgments of information scent based on a split-second scan of each listing. This is a fairly complex assessment, so we found that the number of options a user compares at once tends to max out at about 3 or 4 listings. This means that the user “chunks” the page into groupings of 3 or 4 listings and determines if one of the listings is worthy of a click. If not, the user moves on to the next chunk. We also see this in the heat map shown. Scanning activity drops dramatically after the first 4 listings. In our original study, we found that over 80% of first clicks on all the results pages tested came from the top 4 listings. This is also likely why Google restricted the paid ads shown above the organic results to three at most.

So, that’s a quick summary of our findings from the 2005 study. Next week, we’ll look at how search scanning has changed in the past 9 years.

Note: Mediative and SEMPO will be hosting a Google+ Hangout talking about their research on October 14th. Full details can be found here.

Learning about Big Data from Big Brother

You may not have heard of ICREACH, but it has probably heard of you. ICREACH is the NSA’s own Google-like search engine. And if Google’s mission is to organize the world’s information, ICREACH’s mission is to snoop on the world. After super whistleblower Edward Snowden tipped the press off to the existence of ICREACH, the NSA fessed up last month. The amount of data we’re talking about is massive. According to The Intercept website, the tool can handle two to five billion new records every day, including data on U.S. emails, phone calls, faxes, Internet chats and text messages. It’s Big Brother meets Big Data.

I’ll leave aside for the moment the ethical aspect of this story.  What I’ll focus on is how the NSA deals with this mass of Big Data and what it might mean for companies who are struggling to deal with their own Big Data dilemmas.

Perhaps no one deals with more Big Data than the Intelligence Community. And Big Data is not new for them. They’ve been digging into data, trying to find meaningful signals amongst the noise, for decades. And the stakes of successful data analysis are astronomically high here. It’s a matter of life and death – a failure to successfully connect the dots can lead to the kinds of nightmares that will haunt us for the rest of our lives. When the pressure is on to this extent, you can be sure that they’ve learned a thing or two. How the Intelligence Community handles data is something I’ve been looking at recently. There are a few lessons to be learned here.

Owned Data vs Environmental Data

The first lesson is that you need different approaches for different types of data. The Intelligence Community has its own files, which include analysts’ reports, suspect files and other internally generated documentation. Then you have what I would call “Environmental” data. This includes raw data gathered from emails, phone calls, social media postings and cellphone locations. Raw data needs to be successfully crunched, screened for signal vs. noise and then interpreted in a way that’s relevant to the objectives of the organization. That’s where…

You Need to Make Sense of the Data – at Scale

Probably the biggest change in the Intelligence community has been to adopt an approach called “Sense making.”  Sense making really mimics how we, as humans, make sense of our environment. But while we may crunch a few hundred or thousand sensory inputs at any one time, the NSA needs to crunch several billion signals.

Human intuition expert Gary Klein has done much work in the area of sense making. His view of sense making relies on the existence of a “frame” that represents what we believe to be true about the world around us at any given time.  We constantly update that frame based on new environmental inputs.  Sometimes they confirm the frame. Sometimes they contradict the frame. If the contradiction is big enough, it may cause us to discard the frame and build a new one. But it’s this frame that allows us to not only connect the dots, but also to determine what counts as a dot. And to do this…

You Have to Be Constantly Experimenting

Crunching of the data may give you the dots, but there will be multiple ways to connect them. A number of hypothetical “frames” will emerge from the raw data. You need to test the validity of these hypotheses. In some cases, they can be tested against your own internally controlled data. Sometimes they will lie beyond the limits of that data. This means adopting a rigorous and objective testing methodology.  Objective is the key word here, because…

You Need to Remove Human Limitations from the Equation

When you look at the historic failures of intelligence gathering, the fault usually doesn’t lie in the “gathering.” The signals are often there. Frequently, they’re even put together into a workable hypothesis by an analyst. The catastrophic failures in intelligence generally arise because someone, somewhere, made an intuitive call to ignore the information because they didn’t agree with the hypothesis. Internal politics in the Intelligence Community has probably been the single biggest point of failure. Finally…

Data Needs to Be Shared

The ICREACH project came about as a way to allow broader access to the information required to identify warning signals and test out hunches. ICREACH opens up this data pool to nearly two dozen U.S. government agencies.

Big Data shouldn’t replace intuition. It should embrace it. Humans are incredibly proficient at recognizing patterns. In fact, we’re too good at it. False positives are a common occurrence. But, if we build an objective way to validate our hypotheses and remove our irrational adherence to our own pet theories, more is almost always better when it comes to generating testable scenarios.

Technology is Moving Us Closer to a Perfect Market

I have two very different travel profiles. When I travel on business, I usually stick with the big chains, like Hilton or Starwood. The experience is less important to me than predictability. I’m not there for pleasure; I’m there to sleep. And, because I travel on business a lot (or used to), I have status with them. If something goes wrong, I can wave my Platinum or Diamond guest card around and act like a jerk until it gets fixed.

But, if I’m traveling for pleasure, I almost never stay in a chain hotel. In fact, more and more, I stay in a vacation rental house or apartment. It’s a little less predictable than your average Sheraton or Hampton Inn, but it’s almost always a better value. For example, if I were planning a last-minute getaway to San Francisco for Labor Day weekend, I’d be shelling out just under $400 for a fairly average hotel room at the Hilton by Union Square. But for about the same price, I could get an entire 4-bedroom house that sleeps 8 just two blocks from Golden Gate Park. And that was with just a quick search on AirBnB.com. I could probably find a better deal with the investment of a few minutes of my time.

Travel is just one of the markets that technology has made more perfect. And when I say “perfect,” I use the term in its economic sense. A perfect market has perfect competition, which means that the barriers to entry have been lowered and most of the transactional costs have been eliminated. The increased competition lowers prices to a sustainable minimum. At that point, the market approaches a Pareto optimal state, in which no participant can be made better off without making another participant worse off.

Whether a perfect market is a good thing or not depends on your perspective. If you’re a long-term participant in the market and your goal is to make the biggest profit possible, a perfect market is the last thing you want. If you’re a new entrant to the market, it’s a much rosier story – any shifts that take the market closer to a Pareto Optimal will probably be to your benefit. And if you’re a customer, you’re in the best position of all. Perfect markets lead inevitably to better value.

Since the advent of VRBO.com and, more recently, AirBnB.com, the travel marketplace has moved noticeably closer to being perfect. Sites like these, along with travel review aggregators like TripAdvisor.com, have significantly reduced the transaction costs of the travel industry. The first wave was the reduction of search costs. Property owners were able to publish listings in a directory that made it easy to search and filter options. Then, the publishing of reviews gave us the confidence we needed to stray beyond the predictably safe territory of the big chains.

But, more recently, a second wave has further reduced transaction costs for independent vacation property owners. I was recently talking to a cousin who rents out his flat in Dublin through AirBnB, which takes all the headaches of vacation property management away in return for a cut of the action. He was up and running almost immediately and has had no problem renting his flat during the weeks he makes it available. He found the barriers to entry to be essentially zero. A cottage industry of property managers and key exchange services has sprung up around the AirBnB model.

What technology has done to the travel industry is essentially turn it into a Long Tail business model. As Chris Anderson pointed out in his book, Long Tail markets need scale-free networks. Scale-free networks only work when transaction costs are eliminated and entry into the market is free of friction. When this happens, the Power Law distribution still stays in place, but the tail becomes longer. The Long Tail of Tourism now includes millions of individually owned vacation properties. For example, AirBnB has almost 800 rentals available in Dublin alone. According to Booking.com, that’s about 7 times the total number of hotels in the city.

Another thing that happens is that, over time, the tail becomes fatter. More business moves from the head to the tail. The Pareto Principle states that in Power Law distributions, 20% of the participants get 80% of the business. Online, the ratio is closer to 72/28.
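
To make that head-versus-tail arithmetic concrete, here is a minimal sketch in Python. The catalogue size and the power-law exponents are illustrative assumptions of mine, not figures from Anderson’s book or the travel data above; the point is simply that flattening the demand curve shifts a power-law market from roughly 80/20 toward something like 72/28.

```python
# Illustrative sketch only: how flattening a power-law demand curve moves
# business from the head of the market to the tail. The catalogue size and
# exponents below are assumptions for demonstration, not data from the column.

def head_share(n_sellers: int, exponent: float, top_fraction: float = 0.2) -> float:
    """Fraction of total demand captured by the top `top_fraction` of sellers,
    assuming demand for the seller at rank r is proportional to 1 / r**exponent."""
    demand = [1.0 / (rank ** exponent) for rank in range(1, n_sellers + 1)]
    head_count = int(n_sellers * top_fraction)
    return sum(demand[:head_count]) / sum(demand)

if __name__ == "__main__":
    n = 1000  # e.g. hotels plus vacation rentals in one city (assumed)
    for s in (1.0, 0.9, 0.8):
        print(f"exponent {s}: top 20% of sellers take {head_share(n, s):.0%} of demand")
    # Roughly 79%, 72% and 64% respectively -- a steadily fatter tail as
    # friction falls and demand spreads further down the ranking.
```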

These shifts in the market are more than just interesting discussion topics for economists. They mark a fundamental change in the rules of the game. Markets that are moving towards perfection remove the advantages of size and incumbency and reward nimbleness and adaptability. They also, at least in this instance, make life more interesting for customers.

Why Cognitive Computing is a Big Deal When it comes to Big Data

[Image: Watson beating its human opponents at Jeopardy]

When IBM’s Watson won against humans playing Jeopardy, most of the world considered it just another man-against-machine novelty act – going back to Deep Blue’s defeat of chess champion Garry Kasparov in 1997. But it’s much more than that. As Josh Dreller reminded us a few Search Insider Summits ago, when Watson trounced Ken Jennings and Brad Rutter in 2011, it ushered in the era of cognitive computing. Unlike chess, where solutions can be determined solely with massive amounts of number crunching, winning Jeopardy requires a very nuanced understanding of the English language as well as an encyclopedic span of knowledge. Computers are naturally suited to chess. They’re also very good at storing knowledge. In both cases, it’s not surprising that they would eventually best humans. But parsing language is another matter. For a machine to best a man here requires something quite extraordinary. It requires a machine that can learn.

The most remarkable thing about Watson is that no human programmer wrote the program that made it a Jeopardy champion. Watson learned as it went. It evolved the winning strategy. And this marks a watershed development in the history of artificial intelligence. Now, computers have mastered some of the key rudiments of human cognition. Cognition is the ability to gather information, judge it, make decisions and problem solve. These are all things that Watson can do.

 

[Image: Peter Pirolli – PARC]

Peter Pirolli, one of the senior researchers at Xerox’s PARC campus in Palo Alto, has been doing a lot of work in this area. One of the things that has been difficult for machines has been to “make sense” of situations and adapt accordingly. Remember a few columns ago when I talked about narratives and Big Data? That’s where Monitor360 uses a combination of humans and computers – computers to do the data crunching and humans to make sense of the results. But as Watson showed us, computers do have the potential to make sense as well. True, computers have not yet matched humans in the ability to make sense of an unlimited variety of environmental contexts. We humans excel at quick and dirty sense making no matter what the situation. We’re not always correct in our conclusions, but we’re far more flexible than machines. But computers are constantly narrowing the gap, and as Watson showed, when a computer can grasp a cognitive context, it will usually outperform a human.

Part of the problem machines face when making sense of a new context is that the contextual information needs to be in a format that can be parsed by the computer. Again, this is an area where humans have a natural advantage. We’ve evolved to be very flexible in parsing environmental information to act as inputs for our sense making. But this flexibility has required a trade-off. We humans can go broad with our environmental parsing, but we can’t go very deep. We do a surface scan of our environment to pick up cues and then quickly pattern match against past experiences to make sense of our options. We don’t have the bandwidth to gather more information or to compute it more deeply. This is Herbert Simon’s Bounded Rationality.

But this is where Big Data comes in. Data is already native to computers, so parsing is not an issue. That handles the breadth issue. But the nature of data is also changing. The Internet of Things will generate a mind-numbing amount of environmental data. This “ambient” data has no schema or context to aid in sense making, especially when several different data sources are combined. It requires an evolutionary cognitive approach to separate potential signal from noise. Given the sheer volume of data involved, humans won’t be a match for this task. We can’t go deep into the data. And traditional computing lacks the flexibility required. But cognitive computing may be able to both handle the volume of environmental Big Data and make sense of it.

If artificial intelligence can crack the code on going both broad and deep into the coming storm of data, amazing things will certainly result from it.

The Human Stories that Lie Within Big Data

If I wanted to impress upon you the fact that texting and driving is dangerous, I could tell you this:

In 2011, at least 23% of auto collisions involved cell phones. That’s 1.3 million crashes, in which 3,331 people were killed. Texting while driving makes it 23 times more likely that you’ll be in a car accident.

Or, I could tell you this:

In 2009, Ashley Zumbrunnen wanted to send her husband a message telling him “I love you, have a good day.” She was driving to work and as she was texting the message, she veered across the centerline into oncoming traffic. She overcorrected and lost control of her vehicle. The car flipped and Ashley broke her neck. She is now completely paralyzed.

After the accident, Zumbrunnen couldn’t sit up, dress herself or bathe. She was completely helpless. Now a divorced single mom, she struggles to look after her young daughter, who recently said to her, “I like to go play with your friends, because they have legs and can do things.”

The first example gave you a lot more information. But the second example probably had more impact. That’s because it’s a story.

We humans are built to respond to stories. Our brains can better grasp messages that are in a narrative arc. We do much less well with numbers. Numbers are an abstraction and so our brains struggle with numbers, especially big numbers.

One company, Monitor360, is bringing the power of narratives to the world of big data. I chatted with CEO Doug Randall recently about Monitor360’s use of narratives to make sense of Big Data.

“We all have filters through which we see the world. And those filters are formed by our experiences, by our values, by our viewpoints. Those are really narratives. Those are really stories that we tell ourselves.”

For example, I suspect the things that resonated with you in Ashley’s story were the reason for the text – telling her husband she loved him – the irony that the marriage eventually failed after her accident, and the pain she undoubtedly felt when her daughter said she likes playing with her mom’s friends because they can still walk. None of those things really adds to our knowledge about the incidence rate of texting-and-driving accidents, but they all strike us at a deeply emotional level because we can picture ourselves in Ashley’s situation. We empathize with her. And that’s what a story is: a vehicle to help us understand the experiences of another.

Monitor360 uses narratives to tap into these empathetic hooks that lie in the mountain of information being generated by things like social media. It goes beyond abstract data to try to identify our beliefs and values. And then it uses narratives to help us make sense of our market. Monitor360 does this with a unique combination of humans and machines.

“A computer can collect huge amounts of data, and the computer can even sort that data. But ‘sense making’ is still very, very difficult for computers to do. So human beings go through that information, synthesize that information and pull out what the underlying narrative is.”

Monitor360 detects common stories in the noisy buzz of Big Data. In the stories we tell, we indicate what we care about.

“This is what’s so wonderful about Big Data. The Data actually tells us, by volume, what’s interesting. We’re taking what are the most often talked about subjects…the data is actually telling us what those subjects are. We then go in and determine what the underlying belief system in that is.”

Monitor360’s realization that it’s the narratives that we care about is an interesting approach to Big Data. It’s also encouraging to know that they’re not trying to eliminate human judgment from the equation. Empathy is still something we can trump computers at.

At least for now.

Two Views of the Promise of Technology

In the last two columns, I’ve looked at how technology may be making us intellectually lazy. The human brain tends to follow the path of least resistance, and technology’s goal is to eliminate resistance. Last week, I cautioned that this may end up making us both more shallow in our thinking and more fickle in our social ties. We may become an attention deficit society, skipping across the surface of the world. But this doesn’t necessarily have to be the case.

The debate is not a new one. Momentous technologies generally come complete with their own chorus of naysayers. Whether it’s the invention of writing, the printing press, electronic communication or digital media, the refrain is the same – this will be the end of the world as we know it. But if history has taught us anything, it’s that new technologies are seldom completely beneficial or harmful. Their lasting impact lies somewhere in the middle. With the good comes some bad.

The same will be true for the current digital technologies. The world will change, both for the positive and for the negative. The difference will come in how individuals use the technology. This will spread out along the inevitable bell curve.

Look at television, for instance. A sociologist could make a pretty convincing case for the benefits of TV. A better understanding of the global community helped ease our xenophobic biases. Public demand led to increased international pressure on repressive regimes. There has been a sociological leveling that is still happening across cultures. Civil rights and sexual equality were propelled by the coverage they received. Atrocities still happen with far too much regularity, but I personally believe the world is a less savage and brutal place than it was 100 years ago, partially due to the spread of TV.

On the flip side, we have developed a certain laziness of spirit that is fed by TV’s never-ending parade of entertainment to be passively consumed. We spend less time visiting our neighbors. We volunteer less. We’re less involved in our communities. Ironically, we’re a more idealistic society, but we make poorer neighbors.

The type of programming to be found on TV also shows that despite the passive nature of the medium, we didn’t become stupider en masse. Some of us use TV for enlightenment, and some of us use it to induce ourselves into a coma. At the end of the day, I think the positives and negatives of TV as a technology probably net out a little better than neutral.

I suspect the same thing is happening with digital media. Some of us are diving deeper and learning more than ever. Others are clicking their way through site after site of brain-porn. Perhaps there are universal effects that will show up over generations and tip the scale one way or the other, but we’re too early in the trend to see those yet. The fact is, digital technologies are not changing our brains in a vacuum. Our environment is also changing, and perhaps our brains are just keeping up. The 13-year-old who is frustrating the hell out of us today may be a much better match for the world 20 years from now.

I’ll wrap up by leaving three pieces of advice that seem to provide useful guides for getting the best out of new technologies.

First: A healthy curiosity is something we should never stop nurturing. In particular, I find it helpful to constantly ask “how” and “why.”

Second: Practice mindfulness. Be aware of your emotions and cognitive biases and recognize them for what they are. This will help you steer things back on track when they’re heading down an unhealthy path.

Third: Move from consuming content to contributing something meaningful. The discipline of publishing tends to push you beyond the shallows.

If you embrace the potential of technology, you may still find yourself as an outlier, but technology has done much to allow a few outliers to make a huge difference.

Are Our Brains Trading Breadth for Depth?

In last week’s column, I looked at how efficient our brains are. Essentially, if there’s a short cut to an end goal identified by the brain, it will find it. I explained how Google is eliminating the need for us to remember easily retrievable information. I also speculated about how our brains may be defaulting to an easier form of communication, such as texting rather than face-to-face communication.

Personally, I am not entirely pessimistic about the “Google Effect,” where we put less effort into memorizing information that can be easily retrieved on demand. This is an extension of Daniel Wegner’s “transactive memory,” and I would put it in the category of coping mechanisms. It makes no sense to expend brainpower on something that technology can do more easily, faster and more reliably. As John Mallin commented, this is like using a calculator rather than memorizing times tables.

Reams of research have shown that our memories can be notoriously inaccurate. In this case, I partially disagree with Nicholas Carr. I don’t think Google is necessarily making us stupid. It may be freeing up the incredibly flexible power of our minds, giving us the opportunity to redefine what it means to be knowledgeable. Rather than a storehouse of random information, our minds may have the opportunity to become more creative integrators of available information. We may be able to expand our “meta-memory,” Wegner’s term for the layer of memory that keeps track of where to turn for certain kinds of knowledge. Our memory could become an index of interesting concepts and useful resources, rather than ad-hoc scraps of knowledge.

Of course, this positive evolution of our brains is far from a given. And here Carr may have a point. There is a difference between “lazy” and “efficient.” Technology’s freeing up of the processing power of our brain is only a good thing if that power is then put to a higher purpose. Carr’s title, “The Shallows,” is a warning that rather than freeing up our brains to dive deeper into new territory, technology may just give us the ability to skip across the surface of the titillating. Will we waste our extra time and cognitive power going from one piece of brain candy to the next, or will we invest it by sinking our teeth into something important and meaningful?

A historical perspective gives us little reason to be optimistic. We evolved to balance the effort required to find food with the nutritional value we got from that food. It used to be damned hard to feed ourselves, so we developed preferences for high-calorie, high-fat foods that would go a long way once we found them. Thanks to technology, the only effort required today to get these foods is to pick them off the shelf and pay for them. We could have used technology to produce healthier and more nutritious foods, but market demands determined that we’d become an obese nation of junk food eaters. Will the same thing happen to our brains?

I am even more concerned with the short cuts that seem to be developing in our social networking activities. Typically, our social networks are built from both strong ties and weak ties. Mark Granovetter identified these two types of social ties in the ’70s. Strong ties bind us to family and close friends. Weak ties connect us with acquaintances. When we hit rough patches, as we inevitably do, we treat those ties very differently. Strong ties are typically much more resilient to adversity. When we hit the lowest points in our lives, it’s the strong ties we depend on to pull us through. Our lifelines are made up of strong ties. If we have a disagreement with someone with whom we have a strong tie, we work harder to resolve it. We have made large investments in these relationships, so we are reluctant to let them go. When there are disruptions in our strong tie network, there is a strong motivation to eliminate the disruption rather than sacrifice the network.

Weak ties are a whole different matter. We have minimal emotional investments in these relationships. Typically, we connect with these either through serendipity or when we need something that only they can offer. For example, we typically reinstate our weak tie network when we’re on the hunt for a job. LinkedIn is the virtual embodiment of a weak tie network. And if we have a difference of opinion with someone to whom we’re weakly tied, we just shut down the connection. We have plenty of them so one more or less won’t make that much of a difference. When there are disruptions in our weak tie network, we just change the network, deactivating parts of it and reactivating others.

Weak ties are easily built. All we need is just one thing in common at one point in our lives. It could be working in the same company, serving on the same committee, living in the same neighborhood or attending the same convention. Then, we just need some way to remember them in the future. Strong ties are different. Strong ties develop over time, which means they evolve through shared experiences, both positive and negative. They also demand consistent communication, including painful communication that sometimes requires us to say we were wrong and we’re sorry. It’s the type of conversation that leaves you either emotionally drained or supercharged that is the stuff of strong ties. And a healthy percentage of these conversations should happen face-to-face. Could you build a strong tie relationship without ever meeting face-to-face? We’ve all heard examples, but I’d always place my bets on face-to-face – every time.

It’s the hard work of building strong ties that I fear we may miss as we build our relationships through online channels. I worry that the brain, given an easy choice and a hard choice, will naturally opt for the easy one. Online, our network of weak ties can grow beyond the inherent limits of our social inventory, known as Dunbar’s Number (which is 150, by the way). We can always find someone with whom to spend a few minutes texting or chatting online. Then we can run off to the next one. We will skip across the surface of our social network, rather than invest the effort and time required to build strong ties. Just like our brains, our social connections may trade breadth for depth.

The Pros and Cons of a Fuel Efficient Brain

Your brain will only work as hard as it has to. And if it makes you feel any better, my brain is exactly the same. That’s the way brains work. They conserve horsepower until it’s absolutely needed. In the background, the brain is doing a constant calculation: “What do I want to achieve, and based on everything I know, what is the easiest way to get there?” You could call it lazy, but I prefer the term “efficient.”

The brain has a number of tricks to do this that involve relatively little thinking. In most cases, they involve swapping something that’s easy for your brain to do in place of something difficult. For instance, consider when you vote. It would be extraordinarily difficult to weigh all the factors involved to truly make an informed vote. It would require a ton of brainpower. But it’s very easy to vote for whom you like. We have a number of tricks we use to immediately assess whether we like and trust another individual. They require next to no brainpower. Guess how most people vote? Even those of us who pride ourselves on being informed voters rely on these brain short cuts more than we would like to admit.

Here’s another example that’s just emerging, thanks to search engines. It’s called the Google Effect, and it’s an extension of a concept called transactive memory. Researchers Betsy Sparrow, Jenny Liu and Daniel Wegner identified the Google Effect in 2011. Wegner first explained transactive memory back in the ’80s. Essentially, it means that we won’t bother to remember something that we can easily reference when we need it. When Wegner first described transactive memory, he used the example of a husband and wife. The wife was good at remembering important dates, such as anniversaries and birthdays. The husband was good at remembering financial information, such as bank balances and when bills were due. The wife didn’t have to remember financial details, and the husband didn’t have to worry about dates. All each had to remember was what the other was good at remembering. Wegner called this “chunking” of our memory requirements “metamemory.”

If we fast-forward 30 years from Wegner’s original paper, we find a whole new relevance for transactive memory, because we now have the mother of all “metamemories,” called Google. If we hear a fact but know that it is something that can easily be looked up on Google, our brains automatically decide to expend little to no effort in trying to memorize it. Subconsciously, the brain goes into power-saver mode. All we remember is that when we do need to retrieve the fact, it will be a few clicks away on Google. Nicholas Carr fretted about whether this and other cognitive short cuts were making us stupid in his book “The Shallows.”

But there are other side effects that come from the brain’s tendency to look for short cuts without our awareness. I suspect the same thing is happening with social connections. Which would you think required more cognitive effort: a face-to-face conversation with someone or texting them on a smartphone?

Face-to-face conversation can put a huge cognitive load on our brains. We’re receiving communication at a much greater bandwidth than with text. When we’re across from a person, we not only hear what they’re saying, we’re reading emotional cues, watching facial expressions, interpreting body language and monitoring vocal tones. It’s a much richer communication experience, but it’s also much more work. It demands our full attention. Texting, on the other hand, can easily be done along with other tasks. It’s asynchronous – we can pause and pick up whenever we want. I suspect it’s no coincidence that younger generations are moving more and more to text-based digital communication. Their brains are pushing them in that direction because it’s less work.

One of the great things about technology is that it makes our life easier. But is that also a bad thing? If we know that our brains will always opt for the easiest path, are we putting ourselves in a long, technology aided death spiral? That was Nicholas Carr’s contention. Or, are we freeing up our brains for more important work?

More on this to come next week.

When Are Crowds Not So Wise?

Since James Surowiecki published his book “The Wisdom of Crowds,” the common wisdom is – well – that we are commonly wise. In other words, if we average the knowledge of many people, we’ll be smarter than any of us would be individually. And that is true – to an extent. But new research suggests that there are group decision dynamics at play where bigger crowds may not always be better.

A recent study by Iain Couzin and Albert Kao at Princeton suggests that in real-world situations, where information is more complex and spotty, the benefit of crowd wisdom peaks in groups of 5 to 20 participants and then decreases. The difference comes in how the group processes the information available to it.

In Surowiecki’s book, he uses the famous example of Sir Francis Galton’s 1907 observation of a contest where villagers were asked to guess the weight of an ox. While no individual correctly guessed the weight, the average of all the guesses came in just one pound short of the correct number. But this example has one unique characteristic that would be rare in the real world – every guesser had access to the same information. They could all see the ox and make their guess. Unless you’re guessing the number of jellybeans in a jar, this is almost never the case in actual decision scenarios.

Couzin and Kao say this information “patchiness” is the reason why accuracy tends to diminish as the crowd gets bigger. In most situations, there is commonly understood and known information, which the researchers refer to as “correlated information.” But there is also information that only some of the members of the group have, which is “uncorrelated information.” To make matters even more complex, the nature of uncorrelated information will be unique to each individual member. In real life, this would be our own experience, expertise and beliefs.  To use a technical term, the correlated information would be the “signal” and the uncorrelated information would be the “noise.” The irony here is that this noise is actually beneficial to the decision process.

In big groups, the collected “noise” becomes so difficult to manage that it tends to get ignored. It drowns itself out. The collective focuses instead on the correlated information. In engineering terms, this higher signal-to-noise ratio would seem to be ideal, but in decision-making, it turns out a certain amount of noise is a good thing. By focusing just on the commonly known information, the bigger crowd over-simplifies the situation.

Smaller groups, in contrast, tend to be more random in their makeup. The differences in experiences, knowledge, beliefs and attitudes, even if not directly correlated to the question at hand, have a better chance of being preserved. They don’t get “averaged out” like they would in a bigger group. And this “noise” leads to better decisions when the situation involves imperfect information. Call it the averaging of intuition, or hunches. In a big group, the power of human intuition gets sacrificed in favor of the commonly knowable. But in a small group, it’s preserved.
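
To see how that mechanism could play out, here is a toy Monte Carlo sketch in Python. It is only a loose illustration of the Couzin/Kao idea, and every number in it (how often members follow the shared cue, and how reliable each cue is) is an assumption of mine rather than a parameter from their study.

```python
import random

# Toy simulation of shared ("correlated") vs. private ("uncorrelated") information
# in a majority-vote group decision. All parameters are illustrative assumptions,
# not values taken from the Couzin/Kao study.

P_FOLLOW_SHARED = 0.5    # chance a member votes with the cue everyone can see
SHARED_ACCURACY = 0.55   # how often that shared cue points the right way
PRIVATE_ACCURACY = 0.8   # how often an individual's private hunch is right
TRIALS = 20_000

def group_accuracy(group_size: int) -> float:
    correct_decisions = 0
    for _ in range(TRIALS):
        shared_cue_right = random.random() < SHARED_ACCURACY  # same for everyone
        right_votes = 0
        for _ in range(group_size):
            if random.random() < P_FOLLOW_SHARED:
                right_votes += shared_cue_right               # shared insight -- or shared error
            else:
                right_votes += random.random() < PRIVATE_ACCURACY  # independent hunch
        if right_votes * 2 > group_size:                      # strict majority wins
            correct_decisions += 1
    return correct_decisions / TRIALS

for size in (1, 3, 5, 9, 21, 51, 101):
    print(f"group of {size:3d}: accuracy {group_accuracy(size):.2f}")
```

With these assumed numbers, accuracy is highest for small groups and then slides back toward the shared cue’s 55% reliability as the group grows, because a large enough crowd ends up simply echoing the shared cue, right or wrong, once the independent hunches get averaged away.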

In the world of corporate strategy, this has some interesting implications. Business decisions are almost always complex and involve imperfectly distributed information. This research seems to indicate that we should carefully consider our decision-making units. There is a wisdom of crowds benefit as long as the crowd doesn’t get too big. We need to find a balance where we have the advantage of different viewpoints and experiences, but this aggregate “noise” doesn’t become unmanageable.