The Evolution of Google’s Golden Triangle

In search marketing circles, almost everyone has heard of Google’s Golden Triangle. It even has its own Wikipedia entry (which is more than I can say). The “Triangle” is rapidly coming up to its 10th birthday (it was March of 2005 when Did It and Enquiro – now Mediative – first released the study). This year, Mediative conducted a new study to see if what we found a decade ago still holds true. Another study, from the Institute of Communication and Media Research in Cologne, Germany, also looked at the evolution of search user behaviors. I’ll run through the findings of both studies to see if the Golden Triangle still exists. But before we dive in, let’s look back at the original study.

Why We Had a Golden Triangle in the First Place

To understand why the Golden Triangle appeared in the first place, you have to understand how humans look for relevant information. For this, I’m borrowing heavily from Peter Pirolli and Stuart Card at PARC and their Information Foraging Theory (by the way, absolutely every online marketer, web designer and usability consultant should be intimately familiar with this theory).

Foraging for Information

Humans “forage” for information. In doing so, they are very judicious about the amount of effort they expend to find the available information. This is largely a subconscious activity, with our eyes rapidly scanning for cues of relevancy. Pirolli and Card refer to this as “information scent.” Picture a field mouse scrambling across a table looking for morsels to eat and you’ll have an appropriate mental context in which to understand the concept of information foraging. In most online contexts, our initial evaluation of the amount of scent on a page takes no more than a second or two. In that time, we also find the areas that promise the greatest scent and go directly to them. To use our mouse analogy, the first thing she does is scurry quickly across the table to see where the scent of possible food is the greatest.

The Area of Greatest Promise

Now, imagine that same mouse comes back day after day to the same table, and every time she returns, she finds the greatest amount of food is always in the same corner. After a week or so, she learns that she doesn’t have to scurry across the entire table. All she has to do is go directly to that corner and start there. If, by some fluke, there is no food there, then the mouse can again check out the rest of the table to see if there are better offerings elsewhere. The mouse has been conditioned to go directly to the “Area of Greatest Promise” first.

[Image: the original Golden Triangle heat map]

F-Shaped Scanning

This was exactly the case when we did the first eye tracking study in 2005. Google had set a table of available information, but they always put the best information in the upper left corner. We became conditioned to go directly to the area of greatest promise. The triangle shape came about because of the conventions of how we read in the western world. We read top to bottom, left to right. So, to pick up information scent, we would first scan down the beginning of each of the top 4 or 5 listings. If we saw something that seemed to be a good match, we would scan across the title of the listing. If it was still a good match, we would quickly scan the description and the URL. If Google was doing its job right, there would be more of this lateral scanning on the top listing than there would be on the subsequent listings. This F-shaped scanning strategy would naturally produce the Golden Triangle pattern we saw.

Working Memory and Chunking

There was another behavior we saw that helped explain the heat maps that emerged. Our ability to actively compare options requires us to hold in our mind information about each of the options. This means that the number of options we can compare at any one time is restricted by the limits of our working memory. George Miller, in a famous paper in 1956, determined this to be 7 pieces of information, plus or minus two. The actual number depends on the type of information to be retained and the dimension of variability. In search foraging, the dimension is relevancy, and the inputs to the calculation are quick judgments of information scent based on a split-second scan of the listing. This is a fairly complex assessment, so we found that the number of options to be compared at once by the user tends to max out at about 3 or 4 listings. This means that the user “chunks” the page into groupings of 3 or 4 listings and determines if one of the listings is worthy of a click. If not, the user moves on to the next chunk. We also see this in the heat map shown. Scanning activity drops dramatically after the first 4 listings. In our original study, we found that over 80% of first clicks on all the results pages tested came from the top 4 listings. This is also likely why Google restricted the paid ads shown above the organic results to 3 at most.
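To make the chunking behavior concrete, here’s a minimal sketch of how a searcher might evaluate a results page one chunk at a time. This is my own toy illustration, not something from the study; the chunk size, scent scores and click threshold are all made-up values.

```python
CHUNK_SIZE = 4          # roughly the working-memory limit we saw for listings
CLICK_THRESHOLD = 0.7   # hypothetical "enough scent to click" cutoff

def scan_page(scent_scores, chunk_size=CHUNK_SIZE, threshold=CLICK_THRESHOLD):
    """Evaluate listings one chunk at a time. Within each chunk, pick the
    listing with the strongest information scent; click it if it clears the
    threshold, otherwise move on to the next chunk."""
    for start in range(0, len(scent_scores), chunk_size):
        chunk = scent_scores[start:start + chunk_size]
        best_index, best_scent = max(enumerate(chunk, start), key=lambda pair: pair[1])
        if best_scent >= threshold:
            return best_index      # the first click comes from this chunk
    return None                    # nothing promising; reformulate the query

# Toy scent scores for ten listings, strongest near the top of the page
scores = [0.9, 0.75, 0.6, 0.5, 0.4, 0.35, 0.3, 0.2, 0.15, 0.1]
print(scan_page(scores))  # -> 0: the click lands in the top chunk
```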

So, that’s a quick summary of our findings from the 2005 study. Next week, we’ll look at how search scanning has changed in the past 9 years.

Note: Mediative and SEMPO will be hosting a Google+ Hangout talking about their research on October 14th. Full details can be found here.

Want to Be More Strategic? Stand Up!

One of the things that always frustrated me in my professional experience was my difficulty in switching from tactical to strategic thinking. For many years, I served on a board that was responsible for the strategic direction of an organization. A friend of mine, Andy Freed, served as an advisor to the board. He constantly lectured us on the difference between strategy and tactics:

“Strategy is your job. Tactics are mine. Stick to your job and I’ll stick to mine.”

Despite this constant reminder, our discussions always seemed to quickly spiral down to the tactical level. We all caught ourselves doing it. It seemed that as soon as we started thinking about what needed to be done and why, we automatically shifted gears and thought about how it should be done.

A recent study may have found the problem. We were sitting down. We should have stood up. Better yet, we should have taken the elevator to the top of the building (we actually did do this at one board retreat in Scottsdale, Arizona). Two researchers at the University of Toronto (home, I should point out, of what was the tallest freestanding structure in the world for many years – the CN Tower), Pankaj Aggarwal and Min Zhao, found that a subject’s physical situation impacted how strategically they thought. When subjects were physically higher up, say standing on a tall stool, they were more likely to look at the “big picture.”

Our physical context has more than a little impact on how we think. It’s a phenomenon called Mental Construal. And it’s not just restricted to how strategic our thinking is. It can impact things like social judgment as well. In a 2006 paper, University of Michigan professor Norbert Schwarz gave some examples that fall under the category of “situated concepts.” For example, the mental images you retrieve when I say “chair” might be different if we’re standing in a living room rather than an airplane or movie theatre. Another example, which unfortunately speaks to a darker side of human nature, is how you would respond to the face of a young African American when shown in the context of a church scene versus the context of a street corner scene.

Schwarz also talks about levels of construal. We’re more successful at staying at strategic levels when our planning is trouble-free. The minute we hit a problem, we tend to revert to finer-grained tactical thinking. Again, in my board experience, the minute we started hitting problems we immediately tried to solve them, which effectively derailed any strategic discussion.

In his book Creativity: Flow and the Psychology of Discovery and Invention, Mihaly Csikszentmihalyi found that physical contexts can also impact creativity. Physicist Freeman Dyson found that walking was essential to drive the creative process:

“Again, I never went to a class that (Richard) Feynman taught. I never had any official connection with him at all, in fact. But we went for walks. Most of the time that I spent with him was actually walking, like the old style of philosophers who used to walk around under the cloisters.”

In a study where subjects were given pagers and were signaled at random times of the day, they were asked to rate how creative they felt. It turned out the highest level of creativity came while they were walking, driving or swimming. Perhaps it was the physical stimulation, but it may have also been mental construal at work. Perhaps physical movement primed the brain for mental movement.

So, if you need to be strategic, find the highest vantage point possible, with room to walk around, preferably with the smartest person you know.

Two Views of the Promise of Technology

In the last two columns, I’ve looked at how technology may be making us intellectually lazy. The human brain tends to follow the path of least resistance and technology’s goal is to eliminate resistance. Last week, I cautioned that this may end up making us both more shallow in our thinking and more fickle in our social ties. We may become an attention deficit society, skipping across the surface of the world. But, this doesn’t necessarily have to be the case.

The debate is not a new one. Momentous technologies generally come complete with their own chorus of naysayers. Whether it’s the invention of writing, the printing press, electronic communication or digital media, the refrain is the same – this will be the end of the world as we know it. But if history has taught us anything, it’s that new technologies are seldom completely beneficial or harmful. Their lasting impact lies somewhere in the middle. With the good comes some bad.

The same will be true for the current digital technologies. The world will change, both for the positive and for the negative. The difference will come in how individuals use the technology. This will spread out along the inevitable bell curve.

Look at television, for instance. A sociologist could make a pretty convincing case for the benefits of TV. A better understanding of the global community helped ease our xenophobic biases. Public demand led to increased international pressure on repressive regimes. There was a sociological leveling that is still happening across cultures. Civil rights and sexual equality were propelled by the coverage they received. Atrocities still happen with far too much regularity, but I personally believe the world is a less savage and brutal place than it was 100 years ago, partially due to the spread of TV.

On the flip side, we have developed a certain laziness of spirit that is fed by TV’s never-ending parade of entertainment to be passively consumed. We spend less time visiting our neighbors. We volunteer less. We’re less involved in our communities. Ironically, we’re a more idealistic society but we make poorer neighbors.

The type of programming to be found on TV also shows that despite the passive nature of the medium, we didn’t become stupider en masse. Some of us use TV for enlightenment, and some of us use it to induce ourselves into a coma. At the end of the day, I think the positives and negatives of TV as a technology probably net out a little better than neutral.

I suspect the same thing is happening with digital media. Some of us are diving deeper and learning more than ever. Others are clicking their way through site after site of brain-porn. Perhaps there are universal effects that will show up over generations and tip the scale one way or the other, but we’re too early in the trend to see those yet. The fact is, digital technologies are not changing our brains in a vacuum. Our environment is also changing, and perhaps our brains are just keeping up. The 13-year-old who is frustrating the hell out of us today may be a much better match for the world 20 years from now.

I’ll wrap up by leaving three pieces of advice that seem to provide useful guides for getting the best out of new technologies.

First: A healthy curiosity is something we should never stop nurturing. In particular, I find it helpful to constantly ask “how” and “why.”

Second: Practice mindfulness. Be aware of your emotions and cognitive biases and recognize them for what they are. This will help you steer things back on track when they’re leading down an unhealthy path.

Third: Move from consuming content to contributing something meaningful. The discipline of publishing tends to push you beyond the shallows.

If you embrace the potential of technology, you may still find yourself as an outlier, but technology has done much to allow a few outliers to make a huge difference.

Are Our Brains Trading Breadth for Depth?

In last week’s column, I looked at how efficient our brains are. Essentially, if there’s a short cut to an end goal identified by the brain, it will find it. I explained how Google is eliminating the need for us to remember easily retrievable information. I also speculated about how our brains may be defaulting to an easier form of communication, such as texting rather than face-to-face communication.

Personally, I am not entirely pessimistic about the “Google Effect,” where we put less effort into memorizing information that can be easily retrieved on demand. This is an extension of Daniel Wegner’s “transactive memory”, and I would put it in the category of coping mechanisms. It makes no sense to expend brainpower on something that technology can do easier, faster and more reliably. As John Mallin commented, this is like using a calculator rather than memorizing times tables.

Reams of research have shown that our memories can be notoriously inaccurate. In this case, I partially disagree with Nicholas Carr. I don’t think Google is necessarily making us stupid. It may be freeing up the incredibly flexible power of our minds, giving us the opportunity to redefine what it means to be knowledgeable. Rather than a storehouse of random information, our minds may have the opportunity to become more creative integrators of available information. We may be able to expand our “meta-memory”, Wegner’s term for the layer of memory that keeps track of where to turn for certain kinds of knowledge. Our memory could become an index of interesting concepts and useful resources, rather than ad-hoc scraps of knowledge.

Of course, this positive evolution of our brains is far from a given. And here Carr may have a point. There is a difference between “lazy” and “efficient.” Technology’s freeing up of the processing power of our brain is only a good thing if that power is then put to a higher purpose. Carr’s title, “The Shallows,” is a warning that rather than freeing up our brains to dive deeper into new territory, technology may just give us the ability to skip across the surface of the titillating. Will we waste our extra time and cognitive power going from one piece of brain candy to the other, or will we invest it by sinking our teeth into something important and meaningful?

A historical perspective gives us little reason to be optimistic. We evolved to balance the efforts required to find food with the nutritional value we got from that food. It used to be damned hard to feed ourselves, so we developed preferences for high calorie, high fat foods that would go a long way once we found them. Thanks to technology, the only effort required today to get these foods is to pick them off the shelf and pay for them. We could have used technology to produce healthier and more nutritious foods, but market demands determined that we’d become an obese nation of junk food eaters. Will the same thing happen to our brains?

I am even more concerned with the short cuts that seem to be developing in our social networking activities. Typically, our social networks are built both from strong ties and weak ties. Mark Granovetter identified these two types of social ties in the 70’s. Strong ties bind us to family and close friends. Weak ties connect us with acquaintances. When we hit rough patches, as we inevitably do, we treat those ties very differently. Strong ties are typically much more resilient to adversity. When we hit the lowest points in our lives, it’s the strong ties we depend on to pull us through. Our lifelines are made up of strong ties. If we have a disagreement with someone with whom we have a strong tie, we work harder to resolve it. We have made large investments in these relationships, so we are reluctant to let them go. When there are disruptions in our strong tie network, there is a strong motivation to eliminate the disruption, rather than sacrifice the network.

Weak ties are a whole different matter. We have minimal emotional investments in these relationships. Typically, we connect with these either through serendipity or when we need something that only they can offer. For example, we typically reinstate our weak tie network when we’re on the hunt for a job. LinkedIn is the virtual embodiment of a weak tie network. And if we have a difference of opinion with someone to whom we’re weakly tied, we just shut down the connection. We have plenty of them so one more or less won’t make that much of a difference. When there are disruptions in our weak tie network, we just change the network, deactivating parts of it and reactivating others.

Weak ties are easily built. All we need is just one thing in common at one point in our lives. It could be working in the same company, serving on the same committee, living in the same neighborhood or attending the same convention. Then, we just need some way to remember them in the future. Strong ties are different. Strong ties develop over time, which means they evolve through shared experiences, both positive and negative. They also demand consistent communication, including painful communication that sometimes requires us to say we were wrong and we’re sorry. It’s the type of conversation that leaves you either emotionally drained or supercharged that is the stuff of strong ties. And a healthy percentage of these conversations should happen face-to-face. Could you build a strong tie relationship without ever meeting face-to-face? We’ve all heard examples, but I’d always place my bets on face-to-face – every time.

It’s the hard work of building strong ties that I fear we may miss as we build our relationships through online channels. I worry that the brain, given an easy choice and a hard choice, will naturally opt for the easy one. Online, our network of weak ties can grow beyond the inherent limits of our social inventory, known as Dunbar’s Number (which is 150, by the way). We could always find someone with whom to spend a few minutes texting or chatting online. Then we can run off to the next one. We will skip across the surface of our social network, rather than invest the effort and time required to build strong ties. Just like our brains, our social connections may trade breadth for depth.

The Pros and Cons of a Fuel Efficient Brain

Your brain will only work as hard as it has to. And if it makes you feel any better, my brain is exactly the same. That’s the way brains work. They conserve horsepower until it’s absolutely needed. In the background, the brain is doing a constant calculation: “What do I want to achieve and, based on everything I know, what is the easiest way to get there?” You could call it lazy, but I prefer the term “efficient.”

The brain has a number of tricks to do this that involve relatively little thinking. In most cases, they involve swapping something that’s easy for your brain to do in place of something difficult. For instance, consider when you vote. It would be extraordinarily difficult to weigh all the factors involved to truly make an informed vote. It would require a ton of brainpower. But it’s very easy to vote for whom you like. We have a number of tricks we use to immediately assess whether we like and trust another individual. They require next to no brainpower. Guess how most people vote? Even those of us who pride ourselves on being informed voters rely on these brain short cuts more than we would like to admit.

Here’s another example that’s just emerging, thanks to search engines. It’s called the Google Effect, and it’s an extension of a concept called Transactive Memory. Researchers Betsy Sparrow, Jenny Liu and Daniel Wegner identified the Google Effect in 2011. Wegner first explained transactive memory back in the ’80s. Essentially, it means that we won’t bother to remember something that we can easily reference when we need it. When Wegner first talked about transactive memory, he used the example of a husband and wife. The wife was good at remembering important dates, such as anniversaries and birthdays. The husband was good at remembering financial information, such as bank balances and when bills were due. The wife didn’t have to remember financial details and the husband didn’t have to worry about dates. All they had to remember was what the other was good at remembering. Wegner called this “chunking” of our memory requirements “metamemory.”

If we fast-forward 30 years from Wegner’s original paper, we find a whole new relevance for transactive memory, because we now have the mother of all “metamemories”, called Google. If we hear a fact but know that it can easily be looked up on Google, our brains automatically decide to expend little to no effort in trying to memorize it. Subconsciously, the brain goes into power-saver mode. All we remember is that when we do need to retrieve the fact, it will be a few clicks away on Google. Nicholas Carr fretted about whether this and other cognitive short cuts were making us stupid in his book “The Shallows.”

But there are other side effects that come from the brain’s tendency to look for short cuts without our awareness. I suspect the same thing is happening with social connections. Which would you think required more cognitive effort: a face-to-face conversation with someone or texting them on a smartphone?

Face-to-face conversation can put a huge cognitive load on our brains. We’re receiving communication at a much greater bandwidth than with text. When we’re across from a person, we not only hear what they’re saying, we’re reading emotional cues, watching facial expressions, interpreting body language and monitoring vocal tones. It’s a much richer communication experience, but it’s also much more work. It demands our full attention. Texting, on the other hand, can easily be done along with other tasks. It’s asynchronous – we can pause and pick up whenever we want. I suspect it’s no coincidence that younger generations are moving more and more to text-based digital communication. Their brains are pushing them in that direction because it’s less work.

One of the great things about technology is that it makes our life easier. But is that also a bad thing? If we know that our brains will always opt for the easiest path, are we putting ourselves in a long, technology aided death spiral? That was Nicholas Carr’s contention. Or, are we freeing up our brains for more important work?

More on this to come next week.

When Are Crowds Not So Wise?

Since James Surowiecki published his book “The Wisdom of Crowds”, the common wisdom is – well – that we are commonly wise. In other words, if we average the knowledge of many people, we’ll be smarter than any of us would be individually. And that is true – to an extent. But new research suggests that there are group decision dynamics at play where bigger (crowds) may not always be better.

A recent study by Iain Couzin and Albert Kao at Princeton suggests that in real-world situations, where information is more complex and spotty, the benefit of crowd wisdom peaks in groups of 5 to 20 participants and then decreases after that. The difference comes in how the group processes the information available to it.

In Surowiecki’s book, he uses the famous example of Sir Francis Galton’s 1907 observation of a contest where villagers were asked to guess the weight of an ox. While no individual correctly guessed the weight, the average of all the guesses came in just one pound short of the correct number. But this example has one unique characteristic that would be rare in the real world – every guesser had access to the same information. They could all see the ox and make their guess. Unless you’re guessing the number of jellybeans in a jar, this is almost never the case in actual decision scenarios.

Couzin and Kao say this information “patchiness” is the reason why accuracy tends to diminish as the crowd gets bigger. In most situations, there is commonly understood and known information, which the researchers refer to as “correlated information.” But there is also information that only some of the members of the group have, which is “uncorrelated information.” To make matters even more complex, the nature of uncorrelated information will be unique to each individual member. In real life, this would be our own experience, expertise and beliefs.  To use a technical term, the correlated information would be the “signal” and the uncorrelated information would be the “noise.” The irony here is that this noise is actually beneficial to the decision process.

In big groups, the collected “noise” gets so noisy it becomes difficult to manage and so it tends to get ignored. It drowns itself out. The collective focuses instead on the correlated information. In engineering terms this higher signal-to-noise ratio would seem to be ideal, but in decision-making, it turns out a certain amount of noise is a good thing. By focusing just on the commonly known information, the bigger crowd over-simplifies the situation.

Smaller groups, in contrast, tend to be more random in their makeup. The differences in experiences, knowledge, beliefs and attitudes, even if not directly correlated to the question at hand, have a better chance of being preserved. They don’t get “averaged out” like they would in a bigger group. And this “noise” leads to better decisions if the situation involves imperfect information. Call it the averaging of intuition, or hunches. In a big group, the power of human intuition gets sacrificed in favor of the commonly knowable. But in a small group, it’s preserved.
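To see the idea play out numerically, here’s a toy simulation. It is not Couzin and Kao’s actual model; the noise levels and the rule that bigger groups lean harder on the shared cue are my own assumptions, made purely for illustration.

```python
import random
import statistics

TRUE_VALUE = 100.0

def group_estimate(n, shared_weight):
    """Average the guesses of n members. Everyone shares one noisy
    'correlated' cue; each member also holds a private 'uncorrelated' cue."""
    correlated_cue = TRUE_VALUE + random.gauss(0, 15)    # same error for everyone
    guesses = []
    for _ in range(n):
        private_cue = TRUE_VALUE + random.gauss(0, 15)   # independent error
        guesses.append(shared_weight * correlated_cue +
                       (1 - shared_weight) * private_cue)
    return statistics.mean(guesses)

def mean_error(n, shared_weight, trials=5000):
    return statistics.mean(abs(group_estimate(n, shared_weight) - TRUE_VALUE)
                           for _ in range(trials))

# Assumption: the bigger the group, the more it leans on the commonly known cue.
for n in (1, 5, 20, 100):
    weight = min(0.9, 0.3 + n / 200)   # hypothetical weighting rule
    print(n, round(mean_error(n, weight), 2))
# Under these assumptions, error dips for mid-sized groups and rises again for
# big ones: the private "noise" averages out, but the shared error never does.
```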

In the world of corporate strategy, this has some interesting implications. Business decisions are almost always complex and involve imperfectly distributed information. This research seems to indicate that we should carefully consider our decision-making units. There is a wisdom of crowds benefit as long as the crowd doesn’t get too big. We need to find a balance where we have the advantage of different viewpoints and experiences, but this aggregate “noise” doesn’t become unmanageable.

The Power of Meta

First published April 24, 2014 in Mediapost’s Search Insider

To the best of our knowledge, humans are the only species capable of thinking about thinking, even though most of us don’t do it very often. We use the Greek word “meta” to talk about this ability. Basically, “meta” refers to a concept which is an abstraction of another concept – an instruction sheet for whatever the original thing is.

Because humans can grasp this concept, it can be a powerful way to overcome the limits of our genetic programming. Daniel Kahneman’s book, Thinking, Fast and Slow, is essentially a meta-guide to the act of thinking – an owner’s guide for our minds. In it, he catalogs evolution’s extensive list of cognitive “gotchas” that can waylay our rational reasoning.

In our digital world, we use the word “metadata” a lot. Essentially, metadata is a guide to the subject data. It sits above the data in question, providing essential information about it, such as sources, structure, indexing guides, etc. Increasingly, as we get data from more and more disparate sources, metadata will be required to use it. Ideally, it will provide a universally understood implementation guide. This, of course, requires a common schema for metadata; something that organizations like schema.org are currently working on.
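As a small illustration, here’s what a schema.org-style metadata record for an article might look like, expressed as JSON-LD from Python. The vocabulary (@context, @type, headline and so on) comes from schema.org; the particular values and the choice of properties are just examples on my part.

```python
import json

# A minimal schema.org-style description of an article, expressed as JSON-LD.
article_metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "The Power of Meta",
    "datePublished": "2014-04-24",
    "author": {"@type": "Person", "name": "Gord Hotchkiss"},
    "publisher": {"@type": "Organization", "name": "MediaPost"},
    "about": ["metadata", "cognitive bias", "Big Data"],
}

# Serialized, this is the machine-readable layer that sits above the content itself.
print(json.dumps(article_metadata, indent=2))
```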

Meta is a relatively new concept that has exploded in the last few decades. It’s one of those words we throw around but probably don’t stop to think about. Its power lies in its ability to “mark up” the complexity of the real world, giving us another functional layer in which to operate. But it also allows us to examine ourselves and overcome some of the mental foibles we’re subject to.

According to Wikipedia, there are over 160 cognitive biases that can impact our ability to rationally choose the optimal path. They include the Cheerleader Effect, where individuals appear more attractive in a group; the IKEA Effect, where we overvalue something we assemble ourselves; and the Google Effect, where we tend to forget information we know we can look up on Google. These are like little bugs in our operating software, and most of the time they impact our rational performance without us even being aware of them. But if we have a meta-awareness of them, we can mitigate them to a large degree. We can step back from our decision process and see where biases may be clouding our judgment.

Meta also allows us to model and categorize complexity. It allows us to append data to data, exponentially increasing the value of the aggregated data set. This becomes increasingly important in the new era of Big Data. The challenge with Big Data is that it’s not only more data; in this case, more is different. Big Data typically comes from multiple structured sources, and when it’s removed from the guidance of its native contextual schema, it becomes unwieldy. A metadata layer gives us a Rosetta Stone with which we can integrate these various data sources. And it’s in combining data in new ways that the value of Big Data can be found.

Perhaps the most interesting potential of meta is in how we might create a meta-model of ourselves. I’ve talked about this before in the context of social media. Increasingly, our interactions with technology will gain value from personalization. Each of us will be generating reams of personal data. There needs to be an efficient connection between the two. We can’t invest the time required to train all these platforms, tools and apps to know us better. It makes sense to consolidate the most universally applicable data about us into a meta-profile of our goals, preferences and requirements. In effect, it will be a technologically friendly abstraction of who we are. If we can agree on a common schema for these meta-profiles, the developers of technology can build their various tools to recognize them and reconfigure their functionality to be tailor-made for us.
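For what it’s worth, here’s a rough sketch of what such a meta-profile might look like. Every field name here is invented for illustration; as noted above, no common schema for this exists yet.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class MetaProfile:
    """A hypothetical, portable abstraction of a person that tools and apps
    could read, instead of each one having to learn our preferences from scratch."""
    goals: list = field(default_factory=list)
    preferences: dict = field(default_factory=dict)
    constraints: dict = field(default_factory=dict)

me = MetaProfile(
    goals=["get healthier", "read more long-form content"],
    preferences={"communication": "text over voice", "news": "in-depth analysis"},
    constraints={"privacy": "no location sharing"},
)

# Any tool that understood the (hypothetical) schema could configure itself from this.
print(asdict(me))
```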

As our world becomes more complex, the power of meta will become more and more important.

Five Years Later – An Answer to Lance’s Question (kind of)

It never ceases to amaze me how writing can take you down the most unexpected paths, if you let it. Over 5 years ago now, I wrote a post called “Chasing Digital Fluff – Who Cares about What’s Hot?” It was a rant, and it was aimed at marketers’ preoccupation with whatever the latest bright shiny object was. At the time, it was social. My point was that true loyalty needs stable habits to emerge. If you’re constantly chasing the latest thing, your audience will be in a constant state of churn. You’d be practicing “drive-by” marketing. If you want to find stability, target what your audience finds useful.

This post caused my friend Lance Loveday to ask a very valid question: “What about entertainment?” Do we develop loyalty to things that are entertaining? So, I started a series of posts on the Psychology of Entertainment. What types of things do we find entertaining? How do we react to stories, or humor, or violence? And how do audiences build around entertainment? As I explored the research on the topic, I came to the conclusion that entertainment is a by-product of several human needs – the need to bond socially, the need to be special, our appreciation for others whom we believe to be special, a quest for social status, and artificially stimulated tweaks to our oldest instincts – to survive and to procreate. In other words, after a long and exhausting journey, I concluded that entertainment lives in our phenotype, not our genotype. Entertainment serves no direct evolutionary purpose, but it lives in the shadows of many things that do.

So, what does this mean for stability of an audience for entertainment? Here, there is good news, and bad news. The good news is that the raw elements of entertainment haven’t really changed that much in the last several thousand years. We can still be entertained by a story that the ancient Romans might have told. Shakespeare still plays well to a modern audience. Dickens is my favorite author and it’s been 144 years since his last novel was published. We haven’t lost our evolved tastes for the basic building blocks of entertainment. But, on the bad news side, we do have a pretty fickle history when it comes to the platforms we use to consume our entertainment.

This then introduces a conundrum for the marketer. Typically, our marketing channels are linked to platforms, not content. And technology has made this an increasingly difficult challenge. While we may connect to, and develop a loyalty for, specific entertainment content, it’s hard for marketers to know which platform we may consume that content on. Take Dickens, for example. Even if you, the marketer, know there’s a high likelihood that I may enjoy something by Dickens in the next year, you won’t know if I’ll read a book on my iPad, pick up an actual book or watch a movie on any one of several screens. I’m loyal to Dickens, but I’m agnostic as to which platform I use to connect with his work. As long as marketing is tied to entertainment channels, and not entertainment content, we are restricted to targeting our audience in an ad hoc and transitory manner. This is one reason why brands have rushed to use product placement and other types of embedded advertising, where the message is set free from the fickleness of platform delivery challenges. If you happen to be a fan of American Idol, you’re going to see the Coke and Ford brands displayed prominently whether you watch on TV, your laptop, your tablet or your smartphone.

It’s interesting to reflect on the evolution of electronic media advertising and how it’s come full circle in this one regard. In the beginning, brands sponsored specific shows. Advertising messages were embedded in the content. Soon, however, networks, which controlled the only consumption choice available, realized it was far more profitable to decouple advertising from the content and run it in freestanding blocks during breaks in their programming. This decoupling was fine as long as there was no fragmentation in the channels available to consume the content, but obviously this is no longer the case. We now watch TV on our schedule, at our convenience, through the device of our choice. Content has been decoupled from the platform, leaving the owners of those platforms scrambling to evolve their revenue models.

So – we’re back to the beginning. If we want to stabilize our audience to allow for longer-term relationship building, what are our options? Obviously, entertainment offers some significant challenges in this regard, due mainly to the fragmentation of the platforms we use to consume that content. If we use usefulness as a measure, the main factors in determining loyalty are frequency and stability. If you provide a platform that becomes a habit, as Google has, then you’ll have a fairly stable audience. It won’t destabilize until there is a significant enough resetting of user expectations, forcing the audience to abandon habits (always very tough to do) and start searching for another useful tool that is a better match for the reset expectations. If this happens, you’ll be continually following your audience through multiple technology adoption curves. Still, it seems that usefulness offers a better shot at a stable audience than entertainment.

But there’s still one factor we haven’t explored – what part does social connection play? Obviously, this is a huge question that the revenue models of Facebook, Twitter, Snapchat and others will depend on. So, with entertainment and usefulness explored ad nauseam, in the next series of posts I’ll start tracking down the Psychology of Social Connection.

The Psychology of Usefulness: A New Model for Technology Acceptance.

In the last post, I reviewed the various versions of the Technology Acceptance Model. Today, I’d like to share my own thoughts on the subject and propose a new model. But first, I’d like to bring an entirely different model into the discussion.

Introduction of Sense Making

I like Gary Klein’s Theory of Sense Making – a lot! And in the area of technology acceptance, I think it has to be part of the discussion. It introduces a natural Bayesian rhythm to the process that I think provides an intuitive foundation for our decisions on whether or not we’ll accept a new technology.

[Image: Klein’s Data-Frame Model of Sensemaking, from Gary Klein et al., “How Might ‘Transformational’ Technologies and Concepts Be Barriers to Sensemaking in Intelligence Analysis?”]

Essentially, the Sense Making Model says that when we try to make sense of something new, we begin with some type of perspective, belief or viewpoint. In Bayesian terms, this would be our prior. In Klein’s model, he called it a frame.

Now, this frame doesn’t only give us a context in which to absorb new data, it actually helps define what counts as data. This is a critical concept to remember, because it dramatically impacts everything that follows. Imagine, for example, that you arrive on the scene of a car accident. If your frame was that of a non-involved bystander, the data you might seek in making sense of the situation would be significantly different than if your frame was that of a person who recognized one of the vehicles involved as belonging to your next-door neighbor.

In the case of technology acceptance, this initial frame will shape what types of data we would seek in order to further qualify our decision. If we start with a primarily negative attitude, we would probably seek data that would confirm our negative bias. The opposite would be true if we were enthusiastic about the adoption of technology. For this reason, I believe the creation of this frame should be a step in any proposed acceptance model.

But Sense Making also introduces the concept of iterative reasoning. After we create our frame, we do a kind of heuristic “gap analysis” on our frame. We prod and poke to see where the weaknesses are. What are the gaps in our current knowledge? Are there inconsistencies in the frame? What is our level of conviction on our current views and attitudes? The weaker the frame, the greater our need to seek new data to strengthen it. This process happens without a lot of conscious consideration. For most of us, this testing of the frame is probably a subconscious evaluation that then creates an emotional valence that will impact future behavior. On one extreme, it could be a strongly held conviction, on the other it would be a high degree of uncertainty.

If we decide we need more data, the Sense Making Model introduces another “Go/No Go” decision point. If the new data confirms our initial frame, we elaborate that frame, making it more complete. We fill in gaps, strengthen beliefs, discard non-aligned data and update our frame. If our sense making is in support of a potential action and we seem to be heading in the right direction with our data foraging, this can be an iterative process that continually updates our frame until it’s strong enough to push us over the threshold of executing that action.

But, if the new data causes serious doubt about our initial frame, we may need to consider “reframing,” in which case we’d have to seek new frames, compare them against our existing one and potentially discard it in favor of one of the new alternatives. This essentially returns us to square one, where we need to find data to elaborate the new frame. And there the cycle starts again.

This double-loop learning process illustrates that a decision process, such as accepting a new technology, can loop back on itself at any point, and may do so at several points. More than this, it is always susceptible to a “reframing” incident, where new data may cause the existing frame to be totally discarded, effectively derailing the acceptance process.
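To make the elaborate/reframe loop a little more tangible, here’s a minimal sketch of it in code. This is my own translation of the idea, with arbitrary thresholds, not Klein’s formal model.

```python
def update_frame(confidence, evidence, step=0.15, reframe_floor=0.4):
    """One pass through the sensemaking loop.

    confidence: how strongly we currently hold the frame (0.0 to 1.0)
    evidence:   +1 if the new data supports the frame, -1 if it contradicts it
    Returns the updated confidence and the action taken."""
    if evidence > 0:
        # Elaborate: fill gaps and strengthen the existing frame.
        return min(1.0, confidence + step), "elaborate"
    confidence = max(0.0, confidence - step)
    if confidence < reframe_floor:
        # Serious doubt: discard the frame and start over with a new one.
        return 0.5, "reframe"
    return confidence, "question the frame, seek more data"

confidence = 0.5
for evidence in (+1, +1, -1, -1, -1):   # confirming data, then a run of contradictions
    confidence, action = update_frame(confidence, evidence)
    print(round(confidence, 2), action)
# The frame gets elaborated while the data confirms it, then collapses into a reframe.
```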

Revisiting Goal Striving

I also like Bagozzi’s Goal Striving model, for reasons outlined in a previous post. I won’t rehash them here, except to say that this model introduces a broader context that is more aligned with the complexity of our typical decision process. In this case, our desire to achieve goals is a fundamental part of the creation of the original frame, which forms the starting point for our technology acceptance decision. Here, the Goal Desire step, at the left side of the model, could effectively be the frame that then gets updated as we move from Goal Intention to Behavioral Desire, and then once again as we move to Behavioral Intention. All the inputs in Bagozzi’s model, shown as both external factors (i.e. group norms) and internal factors (emotions, etc.), would serve as data in either the updating or reframing loops in Klein’s model.

[Image: Bagozzi’s purchasing behavior adoption model]

A New Model

As the final step in this rather long process I’ve been dragging you through for the last several posts, I put forward a new proposed model for technology acceptance.

[Image: the proposed technology acceptance model]

I’ve attempted to include elements of Sense Making, Goal Striving and some of the more valuable elements from the original Technology Acceptance Models. I’ve also tried to show that this is an iterative journey – a series of data gathering and consideration steps, each of which can result in either a decision to move forward (elaborate the frame) or move backwards to a previous step (reframe). The entire model is shown below, but we’ll break it down into pieces to explore each step a little more deeply.

 

Setting the Frame

[Image: setting the frame – the Goal Intention stage]

The first step is to set the original frame, which is the Goal Intention. In this case, a goal is either presented to us, or we set the goal ourselves. The setting of this goal is the trigger to establish both a cognitive and an emotional frame that sets the context for everything that follows. Factors that go into the creation of the Goal Intention can include both positive and negative emotions, our attitudes towards the success of the goal, how it will impact our current situation (affect towards the means), and what we expect as far as outcomes. These factors will determine how robust our Goal Intention is, which will factor heavily in any subsequent decisions that are made as part of this Goal Intention, including the decision to accept or reject any relevant technologies required to execute on it.

We can assume, because there is not an updating step shown here, that once the Goal Intention is formed, the person will move forward to the next step – the retrieval of internal information and the creation of our attitude towards the Goal to be achieved.

The Internal Update

[Image: the internal update stage]

With the setting of the goal intention, we have our frame. Now, it’s up to us to update that frame. Again, our confidence in this initial frame will determine how much data we feel we need to collect to update it. This follows Herbert Simon’s heuristic rules of thumb for Bounded Rationality. If we’re highly confident in our frame (to the point where it’s entrenched as a belief) we’ll seek little or no data, and if we do, the data we seek will tend to be confirmatory. If we’re less confident in our frame, we’ll actively go and forage for more data, and we’ll probably be more objective in our judgement of that data. Again, remember, Klein’s Sense Making model says that our frame determines what we define as data.

The first update will be a heuristic and largely subconscious one. We’ll retrieve any relevant information from our own memory. This information, which may be positive or negative in nature, will be assembled into an “attitude” towards the technology. This is our first real conscious evaluation of the technology in question. It would be akin to a Bayesian “prior” – a starting point for subsequent evaluation. It also represents an updating of the original frame. We’ve moved from Goal Intention to an emotional judgement of the technology to be evaluated.

The creation of the “Attitude” also requires us to begin the risk/reward balancing, similar to Charnov’s Marginal Value Theorem used in optimal foraging. Negative items we retrieve increase risk, positive ones increase reward. The balance between the two determines our next action. From this point forward, each updating of the frame leads us to a new decision point. At this decision point, we have to decide whether we move forward (elaborate our frame) or return to an earlier point in the decision process, with the possibility that we may need to reframe at that point. Each of these represents a “friction point” in the decision process, with reward driving the process forward and risk introducing new friction. At the attitude stage, excessive risk may cause us to go all the way back to reconsidering the goal intention. Does the goal as we understand it still seem like the best path forward, given the degree of risk we have now assigned to the execution of that goal?

Let’s assume we’ve decided to move forward. Now we have to take that Attitude and translate it into Desire. Desire brings social aspects into the decision. Will the adoption of the technology elevate our social status? Will it cause us to undertake actions that may not fit into the social norms of the organization, or square well with our own social ethics? These factors will have a moderating effect on our desire. Even if we agree that the technology in question may meet the goal, our desire may flag because of the social costs that go along with the adoption decision. Again, this represents a friction point, where our desire may be enough to carry us forward, or where it may not be strong enough, causing us to re-evaluate our attitude towards the technology. If we bump back to the “Attitude” stage, a sufficiently negative judgement may in turn bump us even further back to goal intention.

The External Update

[Image: the external update stage]

With the next stage, we’ve moved from Desire to Intention. Up to now the process has been primarily internal and also primarily either emotional or heuristic. There has been little to no rational deliberation about whether or not to accept the technology in question. The frame that has been created to this point is an emotional and attitudinal frame.

But now, assuming that this frame is open to updating with more information, the process becomes more open to external variables and also to the input of data gathered for the express purpose of rational consideration. We start openly canvassing the opinions of others (subjective norm) and evaluating the technology based on predetermined factors. In the language of marketing, this is the consumer’s “consideration” stage. We know the next step is Action – where our intention becomes translated into behavior. In the previous TAM models, this step was a foregone conclusion. Here, however, we see that it’s actually another decision friction point. If the data we gather doesn’t support our intention, action will not result. We will loop back to Goal Intention and start looking for alternatives. At the very least, this one stage may loop back on itself, resulting in iterative cycles of setting new data criteria, gathering this data and pushing towards either a “go” or “no go” decision. Only when there is sufficient forward momentum will we move to action.

Here, at the Action stage, our evaluation will rely on experiential feedback. At this point, we resurrect the concepts of “Ease of Use” and “Perceived Usefulness” from previous versions of TAM. In this case, the Intention stage would have constructed an assumed “prior” for each of these – a heuristic assessment of how easy it will be to use the technology and also the usefulness of it. This then gets compared to our actual use of the technology. If the bar of our expectations is not met, the degree of friction increases, holding us back from repeating the action, which is required to entrench it as a behavior. This will be a Charnovian balancing act. If the usefulness is sufficient, we will put up with a shortfall in the perceived ease of use. On the flip side, no matter how easy the tool is to use, if it doesn’t deliver on our expectation of usefulness, it will get rejected. Too much friction at this point will result in a loop back to the Intention stage (where we may reassess our evaluation of the technology to see if the fault lies with us or with the tool) and will possibly cause a reversion all the way to our Goal Intention.

If our experience meets our expectation, repetition will begin to create an organizational behavior. At this stage, we move from trial usage to embedding the technology into our processes. At this point, organizational feedback becomes the key evaluative criterion. Even if we love the technology, sufficient negative feedback from the organization will cause us to re-evaluate our intention. Finally, if the technology being evaluated successfully navigates past this chain of decision points without becoming derailed, it becomes entrenched. We then evaluate whether it successfully plays its part in the attainment of our goals. This brings us full circle, back to the beginning of the process.
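Pulling the stages together, here’s a rough sketch of the whole journey as code. The stage names follow the model described above, but the numbers are arbitrary: “momentum” is just a stand-in for the reward-versus-risk balance at each friction point.

```python
import random

STAGES = ["goal intention", "attitude", "desire", "intention", "action", "behavior"]

def run_adoption(momentum, max_steps=25):
    """Walk the stages of the proposed model. At each friction point we either
    move forward (elaborate the frame) or loop back a stage (reframe)."""
    stage = 0
    for _ in range(max_steps):
        if stage == len(STAGES) - 1:
            return "technology entrenched as behavior"
        if random.random() < momentum:
            stage += 1                     # enough reward: elaborate and advance
        else:
            stage = max(0, stage - 1)      # too much risk: fall back and reconsider
    return f"stalled at '{STAGES[stage]}'"

random.seed(42)
print(run_adoption(momentum=0.8))   # low friction: adoption is likely
print(run_adoption(momentum=0.4))   # high friction: likely to stall or loop back
```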

Summing Up

The original goal of the Technology Acceptance Model was to provide a testable model to predict adoption. My goal is somewhat different: showing technology adoption as a series of Sense Making and Goal Attainment decisions, each offering the opportunity to move forward to the next stage or loop back to a previous one. In extreme cases, it may result in outright rejection of the technology. As far as testing for predictability, this is not the parsimonious model envisioned by Venkatesh, but then again, I suspect parsimony was sacrificed by Venkatesh and his contributing authors somewhere in the multiple revisions that were offered.

This is a model of Bayesian decision making, and I believe it could be applied to many considered decision scenarios. One could map most higher end consumer purchases on the same decision path. The value of the model is in understanding each stage of the decision path and the factors that both introduce risk related friction and reward related momentum. Ideally, it would be fascinating to start to identify representative risk/reward thresholds at each point, so factors can be rebalanced to achieve a successful outcome.

As we talk about the friction in these decision points, it’s also important to remember that we will all have different set points for how we balance risk and reward. When it comes to technology acceptance, our set point will determine where we fall on Everett Rogers’ diffusion of innovations curve.

 

Those with a high tolerance for risk and an enhanced ability to envision reward will fall to the far left of the curve, either as Innovators or Early Adopters. Rogers noted in Diffusion of Innovations:

Innovators may…possess a type of mental ability that better enables them to cope with uncertainty and to deal with abstractions. An innovator must be able to conceptualize relatively abstract information about innovations and apply this new information to his or her own situation.

Those with a low tolerance for risk and an inability to envision rewards will be to the far right, falling into the Laggard category. The rest of us, representing 68% of the general population, will fall somewhere in between. So, in trying to predict the acceptance of any particular technology, it will be important to assess the innovativeness of the individual making the decision.

This hypothetical model represents a culmination of the behaviors I’ve observed in many B2B adoption decisions. I’ve always stressed the importance of understanding the risk/reward balance of your target customers. I’ve also mapped out how this can vary from role to role in organizational acceptance decisions.

This post, which is currently pushing 3000 words, is lengthy enough for today. In the next post, I’ll revisit what this new model might mean for our evaluation of usefulness and subsequent user loyalty.

The Psychology of Usefulness – Part Five: A Recap

In the past five posts, I’ve been looking at how we choose to accept new technologies. As part of that, we’ve had a fairly exhaustive review of the various versions of the Technology Acceptance Models proposed by Fred Davis, Richard Bagozzi and, most prolifically, Viswanath Venkatesh.

Before forging ahead, I’d like to provide a brief recap of the primary thoughts behind the models.

In the first post, I explored the difference between autotelic and exotelic activities. The first we do for the sheer enjoyment of the activity itself – our reward is inherent in the doing. Exotelic activities are the things we do because we have to. There is little to no reward in them. Generally, when we’re judging usefulness, it’s to complete an exotelic activity. In judging usefulness, the emotion most commonly invoked is an aversion to risk – so it carries a negative emotional valence, although a relatively mild one – typically invoking anxiety or concern rather than outright fear or dread. The degree of emotional valence is generally quite low – it’s more a calculation of the resources required vs. the usefulness expected.

Next, in the second post, I explained why I believe that our judgement of usefulness is based on a fairly heuristic calculation by the brain. It’s similar to the mechanisms we use when foraging for food. Because of that belief, I’ve borrowed heavily from the previous work done by Pirolli and Card on Information Foraging and also Eric Charnov’s work on Optimal Foraging and his Marginal Value Theorem.

Because there’s little emotional engagement, we also tend to make useful resources habits if the frequency of use is high enough. This is the ground I covered in posts Three and Four. First, using Google as an example, I looked at how habits are created and maintained. Then, in the next post, I looked at the factors that might disrupt a habit, forcing us to look for a viable alternative. The more the brain has to be involved in judging usefulness, the less loyal we tend to be.

Also, we will only seek new technologies if: a) our current technology no longer meets our expectations, which are often reset because b) we’ve become aware of a new, superior technology.

Then, we have to decide whether or not to accept a new technology. There have been several attempts to create a model that can predict the acceptance of a new technology. Most relied on the same foundational assumptions:

  • Some mix of internal and external motivators will result in the creation of an attitude.
  • Depending on the valence of the attitude (either negative or positive) we may form an intention to use the technology.
  • Once this intention is formed, it leads to usage.

All the modifications to the model (5 revisions at last count) focused on the first two of these three assumptions, offering alternatives for the motivators that create the attitude. Some versions removed the attitude step completely and moved directly to intention. But none of them changed the assumed progression from intention to usage.

The useful parts of these models that I wanted to carry forward are:

  • Intentions are formed by a heuristic balancing of negative and positive factors in the adopter’s mind, often labeled Perceived Usefulness and Perceived Ease of Use.
  • External factors, such as the opinions of others, impact our decisions to adopt a technology.
  • The cognitive process involved roughly corresponds to a Bayesian analysis, where we set a “prior” – our original attitude – and update it based on new information gathered through the decision process.

The potentially flawed assumptions I would like to leave behind are:

  • The process is typically a linear one, moving from the left of the model (attitude) to the right of the model (usage).
  • There are no mediating factors between the intention and usage boxes in any of the models.

In 2007, one of the original authors of the first TAM, Bagozzi, said it was time for a paradigm shift in thinking about technology acceptance. He brought in an entirely new context in which to think about the acceptance of technology – the striving for and achievement of goals. This created a more holistic view of the decision process, where the acceptance of technology wasn’t artificially isolated, but was part of a much broader frame in which that acceptance was contingent on a hierarchy of goals and sub-goals. What I particularly liked was the addition of “desire” as a step, and also the introduction of self-regulation as a mediating factor. Bagozzi was the first to indicate that the process was possibly more recursive – an iterative cycle rather than a linear path.

Bagozzi’s inclusion of goal setting and achievement builds a context for adoption. This aligns with Everett Rogers’ extensive work in innovation adoption, in which he said,

An important factor regarding the adoption rate of an innovation is its compatibility with the values, beliefs, and past experiences of individuals in the social system.

While the acceptance of a technology may be a personal decision, it is almost always set within a broader social context. All the versions of Technology Acceptance Models I looked at included some type of social mediation in the acceptance process. But it was more of a factor in the creation of an initial attitude and a mediating factor in the progression from attitude to intention to behavior. In other words, if the acceptance of a technology made you socially unpopular, you would probably change your mind and reject it.

But when we choose to achieve a goal, there is a cognitive process that happens which creates a framework for acceptance. The goal becomes the primary evaluative topic and the technology generally becomes secondary to it. Bagozzi recognized that the two are interlinked and have to be evaluated together. We choose a goal, divide this into sub-goals and then seek how to execute against these goals.

Let’s use a personal example to see how goals and technology acceptance are intrinsically linked. Let’s say our goal is to get healthier. This breaks down into several sub-goals: beginning a regular exercise routine, eating better, losing weight, drinking less, etc. Each of these can then be further divided into more specific goals. Let’s take eating better. It could involve tracking our calories, paying more attention to nutrition labels, including more fresh fruits and vegetables in our diet, cutting out sugar and avoiding processed foods. At this point, we may decide to use a tool like Livestrong‘s MyPlate Calorie tracker. If you were to use one of the various versions of TAM to predict the acceptance of this new technology, you would artificially divorce the act of acceptance from the broader goal hierarchy that precedes it. According to TAM, your acceptance of MyPlate would be determined by your evaluation of its ease of use and expected usefulness. While undoubtedly important, these two factors are completely dependent on the mental scaffolding you’ve built around the idea of getting healthier. There are a myriad of factors beyond these that would have some impact on your eventual acceptance or rejection of the technology in question. For example, perhaps you decide that calorie counting is not the best path to eating better, and so any tool that counts calories gets rejected out of hand. Or perhaps you fall off the wagon with your eating plan and reject the tool not because it’s not useful, or easy to use, but simply because counting calories constantly reminds you how weak your willpower is.

So, with the past posts recapped, in the next post we’ll forge forward with a proposed new Technology Acceptance Model.