The Maturity Continuum of Social Media

Social channels will come and go. Why are we still surprised by this? Just last week, Catharine Taylor talked about the ennui that’s threatening to silence Twitter. Frankly, the only thing surprising about this is that Twitter has had as long a run as it has. Let’s face it: if ever there was a social media one-trick pony, it’s Twitter.

The fact is, if you are a player in the social media space, you have to accept that usage patterns follow a unique maturity evolution. It’s a much more fickle audience than you would find in something like content publishing or search. The channels we use to express ourselves socially are subject to an extraordinary amount of irrational behavior. We project onto them our beliefs about who we are and how we fit into our social networks. This leaves them vulnerable to sudden shifts in usage, simply because large chunks of the audience may suddenly change their minds about what is socially acceptable. And this is what’s currently happening to Twitter.

This is compounded by the fact that we’re talking about technology here, so where we perceive ourselves to be on the technology acceptance curve will have an impact on the social channels we find acceptable. If we think we’re early adopters, we’ll be quicker to move to whatever is new. Not only that, we’ll be unduly influenced by what we see other early adopters doing.

The Maturity Continuum for Social is as follows:

It’s a Fad – You use it because everyone else (in your circle of influence) is doing it. Early adopters are particularly susceptible to this effect. They’ll be the ones to test out new channels and tools, simply because they are new. But that momentum doesn’t last long. New entrants will also have to prove that they have at least a certain amount of functionality, and, more importantly, something unique that users can identify with. If this is the case, they will transition to the second phase:

It’s a Statement – You use it because it makes a statement about who you are. And with technology, it’s usually about how cutting edge you are. This makes it particularly prone to abandonment. But there are other factors at play here. Is it all business (LinkedIn) or all fun (Snapchat)? A small percentage of the user base will stick in this phase, becoming brand loyalists. The majority, however, will move on to the third phase:

It’s a Tool – You use it because it’s the best tool for the job. Here, functionality trumps all. It’s in these last two phases where rationality finally takes hold. The sheen of BSOS (Bright Shiny Object Syndrome) has faded, and we’ll only continue using it if it provides better functionality for the task at hand than any of the alternatives. The problem here is that functional supremacy is a never-ending arms race. Sooner or later, something better will come along (if it successfully navigates the first two phases). This is typically the end of the road for most social media one-trick ponies, and it’s what is currently staring Twitter in the face.

It’s a Platform – You use it because the landscape is familiar. Here you rely on habitual “stickiness” with users and something called UI cognitive lock-in. Essentially, this is an online real estate play. If you’ve had a long run as a single-purpose tool and have developed a large user base, you have to expand that into a familiar landscape before a new contender unseats you as the tool of choice. This is what Facebook and LinkedIn are currently trying to do. And, to survive, it’s what Twitter must do as well. By assembling a number of tools, you increase the cost of switching to the point where it doesn’t make sense for most users.

Each of these phases has different usage profiles, which directly impact their respective business models. More on that next week.

 

Two Views of the Promise of Technology

In the last two columns, I’ve looked at how technology may be making us intellectually lazy. The human brain tends to follow the path of least resistance, and technology’s goal is to eliminate resistance. Last week, I cautioned that this may end up making us both more shallow in our thinking and more fickle in our social ties. We may become an attention-deficit society, skipping across the surface of the world. But this doesn’t necessarily have to be the case.

The debate is not a new one. Momentous technologies generally come complete with their own chorus of naysayers. Whether it’s the invention of writing, the printing press, electronic communication or digital media, the refrain is the same – this will be the end of the world as we know it. But if history has taught us anything, it’s that new technologies are seldom completely beneficial or harmful. Their lasting impact lies somewhere in the middle. With the good comes some bad.

The same will be true for the current digital technologies. The world will change, both for the positive and for the negative. The difference will come in how individuals use the technology. This will spread out along the inevitable bell curve.

Look at television, for instance. A sociologist could make a pretty convincing case for the benefits of TV. A better understanding of the global community helped ease our xenophobic biases. Public demand led to increased international pressure on repressive regimes. There was a sociological leveling that is still happening across cultures. Civil rights and sexual equality were propelled by the coverage they received. Atrocities still happen with far too much regularity, but I personally believe the world is a less savage and brutal place than it was 100 years ago, partially due to the spread of TV.

On the flip side, we have developed a certain laziness of spirit that is fed by TV’s never-ending parade of entertainment to be passively consumed. We spend less time visiting our neighbors. We volunteer less. We’re less involved in our communities. Ironically, we’re a more idealistic society, but we make poorer neighbors.

The type of programming to be found on TV also shows that despite the passive nature of the medium, we didn’t become stupider en masse. Some of us use TV for enlightenment, and some of us use it to induce ourselves into a coma. At the end of the day, I think the positives and negatives of TV as a technology probably net out a little better than neutral.

I suspect the same thing is happening with digital media. Some of us are diving deeper and learning more than ever. Others are clicking their way through site after site of brain-porn. Perhaps there are universal effects that will show up over generations and tip the scale one way or the other, but we’re too early in the trend to see those yet. The fact is, digital technologies are not changing our brains in a vacuum. Our environment is also changing, and perhaps our brains are just keeping up. The 13-year-old who is frustrating the hell out of us today may be a much better match for the world 20 years from now.

I’ll wrap up by leaving three pieces of advice that seem to provide useful guides for getting the best out of new technologies.

First: A healthy curiosity is something we should never stop nurturing. In particular, I find it helpful to constantly ask “how” and “why.”

Second: Practice mindfulness. Be aware of your emotions and cognitive biases and recognize them for what they are. This will help you steer things back on track when they’re heading down an unhealthy path.

Third: Move from consuming content to contributing something meaningful. The discipline of publishing tends to push you beyond the shallows.

If you embrace the potential of technology, you may still find yourself as an outlier, but technology has done much to allow a few outliers to make a huge difference.

Are Our Brains Trading Breadth for Depth?

In last week’s column, I looked at how efficient our brains are. Essentially, if there’s a shortcut to an end goal, the brain will find it. I explained how Google is eliminating the need for us to remember easily retrievable information. I also speculated about how our brains may be defaulting to easier forms of communication, such as texting rather than face-to-face conversation.

Personally, I am not entirely pessimistic about the “Google Effect,” where we put less effort into memorizing information that can be easily retrieved on demand. This is an extension of Daniel Wegner’s “transactive memory”, and I would put it in the category of coping mechanisms. It makes no sense to expend brainpower on something that technology can do easier, faster and more reliably. As John Mallin commented, this is like using a calculator rather than memorizing times tables.

Reams of research have shown that our memories can be notoriously inaccurate. In this case, I partially disagree with Nicholas Carr: I don’t think Google is necessarily making us stupid. It may be freeing up the incredibly flexible power of our minds, giving us the opportunity to redefine what it means to be knowledgeable. Rather than a storehouse of random information, our minds may have the opportunity to become more creative integrators of available information. We may be able to expand our “meta-memory,” Wegner’s term for the layer of memory that keeps track of where to turn for certain kinds of knowledge. Our memory could become an index of interesting concepts and useful resources, rather than ad-hoc scraps of knowledge.

Of course, this positive evolution of our brains is far from a given. And here Carr may have a point. There is a difference between “lazy” and “efficient.” Technology’s freeing up of the processing power of our brain is only a good thing if that power is then put to a higher purpose. Carr’s title, “The Shallows” is a warning that rather than freeing up our brains to dive deeper into new territory, technology may just give us the ability to skip across the surface of the titillating. Will we waste our extra time and cognitive power going from one piece of brain candy to the other, or will we invest it by sinking our teeth into something important and meaningful?

A historical perspective gives us little reason to be optimistic. We evolved to balance the efforts required to find food with the nutritional value we got from that food. It used to be damned hard to feed ourselves, so we developed preferences for high calorie, high fat foods that would go a long way once we found them. Thanks to technology, the only effort required today to get these foods is to pick them off the shelf and pay for them. We could have used technology to produce healthier and more nutritious foods, but market demands determined that we’d become an obese nation of junk food eaters. Will the same thing happen to our brains?

I am even more concerned with the short cuts that seem to be developing in our social networking activities. Typically, our social networks are built both from strong ties and weak ties. Mark Granovetter identified these two types of social ties in the 70’s. Strong ties bind us to family and close friends. Weak ties connect us with acquaintances. When we hit rough patches, as we inevitably do, we treat those ties very differently. Strong ties are typically much more resilient to adversity. When we hit the lowest points in our lives, it’s the strong ties we depend on to pull us through. Our lifelines are made up of strong ties. If we have a disagreement with someone with whom we have a strong tie, we work harder to resolve it. We have made large investments in these relationships, so we are reluctant to let them go. When there are disruptions in our strong tie network, there is a strong motivation to eliminate the disruption, rather than sacrifice the network.

Weak ties are a whole different matter. We have minimal emotional investments in these relationships. Typically, we connect with these either through serendipity or when we need something that only they can offer. For example, we typically reinstate our weak tie network when we’re on the hunt for a job. LinkedIn is the virtual embodiment of a weak tie network. And if we have a difference of opinion with someone to whom we’re weakly tied, we just shut down the connection. We have plenty of them so one more or less won’t make that much of a difference. When there are disruptions in our weak tie network, we just change the network, deactivating parts of it and reactivating others.

Weak ties are easily built. All we need is just one thing in common at one point in our lives. It could be working in the same company, serving on the same committee, living in the same neighborhood or attending the same convention. Then, we just need some way to remember them in the future. Strong ties are different. Strong ties develop over time, which means they evolve through shared experiences, both positive and negative. They also demand consistent communication, including painful communication that sometimes requires us to say we were wrong and we’re sorry. It’s the type of conversation that leaves you either emotionally drained or supercharged that is the stuff of strong ties. And a healthy percentage of these conversations should happen face-to-face. Could you build a strong tie relationship without ever meeting face-to-face? We’ve all heard examples, but I’d always place my bets on face-to-face – every time.

It’s the hard work of building strong ties that I fear we may miss as we build our relationships through online channels. I worry that the brain, given an easy choice and a hard choice, will naturally opt for the easy one. Online, our network of weak ties can grow beyond the inherent limits of our social inventory, known as Dunbar’s Number (which is 150, by the way). We can always find someone with whom to spend a few minutes texting or chatting online. Then we can run off to the next one. We will skip across the surface of our social network, rather than invest the effort and time required to build strong ties. Just like our brains, our social connections may trade breadth for depth.

The Pros and Cons of a Fuel Efficient Brain

Your brain will only work as hard as it has to. And if it makes you feel any better, my brain is exactly the same. That’s the way brains work. They conserve horsepower until it’s absolutely needed. In the background, the brain is doing a constant calculation: “What do I want to achieve and, based on everything I know, what is the easiest way to get there?” You could call it lazy, but I prefer the term “efficient.”

The brain has a number of tricks to do this that involve relatively little thinking. In most cases, they involve swapping something that’s easy for your brain to do in place of something difficult. For instance, consider when you vote. It would be extraordinarily difficult to weigh all the factors involved to truly make an informed vote. It would require a ton of brainpower. But it’s very easy to vote for whom you like. We have a number of tricks we use to immediately assess whether we like and trust another individual. They require next to no brainpower. Guess how most people vote? Even those of us who pride ourselves on being informed voters rely on these brain short cuts more than we would like to admit.

Here’s another example that’s just emerging, thanks to search engines. It’s called the Google Effect, and it’s an extension of a concept called transactive memory. Researchers Betsy Sparrow, Jenny Liu and Daniel Wegner identified the Google Effect in 2011; Wegner first explained transactive memory back in the ’80s. Essentially, it means that we won’t bother to remember something that we can easily reference when we need it. To illustrate, Wegner used the example of a husband and wife. The wife was good at remembering important dates, such as anniversaries and birthdays. The husband was good at remembering financial information, such as bank balances and when bills were due. The wife didn’t have to remember financial details, and the husband didn’t have to worry about dates. All each had to remember was what the other was good at memorizing. Wegner called this “chunking” of our memory requirements “metamemory.”
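For the programmatically inclined, Wegner’s “chunking” can be sketched in a few lines of Python. This is strictly a toy illustration of the idea, not anything from his research – the categories, facts and variable names are all invented:

```python
# Each partner's memory holds a different "chunk" of the household's facts.
# (All facts below are made up for illustration.)
dates_memory = {
    "wedding anniversary": "June 12",
    "mom's birthday": "March 3",
}
finance_memory = {
    "checking balance": "$1,240",
    "power bill due": "the 15th",
}

# The metamemory layer stores no facts itself - only where each kind lives.
metamemory = {
    "dates": dates_memory,
    "finances": finance_memory,
}

def recall(category, fact):
    """Retrieve a fact by first consulting the metamemory index."""
    return metamemory[category][fact]

print(recall("finances", "power bill due"))  # -> the 15th
```

Swap Google in for either partner and you have the Google Effect: the brain keeps only the index and outsources the facts themselves.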

If we fast-forward 30 years from Wegner’s original paper, we find a whole new relevance for transactive memory, because we now have the mother of all “metamemories”: Google. If we hear a fact but know that it can easily be looked up on Google, our brains automatically decide to expend little to no effort trying to memorize it. Subconsciously, the brain goes into power-saver mode. All we remember is that when we need to retrieve the fact, it will be a few clicks away on Google. Nicholas Carr fretted about whether this and other cognitive shortcuts were making us stupid in his book “The Shallows.”

But there are other side effects that come from the brain’s tendency to look for short cuts without our awareness. I suspect the same thing is happening with social connections. Which would you think required more cognitive effort: a face-to-face conversation with someone or texting them on a smartphone?

Face-to-face conversation can put a huge cognitive load on our brains. We’re receiving communication at a much greater bandwidth than with text. When we’re across from a person, we not only hear what they’re saying, we’re reading emotional cues, watching facial expressions, interpreting body language and monitoring vocal tones. It’s a much richer communication experience, but it’s also much more work. It demands our full attention. Texting, on the other hand, can easily be done alongside other tasks. It’s asynchronous – we can pause and pick up whenever we want. I suspect it’s no coincidence that younger generations are moving more and more to text-based digital communication. Their brains are pushing them in that direction because it’s less work.

One of the great things about technology is that it makes our life easier. But is that also a bad thing? If we know that our brains will always opt for the easiest path, are we putting ourselves in a long, technology aided death spiral? That was Nicholas Carr’s contention. Or, are we freeing up our brains for more important work?

More on this to come next week.

The Psychology of Social: Are We Hardwired to Use Social Media?

Man is by nature a social animal; an individual who is unsocial naturally and not accidentally is either beneath our notice or more than human. Society is something that precedes the individual. Anyone who either cannot lead the common life or is so self-sufficient as not to need to, and therefore does not partake of society, is either a beast or a god. 

Aristotle

I’ve looked at online entertainment and I’ve looked at online tools, both in a quest to see where loyal and stable audiences might be found. But that leaves one huge part of the online landscape unexplored – online social media. In both my previous explorations, the scope of the quest quickly exploded into several posts. I think social media will be as difficult to restrict to a few posts, if not more so.

One thing that both entertainment and usefulness had in common was their foundation – our human drives. In any area I’ve explored up to now, I’ve always found our interactions with technology, as fickle as they may be, are layered over innate human drives with origins reaching back several thousands of generations. In entertainment, although the channels may have changed drastically in the past few decades (digital media, video games, virtual environments), our responses are predictably human. The things that make us cry, jump in our seats or laugh out loud really haven’t changed that much in many thousands of years. Humans adapt quickly to new technology, but our tastes remain reliably consistent.

Usefulness is a little different. In this case, our expectations of utility and the ever-rising bar of technology form somewhat of an arms race, with each upping the ante for the other. New tools allow us to do new things, which reset our expectations. These reset expectations cause us to periodically review the tools we use, and if they no longer match our expectations, we go looking for new tools. But even if we’re on the hunt for increased usefulness, we still use strategies that appear to have evolved hundreds of thousands of years ago on the savannah. I believe we forage for and evaluate useful technologies the same way we forage for food. This means that while technologies may change quickly, our behaviors towards them are remarkably predictable.

So, what should we expect as we explore how the human need for society plays out in new online arenas? Again, I think it’s safe to say that our behaviors will be driven by innate human needs and strategies. So that seems to be as good a place as any to start.

In their book “Driven, How Human Nature Shapes Our Choices,” Harvard professors Paul Lawrence and Nitin Nohria tried to reduce human nature down to the lowest possible number of non-redundant factors. They came up with four irreducible drives:

  • The Need to Acquire
  • The Need to Bond
  • The Need to Learn
  • The Need to Defend

All human actions, all cultural trends, all societal behaviors will be driven by one or a combination of these factors. If Lawrence and Nohria are right, then the usage of social media should be no exception. Let’s look at the four to see how they might map onto social media usage.

The Need to Bond

I’ll start with the most obvious one – the need to Bond. Social media is all about bonding. This hits squarely at the heart of our social nature. As Aristotle said, we’re not built to be alone. Humans thrive in herds. And social media provides us a digitally mediated way to bond.

The complexity of our social bonds is staggering. It’s amazing to think of all the dimensions we impose on our social relationships. Things like status, gossip, empathy, reciprocity, jealousy, xenophobia, admiration, loyalty, love, hate and so many other emotionally charged factors constantly occupy our minds as we try to navigate the stormy waters of our social connections. We might be tempted to throw up our hands in frustration and live in social isolation, but we don’t. Why? Because evolution has proven conclusively that we’re better together than apart. That strategy has been hardwired into our genes. As much as maintaining a social network is a complete pain in the ass sometimes, it’s a necessary part of the human experience. Most times, the benefits outweigh the drawbacks.

The challenge, however, is that all this baggage gets hauled over to whatever new platforms we use to connect with others. This includes online social media. To be effective and engaging, a social media tool has to allow us to do the things we have always done to survive and thrive in our respective herds – whether it’s increasing the frequency of connection with family, gossiping in real time, bragging more effectively to all of our acquaintances at once or reconnecting with those in the farther-flung regions of our networks. While these activities are all very human, bringing them onto a publishing platform (which is a major feature of all social media) introduces a significant signal-to-noise issue.

The Need to Acquire

While we don’t usually acquire physical things through social media, we sure as hell use it to brag about the things we do acquire in the real world. An unhealthy proportion of social media activity is devoted to the acquisition of new cars, clothes, jewelry, trips, houses, boats – you name it, we tweet (or Facebook, or Instagram) about it. The arms race of social status is being waged daily on social media.

The Need to Learn

One of the biggest reasons humans became social animals is that it was a much more efficient way to learn. In a herd, we don’t have to learn every lesson ourselves – we can learn from the experiences of others. Of course, that requires a way for lessons to spread throughout our networks. Stories, gossip, rumors – these are all social forms of information transmission. And they have all migrated onto our digital social media platforms.

The Need to Defend

This is probably the least social of Nohria and Lawrence’s four drives, at least as it might apply to the use of social media. We need to defend ourselves, our kin, our community (or tribe, or nation), our possessions, our reputation, our status, our beliefs and our security. But, like all the drives, the need to defend, especially the defense of our beliefs, status or reputation, does play out in the online forum as well.

When looked at in the context of these four innate drives, it’s clear that the use of social media aligns well with our evolved requirements. It is just another channel we can use to let our pre-wired social tendencies play out. So, it passes the first gut-test. This is something we would do naturally, with or without the tools of social media. The next question is, how might our social activities change, for the good and the bad, when they’re mediated through digital channels? I’ll come back here in the next post.

 

The Power of Meta

First published April 24, 2014 in Mediapost’s Search Insider

To the best of our knowledge, humans are the only species capable of thinking about thinking, even though most of us don’t do it very often. We use the Greek word “meta” to talk about this ability. Basically, “meta” refers to a concept which is an abstraction of another concept – an instruction sheet for the original thing.

Because humans can grasp this concept, it can be a powerful way to overcome the limits of our genetic programming. Daniel Kahneman’s book, “Thinking, Fast and Slow,” is essentially a meta-guide to the act of thinking – an owner’s manual for our minds. In it, he catalogs evolution’s extensive list of cognitive “gotchas” that can waylay our rational reasoning.

In our digital world, we use the word “metadata” a lot. Essentially, metadata is a guide to the subject data. It sits above the data in question, providing essential information about it, such as sources, structure and indexing guides. Increasingly, as we get data from more and more disparate sources, metadata will be required to use it. Ideally, it will provide a universally understood implementation guide. This, of course, requires a common schema for metadata, something that organizations like schema.org are currently working on.
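As a rough sketch of what this looks like in practice, here is metadata wrapped around a made-up dataset using schema.org’s Dataset vocabulary in JSON-LD form. The dataset name, URL and column names are invented for illustration; only the @context, @type and property names come from the shared schema:

```python
import json

def describe_dataset(name, source_url, fields):
    """Build a schema.org-style JSON-LD description that sits above the data,
    recording its source, its structure and an indexing guide to its columns."""
    return {
        "@context": "https://schema.org",
        "@type": "Dataset",
        "name": name,
        "url": source_url,
        # variableMeasured acts as the indexing guide to the columns
        "variableMeasured": [
            {"@type": "PropertyValue", "name": f} for f in fields
        ],
    }

meta = describe_dataset(
    "Monthly site traffic",             # invented example dataset
    "https://example.com/traffic.csv",  # invented URL
    ["month", "visits", "bounce_rate"],
)
print(json.dumps(meta, indent=2))
```

Any consumer that understands the shared schema can now interpret the underlying data without ever having seen this particular source before – which is the whole point of a common metadata layer.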

Meta is a relatively new concept that has exploded in the last few decades. It’s one of those words we throw around but probably don’t stop to think about. Its power lies in its ability both to “mark up” the complexity of the real world, giving us another functional layer in which to operate, and to let us examine ourselves and overcome some of the mental foibles we’re subject to.

According to Wikipedia, there are over 160 cognitive biases that can impact our ability to rationally choose the optimal path. They include the Cheerleader Effect, where individuals appear more attractive in a group; the IKEA Effect, where we overvalue something we assemble ourselves; and the Google Effect, where we tend to forget information we know we can look up on Google. These are like little bugs in our operating software, and most times they impact our rational performance without us even being aware of them. But if we have a meta-awareness of them, we can mitigate them to a large degree. We can step back from our decision process and see where biases may be clouding our judgment.

Meta also allows us to model and categorize complexity. It allows us to append data to data, exponentially increasing the value of the aggregated data set. This becomes increasingly important in the new era of Big Data. The challenge with Big Data is that it’s not just more data; in this case, more is different. Big Data typically comes from multiple structured sources, and when it’s removed from the guidance of its native contextual schema, it becomes unwieldy. A metadata layer gives us a Rosetta Stone with which we can integrate these various data sources. And it’s in combining data in new ways that the value of Big Data can be found.

Perhaps the most interesting potential of meta is in how we might create a meta-model of ourselves. I’ve talked about this before in the context of social media. Increasingly, our interactions with technology will gain value from personalization. Each of us will be generating reams of personal data. There needs to be an efficient connection between the two. We can’t invest the time required to train all these platforms, tools and apps to know us better. It makes sense to consolidate the most universally applicable data about us into a meta-profile of our goals, preferences and requirements. In effect, it will be a technologically friendly abstraction of who we are. If we can agree on a common schema for these meta-profiles, the developers of technology can build their tools to recognize them and reconfigure their functionality to be tailor-made for us.
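To make the idea concrete, here is a guess at what such a meta-profile might look like. No common schema exists yet, so every key name below is hypothetical – the point is simply that many tools could read one profile rather than each re-learning us from scratch:

```python
# A hypothetical shared meta-profile. All keys and values are invented
# to illustrate the concept; no such standard exists today.
meta_profile = {
    "preferences": {"language": "en", "units": "metric"},
    "goals": ["learn-spanish", "run-10k"],
    "privacy": {"share_location": False},
}

def configure(tool_defaults, profile):
    """Let a tool overlay its own defaults with the user's meta-profile."""
    settings = dict(tool_defaults)          # start from the tool's defaults
    settings.update(profile["preferences"]) # the user's profile wins
    return settings

# Two different (imaginary) apps tailor themselves from the same profile.
fitness_app = configure({"units": "imperial", "reminders": True}, meta_profile)
news_app = configure({"language": "fr", "layout": "cards"}, meta_profile)
```

Here the fitness app ends up in metric units and the news app in English, each without asking the user a single question – one abstraction of who we are, consumed by many tools.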

As our world becomes more complex, the power of meta will become more and more important.

#Meaningless #Crap

First published April 10, 2014 in Mediapost’s Search Insider

Everybody should have a voice – I get that. Thank goodness the web and social media have democratized publication. Because of that, the power to say what’s on our mind is just a click away. From this power, great things have come and will continue to come – the overthrow of tyrants, the quest for truth, freedom from oppression. I’m pretty sure those are all good things. Important things.

But I’m also pretty sure the signal-to-noise ratio in social media content is infinitesimal – verging on undetectable. For every post that moves humanity incrementally forward, there are thousands that drive us over the brink into mind-numbing mediocrity.

For example, Justin Bieber has 51 million followers, and has tweeted 26,508 times. That, in case you’re wondering, has produced 1.35 trillion “Bieberisms,” or 193 little Bieber-tweets for every man, woman and child on planet Earth. Here’s one of his finest: “Put your heart into everything you do”. Perhaps the Biebs would be better served by using his head a little bit too. But no matter, he tweets on, sharing his special brand of wisdom. No wonder over 70% of all tweets never get read.

And, for God’s sake – stop hashtagging everything! First of all, it only belongs on Twitter and Instagram. It’s not a universal punctuation mark. And it doesn’t belong in front of every word of your post! If you’re writing about something that falls under a topic category that people actually care about – then by all means slip a hashtag in there. For example:

“Witnessing special forces retaking capital building in Kiev – #ukrainecrisis”

Or:

“Just discovered key gene in early detection of Alzheimer’s – #alzheimerresearch”

See how it works? You’re adding key content to a topic that people care about and may actually be searching for on Twitter. This is how not to use hashtags:

“Off to a funeral #selfie #zebra #sunglasses #bling #hairdown #polo #countrygirl #aero #dodge #ram #cute”

All I can say is #shoot #me.

The other problem is that with this diarrheic explosion of content flooding online, it becomes impossible to sift through it all to find the things that are truly important. Generally, most content filters use one of two criteria: recency or popularity. Recency is fine if you’re looking for breaking news. It’s a clearly understood parameter. Popularity, however, has some issues. The theory here is that the wisdom of crowds can be relied on to push the best content to the top. But that’s not really how the wisdom of crowds works. Just because something is popular doesn’t necessarily mean it’s good. And it certainly doesn’t mean it’s important. All too often, it just means that it panders to the lowest common denominator. Do we really want that to be our filtering criterion? Should Kanye West and Keeping Up with the Kardashians set our cultural high-water mark?

One last rant. “Epic” is not the right adjective to apply to concert tickets, Saturday nights at the club, bowls of chili or, when incorrectly combined with the noun “fail”, your company’s Christmas party. According to this post,

“the word epic should only be used to describe two or three things, ever. In fact, here’s a comprehensive list of all things epic: 1. Oceans 2. Lengthy Narratives 3. The Cosmos.”

That’s it.

Feel free to retweet if you wish. Or not. No one will read it anyway.

Five Years Later – An Answer to Lance’s Question (kind of)

It never ceases to amaze me how writing can take you down the most unexpected paths, if you let it. Over 5 years ago now, I wrote a post called “Chasing Digital Fluff – Who Cares about What’s Hot?” It was a rant, and it was aimed at marketers’ preoccupation with whatever the latest bright shiny object was. At the time, it was social. My point was that true loyalty requires stable habits to emerge. If you’re constantly chasing the latest thing, your audience will be in a constant state of churn. You’d be practicing “drive-by” marketing. If you want to find stability, target what your audience finds useful.

This post caused my friend Lance Loveday to ask a very valid question…”What about entertainment?” Do we develop loyalty to things that are entertaining? So, I started with a series of posts on the Psychology of Entertainment. What types of things do we find entertaining? How do we react to stories, or humor, or violence? And how do audiences build around entertainment? As I explored the research on the topic, I came to the conclusion that entertainment is a by-product of several human needs – the need to bond socially, the need to be special, our appreciation for others whom we believe to be special, a quest for social status and artificially stimulated tweaks to our oldest instincts – to survive and to procreate. In other words, after a long and exhausting journey, I concluded that entertainment lives in our phenotype, not our genotype. Entertainment serves no direct evolutionary purpose, but it lives in the shadows of many things that do.

So, what does this mean for stability of an audience for entertainment? Here, there is good news, and bad news. The good news is that the raw elements of entertainment haven’t really changed that much in the last several thousand years. We can still be entertained by a story that the ancient Romans might have told. Shakespeare still plays well to a modern audience. Dickens is my favorite author and it’s been 144 years since his last novel was published. We haven’t lost our evolved tastes for the basic building blocks of entertainment. But, on the bad news side, we do have a pretty fickle history when it comes to the platforms we use to consume our entertainment.

This then introduces a conundrum for the marketer. Typically, our marketing channels are linked to platforms, not content. And technology has made this an increasingly difficult challenge. While we may connect to, and develop a loyalty for, specific entertainment content, it’s hard for marketers to know which platform we may consume that content on. Take Dickens for example. Even if you, the marketer, know there’s a high likelihood that I may enjoy something by Dickens in the next year, you won’t know if I’ll read a book on my iPad, pick up an actual book or watch a movie on any one of several screens. I’m loyal to Dickens, but I’m agnostic as to which platform I use to connect with his work. As long as marketing is tied to entertainment channels, and not entertainment content, we are restricted to targeting our audience in an ad hoc and transitory manner. This is one reason why brands have rushed to use product placement and other types of embedded advertising, where the message is set free from the fickleness of platform delivery challenges. If you happen to be a fan of American Idol, you’re going to see the Coke and Ford brands displayed prominently whether you watch on TV, your laptop, your tablet or your smartphone.

It’s interesting to reflect on the evolution of electronic media advertising and how it’s come full circle in this one regard. In the beginning, brands sponsored specific shows. Advertising messages were embedded in the content. Soon, however, networks, which controlled the only consumption choice available, realized it was far more profitable to decouple advertising from the content and run it in freestanding blocks during breaks in their programming. This decoupling was fine as long as there was no fragmentation in the channels available to consume the content, but obviously this is no longer the case. We now watch TV on our schedule, at our convenience, through the device of our choice. Content has been decoupled from the platform, leaving the owners of those platforms scrambling to evolve their revenue models.

So – we’re back to the beginning. If we want to stabilize our audience to allow for longer-term relationship building, what are our options? Obviously, entertainment offers some significant challenges in this regard, due mainly to the fragmentation of platforms we use to consume that content. If we use usefulness as a measure, the main factors in determining loyalty are frequency and stability. If you provide a platform that becomes a habit, as Google has, then you’ll have a fairly stable audience. It won’t destabilize until there is a significant enough resetting of user expectations, forcing the audience to abandon habits (always very tough to do) and start searching for another useful tool that is a better match for the reset expectations. If this happens, you’ll be continually following your audience through multiple technology adoption curves. Still, it seems that usefulness offers a better shot at a stable audience than entertainment.

But there’s still one factor we haven’t explored – what part does social connection play? Obviously, this is a huge question that the revenue models of Facebook, Twitter, Snapchat and others will depend on. So, with entertainment and usefulness explored ad nauseam in this series of posts, I’ll start tracking down the Psychology of Social Connection.

Letting the Foxes into Journalism’s Hen (Hedgehog) House

First published March 27, 2014 in Mediapost’s Search Insider

I am rooting for Nate Silver and fivethirtyeight.com, his latest attempt to introduce a little data-driven veracity into the murky and anecdotal world of journalism. But I may be one of the few, at least if we take the current backlash as a non-scientific, non-quantitative sample:

I have long been a fan of Nate Silver, but so far I don’t think this is working. – Tyler Cowen, Marginal Revolution

Nate Silver’s new venture may become yet another outlet for misinformation when it comes to the issue of human-caused climate change. – Michael Mann, director of the Earth System Science Center at Pennsylvania State University

Here’s hoping that Nate Silver and company up their game, soon. – Paul Krugman, NY Times

Krugman also states:

You can’t be an effective fox just by letting the data speak for itself — because it never does. You use data to inform your analysis, you let it tell you that your pet hypothesis is wrong, but data are never a substitute for hard thinking.

Now, Nate Silver doesn’t disagree with this. In fact, he says pretty much the same thing in his book, The Signal and the Noise:

The numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning.

But he goes on,

Like Caesar, we may construe them in self-serving ways that are detached from their objective reality.

And it’s this construal that Silver is hoping to nip in the bud with FiveThirtyEight. In essence, he wants to do it by being a Fox, to borrow from Isaiah Berlin’s analogy.

‘The fox knows many things, but the hedgehog knows one big thing.’ We take a pluralistic approach and we hope to contribute to your understanding of the news in a variety of ways.

Silver thinks the media’s preoccupation with punditry is a dangerous thing. Pundits, whether they’re coming from the right or left, are Hedgehogs. They get paid for their expertise on “one big thing.” And the more controversial their stand, the more attention they get. This can lead to a dangerous spiral, as researcher Philip Tetlock found out:

What experts think matters far less than how they think. If we want realistic odds on what will happen next, coupled to a willingness to admit mistakes, we are better off turning to experts who embody the intellectual traits of Isaiah Berlin’s prototypical fox—those who “know many little things,” draw from an eclectic array of traditions, and accept ambiguity and contradiction as inevitable features of life.

Tetlock was researching how expertise correlated with the ability to make good predictions. What he found was actually an inverse relationship. The higher the degree of expertise, the more likely the person in question was a hedgehog. Media pundits are usually extreme versions of hedgehogs, who not only have one worldview but also love to talk about it. Nate Silver believes that to get an objective view of world events, you need to be a fox first; but second, you should be a fox that’s good at sifting through data:

Conventional news organizations on the whole are lacking in data journalism skills, in my view. Some of this is a matter of self-selection. Students who enter college with the intent to major in journalism or communications have above-average test scores in reading and writing, but below-average scores in mathematics.

So, all this makes sense. The problem with Silver’s approach is that journalism is the way it is because that’s the way humans want it. While I applaud Silver’s determination to change it, he may be trying to push water uphill. Pundits exist not just because the media keeps pushing them in front of us – they exist because we keep listening. Humans like opinions and anecdotes. We’re not hardwired to process data and rationalize objectively. We connect with stories and we’re drawn to decisive opinion leaders. Silver will have to find some middle ground here, and that seems to be where the problems arise. The minute writers add commentary to data, they have to impose an ideological viewpoint. It’s impossible not to. And when you do that, you introduce a degree of abstraction.

The backlash against Fivethirtyeight.com generally falls into two camps: Foxes like Silver who have no problem with the approach but disagree with the specific data put forward, and Hedgehogs who just don’t like the entire concept. The first camp may come onside as Silver and his team work out the inevitable hiccups in their approach. The second, which, it should be noted, has a large number of pundits in its midst, will never become fans of Silver and his foxlike approach.

In the end though, it really doesn’t matter what columnists and journalists think. It’s up to the consumers of news media. We’ll decide what we like better – hedgehogs or foxes.

The Psychology of Usefulness: A New Model for Technology Acceptance

In the last post, I reviewed the various versions of the Technology Acceptance Model. Today, I’d like to share my own thoughts on the subject and propose a new model. But first, I’d like to bring an entirely different model into the discussion.

Introduction of Sense Making

I like Gary Klein’s Theory of Sense Making – a lot! And in the area of technology acceptance, I think it has to be part of the discussion. It introduces a natural Bayesian rhythm to the process, one that provides an intuitive foundation for our decisions on whether or not we’ll accept a new technology.

[Image: Klein’s Data-Frame Model of Sensemaking]

Source: Gary Klein et al., “How Might ‘Transformational’ Technologies and Concepts Be Barriers to Sensemaking in Intelligence Analysis”

Essentially, the Sense Making Model says that when we try to make sense of something new, we begin with some type of perspective, belief or viewpoint. In Bayesian terms, this would be our prior. In Klein’s model, he called it a frame.

Now, this frame doesn’t only give us a context in which to absorb new data, it actually helps define what counts as data. This is a critical concept to remember, because it dramatically impacts everything that follows. Imagine, for example, that you arrive on the scene of a car accident. If your frame was that of a non-involved bystander, the data you might seek in making sense of the situation would be significantly different than if your frame was that of a person who recognized one of the vehicles involved as belonging to your next-door neighbor.

In the case of technology acceptance, this initial frame will shape what types of data we would seek in order to further qualify our decision. If we start with a primarily negative attitude, we would probably seek data that would confirm our negative bias. The opposite would be true if we were enthusiastic about the adoption of technology. For this reason, I believe the creation of this frame should be a step in any proposed acceptance model.

But Sense Making also introduces the concept of iterative reasoning. After we create our frame, we do a kind of heuristic “gap analysis” on our frame. We prod and poke to see where the weaknesses are. What are the gaps in our current knowledge? Are there inconsistencies in the frame? What is our level of conviction on our current views and attitudes? The weaker the frame, the greater our need to seek new data to strengthen it. This process happens without a lot of conscious consideration. For most of us, this testing of the frame is probably a subconscious evaluation that then creates an emotional valence that will impact future behavior. On one extreme, it could be a strongly held conviction, on the other it would be a high degree of uncertainty.

If we decide we need more data, the Sense Making Model introduces another “Go/No Go” decision point. If the new data confirms our initial frame, we elaborate that frame, making it more complete. We fill in gaps, strengthen beliefs, discard non-aligned data and update our frame. If our sense making is in support of a potential action and we seem to be heading in the right direction with our data foraging, this can be an iterative process that continually updates our frame until it’s strong enough to push us over the threshold of executing that action.

But, if the new data causes serious doubt about our initial frame, we may need to consider “reframing,” in which case we’d have to seek new frames, compare them against our existing one and potentially discard it in favor of one of the new alternatives. This essentially returns us to square one, where we need to find data to elaborate the new frame. And there the cycle starts again.

This double loop learning process illustrates that a decision process, such as accepting a new technology, can loop back on itself at any point, and may do so at several points. More than this, it is always susceptible to a “reframing” incident, where new data may cause the existing frame to be totally discarded, effectively derailing the acceptance process.
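For the more code-minded, the elaborate-versus-reframe loop can be treated as simple Bayesian updating. Here’s a minimal Python sketch; everything in it – the thresholds, the prior, the likelihood numbers – is my own illustrative assumption, not a value from Klein’s work.

```python
def bayes_update(prior: float, lik_true: float, lik_false: float) -> float:
    """Posterior probability that the current frame is right, given one datum."""
    numerator = lik_true * prior
    return numerator / (numerator + lik_false * (1 - prior))

def sense_making(prior: float, evidence: list, reframe_threshold=0.2, act_threshold=0.9):
    """Forage through incoming data, elaborating the frame or signalling a reframe.

    evidence: list of (likelihood if frame is right, likelihood if frame is wrong).
    """
    confidence = prior
    for lik_true, lik_false in evidence:
        confidence = bayes_update(confidence, lik_true, lik_false)
        if confidence < reframe_threshold:
            return "reframe", confidence      # discard the frame, back to square one
        if confidence >= act_threshold:
            return "act", confidence          # frame strong enough to execute on
    return "keep foraging", confidence        # still uncertain: gather more data

# Confirming data elaborates the frame until it crosses the action threshold...
print(sense_making(0.5, [(0.9, 0.3), (0.8, 0.2), (0.9, 0.1)]))
# ...while strongly disconfirming data drops confidence below the reframe threshold.
print(sense_making(0.5, [(0.1, 0.9), (0.1, 0.9)]))
```

The double loop is the two exits: crossing the upper threshold elaborates the frame into action, while falling through the lower one forces the reframing cycle described above.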

Revisiting Goal Striving

I also like Bagozzi’s Goal Striving model, for reasons outlined in a previous post. I won’t rehash them here, except to say that this model introduces a broader context that is more aligned with the complexity of our typical decision process. In this case, our desire to achieve goals is a fundamental part of the creation of the original frame, which forms the starting point for our technology acceptance decision. The Goal Desire step, at the left side of the model, could effectively be the frame that then gets updated as we move from Goal Intention to Behavioral Desire, and then once again as we move to Behavioral Intention. All the inputs shown in Bagozzi’s model, both external factors (i.e. Group Norms) and internal factors (Emotions, etc.), would serve as data in either the updating or reframing loops in Klein’s model.

[Image: Bagozzi’s purchasing behavior adoption model]

A New Model

As the final step in this rather long process I’ve been dragging you through for the last several posts, I put forward a new proposed model for technology acceptance.

[Image: The proposed technology acceptance model]

I’ve attempted to include elements of Sense Making, Goal Striving and some of the more valuable elements from the original Technology Acceptance Models. I’ve also tried to show that this is an iterative journey – a series of data gathering and consideration steps, each of which can result in either a decision to move forward (elaborate the frame) or move backwards to a previous step (reframe). The entire model is shown above, but we’ll break it down into pieces to explore each step a little more deeply.


Setting the Frame

[Image: Proposed model, part 1 – Setting the Frame]

The first step is to set the original frame, which is the Goal Intention. In this case, a goal is either presented to us, or we set the goal ourselves. The setting of this goal is the trigger to establish both a cognitive and emotional frame that sets the context for everything that follows. Factors that go into the creation of the Goal Intention can include both positive and negative emotions, our attitudes towards the success of the goal, how it will impact our current situation (affect towards the means), and what we expect as far as outcomes. These factors will determine how robust our Goal Intention is, which will factor heavily in any subsequent decisions that are made as part of this Goal Intention, including the decision to accept or reject any relevant technologies required to execute on our Goal Intention.

We can assume, because there is not an updating step shown here, that once the Goal Intention is formed, the person will move forward to the next step – the retrieval of internal information and the creation of our attitude towards the Goal to be achieved.

The Internal Update

[Image: Proposed model, part 2 – The Internal Update]

With the setting of the goal intention, we have our frame. Now, it’s up to us to update that frame. Again, our confidence in this initial frame will determine how much data we feel we need to collect to update our frame. This follows Herbert Simon’s heuristic rules of thumb for Bounded Rationality. If we’re highly confident in our frame (to the point where it’s entrenched as a belief) we’ll seek little or no data, and if we do, the data we seek will tend to be confirmatory. If we’re less confident in our frame, we’ll actively go and forage for more data, and we’ll probably be more objective in our judgement of that data. Again, remember, Klein’s Sense Making model says that our frame determines what we define as data.

The first update will be a heuristic and largely subconscious one. We’ll retrieve any relevant information from our own memory. This information, which may be positive or negative in nature, will be assembled into an “attitude” towards the technology. This is our first real conscious evaluation of the technology in question. This would be akin to a Bayesian “prior” – a starting point for subsequent evaluation. It also represents an updating of the original frame. We’ve moved from Goal Intention to an emotional judgement of the technology to be evaluated.

The creation of the “Attitude” also requires us to begin the Risk/Reward balancing, similar to Charnov’s Marginal Value Theorem used in optimal foraging. Negative items we retrieve increase risk, positive ones increase reward. The balance between the two determines our next action. From this point forward, each updating of the frame leads us to a new decision point. At this decision point, we have to decide whether we move forward (elaborate our frame) or return to an earlier point in the decision process, with the possibility that we may need to reframe at that point. Each of these represents a “friction point” in the decision process, with reward driving the process forward and risk introducing new friction. At the Attitude stage, excessive risk may cause us to go all the way back to reconsidering the goal intention. Does the goal as we understand it still seem like the best path forward, given the degree of risk we have now assigned to the execution of that goal?
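If you prefer to see a friction point as arithmetic, here’s a toy Python illustration of the risk/reward balance. This is my own construction, loosely inspired by the Charnovian trade-off above; the valences, weights and thresholds are all invented for illustration.

```python
def friction_point(items, forward_threshold=1.0, abandon_threshold=-1.0):
    """Decide what happens at one friction point in the model.

    items: list of (valence, weight) tuples retrieved from memory or foraging;
    positive valence contributes reward, negative valence contributes risk.
    """
    balance = sum(valence * weight for valence, weight in items)
    if balance >= forward_threshold:
        return "elaborate"          # reward momentum carries us forward
    if balance <= abandon_threshold:
        return "loop back"          # risk dominates: revisit an earlier stage
    return "gather more data"       # unresolved: keep foraging

# Mostly positive recollections: balance = 0.8 + 0.6 - 0.3 = 1.1, so we move forward.
memories = [(+1, 0.8), (+1, 0.6), (-1, 0.3)]
print(friction_point(memories))     # -> elaborate
```

The same function, with different thresholds, could stand in for any of the friction points in the model: the asymmetry between the two thresholds is where an individual’s risk/reward set point would live.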

Let’s assume we’ve decided to move forward. Now we have to take that Attitude and translate it into Desire. Desire brings social aspects into the decision. Will the adoption of the technology elevate our social status? Will it cause us to undertake actions that may not fit into the social norms of the organization, or square well with our own social ethics? These factors will have a moderating effect on our desire. Even if we agree that the technology in question may meet the goal, our desire may flag because of the social costs that go along with the adoption decision. Again, this represents a friction point, where our desire may be enough to carry us forward, or where it may not be strong enough, causing us to re-evaluate our attitude towards the technology. If we bump back to the “Attitude” stage, a sufficiently negative judgement may in turn bump us even further back to goal intention.

The External Update

[Image: Proposed model, part 3 – The External Update]

With the next stage, we’ve moved from Desire to Intention. Up to now the process has been primarily internal and also primarily either emotional or heuristic. There has been little to no rational deliberation about whether or not to accept the technology in question. The frame that has been created to this point is an emotional and attitudinal frame.

But now, assuming that this frame is open to updating with more information, the process becomes more open to external variables and also to the input of data gathered for the express purpose of rational consideration. We start openly canvassing the opinions of others (subjective norm) and evaluating the technology based on predetermined factors. In the language of marketing, this is the consumer’s “consideration” stage. We know the next step is Action – where our intention becomes translated into behavior. In the previous TAM models, this step was a foregone conclusion. Here, however, we see that it’s actually another decision friction point. If the data we gather doesn’t support our intention, action will not result. We will loop back to Goal Intention and start looking for alternatives. At the very least, this one stage may loop back on itself, resulting in iterative cycles of setting new data criteria, gathering this data and pushing towards either a “go” or “no go” decision. Only when there is sufficient forward momentum will we move to action.

Here, at the Action stage, our evaluation will rely on experiential feedback. At this point, we resurrect the concepts of “Ease of Use” and “Perceived Usefulness” from previous versions of TAM. In this case, the Intention stage would have constructed an assumed “prior” for each of these – a heuristic assessment of how easy it will be to use the technology and also the usefulness of it. This then gets compared to our actual use of the technology. If the bar of our expectations is not met, the degree of friction increases, holding us back from repeating the action, which is required to entrench it as a behavior. This will be a Charnovian balancing act. If the usefulness is sufficient, we will put up with a shortfall in the perceived ease of use. On the flip side, no matter how easy the tool is to use, if it doesn’t deliver on our expectation of usefulness, it will get rejected. Too much friction at this point will result in a loop back to the Intention stage (where we may reassess our evaluation of the technology to see if the fault lies with us or with the tool) and will possibly cause a reversion all the way to our Goal Intention.

If our experience meets our expectation, repetition will begin to create an organizational behavior. At this stage, we move from trial usage to embedding the technology into our processes. At this point, organizational feedback becomes the key evaluative criterion. Even if we love the technology, sufficient negative feedback from the organization will cause us to re-evaluate our intention. Finally, if the technology being evaluated successfully navigates past this chain of decision points without becoming derailed, it becomes entrenched. We then evaluate if it successfully plays its part in our attainment of our goals. This brings us full circle, back to the beginning of the process.
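The whole chain of decision points can be sketched as a simple state machine. This is a hypothetical Python encoding, not the model itself: the stage names and loop-back targets follow the description above, while the evaluation function is a stand-in for the real risk/reward assessment at each friction point.

```python
# Stages of the proposed model, in forward order.
STAGES = ["goal_intention", "attitude", "desire", "intention", "action", "behavior"]

# Where each stage loops back to when friction wins, per the description above.
LOOP_BACK = {
    "attitude": "goal_intention",
    "desire": "attitude",
    "intention": "goal_intention",
    "action": "intention",
    "behavior": "intention",
}

def run_model(evaluate, max_steps=20):
    """evaluate(stage) -> True to elaborate (move forward), False to loop back."""
    stage = STAGES[0]
    for _ in range(max_steps):
        if stage == STAGES[-1]:
            return "entrenched"                       # technology accepted
        nxt = STAGES[STAGES.index(stage) + 1]
        stage = nxt if evaluate(nxt) else LOOP_BACK[nxt]
    return "rejected"                                 # never built enough momentum

# Example: every friction point passes except one failed first trial at Action,
# which loops us back to Intention before a second, successful attempt.
failed_once = {"action": False}
print(run_model(lambda s: failed_once.pop(s, True)))  # -> entrenched
```

The point of the sketch is the loop-back table: unlike the original TAM, no transition here is a foregone conclusion, and a persistent failure at any stage eventually exhausts the process into rejection.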

Summing Up

The original goal of the Technology Acceptance Model was to provide a testable model to predict adoption. My goal is somewhat different: showing Technology Adoption as a series of Sense Making and Goal Attainment decisions, each offering the opportunity to move forward to the next stage or loop back to a previous stage. In extreme cases, it may result in outright rejection of the technology. As far as testing for predictability, this is not the parsimonious model envisioned by Venkatesh, but then again, I suspect parsimony was sacrificed by Venkatesh and his contributing authors somewhere among the multiple revisions that were offered.

This is a model of Bayesian decision making, and I believe it could be applied to many considered decision scenarios. One could map most higher-end consumer purchases on the same decision path. The value of the model is in understanding each stage of the decision path and the factors that introduce both risk-related friction and reward-related momentum. It would be fascinating to start identifying representative risk/reward thresholds at each point, so that factors can be rebalanced to achieve a successful outcome.

As we talk about the friction in these decision points, it’s also important to remember that we will all have different set points for how we balance risk and reward. When it comes to technology acceptance, our set point will determine where we fall on Everett Rogers’ Diffusion of Innovations distribution curve.


Those with a high tolerance for risk and an enhanced ability to envision reward will fall to the far left of the curve, either as Innovators or Early Adopters. Rogers noted in Diffusion of Innovations:

Innovators may…possess a type of mental ability that better enables them to cope with uncertainty and to deal with abstractions. An innovator must be able to conceptualize relatively abstract information about innovations and apply this new information to his or her own situation

Those with a low tolerance for risk and an inability to envision rewards will be to the far right, falling into the Laggard category. The rest of us, representing 68% of the general population, will fall somewhere in between. So, in trying to predict the acceptance of any particular technology, it will be important to assess the innovativeness of the individual making the decision.
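Rogers derived those category shares by slicing a normal distribution of innovativeness at standard-deviation boundaries, and the arithmetic is easy to verify. Here’s a quick Python check using the standard normal CDF; the textbook figures (2.5%, 13.5%, 34%, 34%, 16%) are rounded versions of what falls out.

```python
from statistics import NormalDist

# Standardized "time of adoption" curve: mean 0, one standard deviation = 1.
adoption = NormalDist(mu=0, sigma=1)

categories = {
    "Innovators":     adoption.cdf(-2),                      # beyond -2 sd
    "Early Adopters": adoption.cdf(-1) - adoption.cdf(-2),   # -2 sd to -1 sd
    "Early Majority": adoption.cdf(0)  - adoption.cdf(-1),   # -1 sd to the mean
    "Late Majority":  adoption.cdf(1)  - adoption.cdf(0),    # mean to +1 sd
    "Laggards":       1 - adoption.cdf(1),                   # beyond +1 sd
}

for name, share in categories.items():
    print(f"{name:>14}: {share:.1%}")
```

The Early and Late Majority slices sum to roughly 68% – the “rest of us” within one standard deviation of the mean.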

This hypothetical model represents a culmination of the behaviors I’ve observed in many B2B adoption decisions. I’ve always stressed the importance of understanding the risk/reward balance of your target customers. I’ve also mapped out how this can vary from role to role in organizational acceptance decisions.

This post, which is currently pushing 3000 words, is lengthy enough for today. In the next post, I’ll revisit what this new model might mean for our evaluation of usefulness and subsequent user loyalty.