Social Media: Matching Maturity to the Right Business Model

Last week, I talked about the maturity continuum of social media. This week, I’d like to recap and look at the business model implications of each phase.

Phase One – It’s a Fad. Here, we use a new social media tool simply because it is new. This is a classic early adopter model. The business goal here is to drive adoption as fast and far as possible, hoping that acceptance will go viral. There is no revenue opportunity at this point, as you don’t want to do anything to slow adoption. It’s all about getting it into as many hands as possible.

Phase Two – It’s a Statement. You use the tool because it says something about who you are. Revenue opportunities are still limited, but this is the time for cross-promotion with brands that make a similar statement. Messaging and branding become essential at this point. You have to carve a unique niche for yourself and hope that it resonates with segments of your market. The goal is to create an emotional connection with your audience to help shore up loyalty in the next phase. This is the time to start laying the foundations of a user community.

Phase Three – It’s a Tool. You use it because it offers the best functionality for a particular task. Here, things have to get more practical. This is where user testing and new feature development have to move as quickly as possible. Revenue opportunities at this point are possible, depending on the usage profile of your app. If there’s a high frequency of usage, advertising sponsorship is a possibility. But be aware that this will bring inevitable pushback from your users, especially if there has been no advertising up to this point. This shakes the loyalty of the “Statement” users, as they feel you’re selling out. The functionality will have to be rock solid to prevent attrition of your user base during this phase. Essentially, it will have to be good enough to “lock out” the competition. But there’s another goal here as well. Introducing new functionality allows you to move beyond being a one-trick pony. This is where you have to start moving from being a tool to the next phase…

Phase Four – It’s a Platform. If you’ve successfully transitioned to being a social media platform, you should have the opportunity to finally turn a profit. The stability of the revenue model will be wholly dependent on how high you’ve been able to raise the cost of switching. The more “sticky” your platform is, the more stable your revenue will be. But, be aware that using advertising as your revenue channel is fraught with issues in the world of social media. Unlike search, where we are used to dealing with a crystal clear indication of consumer interest, social media usage seldom comes tied to clear buyer intent. You have to worry about modality and social norms, along with the erosion of your “cool” factor.

In the last two phases, the best revenue opportunities should be directly tied to functionality and intent. The closer you can align your advertising message with users’ intent “in the moment,” the more stable your revenue model will be. In fact, if you can introduce tools that are focused on users when they are in social modes where commercial messaging is appropriate, you will find revenue opportunities dropping into your lap. For example, if users turn to LinkedIn to crowdsource opinions on B2B purchases, you have a natural monetization opportunity. If they’re using your app to post pictures of their cat playing a xylophone, you’re going to find it much harder to make a buck. Not impossible, but pretty damned difficult.

The Maturity Continuum of Social Media

Social channels will come and go. Why are we still surprised by this? Just last week, Catharine Taylor talked about the ennui that’s threatening to silence Twitter. Frankly, the only thing surprising about this is that Twitter has had as long a run as it has. Let’s face it: if ever there was a social media one-trick pony, it’s Twitter.

The fact is, if you are a player in the social media space, you have to accept that there’s a unique maturity evolution in usage patterns. It’s a much more fickle audience than you would find in something like content publishing or search. The channels we use to express ourselves socially are subject to an extraordinary amount of irrational behavior. We project onto them our beliefs about who we are and how we fit into our own social networks. This leaves them vulnerable to sudden shifts in usage, simply because large chunks of the audience may suddenly have changed their minds about what is socially acceptable. And this is what’s currently happening to Twitter.

This is compounded by the fact that we’re talking about technology here, so where we perceive ourselves to be on the technology acceptance curve will have an impact on the social channels we find acceptable to us. If we think we’re early adopters, we’ll be quicker to move to whatever is new. Not only this, we’ll be unduly influenced by what we see other early adopters doing.

The Maturity Continuum for Social is as follows:

It’s a Fad – You use it because everyone else (in your circle of influence) is doing it. Early adopters are particularly susceptible to this effect. They’ll be the ones to test out new channels and tools, simply because they are new. But that momentum doesn’t last long. New entrants will also have to prove that they have at least a certain amount of functionality and, more importantly, something unique that users can identify with. If this is the case, they will transition to the second phase:

It’s a Statement – You use it because it makes a statement about who you are. And with technology, it’s usually about how cutting edge you are. This makes it particularly prone to abandonment. But there are other factors at play here. Is it all business (LinkedIn) or all fun (Snapchat)? A small percentage of the user base will stick in this phase, becoming brand loyalists. The majority, however, will move on to the third phase:

It’s a Tool – You use it because it’s the best tool for the job. Here, functionality trumps all. It’s in these last two phases that rationality finally takes hold. The sheen of BSOS (Bright Shiny Object Syndrome) has faded, and we’ll only continue using it if it provides better functionality for the task at hand than any of the alternatives. The problem here is that functional supremacy is a never-ending arms race. Sooner or later, something better will come along (if it successfully navigates the first two phases). This is typically the end of the road for most social media one-trick ponies, and this is what is currently staring Twitter in the face.

It’s a Platform – You use it because the landscape is familiar. Here you rely on habitual “stickiness” with users and something called UI cognitive lock-in. Essentially, this is an online real estate play. If you’ve had a long run as a single-purpose tool and have developed a large user base, you have to expand that into a familiar landscape before a new contender unseats you as the tool of choice. This is what Facebook and LinkedIn are currently trying to do. And, to survive, it’s what Twitter must do as well. By assembling a number of tools, you increase the cost of switching to the point where it doesn’t make sense for most users.

Each of these phases has different usage profiles, which directly impact their respective business models. More on that next week.


Two Views of the Promise of Technology

In the last two columns, I’ve looked at how technology may be making us intellectually lazy. The human brain tends to follow the path of least resistance, and technology’s goal is to eliminate resistance. Last week, I cautioned that this may end up making us both shallower in our thinking and more fickle in our social ties. We may become an attention-deficit society, skipping across the surface of the world. But this doesn’t necessarily have to be the case.

The debate is not a new one. Momentous technologies generally come complete with their own chorus of naysayers. Whether it’s the invention of writing, the printing press, electronic communication or digital media, the refrain is the same – this will be the end of the world as we know it. But if history has taught us anything, it’s that new technologies are seldom completely beneficial or harmful. Their lasting impact lies somewhere in the middle. With the good comes some bad.

The same will be true for the current digital technologies. The world will change, both for the positive and for the negative. The difference will come in how individuals use the technology. This will spread out along the inevitable bell curve.

Look at television, for instance. A sociologist could make a pretty convincing case for the benefits of TV. A better understanding of the global community helped ease our xenophobic biases. Public demand led to increased international pressure on repressive regimes. There was a sociological leveling that is still happening across cultures. Civil rights and sexual equality were propelled by the coverage they received. Atrocities still happen with far too much regularity, but I personally believe the world is a less savage and brutal place than it was 100 years ago, partially due to the spread of TV.

On the flip side, we have developed a certain laziness of spirit that is fed by TV’s never-ending parade of entertainment to be passively consumed. We spend less time visiting our neighbors. We volunteer less. We’re less involved in our communities. Ironically, we’re a more idealistic society, but we make poorer neighbors.

The type of programming to be found on TV also shows that despite the passive nature of the medium, we didn’t become stupider en masse. Some of us use TV for enlightenment, and some of us use it to induce ourselves into a coma. At the end of the day, I think the positives and negatives of TV as a technology probably net out a little better than neutral.

I suspect the same thing is happening with digital media. Some of us are diving deeper and learning more than ever. Others are clicking their way through site after site of brain-porn. Perhaps there are universal effects that will show up over generations and tip the scale one way or the other, but we’re too early in the trend to see those yet. The fact is, digital technologies are not changing our brains in a vacuum. Our environment is also changing, and perhaps our brains are just keeping up. The 13-year-old who is frustrating the hell out of us today may be a much better match for the world 20 years from now.

I’ll wrap up by leaving three pieces of advice that seem to provide useful guides for getting the best out of new technologies.

First: A healthy curiosity is something we should never stop nurturing. In particular, I find it helpful to constantly ask “how” and “why.”

Second: Practice mindfulness. Be aware of your emotions and cognitive biases and recognize them for what they are. This will help you steer things back on track when they’re leading down an unhealthy path.

Third: Move from consuming content to contributing something meaningful. The discipline of publishing tends to push you beyond the shallows.

If you embrace the potential of technology, you may still find yourself as an outlier, but technology has done much to allow a few outliers to make a huge difference.

Are Our Brains Trading Breadth for Depth?

In last week’s column, I looked at how efficient our brains are. Essentially, if there’s a shortcut to an end goal identified by the brain, it will find it. I explained how Google is eliminating the need for us to remember easily retrievable information. I also speculated about how our brains may be defaulting to easier forms of communication, such as texting rather than face-to-face conversation.

Personally, I am not entirely pessimistic about the “Google Effect,” where we put less effort into memorizing information that can be easily retrieved on demand. This is an extension of Daniel Wegner’s “transactive memory,” and I would put it in the category of coping mechanisms. It makes no sense to expend brainpower on something that technology can do more easily, faster and more reliably. As John Mallin commented, this is like using a calculator rather than memorizing times tables.

Reams of research have shown that our memories can be notoriously inaccurate. In this case, I partially disagree with Nicholas Carr. I don’t think Google is necessarily making us stupid. It may be freeing up the incredibly flexible power of our minds, giving us the opportunity to redefine what it means to be knowledgeable. Rather than a storehouse of random information, our minds may have the opportunity to become more creative integrators of available information. We may be able to expand our “metamemory,” Wegner’s term for the layer of memory that keeps track of where to turn for certain kinds of knowledge. Our memory could become an index of interesting concepts and useful resources, rather than ad hoc scraps of knowledge.

Of course, this positive evolution of our brains is far from a given. And here Carr may have a point. There is a difference between “lazy” and “efficient.” Technology’s freeing up of the brain’s processing power is only a good thing if that power is then put to a higher purpose. Carr’s title, “The Shallows,” is a warning that rather than freeing up our brains to dive deeper into new territory, technology may just give us the ability to skip across the surface of the titillating. Will we waste our extra time and cognitive power going from one piece of brain candy to the next, or will we invest it by sinking our teeth into something important and meaningful?

A historical perspective gives us little reason to be optimistic. We evolved to balance the efforts required to find food with the nutritional value we got from that food. It used to be damned hard to feed ourselves, so we developed preferences for high calorie, high fat foods that would go a long way once we found them. Thanks to technology, the only effort required today to get these foods is to pick them off the shelf and pay for them. We could have used technology to produce healthier and more nutritious foods, but market demands determined that we’d become an obese nation of junk food eaters. Will the same thing happen to our brains?

I am even more concerned with the shortcuts that seem to be developing in our social networking activities. Typically, our social networks are built from both strong ties and weak ties. Mark Granovetter identified these two types of social ties in the ’70s. Strong ties bind us to family and close friends. Weak ties connect us with acquaintances. When we hit rough patches, as we inevitably do, we treat those ties very differently. Strong ties are typically much more resilient to adversity. When we hit the lowest points in our lives, it’s the strong ties we depend on to pull us through. Our lifelines are made up of strong ties. If we have a disagreement with someone with whom we have a strong tie, we work harder to resolve it. We have made large investments in these relationships, so we are reluctant to let them go. When there are disruptions in our strong tie network, there is a strong motivation to eliminate the disruption, rather than sacrifice the network.

Weak ties are a whole different matter. We have minimal emotional investments in these relationships. Typically, we connect with these either through serendipity or when we need something that only they can offer. For example, we typically reinstate our weak tie network when we’re on the hunt for a job. LinkedIn is the virtual embodiment of a weak tie network. And if we have a difference of opinion with someone to whom we’re weakly tied, we just shut down the connection. We have plenty of them so one more or less won’t make that much of a difference. When there are disruptions in our weak tie network, we just change the network, deactivating parts of it and reactivating others.

Weak ties are easily built. All we need is one thing in common at one point in our lives. It could be working in the same company, serving on the same committee, living in the same neighborhood or attending the same convention. Then we just need some way to remember them in the future. Strong ties are different. Strong ties develop over time, which means they evolve through shared experiences, both positive and negative. They also demand consistent communication, including painful communication that sometimes requires us to say we were wrong and we’re sorry. The type of conversation that leaves you either emotionally drained or supercharged is the stuff of strong ties. And a healthy percentage of these conversations should happen face-to-face. Could you build a strong tie relationship without ever meeting face-to-face? We’ve all heard examples, but I’d place my bets on face-to-face – every time.

It’s the hard work of building strong ties that I fear we may miss as we build our relationships through online channels. I worry that the brain, given an easy choice and a hard choice, will naturally opt for the easy one. Online, our network of weak ties can grow beyond the inherent limits of our social inventory, known as Dunbar’s Number (which is 150, by the way). We can always find someone with whom to spend a few minutes texting or chatting online. Then we can run off to the next one. We will skip across the surface of our social network, rather than invest the effort and time required to build strong ties. Just like our brains, our social connections may trade breadth for depth.

The Pros and Cons of a Fuel Efficient Brain

Your brain will only work as hard as it has to. And if it makes you feel any better, my brain is exactly the same. That’s the way brains work. They conserve horsepower until it’s absolutely needed. In the background, the brain is doing a constant calculation: “What do I want to achieve and, based on everything I know, what is the easiest way to get there?” You could call it lazy, but I prefer the term “efficient.”

The brain has a number of tricks to do this that involve relatively little thinking. In most cases, they involve swapping something that’s easy for your brain to do in place of something difficult. For instance, consider when you vote. It would be extraordinarily difficult to weigh all the factors involved to truly make an informed vote. It would require a ton of brainpower. But it’s very easy to vote for whom you like. We have a number of tricks we use to immediately assess whether we like and trust another individual. They require next to no brainpower. Guess how most people vote? Even those of us who pride ourselves on being informed voters rely on these brain short cuts more than we would like to admit.

Here’s another example that’s just emerging, thanks to search engines. It’s called the Google Effect, and it’s an extension of a concept called transactive memory. Researchers Betsy Sparrow, Jenny Liu and Daniel Wegner identified the Google Effect in 2011. Wegner first explained transactive memory back in the ’80s. Essentially, it means that we won’t bother to remember something that we can easily reference when we need it. When Wegner first talked about transactive memory, he used the example of a husband and wife. The wife was good at remembering important dates, such as anniversaries and birthdays. The husband was good at remembering financial information, such as bank balances and when bills were due. The wife didn’t have to remember financial details, and the husband didn’t have to worry about dates. All they had to remember was what the other was good at remembering. Wegner called this “chunking” of our memory requirements “metamemory.”

If we fast-forward 30 years from Wegner’s original paper, we find a whole new relevance for transactive memory, because we now have the mother of all “metamemories”: Google. If we hear a fact but know that it can easily be looked up on Google, our brains automatically decide to expend little to no effort trying to memorize it. Subconsciously, the brain goes into power-saver mode. All we remember is that when we do need to retrieve the fact, it will be a few clicks away on Google. Nicholas Carr fretted about whether this and other cognitive shortcuts were making us stupid in his book “The Shallows.”

But there are other side effects that come from the brain’s tendency to look for shortcuts without our awareness. I suspect the same thing is happening with social connections. Which would you think requires more cognitive effort: a face-to-face conversation with someone, or texting them on a smartphone?

Face-to-face conversation can put a huge cognitive load on our brains. We’re receiving communication at a much greater bandwidth than with text. When we’re across from a person, we not only hear what they’re saying, we’re reading emotional cues, watching facial expressions, interpreting body language and monitoring vocal tones. It’s a much richer communication experience, but it’s also much more work. It demands our full attention. Texting, on the other hand, can easily be done alongside other tasks. It’s asynchronous – we can pause and pick up whenever we want. I suspect it’s no coincidence that younger generations are moving more and more to text-based digital communication. Their brains are pushing them in that direction because it’s less work.

One of the great things about technology is that it makes our lives easier. But is that also a bad thing? If we know that our brains will always opt for the easiest path, are we putting ourselves in a long, technology-aided death spiral? That was Nicholas Carr’s contention. Or are we freeing up our brains for more important work?

More on this to come next week.

When Are Crowds Not So Wise?

Since James Surowiecki published his book “The Wisdom of Crowds,” the common wisdom is – well – that we are commonly wise. In other words, if we average the knowledge of many people, we’ll be smarter than any of us would be individually. And that is true – to an extent. But new research suggests that there are group decision dynamics at play where bigger (crowds) may not always be better.

A recent study by Iain Couzin and Albert Kao at Princeton suggests that in real-world situations, where information is more complex and spotty, the benefits of crowd wisdom peak in groups of five to 20 participants and then decrease after that. The difference comes in how the group processes the information available to it.

In Surowiecki’s book, he uses the famous example of Sir Francis Galton’s 1907 observation of a contest where villagers were asked to guess the weight of an ox. While no individual correctly guessed the weight, the average of all the guesses came in just one pound short of the correct number. But this example has one unique characteristic that would be rare in the real world – every guesser had access to the same information. They could all see the ox and make their guess. Unless you’re guessing the number of jellybeans in a jar, this is almost never the case in actual decision scenarios.

Couzin and Kao say this information “patchiness” is the reason why accuracy tends to diminish as the crowd gets bigger. In most situations, there is commonly understood and known information, which the researchers refer to as “correlated information.” But there is also information that only some of the members of the group have, which is “uncorrelated information.” To make matters even more complex, the nature of uncorrelated information will be unique to each individual member. In real life, this would be our own experience, expertise and beliefs.  To use a technical term, the correlated information would be the “signal” and the uncorrelated information would be the “noise.” The irony here is that this noise is actually beneficial to the decision process.

In big groups, the collected “noise” gets so noisy it becomes difficult to manage and so it tends to get ignored. It drowns itself out. The collective focuses instead on the correlated information. In engineering terms this higher signal-to-noise ratio would seem to be ideal, but in decision-making, it turns out a certain amount of noise is a good thing. By focusing just on the commonly known information, the bigger crowd over-simplifies the situation.

Smaller groups, in contrast, tend to be more random in their makeup. The differences in experiences, knowledge, beliefs and attitudes, even if not directly correlated to the question at hand, have a better chance of being preserved. They don’t get “averaged out” like they would in a bigger group. And this “noise” leads to better decisions when the situation involves imperfect information. Call it the averaging of intuition, or hunches. In a big group, the power of human intuition gets sacrificed in favor of the commonly knowable. But in a small group, it’s preserved.
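
To make the dynamic concrete, here is a minimal Monte Carlo sketch of the idea. To be clear, this is my own toy version, not Kao and Couzin’s actual model, and the cue reliabilities and weighting below are illustrative assumptions. Each member follows a shared “correlated” cue some of the time and a private “uncorrelated” hunch the rest of the time, and the group decides by majority vote.

```python
import random

def group_decision(n, w=0.5, q_shared=0.55, q_private=0.8):
    """One group decision. Everyone sees the same shared (correlated) cue;
    each member also has an independent private (uncorrelated) cue.
    Returns True if the majority picks the correct option."""
    shared_cue_correct = random.random() < q_shared  # one draw, seen by all
    votes = 0
    for _ in range(n):
        if random.random() < w:                      # follow the shared cue
            votes += shared_cue_correct
        else:                                        # follow a private hunch
            votes += random.random() < q_private
    return 2 * votes > n

def accuracy(n, trials=20000):
    return sum(group_decision(n) for _ in range(trials)) / trials

for n in [1, 5, 11, 21, 51, 101, 301]:  # odd sizes avoid ties
    print(f"group of {n:>3}: {accuracy(n):.3f}")
```

With these invented parameters, accuracy peaks in the small-group range and then sinks back toward the reliability of the shared cue: as the group grows, the majority increasingly just echoes the common information, and the averaged-out private hunches stop mattering.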

In the world of corporate strategy, this has some interesting implications. Business decisions are almost always complex and involve imperfectly distributed information. This research seems to indicate that we should carefully consider our decision-making units. There is a wisdom of crowds benefit as long as the crowd doesn’t get too big. We need to find a balance where we have the advantage of different viewpoints and experiences, but this aggregate “noise” doesn’t become unmanageable.

The Era of Amplification

First published in Mediapost’s Search Insider, May 1, 2014

Mediapost columnist Joseph Jaffe wrote a great piece Tuesday on the Death of Anonymity. He shows how anonymity in the digital era has become both a blessing and a curse, leading to an explosion of cowardly, bone-headed comments and cyber-bullying. This reinforces something I’ve said repeatedly: technology doesn’t change human behavior; it just enables it in new ways. Heroes will find new ways to be heroes, and idiots will find new ways to be idiots.

But there is something important happening here. It’s not that technology is making us meaner, more cowardly or more stupid. I grew up with bullies, my father grew up with bullies and his father grew up with bullies. You could trace a direct line of bullies going back to the first time our ancestors walked erect, and probably further than that. So what’s different today? Why do we now need laws against cyber-bullying?

It’s because we now live in a time of increased amplification. The waves that spread from an individual’s actions go farther than ever before. Technology increases the consequences of those actions. A heroic act can spread through a network and activate other heroes, creating a groundswell of heroism. Unfortunately, the flip side is also true – bullying can beget more bullying. The viral spread of bullying that technology enables can make the situation hopeless for the victim.

Consider the case of Amanda Todd, a grade 10 student from Port Coquitlam, BC, Canada. Todd had been bullied for over a year by a man who wanted “a show.” She finally relented and flashed her breasts. While not advisable, Todd’s actions were not that unusual. She wasn’t the first 15-year-old to experiment with a little sexual promiscuity after prolonged male pleading. It certainly shouldn’t have turned into a death sentence for Todd. But it did – because of amplification.

First of all, Todd’s tormentor lived thousands of miles away, in Holland. They never met. Secondly, Todd’s indiscretion was captured in a digital picture that was soon circulated worldwide. Todd was mercilessly teased, as teenagers have been since time began. But it wasn’t just at the hands of a small circle of bullies at her high school. Taunts came from jerks around the world who jumped on the bandwagon. A teenager’s psyche is typically a fragile thing, and the amplitude of that teasing was psychologically crushing for Todd. Desperate for escape, she first recorded a plea for understanding that she posted online, and then took her own life. The act that started all this should have been added to that pile of minor regrets we all assemble in our adolescence. It should not have ended the way it did. Unfortunately, Todd was a victim of amplification.

My wife and I have two daughters, one of whom is about the same age as Todd. Because they grew up in the Era of Amplification, we pounded home the fact that anything captured online can end up anywhere. You just can’t be careless, not even for the briefest of moments. But, of course, teenagers are occasionally careless. It’s part of the job description. They’re testing the world as a place to live in – experimenting with what it means to be an adult – and mistakes are inevitable. Unfortunately, the potential price to be paid for those mistakes has been raised astronomically.

Here’s perhaps the most frightening thing about this. Todd’s YouTube video has been seen over 17 million times, so it too has been amplified by technology. Amanda’s story has spread through the world online. The vast majority of comments are those you would hope to see – expressions of sympathy, support, understanding and caring. But there are a handful of hateful comments of the sort that drove Todd to suicide. Technology allows us to sort and filter for negativity. In other words, technology allows bullies to connect with bullies.

In social networks, there is something called “threshold-limited spreading.” Essentially, it means that for something to spread through a network, the number of incidences needs to reach a certain threshold. In the case of bullying, as in the case of rioting or social movements, the threshold depends on the connections between like-minded individuals. If bullies can connect in a cluster, they draw courage from each other. This can then trigger a cascade effect, encouraging those “on the margin” to also engage in bullying. Technology, because of its unique ability to enable connections between those who think alike, can trigger these cascades of bullying. It doesn’t matter if the ratio of positive to negative is ten to one or even one hundred to one. All that matters is there are a sufficient number of negative comments for the would-be bully to feel that he or she has support.
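
Here’s a rough sketch of how threshold-limited spreading works: a hand-rolled, Granovetter-style toy model, not code from any study cited here. A node “joins in” once the number of its active neighbors reaches its personal threshold.

```python
def cascade(neighbors, thresholds, seeds):
    """Spread activation until nothing changes: a node activates once the
    count of its active neighbors reaches its personal threshold."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbors.items():
            if node not in active and sum(n in active for n in nbrs) >= thresholds[node]:
                active.add(node)
                changed = True
    return active

# Toy network: nodes 0-2 form a tight, like-minded cluster; 3-5 are "on the margin."
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
thresholds = {0: 2, 1: 2, 2: 2, 3: 1, 4: 2, 5: 1}
print(cascade(neighbors, thresholds, seeds={0, 1}))  # {0, 1, 2, 3}: the cluster tips 2, then marginal 3
```

Two connected seeds inside the cluster clear their common neighbor’s threshold and start a cascade that then pulls in the low-threshold node on the margin, which is exactly the “courage in numbers” dynamic described above.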

I don’t know what the lasting impact of the Era of Amplification will be. I do know that technology has made the world a much more promising place than it was when I was born. I also know it’s made it much crueler and more frightening. And it’s not because of any changes in who we are. It’s because the ripples of our actions can now spread farther than we can even imagine.

The Psychology of Social: Are We Hardwired to Use Social Media?

Man is by nature a social animal; an individual who is unsocial naturally and not accidentally is either beneath our notice or more than human. Society is something that precedes the individual. Anyone who either cannot lead the common life or is so self-sufficient as not to need to, and therefore does not partake of society, is either a beast or a god. 

Aristotle

I’ve looked at online entertainment and I’ve looked at online tools, both in a quest to see where loyal and stable audiences might be found. But that leaves one huge part of the online landscape unexplored – social media. In both my previous explorations, the scope of the quest quickly exploded into several posts. I suspect social media will be at least as difficult to confine to a few posts, if not more so.

One thing that both entertainment and usefulness had in common was their foundation – our human drives. In any area I’ve explored up to now, I’ve always found our interactions with technology, as fickle as they may be, are layered over innate human drives with origins reaching back several thousands of generations. In entertainment, although the channels may have changed drastically in the past few decades (digital media, video games, virtual environments), our responses are predictably human. The things that make us cry, jump in our seats or laugh out loud really haven’t changed that much in many thousands of years. Humans adapt quickly to new technology, but our tastes remain reliably consistent.

Usefulness is a little different. In this case, our expectations of utility and the ever-rising bar of technology form somewhat of an arms race, with each upping the ante for the other. New tools allow us to do new things, which reset our expectations. These reset expectations cause us to periodically review the tools we use, and if they no longer match our expectations, we go looking for new tools. But even if we’re on the hunt for increased usefulness, we still use strategies that appear to have evolved hundreds of thousands of years ago on the savannah. I believe we forage for and evaluate useful technologies the same way we forage for food. This means that while technologies may change quickly, our behaviors towards them are remarkably predictable.

So, what should we expect as we explore how the human need for society plays out in new online arenas? Again, I think it’s safe to say that our behaviors will be driven by innate human needs and strategies. So that seems to be as good a place as any to start.

In their book “Driven: How Human Nature Shapes Our Choices,” Harvard professors Paul Lawrence and Nitin Nohria tried to reduce human nature down to the lowest possible number of non-redundant factors. They came up with four irreducible drives:

  • The Need to Acquire
  • The Need to Bond
  • The Need to Learn
  • The Need to Defend

All human actions, all cultural trends, all societal behaviors will be driven by one or a combination of these factors. If Lawrence and Nohria are right, then the usage of social media should be no exception. Let’s look at the four to see how they might map onto social media usage.

The Need to Bond

I’ll start with the most obvious one – the need to Bond. Social media is all about bonding. This hits squarely at the heart of our social nature. As Aristotle said, we’re not built to be alone. Humans thrive in herds. And social media provides us a digitally mediated way to bond.

The complexity of our social bonds is staggering. It’s amazing to think of all the dimensions we impose on our social relationships. Things like status, gossip, empathy, reciprocity, jealousy, xenophobia, admiration, loyalty, love, hate and so many other emotionally charged factors constantly occupy our minds as we try to navigate the stormy waters of our social connections. We might be tempted to throw up our hands in frustration and live in social isolation, but we don’t. Why? Because evolution has proven conclusively that we’re better together than apart. That strategy has been hardwired into our genes. As much as maintaining a social network is a complete pain in the ass sometimes, it’s a necessary part of the human experience. Most times, the benefits outweigh the drawbacks.

The challenge, however, is that all this baggage will be hauled over to whatever new platforms we use to connect with others. This includes online social media. To be effective and engaging, a social media tool has to allow us to do the things we have always done to survive and thrive in our respective herds – whether it’s increasing the frequency of connection with family, gossiping in real time, bragging more effectively to all of our acquaintances at once or reconnecting with those who lie in the far-flung regions of our networks. While they’re all very human, these activities, when brought onto a publishing platform (which is a major feature of all social media), introduce a significant signal-to-noise issue.

The Need to Acquire

While we don’t usually acquire physical things through social media, we sure as hell use it to brag about the things we do acquire in the real world. An unhealthy proportion of social media activity is devoted to the acquisition of new cars, clothes, jewelry, trips, houses, boats – you name it, we tweet (or Facebook, or Instagram) about it. The arms race of social status is being waged daily on social media.

The Need to Learn

One of the biggest reasons why humans became social animals is that it was a much more efficient way to learn. In a herd, we don’t have to learn every lesson ourselves – we can learn from the experiences of others. Of course, that requires a way for lessons to spread throughout our networks. Stories, gossip, rumors – these are all social forms of information transmission. And they have all migrated onto our digital social media platforms.

The Need to Defend

This is probably the least social of Lawrence and Nohria’s four drives, at least as it might apply to the use of social media. We need to defend ourselves, our kin, our community (or tribe, or nation), our possessions, our reputation, our status, our beliefs and our security. But, like all the drives, the need to defend, especially the defense of our beliefs, status or reputation, plays out in the online forum as well.

When looked at in the context of these four innate drives, it’s clear that the use of social media aligns well with our evolved requirements. It is just another channel we can use to let our pre-wired social tendencies play out. So, it passes the first gut-test. This is something we would do naturally, with or without the tools of social media. The next question is, how might our social activities change, for the good and the bad, when they’re mediated through digital channels? I’ll come back here in the next post.


The Power of Meta

First published April 24, 2014 in Mediapost’s Search Insider

To the best of our knowledge, humans are the only species capable of thinking about thinking, even though most of us don’t do it very often. We use the Greek word “meta” to talk about this ability. Basically, “meta” refers to a concept that is an abstraction of another concept – an instruction sheet for whatever the original thing is.

Because humans can grasp this concept, it can be a powerful way to overcome the limits of our genetic programming. Daniel Kahneman’s book, “Thinking, Fast and Slow,” is essentially a meta-guide to the act of thinking – an owner’s guide for our minds. In it, he catalogs evolution’s extensive list of cognitive “gotchas” that can waylay our rational reasoning.

In our digital world, we use the word “metadata” a lot. Essentially, metadata is a guide to the subject data. It sits above the data in question, providing essential information about it, such as sources, structure, indexing guides, etc. Increasingly, as we get data from more and more disparate sources, metadata will be required to use it. Ideally, it will provide a universally understood implementation guide. This, of course, requires a common schema for metadata, something that organizations like schema.org are currently working on.
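
As a concrete illustration, here is what a minimal schema.org-style record might look like, sketched in Python as JSON-LD. The field values are invented for this example, though the property names (headline, datePublished, publisher, about) are real schema.org ones.

```python
import json

# A minimal schema.org "Article" record expressed as JSON-LD. A crawler or
# aggregator that understands the shared schema can interpret this data
# without knowing anything else about the publisher's systems.
article_metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "The Power of Meta",
    "datePublished": "2014-04-24",
    "publisher": {"@type": "Organization", "name": "Mediapost"},
    "about": ["metadata", "cognitive bias", "Big Data"],
}

print(json.dumps(article_metadata, indent=2))
```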

Meta is a relatively new concept that has exploded in the last few decades. It’s one of those words we throw around but probably don’t stop to think about. Its power lies partly in its ability to “mark up” the complexity of the real world, giving us another functional layer in which to operate. But it also allows us to examine ourselves and overcome some of the mental foibles we’re subject to.

According to Wikipedia, there are over 160 cognitive biases that can impact our ability to rationally choose the optimal path. They include the Cheerleader Effect, where individuals appear more attractive in a group; the IKEA Effect, where we overvalue something we assemble ourselves; and the Google Effect, where we tend to forget information we know we can look up on Google. These are like little bugs in our operating software, and most times they impact our rational performance without us even being aware of them. But if we have a meta-awareness of them, we can mitigate them to a large degree. We can step back from our decision process and see where biases may be clouding our judgment.

Meta also allows us to model and categorize complexity. It allows us to append data to data, exponentially increasing the value of the aggregated data set. This becomes increasingly important in the new era of Big Data. The challenge with Big Data is that it’s not just more data, because in this case, more is different. Big Data typically comes from multiple structured sources, and when it’s removed from the guidance of its native contextual schema, it becomes unwieldy. A metadata layer gives us a Rosetta Stone with which we can integrate these various data sources. And it’s in combining data in new ways that the value of Big Data can be found.

Perhaps the most interesting potential of meta is in how we might create a meta-model of ourselves. I’ve talked about this before in the context of social media. Increasingly, our interactions with technology will gain value from personalization, and each of us will be generating reams of personal data. There needs to be an efficient connection between the two. We can’t invest the time required to train all these platforms, tools and apps to know us better. It makes sense to consolidate the most universally applicable data about us into a meta-profile of our goals, preferences and requirements. In effect, it will be a technologically friendly abstraction of who we are. If we can agree on a common schema for these meta-profiles, developers can build their various tools to recognize them and reconfigure their functionality to be tailor-made for us.
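
To make that tangible, here is one hypothetical shape such a meta-profile might take. Every field name below is an invented assumption; as noted, the common schema doesn’t exist yet.

```python
# A hypothetical personal meta-profile: one portable record a new app could
# read instead of making us train it from scratch. The schema is invented
# purely for illustration.
meta_profile = {
    "goals": ["learn conversational Spanish", "run a half-marathon"],
    "preferences": {
        "language": "en",
        "notifications": "daily-digest",
        "content_depth": "long-form",
    },
    "requirements": {
        "privacy": "share-nothing-by-default",
        "accessibility": ["large-text"],
    },
}
```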

As our world becomes more complex, the power of meta will become more and more important.

#Meaningless #Crap

First published April 10, 2014 in Mediapost’s Search Insider

Everybody should have a voice – I get that. Thank goodness the web and social media have democratized publication. Because of that, the power to say what’s on our mind is just a click away. From this power, great things have come and will continue to come – the overthrow of tyrants, the quest for truth, freedom from oppression. I’m pretty sure those are all good things. Important things.

But I’m also pretty sure the signal-to-noise ratio in social media content is infinitesimal – verging on undetectable. For every post that moves humanity incrementally forward, there are thousands that drive us over the brink into mind-numbing mediocrity.

For example, Justin Bieber has 51 million followers and has tweeted 26,508 times. That, in case you’re wondering, works out to 1.35 trillion potential “Bieberisms” – 193 little Bieber-tweets for every man, woman and child on planet Earth. Here’s one of his finest: “Put your heart into everything you do.” Perhaps the Biebs would be better served by using his head a little bit too. But no matter, he tweets on, sharing his special brand of wisdom. No wonder over 70% of all tweets never get read.

And, for God’s sake – stop hashtagging everything! First of all, it only belongs on Twitter and Instagram. It’s not a universal punctuation mark. And it doesn’t belong in front of every word of your post! If you’re writing about something that falls under a topic category that people actually care about – then by all means slip a hashtag in there. For example:

“Witnessing special forces retaking capitol building in Kiev – #ukrainecrisis”

Or:

“Just discovered key gene in early detection of Alzheimer’s – #alzheimerresearch”

See how it works? You’re adding key content to a topic that people care about and may actually be searching for on Twitter. This is how not to use hashtags:

“Off to a funeral #selfie #zebra #sunglasses #bling #hairdown #polo #countrygirl #aero #dodge #ram #cute”

All I can say is #shoot #me.

The other problem is that with this diarrheic explosion of content flooding the web, it becomes impossible to sift through it all to find the things that are truly important. Generally, content filters use one of two criteria – recency or popularity. Recency is fine if you’re looking for breaking news. It’s a clearly understood parameter. Popularity, however, has some issues. The theory is that the wisdom of crowds can be relied on to push the best content to the top. But that’s not really how the wisdom of crowds works. Just because something is popular doesn’t necessarily mean it’s good. And it certainly doesn’t mean it’s important. All too often, it just means it panders to the lowest common denominator. Do we really want that as our filtering criterion? Should Kanye West and Keeping Up with the Kardashians set our cultural high-water mark?
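
For what it’s worth, a typical popularity filter is little more than a recency-decayed vote count. Here is a sketch of one common blend, modeled loosely on the well-known Hacker News formula (the gravity constant is illustrative):

```python
from datetime import datetime, timezone

def rank_score(upvotes: int, posted_at: datetime, gravity: float = 1.8) -> float:
    """Popularity divided by age raised to a 'gravity' exponent, so that
    popular-but-stale items sink below fresh ones. Note what's missing:
    nothing in the formula asks whether the content is any good."""
    age_hours = (datetime.now(timezone.utc) - posted_at).total_seconds() / 3600
    return upvotes / (age_hours + 2) ** gravity
```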

One last rant. “Epic” is not the right adjective to apply to concert tickets, Saturday nights at the club, bowls of chili or, when incorrectly combined with the noun “fail,” your company’s Christmas party. According to this post,

“the word epic should only be used to describe two or three things, ever. In fact, here’s a comprehensive list of all things epic: 1. Oceans 2. Lengthy Narratives 3. The Cosmos.”

That’s it.

Feel free to retweet if you wish. Or not. No one will read it anyway.