Why Our Brains Love TV

Forrester Research analyst Shar VanBoskirk has pegged 2019 as the year when digital ad spend will surpass TV, topping the $100 billion mark. This is momentous in a number of ways, but not really surprising. If you throw all digital marketing in a single bucket, it was a question of when, not if, it would finally surpass TV. What is more surprising to me is how resilient TV has proven to be as an advertising medium. After all, we’re only a little more than a decade away from the 100th anniversary of broadcast TV (which started in 1928). TV has been the king of the media mountain for a long time.

So, what is it about TV that has so captured us for so long? What is it about the medium that allows our brains to connect to it so easily?

The Two Most Social Senses – Sight and Sound

Even as digital overtakes broadcast and cable television, we’re still mesmerized by the format of TV. Our interaction with the medium has shifted in a few interesting ways, notably time shifting, new platforms to consume it on and binge watching, but our actual interaction with the format itself hasn’t changed very much, save for the continual improvements in fidelity. It’s still sight and sound delivered electronically. And for us, that seems to be a very compelling combination. Despite some thus-far failed attempts to introduce another sense or dimension into the sight/sound duopoly, our brains seem to naturally default back to a relatively stable format of sound and two-dimensional images.

It’s no coincidence that these are the same two senses we rely on most heavily to connect with the outside world. They allow us to scan our environments “at-a-distance,” picking up cues of potential threats or rewards that we can then use our other senses to interact with more intimately. Smell, taste and touch are usually “close-up” senses that are relied on only when sight and sound have given the “all-clear” signal to our brains. For this reason, our brains have some highly developed mechanisms that allow us to parse the world through sight and sound – particularly sight. For example, the fusiform gyrus is a part of our brain that is dedicated to categorizing forms we see and fitting them into categories our brain recognizes. It’s this part of our brain that allows us to recognize faces and fit them into understandable categories such as friends, enemies, family, celebrities, etc.

These are also the two senses we use most often in social settings. If it weren’t for sight and sound, our ability to interact with each other would be severely curtailed. This offers another clue. Television is a good fit with our need to socialize. Sight and sound are the channel inputs to empathy. Our mirror neurons are activated when we see somebody else doing something. That’s why the saying is “Monkey See, Monkey Do,” and not “Monkey Taste, Monkey Do.” These two senses are all we really need to build a fairly rich representation of the world and create emotional connections to it.

We Want Immersion, But Not Too Much Immersion

So, if the combination of sight and sound seems to be a good match with our mechanisms for understanding the world – why has “more” not proven to be “better?” Why, for instance, have 3D and Interactive TV not caught on to the extent forecast?

I think we’ve developed a comfortable balance with TV. Remember, sight and sound are generally used as “at-a-distance” parsers of our world. Because of the sheer volume of visual and auditory information coming through these channels, the brain has learned to filter input and only alert us when further engagement is required. If our brain had to process all the visual information available to it, it would overload to the point of breakdown. So while we want to be engaged in whatever we’re watching on TV, we aren’t looking to be totally immersed in it. This is why multi-screen, multi-tasking behaviors have emerged and are quickly becoming the norm while we watch TV. 3D and Interactive TV both add a dimension of focal attention that isn’t necessary to enjoy a TV show.

The Concept of “Durable” Media

It’s interesting that as technology advances, every so often a media format emerges that is what I would call “durable.” It’s information or entertainment presented in a format that is a good cognitive match for our preferences and abilities. Even if technology is capable of adding “more” to these media, over time it turns out that “more” isn’t perceived as “better.”

Books are perhaps the most durable of media. The basic format of a book has been digitized, but our interaction with a book doesn’t look much different than it did in Gutenberg’s day. It’s still printed words on a page. Television also appears to be a durable medium. The format itself is fairly stable. It’s the revenue models that are built around it that will evolve as time goes on.

Facebook at Work – Stroke of Genius or Act of Desperation?

So, with the launching of Facebook at Work, Facebook wants to become your professional networking platform of choice, does it? Well, speaking as a sample of one, I don’t think so. And it all comes down to one key reason that I’ve talked about in the past, but for some reason, Facebook doesn’t seem to get – social modality.

Social modality is not a tough concept to understand. I’m one person in my office, another on the couch. The things that interest me in the office have little overlap with the things that interest me when I’m “sofa-tose” (nodding into a state of minimal consciousness on overstuffed furniture). But it’s not just about interests. It’s about context. I think differently. I act differently. I react differently. And I want to keep those two states as separate as possible.

Facebook seems to understand the need for separation. They’re building out Facebook at Work as a separate entity. But it’s still Facebook, and when I’ve got my business persona on, I don’t even think of Facebook. Neither, apparently, does anyone else. In 2010, BranchOut tried to build a professional network layer on top of Facebook. Last summer, it changed its business model. The reason? A lack of users. When you think of work, you just don’t think of Facebook. In fact, there’s almost an instinctual revulsion to the idea. Mixing Facebook and work is a cultural taboo.

When we look at the technologies we use to mediate our social activities, different rules apply. It’s not just about features or functionality – it’s about what instinctively feels right. Facebook is trying to create a monolithic platform for social connecting, and that doesn’t seem to be where we’re heading. Rather than consolidating our social activity, it’s splintering across different tools and platforms. One reason is functionality. The other is that socially, we’re much too complex to fit into any one particular technological mold. I wrote a few months ago about the maturity continuum of social media. The final stage was to become a platform, which is exactly what Facebook is trying to do. But perhaps becoming a social media platform – at least in the sense that Facebook is attempting – isn’t possible. It could be that our social media personalities are too fractured to fit comfortably in any single destination.

Facebook’s revenue model depends on advertising, which depends on eyeballs. It’s a real estate play. Maybe to be successful, social has to be less about location and more about functionality. In other words, to become a social media platform, you have to be a utility, not a destination. Facebook seems to be trying to do both. According to an article in the Financial Times (registration required), Facebook at Work will offer functionality through chat, contact management and document collaboration, but it will do so on a site that “looks very much like Facebook,” including, one assumes, ads served from Facebook. By trying to attract eyeballs to drive revenue, Facebook won’t be able to avoid mixing modality, and therein lies the problem. I suspect Facebook at Work will join an ever-increasing string of Facebook failures.

LinkedIn isn’t perfect, but it has definitely established itself as the B-to-B platform of choice. It fits our sensibilities of what a professional social networking tool should be. And it doesn’t suffer from Facebook’s overly ambitious hubris. It hasn’t launched “LinkedIn at Home” – trying to become the social network platform for our non-work life. It knows what it is. We know what it is. Our social modality isn’t conflicted. Facebook is another matter. It wants to be all things social to all people. I suppose from a revenue point of view you can’t blame them, but there’s a reason I don’t invite my co-workers to my family reunion – or vice versa.

Someday Facebook will learn that lesson. I suspect it will probably be the hard way.

#AlexfromTarget – An Unexpected Consequence of Technology

Yes, I’m belatedly jumping on the #AlexfromTarget bandwagon, but it’s in service of a greater truth that I’m trying to illustrate. Last column, I spoke about the Unintended Consequences of Technology. I think this qualifies. And furthermore, this brings us full circle to Kaila Colbin’s original point, which started this whole prolonged discussion.

It is up to us to decide what is important, to create meaning and purpose. And, personally, I think we could do a better job than we’re doing now.

So, why did the entire world go ga-ga over a grocery bagger from Texas? What could possibly be important about this?

Well – nothing – and that’s the point. Thinking about important things is hard work. Damned hard work – if it’s really important. Important things are complex. They make our brains hurt. It’s difficult to pin them down long enough to plant some hooks of understanding in them. They’re like eating broccoli, or doing push-ups. They may be good for us, but that doesn’t make them any more fun.

Remember the Yir Yoront from my last column – the tribal society that was thrown into a tailspin by the introduction of steel axes? The intended consequence of that introduction was to make the Yir Yoront more productive. The axes did make the tribe more productive, in that they were able to do essential tasks more quickly, but the result was that the Yir Yoront spent more time sleeping.

Here’s the thing about technology. It allows us to be more human – and by that I mean the mixed bag of good and bad that defines humanity. It extends our natural instincts. It’s natural to sleep if you don’t have to worry about survival. And it’s also natural for young girls to gossip about adorable young boys. These are hard-wired traits. Deep philosophical thought is not a hard-wired trait. Humans can do it, but it takes conscious effort.

Here’s where the normal distribution curve comes in. Any genetically determined trait will have a normal distribution over the population, and how we apply new technologies will be no different: the vast majority of the population will cluster around the mean. But here’s the other thing – that “mean” is a moving target. As our brains “re-wire” and adapt to new technologies, the mean that defines typical behavior moves over time. We adopt strategies that incorporate our new technology-aided abilities, and this creates a new societal standard – and it is also human to follow the unwritten rules of society. The result is a cause-and-effect cycle: technologies enable new behaviors built on the foundations of human instinct, society determines whether those behaviors are acceptable, and if they are, they become the new “mean” of our behavioral bell curve. We bounce new behaviors off the backboard of society. So, much as we may scoff at the fangirls who gave “Alex” insta-fame – ultimately it’s not the girls’ fault, or technology’s. The blame lies with us. It also lies with Ellen DeGeneres, the New York Times, and the other barometers of societal acceptance that endorsed the phenomenon.
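Purely as an illustration of that moving-mean idea – a toy sketch, not a model of any real data, with every parameter invented – here is a small simulation in which a normally distributed “behavior” score drifts as individuals shift toward whatever norm society currently endorses:

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable

def simulate_drift(pop_size=10_000, periods=5, pull=0.3):
    """Toy model: each period, individuals move partway toward the
    behavior society endorses, dragging the population mean with them."""
    # Individual "behavior" scores start normally distributed around 0.
    pop = [random.gauss(0, 1) for _ in range(pop_size)]
    means = []
    for _ in range(periods):
        # The "endorsed" norm: arbitrarily, the average of all behavior
        # above the old midpoint (what society currently rewards).
        above = [x for x in pop if x > 0]
        endorsed = sum(above) / len(above)
        # Each individual shifts a random fraction of the way toward it.
        pop = [x + pull * random.random() * (endorsed - x) for x in pop]
        means.append(sum(pop) / pop_size)
    return means

means = simulate_drift()
print([round(m, 2) for m in means])  # the mean drifts upward period by period
```

The only point of the sketch is that a “normal” defined by the crowd is self-referential: once behavior shifts, the standard it is measured against shifts too.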

It’s human to be distracted by the titillating and trivial. It’s also human to gossip about it. There’s nothing new here. It’s just that these behaviors used to remain trapped within the limited confines of our own social networks. Now, however, they’re amplified through technology. It’s difficult to determine what the long-term consequences of this might be. Is Nicholas Carr right? Is technology leading us down the garden path to imbecility, forever distracted by bright, shiny objects? Or is our finest moment yet to come?

The Unintended Consequences of Technology

In last Friday’s Online Spin column, Kaila Colbin asks a common question when it comes to the noise surrounding the latest digital technologies: Who Cares? Colbin rightly points out that we tend to ascribe unearned importance to whatever digital technology we seem to be focused on at any given time. This is called, aptly enough, the focusing illusion, and in the words of Daniel Kahneman, who coined the term, “Nothing in life is as important as you think it is, while you are thinking about it.”

But there’s another side to this. How important are the things we aren’t thinking about? For example, because it’s difficult to wrap our minds around big picture consequences in the future, we tend not to think as much as we should about them. In the case of digital technology shifts such as the ones Kaila mentioned, what we should care about is the overall shift caused by the cumulative impact of these technologies, not the individual components that make up the wave.

When we introduce a new technology, we usually have some idea of the impact it will have. These are the intended consequences, and because we focus on them, they loom larger in our minds. But some things will catch us totally by surprise. These are called unintended consequences. We won’t know them until they happen, but when they do, we will very much care about them. To illustrate that point, I’d like to tell the story of the introduction of one technology that dramatically changed one particular society.

The Yir Yoront were a nomadic tribe in Australia that somehow managed to avoid significant contact with the western world until well into the 20th century. In Yir Yoront society, one of the most valuable things you could possess was a stone axe. The making of these axes took time and skill and was typically done by elder males. In return, these “axe-makers” were conferred special status in aboriginal society. Only a man could own an axe and if a woman or child needed one, they had to borrow it. A complex social network evolved around the ownership of axes.

In 1915 the Anglican Church established a mission in Yir Yoront territory. The missionaries brought with them a large supply of steel hatchets. They distributed these freely to any Yir Yoront that asked for them. The intended consequence was to make life easier for the tribe and trigger an improvement in living conditions.

As anthropologist Lauriston Sharp chronicled, steel axes spread rapidly through the Yir Yoront. But they didn’t spread evenly. Elder males held on to their stone axes, both as a symbol of their status and because of their distrust of the missionaries. It was the younger men, women and children who previously had to borrow stone axes that eagerly adopted the new steel axes. The steel axes were more efficient, and so jobs were done in much less time. But, to the missionaries’ horror, the Yir Yoront spent most of their extra leisure time sleeping.

Sleeping, however, was the least of the unintended consequences. Social structures, which had evolved over thousands of years, were dismantled overnight. Elders were forced to borrow steel axes from what would have been their social inferiors. People no longer attended important intertribal gatherings, which were once the exchange venues for stone axes. Traditional trading channels and relationships disappeared. Men began prostituting their daughters and wives in exchange for someone else’s steel axe. The very fabric of Yir Yoront society began unraveling as a consequence of the introduction of steel axes by the Anglican missionaries.

Now, one may argue that there were aspects of this culture that were overdue for change. A traditional Yir Yoront society was undeniably chauvinistic. But the point of this story is not to pass judgment. My only purpose here is to show how new technologies can bring massive and unanticipated disruption to a society.

Everett Rogers used the Yir Yoront example in his seminal book Diffusion of Innovations. In it, he said that introductions of new technologies typically have three components: Form, Function and Meaning. The first two tend to be understood and intended during the introduction. Both the Yir Yoront and the Anglican missionaries understood the form and function of the steel axe. But neither understood the meaning, because meaning is determined over time through the absorption of the technology into the receiving culture. This is where unintended consequences come from.

When it comes to digital technologies, we usually talk about form and function. We focus on what a technology is and what it will do. We seldom talk about what the meaning of a new technology might be. This is because form and function can be intentionally designed and defined. Meaning has to evolve. You can’t see it until it happens.

So, to return to Kaila’s question. Who cares? Specifically, who cares about the meaning of the new technologies we’re all voraciously adopting? If the story of the Yir Yoront is any lesson, we all should.

Evolved Search Behaviors: Take Aways for Marketers

In the last two columns, I first looked at the origins of the Golden Triangle, and then at how search behaviors have evolved in the last 9 years, according to a new eye tracking study from Mediative. In today’s column, I’ll try to pick out a few “so whats” for search marketers.

It’s not about Location, It’s About Intent

In 2005, search marketing was all about location. It was about grabbing a part of the Golden Triangle, and the higher, the better. The delta between scanning and clicks from the first organic result to the second was dramatic – a factor of 2 to 1! Similar differences were seen in the top paid results. It’s as if, given the number of options available on the page (usually between 12 and 18, depending on the number of ads showing), searchers used position as a quick and dirty way to filter results, reasoning that the higher the result, the better match it would be to their intent.

In 2014, however, it’s a very different story. Because the first scan is now used to find the most appropriate chunk, the importance of being high on the page is significantly lessened. And once the second scanning step has begun within a results chunk, there seems to be more vertical scanning and less lateral scanning. Mediative found that in some instances, it was the third or fourth listing in a chunk that attracted the most attention, depending on content, format and user intent. For example, in the heat map shown below, the third organic result actually got as many clicks as the first, capturing 26% of all the clicks on the page and 15% of the time spent on page. The reason could be that it was the only listing with the Google Ratings Rich Snippet, thanks to proper use of structured data markup. In this case, the information scent that promised user reviews was a strong match with user intent – but you would only know this if you knew what that intent was.

[Heat map image: Google-Ford-Fiesta]
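The column doesn’t include the actual markup, but the rich snippet mentioned above is typically earned with schema.org structured data. As a sketch only – the product name, rating value and review count here are all hypothetical – the JSON-LD payload looks something like this:

```python
import json

# Hypothetical schema.org AggregateRating markup -- the kind of structured
# data that can make a listing eligible for a ratings rich snippet.
# All values below are invented for illustration.
markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.4",
        "reviewCount": "89",
    },
}

# On a live page, this JSON is embedded in a
# <script type="application/ld+json"> tag so crawlers can parse it.
print(json.dumps(markup, indent=2))
```

The markup alone doesn’t guarantee the rich result – it only makes the listing eligible – but it is the kind of “proper use of structured data” that earned the extra attention in the study.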

This change in user scanning strategies makes it more important than ever to understand the most common user intents that would make someone turn to a search engine. What decision steps will they go through, and at which of those steps might they turn to a search engine? Would it be to discover a solution to an identified need, to find out more about a known solution, to help build a consideration set for direct comparisons, to look for one specific piece of information (i.e., a price) or to navigate to one particular destination, perhaps to order online? If you know why your prospects might use search, you’ll have a much better idea of what you need to do with your content to ensure you’re in the right place at the right time with the right content. Nothing shows this more clearly than the following comparison of heat maps. The one on the left was produced when searchers were given a scenario that required them to gather information. The one on the right resulted from a scenario where searchers had to find a site to navigate to. You can see the dramatic difference in scanning behaviors.

[Heat map image: Intent-compared-2]

If search used to be about location, location, location, it’s now about intent, intent, intent.

Organic Optimization Matters More than Ever!

Search marketers have been saying that organic optimization is dying for at least two decades now, ever since I got into this industry. Guess what? Not only is organic optimization not dead, it’s now more important than ever! In Enquiro’s original 2005 study, the top two sponsored ads captured 14.1% of all clicks. In Mediative’s 2014 follow-up, that number really didn’t change much, edging up to 14.5%. What did change was the relevance of the rest of the listings on the page. In 2005, all the organic results combined captured 56.7% of the clicks. That left about 29% of users either going to the second page of results, launching a new search or clicking on one of the side sponsored ads (which accounted for only a small fraction of the clicks). In 2014, the organic results, including all the different category “chunks,” captured 74.6% of the clicks. This leaves only 11% either clicking on the side ads (again, a tiny percentage), going to the second page or launching a new search. That means Google has upped its first-page success rate to an impressive 90%.

First of all, that means you really need to break onto the first page of results to gain any visibility at all. If you can’t do it organically, make sure you pay for presence. But secondly, it means that of all the clicks on the page, some type of organic result is capturing 84% of them. The trick is to know which type of organic result will capture the click – and to do that you need to know the user’s intent (see above). But you also need to optimize across your entire content portfolio. With my own blog, two of the biggest traffic referrers happen to be image searches.
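As a quick sanity check, the percentages quoted above fit together with simple arithmetic (the figures are as cited from the two studies; the rounding is mine):

```python
# Click shares from the 2005 Enquiro and 2014 Mediative studies cited above.
organic_2005, paid_2005 = 56.7, 14.1   # % of all clicks, 2005
organic_2014, paid_2014 = 74.6, 14.5   # % of all clicks, 2014

# "First-page success rate": searchers who clicked something on page one
# instead of paging forward or launching a new search.
first_page_2005 = organic_2005 + paid_2005   # roughly 71%
first_page_2014 = organic_2014 + paid_2014   # roughly 89% -- "an impressive 90%"

# Of the clicks that stay on the page in 2014, the organic share:
organic_share_2014 = 100 * organic_2014 / first_page_2014   # roughly 84%

print(round(first_page_2014, 1), round(organic_share_2014, 1))
```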

Left Gets to Lead

The left side of the results page has always been important, but the evolution of scanning behaviors now makes it vital. The heat map below shows just how important it is to seed the left-hand side of the results with information scent.

[Heat map image: Googlelefthand]

Last week, I talked about how the categorization of results has caused us to adopt a two-stage scanning strategy: the first stage to determine which “chunks” of result categories are the best match to intent, and the second to evaluate the listings in the most relevant chunks. The vertical scan down the left-hand side of the page is where we decide which “chunks” of results are the most promising. And, in the second scan, because of the improved relevancy, we often make the decision to click without a lot of horizontal scanning to qualify our choice. Remember, we’re only spending a little over a second scanning a result before we click. This is just enough to pick up the barest whiffs of information scent, and almost all of that scent comes from the left side of the listing. Look at the three choices above that captured the majority of scanning and clicks. The search was for “home decor store toronto.” The first popular result was a local result for the well-known brand Crate and Barrel. This reinforces how important brands can be if they show up on the left side of the result set. The second popular result was a website listing for another well-known brand – The Pottery Barn. The third was a link to Yelp – a directory site that offered a choice of options. In all cases, the scent found at the far left of the result was enough to capture a click. There was almost no lateral scanning to the right. When crafting titles, snippets and metadata, make sure you stack information scent to the left.

In the end, there are no magic bullets from this latest glimpse into search behaviors. It still comes down to the five foundational planks that have always underpinned good search marketing:

  1. Understand your user’s intent
  2. Provide a rich portfolio of content and functionality aligned with those intents
  3. Ensure your content appears at or near the top of search results, either through organic optimization or well run search campaigns
  4. Provide relevant information scent to capture clicks
  5. Make sure you deliver on what you promise post-click

Sure, the game is a little more complex than it was 9 years ago, but the rules haven’t changed.

Google’s Golden Triangle – Nine Years Later

Last week, I reviewed why the Golden Triangle existed in the first place. This week, we’ll look at how the scanning patterns of Google users have evolved over the past 9 years.

The reason I wanted to talk about Information Foraging last week is that it really sets the stage for understanding how the patterns have changed with the present Google layout. In particular, one thing was true for Google in 2005 that is no longer true in 2014 – back then, all results sets looked pretty much the same.

Consistency and Conditioning

If humans do the same thing over and over again and usually achieve the same outcome, we stop thinking about what we’re doing and we simply do it by habit. It’s called conditioning. But habitual conditioning requires consistency.

In 2005, the Google results page was a remarkably consistent environment. There were always 10 blue organic links, and usually up to three sponsored results at the top of the page. There may also have been a few sponsored results along the right side of the page. Google would also put what it determined to be the most relevant results, both sponsored and organic, at the top of the page. This meant that for any given search, no matter the user intent, the top 4 results should presumably include the most relevant one or two organic results and a few hopefully relevant sponsored options. If Google did its job well, there should be no reason to go beyond these top 4 results, at least in terms of a first click. And our original study showed that Google generally did do a pretty good job – over 80% of first clicks came from the top 4 results.

In 2014, however, we have a much different story. The 2005 Google was a one-size-fits-all solution. All results were links to a website. Now, not only do we have a variety of results, but even the results page layout varies from search to search. Google has become better at anticipating user intent and dynamically changes the layout on each search to be a better match for intent.

[Heat map image: google 2014 big]

What this means, however, is that we need to think a little more whenever we interact with a search page. Because the Google results page is no longer the same for every single search we do, we have exchanged consistency for relevancy, and conditioning isn’t as important a factor as it was in 2005. Now, we must adopt a two-stage foraging strategy, as shown in the heat map above. Our first step is to determine what categories – or “chunks” of results – Google has decided to show on this particular results page. This is done with a vertical scan down the left side of the results set, looking for cues on what each chunk offers – typically category headings or other quickly scanned labels – to determine which chunks are most promising in terms of information scent. Then, in the second step, we go back to the most relevant chunks and start scanning in a more deliberate fashion. Here, scanning behaviors revert to the “F”-shaped scan we saw in 2005, creating a series of smaller “Golden Triangles.”

What is interesting about this is that although Google’s “chunking” of the results page forces us to scan in two separate steps, it’s actually more efficient for us. The time spent scanning each result is half of what it was in 2005, 1.2 seconds vs. 2.5 seconds. Once we find the right “chunk” of results, the results shown tend to be more relevant, increasing our confidence in choosing them.  You’ll see that the “mini” Golden Triangles have less lateral scanning than the original. We’re picking up enough scent on the left side of each result to push our “click confidence” over the required threshold.

A Richer Visual Environment

Google also offers a much more visually appealing results page than it did 9 years ago. Then, the entire results set was text-based; there were no images shown. Now, depending on the search, the page can include several images, as the example below (a search for “New Orleans art galleries”) shows.

[Heat map image: Googleimageshot]

The presence of images has a dramatic impact on our foraging strategies. First of all, images can be parsed much more quickly than text. We can determine the content of an image in fractions of a second, where text requires a slower, more deliberate type of mental processing. This means that our eyes are naturally drawn to images. You’ll notice that the above heat map has a light green haze over all the images shown. This is typical of the quick scan we do immediately upon page entry to determine what the images are about. Heat in an eye tracking heat map is produced by duration of foveal focus, and this can be misleading when we’re dealing with images. The fovea centralis is, predictably, in the center of our eye, where our focus is sharpest. We use it extensively when reading, but it’s not as important when we’re glancing at an image. We can make a coarse judgement about what a picture is without focusing on it. We don’t need our fovea to know it’s a picture of a building, or a person, or a map. It’s only when we need to determine the details of a picture that we’ll recruit the fine-grained resolution of our fovea.

Our ability to quickly parse images makes it likely that they will play an important role in our initial orientation scan of the results page. We’ll quickly scan the available images looking for information scent. If an image does offer scent, it will also act as a natural entry point for further scanning. Typically, when we see a relevant image, we look in the immediate vicinity to find more reinforcing scent. We often see scanning hot spots on titles or other text adjacent to relevant images.

We Cover More Territory – But We’re Also More Efficient

So, to sum up, it appears that with our new two-step foraging strategy, we’re covering more of the page, at least on our first scan, but Google is offering richer information scent, allowing us to zero in on the most promising “chunks” of information on the page. Once we find them, we are quicker to click on a promising result.

Next week, I’ll look at the implications of this new behavior on organic optimization strategies.

Twitch – Another Example of a Frictionless Market

Twitch just sold for $1 billion. That’s not really news. We’ve become inured to the never-ending stream of tech acquisitions that instantly transform entrepreneurial techies into some of the richest people on the planet. No, what’s interesting about Twitch is what we find if we slow down long enough to think about how this particular startup managed to create $1 billion in value.

A billion dollars is a lot of money. If we looked back just 50 years, a billion dollars in assets would make a company number 40 on the Fortune 500. If Twitch were somehow teleported back to 1964, it would rank just eight slots under Procter and Gamble (assets worth $1.15 billion) and three slots above Sunoco (assets of $0.88 billion). Coca-Cola would be left in the dust with a mere $485 million in assets. Today a half billion dollars is chump change in Silicon Valley terms.

This becomes more amazing when you consider that Twitch is only 3 years old. And it really started as an accident.

Remember EDtv? Probably not. It was a pretty forgettable 1999 movie (based on a 1994 Quebec film called Louis 19, King of the Airwaves) starring Matthew McConaughey. The idea was that Ed would be followed by cameras 24 hours a day, 7 days a week, making his life a reality TV show. 1998’s The Truman Show had a similar theme (albeit with better ratings). Anyway, the point made in both movies was that an average life, if televised, could be entertaining enough to make people watch. In 2006, Emmett Shear and Justin Kan decided to test the premise. They launched Justin.tv. Soon they invited others to simulcast their lives as well.

What Kan and Shear did, although they probably weren’t intending to at the time, was create a platform that allowed anyone to be a real-time broadcaster with zero transactional costs. They created a perfect market for live TV. Last week I talked about AirBnB, TripAdvisor and VRBO.com creating a more perfect market for tourism. The key characteristic of a perfect market is that barriers to entry are reduced to zero, turning the market into an emergent sandbox from which new things tend to pop up. And that’s exactly what happened with Twitch.

Shear and Kan found that one group in particular embraced the idea of livecasting – gamers. They could communicate with other gamers, but they could also show off their mad gaming skills. Using the Justin.tv platform, Twitch was launched for the gaming industry in 2011. And thanks to Twitch, gaming has become a spectator sport – at a massive scale.

Twitch’s “stars” – like 30-year-old Tessa Brooks, who goes by “Tessachka” and broadcasts an average of 42 hours of programming a week – post their schedules so that their audiences can tune in. Twitch has about 55 million viewers per month who consume over 16 billion minutes of video programming. According to SocialBlade.com, this month, “Riotgames” is the top ranked Twitch broadcaster, with almost a million followers and over 18 million channel views.

Again, those are big numbers. A network show that pulls in 18 million viewers would be number 5 in the Nielsen ratings. And while Netflix’s House of Cards or Orange is the New Black may have made waves at the Emmys, The Atlantic estimates that only 2 – 3 million people watch a newly posted episode in the first week. On a good week, Riotgames could blow that away without twitching a trigger finger.

Twitch not only created a platform that generates audiences, it also generated a marketplace. Where there are eyeballs, there’s revenue potential. Twitch cuts its gamers in on a share of the advertising revenue. I couldn’t find numbers on how lucrative this could be, but I suspect Justin may be able to quit his day job.

Like I said, the Twitch story is interesting, but what is vastly more interesting is the market dynamics that it has unleashed. Amazon’s $1 billion bid is not for the technology. It’s for the community and the market that comes with that community. When it comes to leveraging the potential of zero transactional cost markets, Amazon knows a thing or two. And one of the things it knows is that in frictionless markets, if you can navigate the turbulence, tremendous value can be created in an amazingly short time. Say, for instance, $1 billion in just 3 years. It took Procter and Gamble 127 years to be worth that much.

Technology is Moving Us Closer to a Perfect Market

I have two very different travel profiles. When I travel on business, I usually stick with the big chains, like Hilton or Starwood. The experience is less important to me than predictability. I’m not there for pleasure; I’m there to sleep. And, because I travel on business a lot (or used to), I have status with them. If something goes wrong, I can wave my Platinum or Diamond guest card around and act like a jerk until it gets fixed.

But, if I’m traveling for pleasure, I almost never stay in a chain hotel. In fact, more and more, I stay in a vacation rental house or apartment. It’s a little less predictable than your average Sheraton or Hampton Inn, but it’s almost always a better value. For example, if I were planning a last-minute getaway to San Francisco for Labor Day weekend, I’d be shelling out just under $400 for a fairly average hotel room at the Hilton by Union Square. But for about the same price, I could get an entire 4-bedroom house that sleeps 8 just two blocks from Golden Gate Park. And that was with just a quick search on AirBnB.com. I could probably find a better deal with the investment of a few minutes of my time.

Travel is just one of the markets that technology has made more perfect. And when I say “perfect” I use the term in its economic sense. A perfect market has perfect competition, which means that the barriers to entry have been lowered and most of the transactional costs have been eliminated. The increased competition lowers prices to a sustainable minimum. At that point, the market reaches a state called Pareto optimality, which means that no participant’s position can be improved without making another participant worse off.

Whether a perfect market is a good thing or not depends on your perspective. If you’re a long-term participant in the market and your goal is to make the biggest profit possible, a perfect market is the last thing you want. If you’re a new entrant to the market, it’s a much rosier story – any shifts that take the market closer to Pareto optimality will probably be to your benefit. And if you’re a customer, you’re in the best position of all. Perfect markets lead inevitably to better value.

Since the advent of VRBO.com and, more recently, AirBnB.com, the travel marketplace has moved noticeably closer to being perfect. Sites like these, along with travel review aggregators like TripAdvisor.com, have significantly reduced the transaction costs of the travel industry. The first wave was the reduction of search costs. Property owners were able to publish listings in a directory that made it easy to search and filter options. Then, the publishing of reviews gave us the confidence we needed to stray beyond the predictably safe territory of the big chains.

But, more recently, a second wave has further reduced transaction costs for independent vacation property owners. I was recently talking to a cousin who rents his flat in Dublin through AirBnB, which takes all the headaches of vacation property management away in return for a cut of the action. He was up and running almost immediately and has had no problem renting his flat during the weeks he makes it available. He found the barriers to entry to be essentially zero. A cottage industry of property managers and key exchange services has sprung up around the AirBnB model.

What technology has done to the travel industry is essentially turn it into a Long Tail business model. As Chris Anderson pointed out in his book The Long Tail, Long Tail markets need scale-free networks. Scale-free networks only work when transaction costs are eliminated and entry into the market is free of friction. When this happens, the Power Law distribution still stays in place, but the tail becomes longer. The Long Tail of tourism now includes millions of individually owned vacation properties. For example, AirBnB has almost 800 rentals available in Dublin alone. According to Booking.com, that’s about 7 times the total number of hotels in the city.

Another thing that happens is that, over time, the tail becomes fatter. More business moves from the head to the tail. The Pareto Principle states that in Power Law distributions, 20% of the participants get 80% of the business. Online, the ratio is closer to 72/28.
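That head-to-tail shift can be illustrated with a toy model. The sketch below uses invented numbers (`n_incumbents` stands in for a city’s original hotel stock), assumes sales fall off by rank according to a simple power law, and shows how a fixed group of incumbents loses overall share as thousands of long-tail listings enter the market:

```python
import numpy as np

def incumbent_share(n_total, n_incumbents=100, exponent=1.0):
    """Share of total sales held by the top `n_incumbents` sellers when
    sales decay by rank under a power law (a Zipf-style distribution)."""
    ranks = np.arange(1, n_total + 1, dtype=float)
    sales = ranks ** -exponent            # rank 1 sells the most, and so on
    return sales[:n_incumbents].sum() / sales.sum()

# A closed market of 100 hotels vs. the same 100 hotels plus 9,900 rentals:
print(round(incumbent_share(100), 2))     # 1.0 (incumbents have the whole market)
print(round(incumbent_share(10_000), 2))  # roughly 0.53 (the tail claims almost half)
```

With a steeper exponent the head holds on to more of the market; the point is simply that lengthening the tail, with the power law intact, moves real volume out of the head.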

These shifts in the market are more than just interesting discussion topics for economists. They mark a fundamental change in the rules of the game. Markets that are moving towards perfection remove the advantages of size and incumbency and reward nimbleness and adaptability. They also, at least in this instance, make life more interesting for customers.

The Human Stories that Lie Within Big Data

If I wanted to impress upon you the fact that texting and driving is dangerous, I could tell you this:

In 2011, at least 23% of auto collisions involved cell phones. That’s 1.3 million crashes, in which 3331 people were killed. Texting while driving makes it 23 times more likely that you’ll be in a car accident.

Or, I could tell you this:

In 2009, Ashley Zumbrunnen wanted to send her husband a message telling him “I love you, have a good day.” She was driving to work and as she was texting the message, she veered across the centerline into oncoming traffic. She overcorrected and lost control of her vehicle. The car flipped and Ashley broke her neck. She is now completely paralyzed.

After the accident, Zumbrunnen couldn’t sit up, dress herself or bathe. She was completely helpless. Now a divorced single mom, she struggles to look after her young daughter, who recently said to her, “I like to go play with your friends, because they have legs and can do things.”

The first example gave you a lot more information. But the second example probably had more impact. That’s because it’s a story.

We humans are built to respond to stories. Our brains grasp messages more easily when they come wrapped in a narrative arc. We do much less well with numbers. Numbers are an abstraction, and our brains struggle with abstractions, especially when the numbers get big.

One company, Monitor360, is bringing the power of narratives to the world of big data. I chatted with CEO Doug Randall recently about Monitor360’s use of narratives to make sense of Big Data.

“We all have filters through which we see the world. And those filters are formed by our experiences, by our values, by our viewpoints. Those are really narratives. Those are really stories that we tell ourselves.”

For example, I suspect the things that resonated with you in Ashley’s story were the reason for the text (telling her husband she loved him), the irony that the marriage eventually failed after her accident, and the pain she undoubtedly felt when her daughter said she likes playing with other moms who can still walk. None of those things adds anything to our knowledge about the incidence of texting-and-driving accidents, but they all strike us at a deeply emotional level, because we can picture ourselves in Ashley’s situation. We empathize with her. And that’s what a story is: a vehicle to help us understand the experiences of another.

Monitor360 uses narratives to tap into these empathetic hooks that lie in the mountain of information being generated by things like social media. It goes beyond abstract data to try to identify our beliefs and values. And then it uses narratives to help us make sense of our market. Monitor360 does this with a unique combination of humans and machines.

“A computer can collect huge amounts of data, and the computer can even sort that data. But ‘sense making’ is still very, very difficult for computers to do. So human beings go through that information, synthesize that information and pull out what the underlying narrative is.”

Monitor360 detects common stories in the noisy buzz of Big Data. In the stories we tell, we indicate what we care about.

“This is what’s so wonderful about Big Data. The Data actually tells us, by volume, what’s interesting. We’re taking what are the most often talked about subjects…the data is actually telling us what those subjects are. We then go in and determine what the underlying belief system in that is.”
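The machine half of that division of labor, surfacing the highest-volume subjects before any human sense-making begins, can be sketched in a few lines. This is a hypothetical illustration (the posts and the stopword list are invented), not Monitor360’s actual pipeline:

```python
from collections import Counter

# Invented posts standing in for a social media stream.
posts = [
    "love the new commute by bike",
    "traffic was brutal again today",
    "bike lanes make my commute so much better",
    "stuck in traffic for an hour",
    "commute traffic is the worst part of my day",
]

# Words too common to carry any subject signal.
stopwords = {"the", "by", "was", "again", "today", "so", "much", "in",
             "for", "an", "is", "of", "my", "a", "make", "new", "love"}

# Volume as the signal: count how often each remaining word appears.
subjects = Counter(
    word
    for post in posts
    for word in post.lower().split()
    if word not in stopwords
)

# The highest-volume subjects surface first; a human analyst would then
# ask what underlying belief system those subjects reveal.
print(subjects.most_common(3))
```

Here the data would “tell us” that commuting and traffic dominate the conversation; the sense-making step, deciding what people believe about their commutes, is exactly the part Randall says still needs a human.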

Monitor360’s realization that it’s the narratives we care about is an interesting approach to Big Data. It’s also encouraging to know that they’re not trying to eliminate human judgment from the equation. Empathy is still one area where we can trump computers.

At least for now.

Social Media: Matching Maturity to the Right Business Model

Last week, I talked about the maturity continuum of social media. This week, I’d like to recap and look at the business model implications of each phase.

Phase One – It’s a Fad. Here, we use a new social media tool simply because it is new. This is a classic early adopter model. The business goal here is to drive adoption as fast and far as possible, hoping that acceptance will go viral. There is no revenue opportunity at this point, as you don’t want to do anything to slow adoption. It’s all about getting it into as many hands as possible.

Phase Two – It’s a Statement. You use the tool because it says something about who you are. Revenue opportunities are still limited, but this is the time for cross-promotion with brands that make a similar statement. Messaging and branding become essential at this point. You have to carve a unique niche for yourself and hope that it resonates with segments of your market. The goal is to create an emotional connection with your audience to help shore up loyalty in the next phase. This is the time to start laying the foundations of a user community.

Phase Three – It’s a Tool. You use it because it offers the best functionality for a particular task. Here, things have to get more practical. This is where user testing and new feature development have to move as quickly as possible. Revenue opportunities at this point are possible, depending on the usage profile of your app. If there’s a high frequency of usage, advertising sponsorship is a possibility. But be aware that this will bring inevitable pushback from your users, especially if there has been no advertising up to this point. It shakes the loyalty of the “Statement” users, as they feel you’re selling out. The functionality will have to be rock solid to prevent attrition of your user base during this phase. Essentially, it will have to be good enough to “lock out” the competition. But there’s another goal here as well. Introducing new functionality allows you to move beyond being a one-trick pony. This is where you have to start moving from being a tool to the next phase…

Phase Four – It’s a Platform. If you’ve successfully transitioned to being a social media platform, you should have the opportunity to finally turn a profit. The stability of the revenue model will be wholly dependent on how high you’ve been able to raise the cost of switching. The more “sticky” your platform is, the more stable your revenue will be. But, be aware that using advertising as your revenue channel is fraught with issues in the world of social media. Unlike search, where we are used to dealing with a crystal clear indication of consumer interest, social media usage seldom comes tied to clear buyer intent. You have to worry about modality and social norms, along with the erosion of your “cool” factor.

In the last two phases, the best revenue opportunities should be directly tied to functionality and intent. The closer you can align your advertising message to the intent of the users “in the moment,” the more stable your revenue model will be. In fact, if you can introduce tools that are focused on users when they are in social modes where commercial messaging is appropriate, you will find revenue opportunities dropping into your lap. For example, if people use LinkedIn to crowdsource opinions on B2B purchases, you have a natural monetization opportunity. If they’re using your app to post pictures of their cat playing a xylophone, you’re going to find it much harder to make a buck. Not impossible, but pretty damned difficult.