Why SEO Never Lived Up to Its Potential

IAB Canada President Chris Williams asked me a great question last week.

We had just finished presenting the results of the new eye tracking study I told you about in the last three columns. I had mentioned that about 84% of all the clicks on the page in the study were on some type of non-paid result. I had also polled the audience of some 400-plus Internet marketers about how many were doing some type of organic optimization. A smattering of hands (which, in case you’re wondering, is somewhere south of a dozen, or about 3% of the audience) went up. Williams picked up on the disconnect right away. “We have a multi-billion dollar interactive advertising industry here in Canada, and you’re telling me that (on search at least) it only represents about 16% of the potential traffic? Why isn’t SEO a massive industry?”

Like I said – great question. I wish I had responded with a great answer. But the best I could do fell well short of the mark: “Uhh… well… (pick up slight whining tone at this point)… SEO is just really, really hard!”

Okay, maybe I was slightly more eloquent than that – but the substance of my reply was essentially that flimsy. SEO is a backbreaking way to earn a living, whether you’re a lone consultant, an agency or an in-house marketer.

Coincidentally, I was also on an inaugural call last week with a dear friend of mine who asked me to serve on the advisory board of his successful digital agency. I asked if they offered SEO services. I got the same answer from him: SEO was just too hard to make profitable. They dropped it from their services portfolio a few years ago.

It Was a Case of Showing Search the Money

The potential value of SEO hasn’t changed in the almost 20 years since I started in this biz. In fact, it’s probably greater than ever. But SEO never seems to gain traction. The reason becomes clear when you start following the money. Goto.com (which became Overture, which was swallowed by Yahoo) sealed SEO’s fate when it started auctioning off search ads in 1998. Google eventually followed suit in 2000, and the rest, along with SEO, was history. Even devout SEOers (myself included) eventually followed the money trail to the paid side of the house.

The reasons why were abundantly and painfully clear in one particular example. We had the SEO contract with one Fortune 500 brand that brought in about $300K annually. At the time, it was our biggest SEO contract, but it was also resource-intensive. We had an entire team working on it. We did well, securing a number of first-page rankings for some very high-traffic terms. Based on what analytics we had, it appeared that SEO was driving about 90% of the brand’s search traffic and was converting substantially better than any other traffic source, including paid search. This translated into hundreds of millions of dollars in business yearly. But we could never seem to grow our contract beyond that $300K ceiling.

Paid search was another story. From fairly humble beginnings, that same brand became one of Google’s top advertisers, spending over $30 million per year. Managing that spend became a multimillion-dollar account. Unfortunately, it wasn’t our account. It belonged to another agency – a much smarter and more profitable agency.

Why We Got Pigeonholed with SEO

If, as a service provider, you live and die by SEO, it’s probable that you’ll end up dying by SEO. Here’s why. To gain any traction you need to have influence over almost every aspect of the business. SEO has to become systemic. It has to be baked into the way an organization does business. It can’t be done as window dressing.

Most organizations don’t get that. They get tantalized by initial easy wins – things like cleaning up code, improving crawlability and doing some basic content optimization. Organic traffic skyrockets and everyone cheers. Life is good. But then it gets hard. The next step means rolling up your sleeves and diving deep into the guts of the organization. And if that organization isn’t ready to open the kimono to the SEO consultant at all levels, you hit a brick wall. This is typically where the organization falls prey to more unscrupulous SEO promises and practices from other vendors, which invariably get slammed by a future algo update. And that brings us to the last challenge for SEO.

Flip Your SEO Coin

Even the best SEOers can get blindsided by Google. A tweak in an algorithm or a shift in ranking factors can drop you like a rock from the first page. And if the recent study showed anything, it’s that you can’t afford to drop off the first page. Traffic can go from a roar to a whisper overnight. That’s tough for the marketing department of an organization to swallow. People in the C-suite who sign off on a sizable SEO contract have a tough time understanding why their investment suddenly got flushed down Google’s drain, perhaps never to resurface. They love control, and SEO offers anything but. As important as SEO is, it’s not predictable. You can’t bank on it.

So Chris…thanks for the question. Like I said, it was a really good one. And I hope this is a little better answer than the one I came up with on the spot.

Evolved Search Behaviors: Takeaways for Marketers

In the last two columns, I first looked at the origins of the original Golden Triangle, and then at how search behaviors have evolved in the last 9 years, according to a new eye tracking study from Mediative. In today’s column, I’ll try to pick out a few “so whats” for search marketers.

It’s Not About Location, It’s About Intent

In 2005, search marketing was all about location. It was about grabbing a part of the Golden Triangle, and the higher, the better. The delta between scanning and clicks from the first organic result to the second was dramatic – a factor of 2 to 1! Similar differences were seen in the top paid results. It’s as if, given the number of options available on the page (usually between 12 and 18, depending on the number of ads showing), searchers used position as a quick and dirty way to filter results, reasoning that the higher the result, the better match it would be to their intent.

In 2014, however, it’s a very different story. Because the first scan is now to find the most appropriate chunk, the importance of being high on the page is significantly lessened. And once the second stage of scanning has begun, within a results chunk there seems to be more vertical scanning and less lateral scanning. Mediative found that in some instances, it was the third or fourth listing in a chunk that attracted the most attention, depending on content, format and user intent. For example, in the heat map shown below, the third organic result actually got as many clicks as the first, capturing 26% of all the clicks on the page and 15% of the time spent on it. The likely reason: it was the only listing showing Google’s ratings rich snippet, earned through the proper use of structured data markup. In this case, the information scent that promised user reviews was a strong match with user intent – but you would only know this if you knew what that intent was.

[Heat map: Google results for a Ford Fiesta search]
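As an aside, for anyone wondering what “proper use of structured data markup” actually looks like: here’s a minimal sketch, with purely hypothetical values rather than the markup from the listing in the study, of the schema.org JSON-LD that can earn a ratings rich snippet. (I’ve wrapped it in a short Python script just to keep the example self-contained.)

```python
import json

# Hypothetical schema.org markup for a product listing. The rating values
# below are placeholders, not data from the listing in the study.
markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Product",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.4",   # the star rating shown in the rich snippet
        "reviewCount": "212"    # the review count shown alongside the stars
    }
}

# The JSON-LD is embedded in the page inside a script tag:
print('<script type="application/ld+json">')
print(json.dumps(markup, indent=2))
print("</script>")
```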

This change in user scanning strategies makes it more important than ever to understand the most common intents that would make users turn to a search engine. What decision steps will they go through, and at which of those steps might they turn to a search engine? Would it be to discover a solution to an identified need, to find out more about a known solution, to help build a consideration set for direct comparisons, to look for one specific piece of information (i.e., a price), or to navigate to one particular destination, perhaps to order online? If you know why your prospects might use search, you’ll have a much better idea of what you need to do with your content to ensure you’re in the right place at the right time with the right content.

Nothing shows this more clearly than the following comparison of heat maps. The one on the left was produced when searchers were given a scenario that required them to gather information. The one on the right resulted from a scenario where searchers had to find a site to navigate to. You can see the dramatic difference in scanning behaviors.

[Heat maps compared: information-gathering scenario (left) vs. navigation scenario (right)]

If search used to be about location, location, location, it’s now about intent, intent, intent.

Organic Optimization Matters More than Ever!

Search marketers have been saying that organic optimization is dying for almost two decades now, ever since I got into this industry. Guess what? Not only is organic optimization not dead, it’s now more important than ever! In Enquiro’s original 2005 study, the top two sponsored ads captured 14.1% of all clicks. In Mediative’s 2014 follow-up, that number really didn’t change much, edging up to 14.5%. What did change was the relevance of the rest of the listings on the page. In 2005, all the organic results combined captured 56.7% of the clicks. That left about 29% of users either going to the second page of results, launching a new search or clicking on one of the side sponsored ads (which accounted for only a small fraction of the clicks). In 2014, the organic results, including all the different category “chunks,” captured 74.6% of the clicks. That leaves only about 11% either clicking on the side ads (again, a tiny percentage), going to the second page or launching a new search. In other words, Google has upped its first-page success rate to roughly 90%.

First of all, that means you really need to break onto the first page of results to gain any visibility at all. If you can’t do it organically, make sure you pay for presence. But secondly, it means that of all the clicks on the page, some type of organic result is capturing 84% of them. The trick is to know which type of organic result will capture the click – and to do that, you need to know the user’s intent (see above). But you also need to optimize across your entire content portfolio. On my own blog, two of the biggest traffic referrers happen to be image searches.
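If you want to see where that 84% figure comes from, the back-of-envelope arithmetic from the study numbers cited above is simple:

```python
# Quick check using the 2014 study figures cited above.
top_ads = 14.5   # % of all clicks going to the top sponsored ads
organic = 74.6   # % of all clicks going to organic results (all "chunks")

first_page = top_ads + organic
print(f"First-page success rate: {first_page:.1f}%")          # ~89%, call it 90%
print(f"Second page or new search: {100 - first_page:.1f}%")  # ~11%

# Organic share of the clicks that happen on the page:
print(f"Organic share of on-page clicks: {organic / first_page:.0%}")  # ~84%
```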

Left Gets to Lead

The left side of the results page has always been important, but the evolution of scanning behaviors now makes it vital. The heat map below shows just how important it is to seed the left-hand side of your results with information scent.

[Heat map: scanning concentrated down the left-hand side of the results page]

Last week, I talked about how the categorization of results has caused us to adopt a two-stage scanning strategy: the first stage to determine which “chunks” of result categories are the best match to intent, and the second to evaluate the listings in the most relevant chunks. The vertical scan down the left-hand side of the page is where we decide which “chunks” of results are the most promising. And in the second scan, because of the improved relevancy, we often make the decision to click without a lot of horizontal scanning to qualify our choice. Remember, we’re only spending a little over a second scanning a result before we click. This is just enough to pick up the barest whiffs of information scent, and almost all of that scent comes from the left side of the listing.

Look at the three choices above that captured the majority of scanning and clicks. The search was for “home decor store toronto.” The first popular result was a local result for the well-known brand Crate and Barrel. This reinforces how important brands can be if they show up on the left side of the result set. The second popular result was a website listing for another well-known brand – Pottery Barn. The third was a link to Yelp – a directory site that offered a choice of options. In all cases, the scent found at the far left of the result was enough to capture a click. There was almost no lateral scanning to the right. When crafting titles, snippets and metadata, make sure you stack information scent to the left.

In the end, there are no magic bullets from this latest glimpse into search behaviors. It still comes down to the five foundational planks that have always underpinned good search marketing:

  1. Understand your user’s intent
  2. Provide a rich portfolio of content and functionality aligned with those intents
  3. Ensure your content appears at or near the top of search results, either through organic optimization or well-run search campaigns
  4. Provide relevant information scent to capture clicks
  5. Make sure you deliver on what you promise post-click

Sure, the game is a little more complex than it was 9 years ago, but the rules haven’t changed.

Google’s Golden Triangle – Nine Years Later

Last week, I reviewed why the Golden Triangle existed in the first place. This week, we’ll look at how the scanning patterns of Google users have evolved in the past 9 years.

The reason I wanted to talk about Information Foraging last week is that it really sets the stage for understanding how the patterns have changed with the present Google layout. In particular, one thing was true for Google in 2005 that is no longer true in 2014 – back then, all result sets looked pretty much the same.

Consistency and Conditioning

If humans do the same thing over and over again and usually achieve the same outcome, we stop thinking about what we’re doing and we simply do it by habit. It’s called conditioning. But habitual conditioning requires consistency.

In 2005, the Google results page was a remarkably consistent environment. There were always 10 blue organic links, and usually up to three sponsored results at the top of the page. There might also have been a few sponsored results along the right side of the page. And Google would put what it determined to be the most relevant results, both sponsored and organic, at the top of the page. This meant that for any given search, no matter the user intent, the top 4 results should presumably include the most relevant one or two organic results and a few hopefully relevant sponsored options. If Google did its job well, there should be no reason to go beyond these top 4 results, at least in terms of a first click. And our original study showed that Google generally did do a pretty good job – over 80% of first clicks came from the top 4 results.

In 2014, however, we have a much different story. The 2005 Google was a one-size-fits-all solution: all results were links to a website. Now, not only do we have a variety of result types, but even the results page layout varies from search to search. Google has become better at anticipating user intent and dynamically changes the layout for each search to better match that intent.

[Heat map: a 2014 Google results page]

What this means, however, is that we need to think a little more whenever we interact with a search page. Because the Google results page is no longer the same for every single search we do, we have exchanged consistency for relevancy. This means that conditioning isn’t as important a factor as it was in 2005. Now, we must adopt a two-stage foraging strategy. This is shown in the heat map above. Our first foraging step is to determine what categories – or “chunks” of results – Google has decided to show on this particular results page. This is done with a vertical scan down the left side of the result set. In this scan, we’re looking for cues about what each chunk offers – typically in category headings or other quickly scanned labels – to determine which chunks are most promising in terms of information scent. Then, in the second step, we go back to the most relevant chunks and start scanning in a more deliberate fashion. Here, scanning behaviors revert to the “F”-shaped scan we saw in 2005, creating a series of smaller “Golden Triangles.”

What’s interesting is that although Google’s “chunking” of the results page forces us to scan in two separate steps, it’s actually more efficient for us. The time spent scanning each result is half of what it was in 2005: 1.2 seconds vs. 2.5 seconds. Once we find the right “chunk” of results, the listings shown tend to be more relevant, increasing our confidence in choosing them. You’ll see that the “mini” Golden Triangles have less lateral scanning than the original. We’re picking up enough scent on the left side of each result to push our “click confidence” over the required threshold.

A Richer Visual Environment

Google also offers a much more visually appealing results page than it did 9 years ago. Then, the entire result set was text-based; there were no images shown. Now, depending on the search, the page can include several images, as the example below (a search for “New Orleans art galleries”) shows.

[Heat map: Google results for “New Orleans art galleries”]

The presence of images has a dramatic impact on our foraging strategies. First of all, images can be parsed much more quickly than text. We can determine the content of an image in fractions of a second, where text requires a much slower and more deliberate type of mental processing. This means that our eyes are naturally drawn to images. You’ll notice that the above heat map has a light green haze over all the images shown. This is typical of the quick scan we do immediately upon page entry to determine what the images are about. Heat in an eye tracking heat map is produced by duration of foveal focus. This can be misleading when we’re dealing with images. The fovea centralis is, predictably, in the center of our eye, where our focus is the sharpest. We use it extensively when reading, but it’s not as important when we’re glancing at an image. We can make a coarse judgement about what a picture is without focusing on it. We don’t need our fovea to know it’s a picture of a building, or a person, or a map. It’s only when we need to determine the details of a picture that we’ll recruit the fine-grained resolution of our fovea.

Our ability to quickly parse images makes it likely that they will play an important role in our initial orientation scan of the results page. We’ll quickly scan the available images looking for information scent. If an image does offer scent, it will also act as a natural entry point for further scanning. Typically, when we see a relevant image, we look in the immediate vicinity to find more reinforcing scent. We often see scanning hot spots on titles or other text adjacent to relevant images.

We Cover More Territory – But We’re Also More Efficient

So, to sum up, it appears that with our new two-step foraging strategy, we’re covering more of the page, at least on our first scan. But Google is offering richer information scent, allowing us to zero in on the most promising “chunks” of information on the page. Once we find them, we are quicker to click on a promising result.

Next week, I’ll look at the implications of this new behavior on organic optimization strategies.

A Prospect Ignored Isn’t Really a Prospect

I’ve ranted about this before and – oh yes – I shall rant again!

But first – the back-story.

I needed some work done at a property I own. I found three contractors online and reached out to each of them to get a quote.

Cue crickets.

No response. Nothing! So a few days later, I politely followed up with each to prod the process along. Again, nothing. Finally, after 4 weeks of repeated e-nagging, one coughed up a quote. Most of the details were wrong, but at least someone at the other end was responding with minimal signs of consciousness.

Fast-forward 2 months. The work is still not done. At this point, I’m still trying to convey the specifics of the job and get an estimated timeline. If I had an option, I’d take it. But the sad fact is, as spotty as the communication is with my contractor of choice, it’s still better than with his competitors. One never did respond, even after a number of emails and voicemails. Another finally sent a quote, but it was obvious he didn’t want the work. Fair enough. If the laws of supply and demand are imbalanced this much in their favor, who am I to fight it?

But here’s the thing. Market balances can change on a dime. Someday I’ll be in the driver’s seat, and they’ll be scrambling to line up work to stay in business. And when they reach out to their contact list, a lot of those contacts will respond with an incredulous WTF. If you didn’t want my business when I needed you, why would you think I’d give it to you when you need me? A prospect spurned has a long memory for the specifics of said spurning. So, Mr. (or Ms.) Contractor, you can go take a flying leap.

If you’re going to use online channels to build your business, don’t treat them like a tap you can turn on and off at your discretion. Your online prospects have to be nurtured. If you can’t take any new business on, that’s fine. But at least have enough respect for them to send a polite response explaining why you can’t do the work. As long as we prospects are treated with respect, you’d be amazed at how reasonable we can be. Perhaps we can schedule the job for when you do have time. At the very least, we won’t walk away from the interaction with a bitter taste that will linger for years to come.

In 2005, Benchmark Portal did a study comparing response rates for email requests. The results were discouraging. Over 50% of SMBs never responded at all. Only a small fraction managed to respond within 24 hours of the request.

I would encourage you to do a little surreptitious checking on your own response rates. Prospects contacting you need your help, and none of us likes to hear our pleas for help go unanswered. 24 hours may seem like a reasonable time frame to you, but if you’re on the other end, it’s more than enough time for enthusiasm to cool dramatically. Make it someone’s job to field online requests, and set a 4-hour response time limit. I’m not talking about an auto-generated generic email here. I’m talking about a personalized response that makes it clear someone has taken the time to read the request and is working on it. Also give a clear indication of how long it will take to follow up with the required information.

Why are these initial responses so critical? It’s not just to keep your field of potential prospects green and growing. It’s also because we prospects use something called “signaling” to judge future interactions with a business. When we reach out to a new business we find online, we have no idea what it will be like to be their customer. We don’t have access to that information. So we use things we do know as a proxy. These things provide “signals” to help us fill in the blanks in our available information. An example would be hiring new employees: we don’t know how the person we’re interviewing will perform as an employee, so we look for certain things in a resume or an interview to act as signals that the candidate will perform well on the job if hired.

If I’m a prospect looking for a business – especially one providing a service that will require an extended relationship between the business and myself – I need signals to show me how reliable the business will be if I choose them. Will they get the work done in a timely manner? Will the quality of the work be acceptable? Will they be responsive and accommodating to my requirements? If problems arise, will they be willing to work through those problems? Those are all questions I don’t have the answer to. All I have are indications based on my current interactions with the business. And if those interactions have required my constant nagging and clarification to avoid incorrect responses, guess what my level of confidence might be in said business?

Why Cognitive Computing is a Big Deal When It Comes to Big Data

Watson beating its human opponents at Jeopardy

When IBM’s Watson won against humans playing Jeopardy, most of the world considered it just another man-against-machine novelty act, in a line going back to Deep Blue’s defeat of chess champion Garry Kasparov in 1997. But it’s much more than that. As Josh Dreller reminded us a few Search Insider Summits ago, when Watson trounced Ken Jennings and Brad Rutter in 2011, it ushered in the era of cognitive computing. Unlike chess, where solutions can be determined solely with massive amounts of number crunching, winning Jeopardy requires a very nuanced understanding of the English language as well as an encyclopedic span of knowledge. Computers are naturally suited to chess. They’re also very good at storing knowledge. In both cases, it’s not surprising that they would eventually best humans. But parsing language is another matter. For a machine to best a man here requires something quite extraordinary. It requires a machine that can learn.

The most remarkable thing about Watson is that no human programmer wrote the program that made it a Jeopardy champion. Watson learned as it went; it evolved the winning strategy. And this marks a watershed development in the history of artificial intelligence. Computers have now mastered some of the key rudiments of human cognition – the ability to gather information, judge it, make decisions and solve problems. These are all things that Watson can do.

 

Peter Pirolli – PARC

Peter Pirolli, one of the senior researchers at Xerox’s PARC campus in Palo Alto, has been doing a lot of work in this area. One of the things that has been difficult for machines is to “make sense” of situations and adapt accordingly. Remember a few columns ago, when I talked about narratives and Big Data? That’s where Monitor360 uses a combination of humans and computers – computers to do the data crunching and humans to make sense of the results. But as Watson showed us, computers do have the potential to make sense as well. True, computers have not yet matched humans in the ability to make sense in an unlimited variety of environmental contexts. We humans excel at quick and dirty sense making no matter what the situation. We’re not always correct in our conclusions, but we’re far more flexible than machines. Still, computers are constantly narrowing the gap, and as Watson showed, when a computer can grasp a cognitive context, it will usually outperform a human.

Part of the problem machines face when making sense of a new context is that the contextual information needs to be in a format the computer can parse. Again, this is an area where humans have a natural advantage. We’ve evolved to be very flexible in parsing environmental information as input for our sense making. But this flexibility has required a trade-off. We humans can go broad with our environmental parsing, but we can’t go very deep. We do a surface scan of our environment to pick up cues and then quickly pattern match against past experiences to make sense of our options. We don’t have the bandwidth to gather more information or to compute it. This is Herbert Simon’s bounded rationality.

But this is where Big Data comes in. Data is already native to computers, so parsing is not an issue. That handles breadth. But the nature of data is also changing. The Internet of Things will generate a mind-numbing amount of environmental data. This “ambient” data has no schema or context to aid in sense making, especially when several different data sources are combined. It requires an evolutionary, cognitive approach to separate potential signal from noise. Given the sheer volume of data involved, humans won’t be a match for this task: we can’t go deep into the data. And traditional computing lacks the flexibility required. But cognitive computing may be able to both handle the volume of environmental Big Data and make sense of it.

If artificial intelligence can crack the code on going both broad and deep into the coming storm of data, amazing things will certainly result from it.

Rethinking the Channelization of Advertising

Anybody who has been a regular reader of my column knows I very seldom write exclusively about search, even though it runs every Thursday under the masthead of “Search Insider.” I’ve been fortunate in that Ken Fadner and the editorial staff of Mediapost have never restricted my choice of subject matter. But the eclecticism of my column isn’t simply because I’m attention deficit. It’s because the subject that interests me most is the intersection between human behavior and technology. Although that often involves search, it also includes mobile, social, email and a number of other channels. I simply couldn’t write about what interests me if I were restricted to a single channel.

So why is Mediapost divided into the subject areas it is? Why, when you go to navigate the site, do you choose from email marketing, search marketing, mobile marketing, real time marketing, video marketing or social media marketing? Mediapost is structured this way because it’s a reflection of the industry it serves. Online marketing is divvied up in exactly the same way. We are an industry of channels.

The problem here is one of perspective – the industry perspective vs. the customer perspective. Let me use another example to make my point. One of the best things about cruising the Rhine is that there is a stunning medieval castle or fortress around every bend. From Rüdesheim to Koblenz (the Middle Rhine), there are over 40 of these fortifications sprinkled along 40 miles of the river. As picturesque as they are, they were not put there to enhance the views for generations of sightseers yet to come. They were put there because the river was one of the major thoroughfares of Europe, and anyone who owned land along it had the opportunity to make some money. They exacted tolls from travellers to guarantee safe passage.

While this build-up along the Rhine probably made sense for the German land barons, it did nothing to make life easier for the poor souls who had to travel up the Rhine to reach their eventual destination. Unfortunately, they had few alternatives. They were stuck with paying the tolls.

The advertising business is divided into channels for exactly the same reason the Rhine has a castle every mile. Channels are there to show ownership of property. Advertising is a way to generate revenue from that ownership. It is a toll that customers have to pay. Mediapost is divided up the way it is because its readers are the modern-day equivalent of medieval land barons, and that’s the way they think. If it were published in 1224, its sections might have been labeled Pfalzgrafenstein, Sterrenberg and Reichenstein (3 of the Rhine castles).

But if you’re like me, you’re not as interested in the castles as in the journey itself. And in this way, I think we have built our industry in exactly the wrong way. We should all be more interested in the journey than in ownership of individual destinations along that journey. If you asked a traveller going from Rüdesheim to Koblenz in 1205 which they would prefer – paying 40 separate tolls or paying one guide to safely escort them to their destination – I’m pretty sure they would choose the latter. That is what our industry should aspire to.

The reason our industry is channel-obsessed is that we had no option previously. In a pre-digital world, all we could do was own or control a channel. But technology is rapidly giving us an option. Today, it is possible to map a customer’s journey and act as a guide along the way. All that is required is a change of perspective.

I believe it’s time to consider it.

The Human Stories that Lie Within Big Data

If I wanted to impress upon you the fact that texting and driving is dangerous, I could tell you this:

In 2011, at least 23% of auto collisions involved cell phones. That’s 1.3 million crashes, in which 3,331 people were killed. Texting while driving makes it 23 times more likely that you’ll be in a car accident.

Or, I could tell you this:

In 2009, Ashley Zumbrunnen wanted to send her husband a message: “I love you, have a good day.” She was driving to work, and as she was texting the message, she veered across the centerline into oncoming traffic. She overcorrected and lost control of her vehicle. The car flipped, and Ashley broke her neck. She is now completely paralyzed.

After the accident, Zumbrunnen couldn’t sit up, dress herself or bathe. She was completely helpless. Now a divorced single mom, she struggles to look after her young daughter, who recently said to her: “I like to go play with your friends, because they have legs and can do things.”

The first example gave you a lot more information. But the second example probably had more impact. That’s because it’s a story.

We humans are built to respond to stories. Our brains can better grasp messages that are in a narrative arc. We do much less well with numbers. Numbers are an abstraction and so our brains struggle with numbers, especially big numbers.

One company, Monitor360, is bringing the power of narratives to the world of Big Data. I recently chatted with CEO Doug Randall about how the company uses narratives to make sense of it all.

“We all have filters through which we see the world. And those filters are formed by our experiences, by our values, by our viewpoints. Those are really narratives. Those are really stories that we tell ourselves.”

For example, I suspect the things that resonated with you in Ashley’s story were the reason for the text – telling her husband she loved him – the irony that the marriage eventually failed after her accident, and the pain she undoubtedly felt when her daughter said she likes playing with other moms who can still walk. All of those things, while they add nothing to our knowledge about the incidence rate of texting-and-driving accidents, strike us at a deeply emotional level because we can picture ourselves in Ashley’s situation. We empathize with her. And that’s what a story is: a vehicle to help us understand the experiences of another.

Monitor360 uses narratives to tap into the empathetic hooks that lie in the mountain of information being generated by things like social media. It goes beyond abstract data to try to identify our beliefs and values. And then it uses narratives to help us make sense of our market. Monitor360 does this with a unique combination of humans and machines.

“A computer can collect huge amounts of data, and the computer can even sort that data. But ‘sense making’ is still very, very difficult for computers to do. So human beings go through that information, synthesize that information and pull out what the underlying narrative is.”

Monitor360 detects common stories in the noisy buzz of Big Data. In the stories we tell, we indicate what we care about.

“This is what’s so wonderful about Big Data. The Data actually tells us, by volume, what’s interesting. We’re taking what are the most often talked about subjects…the data is actually telling us what those subjects are. We then go in and determine what the underlying belief system in that is.”
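To make that “volume” idea concrete, here’s a minimal sketch – my own illustration, not Monitor360’s actual pipeline – of how you might surface the most talked-about subjects in a stream of social posts before handing them to human analysts:

```python
from collections import Counter

# A handful of hypothetical social posts standing in for the Big Data stream.
posts = [
    "gas prices are out of control again",
    "love my new electric car, gas prices be damned",
    "traffic downtown is terrible",
    "gas prices making me rethink my commute",
]

# Candidate subjects to track (in practice these would be mined, not hand-picked).
topics = {"gas prices", "traffic", "electric car", "commute"}

mentions = Counter()
for post in posts:
    for topic in topics:
        if topic in post:
            mentions[topic] += 1

# The highest-volume subjects are where the humans would then dig in
# to find the underlying narrative and belief system.
print(mentions.most_common(3))
```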

Monitor360’s realization that it’s the narratives we care about is an interesting approach to Big Data. It’s also encouraging to know that they’re not trying to eliminate human judgment from the equation. Empathy is still something we can trump computers at.

At least for now.

Social Media: Matching Maturity to the Right Business Model

Last week, I talked about the maturity continuum of social media. This week, I’d like to recap and look at the business model implications of each phase.

Phase One – It’s a Fad. Here, you use a new social media tool simply because it is new. This is a classic early-adopter model. The business goal is to drive adoption as fast and as far as possible, hoping that acceptance will go viral. There is no revenue opportunity at this point, as you don’t want to do anything to slow adoption. It’s all about getting the tool into as many hands as possible.

Phase Two – It’s a Statement. You use the tool because it says something about who you are. Revenue opportunities are still limited, but this is the time for cross-promotion with brands that make a similar statement. Messaging and branding become essential at this point. You have to carve out a unique niche for yourself and hope that it resonates with segments of your market. The goal is to create an emotional connection with your audience to help shore up loyalty in the next phase. This is the time to start laying the foundations of a user community.

Phase Three – It’s a Tool. You use it because it offers the best functionality for a particular task. Here, things have to get more practical. This is where user testing and new feature development have to move as quickly as possible. Revenue opportunities at this point are possible, depending on the usage profile of your app. If there’s a high frequency of usage, advertising sponsorship is a possibility. But be aware that this will bring inevitable pushback from your users, especially if there has been no advertising up to this point. It shakes the loyalty of the “Statement” users, who feel you’re selling out. The functionality will have to be rock solid to prevent attrition of your user base during this phase. Essentially, it will have to be good enough to “lock out” the competition. But there’s another goal here as well. Introducing new functionality allows you to move beyond being a one-trick pony. This is where you have to start moving from being a tool to the next phase…

Phase Four – It’s a Platform. If you’ve successfully transitioned to being a social media platform, you should have the opportunity to finally turn a profit. The stability of the revenue model will be wholly dependent on how high you’ve been able to raise the cost of switching. The more “sticky” your platform is, the more stable your revenue will be. But be aware that using advertising as your revenue channel is fraught with issues in the world of social media. Unlike search, where we are used to dealing with a crystal-clear indication of consumer interest, social media usage seldom comes tied to clear buyer intent. You have to worry about modality and social norms, along with the erosion of your “cool” factor.

In the last two phases, the best revenue opportunities should be directly tied to functionality and intent. The closer you can align your advertising message with the intent of users “in the moment,” the more stable your revenue model will be. In fact, if you can introduce tools that are focused on users when they are in social modes where commercial messaging is appropriate, you will find revenue opportunities dropping into your lap. For example, if users use LinkedIn to crowdsource opinions on B2B purchases, you have a natural monetization opportunity. If they’re using your app to post pictures of their cat playing a xylophone, you’re going to find it much harder to make a buck. Not impossible, but pretty damned difficult.

Today, Spend Some Time in Quadrant Two

First published April 17, 2014 in Mediapost’s Search Insider

Last week, I ranted, and it was therapeutic — for me, at least. Some of you agreed that the social media landscape was littered with meaningless crap. Others urged me to “loosen up and take a chill pill,” intimating that I had slipped across the threshold of “grumpy old man-itis.” Guilty, I guess, but there was a point to my rant. We need to spend more time with important stuff, and less time with content that may be popular but trivial.

Hey, I’m the first to admit that I can be tempted into wasting gobs of time with a tweet like: “Prom season sizzles with KFC chicken corsages.” This is courtesy of Guy Kawasaki. Guy’s Twitter feed is a fire hose of enticing trivia. And the man (with the team that supports him) does have a knack for writing tweets with irresistible hooks. Come on. Who could resist checking out a fried chicken corsage?

But here’s the problem. Online is littered with fried chicken corsages. No matter where we turn, we’re bombarded by these tasty little tidbits of brain candy. Publishers have grown quite adept at stringing them together, leading us from trivial link to trivial link. Personally, I’m a sucker for Top Ten lists. But after succumbing to the temptation for “just a second,” I find myself, 20 minutes later, having accomplished nothing other than learning what the 10 Biggest Reality Show Blunders were, or where the 10 Most Extravagant Homes in the U.S. happen to be.

Entertaining? Absolutely.

Useful? Doubtful.

Important?  Not a chance.

We need to set aside time for important stuff. A few decades ago, I happened to read Stephen Covey’s “First Things First,” which introduced a concept I still try to live by to this day. Covey called it the Urgent/Important matrix. It’s a simple two-by-two matrix with four quadrants:

1 – Urgent and Important – for example, a fire in your kitchen.

2 – Not Urgent but Important – long-term planning.

3 – Urgent but Not Important – interruptions.

4 – Not Important and Not Urgent – time-wasters.

Covey’s advice? Better balance your time across these quadrants. Quadrant One takes care of itself – we can’t ignore those types of crises. But we should try to minimize the distractions that fall into Quadrant Three and cut down the time we spend in Quadrant Four. Then we should move as much of this freed-up time as possible into Quadrant Two.

Covey’s quadrants are more applicable than ever to the online world. I suspect most of us spend the majority of our time in the online equivalents of Quadrant Three (responding to emails or other instant forms of messaging that aren’t really important) or Quadrant Four (online time-wasters). We probably don’t spend much time in Quadrant Two (which I’ll abbreviate to Q2). In fact, in writing this column, I tried to find a quick guide to finding important stuff online. I have a few places I like to go, which I’ll share in a moment, but despite the vast potential of online as a Q2 resource, it doesn’t seem that anyone is making it easy to filter for “importance.” As I said in my last column, we have filters for popularity and recency, but I couldn’t find anything helping me track down Q2 candidates.

So, here is my contribution to helping you set aside more quality Q2 time:

Amazon Kindle and DevonThink: Reading thought-provoking books is my favorite Q2 activity. I try to set aside at least an hour a day to read. Anytime someone suggests a book or I find one referenced, I immediately download it from Kindle and add it to the queue. Then, as I read, I use Kindle’s highlight feature to create a summary of the important ideas. Afterwards, I copy my highlighted notes into DevonThink, a tool that helps track and archive notes and resources for future reference.

Scientific American & Science Daily: I’m a science geek. I love learning about the latest advances — in particular, new discoveries in the areas of psychology and neuroscience. When I find an interesting article, I again save it to DevonThink.

Google Scholar and Questia: Every so often, I dive into the world of academia to find research done in a particular area, usually related to a blog post or column idea. Google Scholar usually unearths a number of publicly available papers on most topics. And, if you share my predilection for academic research, a subscription to Questia is worth considering.

Big Think, weforum.org and TED: Looking for big ideas – world-changing stuff? These three sites are the places to find them.

HBR, Wired, The Atlantic and The Economist: Another favorite topic of mine is corporate strategy — particularly how organizations have to adapt to a rapidly evolving environment. I find sites like these great for giving me a sense of what’s happening in the world of business.

Hey, it may not be a fried chicken corsage, but these aren’t bad ways to spend an hour or two a day.


The Bug in Google’s Flu Trends Data

First published March 20, 2014 in Mediapost’s Search Insider

Last year, Google Flu Trends blew it. Even Google admitted it. It over-predicted the occurrence of flu by a factor of almost 2:1. Which is a good thing for the health care system, because if Google’s predictions had been right, we would have had the worst flu season in 10 years.

Here’s how Google Flu Trends works. It monitors a set of approximately 50 million flu-related terms for query volume. It then compares this against data collected from health care providers where influenza-like illnesses (ILI) are mentioned during a doctor’s visit. Since the tracking service was first introduced, there has been a remarkably close correlation between the two, with Google’s predictions typically coming within 1 to 2 percent of the number of doctor’s visits where the flu bug is actually mentioned. The advantage of Google Flu Trends is that it is available about 2 weeks prior to the ILI data, giving a much-needed head start for responsiveness during the height of flu season.
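To illustrate the mechanics, here’s a toy sketch – my own made-up numbers, not Google’s actual model, which uses a far larger term set and a more sophisticated fit – of how query volume can be fitted against ILI data and then used as an early predictor:

```python
import numpy as np

# Hypothetical weekly figures: normalized flu-related query volume, and the
# ILI percentage later reported by health care providers for the same weeks.
query_volume = np.array([0.8, 1.1, 1.9, 2.7, 3.5, 2.9, 1.6])
ili_percent  = np.array([1.0, 1.3, 2.2, 3.1, 4.0, 3.3, 1.9])

# Simple least-squares fit: ILI ≈ a * query_volume + b
a, b = np.polyfit(query_volume, ili_percent, 1)
print(f"Correlation: {np.corrcoef(query_volume, ili_percent)[0, 1]:.3f}")

# This week's query volume arrives ~2 weeks before the ILI report, so the
# fitted line becomes an early estimate of upcoming doctor's visits.
this_week_volume = 2.4
print(f"Predicted ILI: {a * this_week_volume + b:.2f}%")
```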

But last year, Google’s estimates overshot actual ILI data by a substantial margin, effectively doubling the size of the predicted flu season.

Correlation is not Causation

This highlights a typical trap with big data: we tend to start following the numbers without remembering what is generating them. Google measures what’s on people’s minds. ILI data measures what people are actually going to the doctor about. The two are highly correlated, but one doesn’t necessarily cause the other. In 2013, for instance, Google speculated that increased media coverage might be the cause of the overinflated predictions. More news coverage would have spiked interest, but not actual occurrences of the flu.

Allowing for the Human Variable

In the case of Google Flu Trends, because it uses a human behavior as a signal – in this case, online searching for information – it’s particularly susceptible to network effects and information cascades. The problem is that these social signals are difficult to rope into an algorithm. Once they reach a tipping point, they can break out on their own with no sign of a rational foundation. Because Google tracks the human-generated network effect data and not the underlying foundational data, it is vulnerable to these weird variables in human behavior.

Predicting the Unexpected

A recent article in Scientific American pointed out another issue with an over-reliance on data models: Google Flu Trends completely missed the non-seasonal H1N1 pandemic in 2009. Why? Algorithmically, Google wasn’t expecting it. In trying to eliminate noise from the model, they actually eliminated signal arriving at an unexpected time. Models don’t do very well at predicting the unexpected.

Big Data Hubris

The author of the Scientific American piece, associate editor Larry Greenemeier, nailed another common symptom of our emerging crush on data analytics: big data hubris. We somehow think the quantitative black box will eliminate the need for more mundane data collection – say, actually tracking doctor’s visits for the flu. As I mentioned before, the biggest problem with this is that the more we rely on data, which often takes the form of arm’s-length correlated data, the further we get from exploring causality. We start focusing on “what” and forget to ask “why.”

We should absolutely use all the data we have available. The fact is, Google Flu Trends is a very valuable tool for health care management. It provides a lot of answers to very pertinent questions. We just have to remember that it’s not the only answer.