Is Brand Strategy a Myth?

On one side of the bookshelf, you have an ever-growing pile of historic business bestsellers, with promising titles like In Search of Excellence, 4+2: What Really Works, Good to Great and Built to Last. Essentially, they’re all recipes for building a highly effective company. They are strategic blueprints for success.

On the other side of the bookshelf, you have books like Phil Rosenzweig’s “The Halo Effect.” He trots out a couple of sobering facts: In a rigorous study conducted by Marianne Bertrand at the University of Chicago and Antoinette Schoar at MIT, they isolated and quantified the impact of a leader on the performance of a company. The answer, as it turned out, was 4%. That’s right, on average, even if you have a Jack Welch at the helm, it will only make about a 4% difference to the performance of your company. Four percent is not insignificant, but it’s hardly the earth-shaking importance we tend to credit to leadership.

The other fact? What if you followed the instructions of a Jim Collins or Tom Peters? What if you transformed your company’s management practices to emulate those of the winning case studies in these books? Surely, that would make a difference? Well, yes – kind of. Here, the number is 10. In a study done by Nick Bloom of the London School of Economics and Stephen Dorgan at McKinsey, the goal was to test the association between specific management practices and company performance. There was an association. In fact, it explained about 10% of the total variation in company performance.

These are hard numbers for me to swallow. I’ve always been a huge believer in strategy. But I’m also a big believer in good research. Rosenzweig’s entire book is dedicated to poking holes in much of the “exhaustive” research we’ve come to rely on as the canonical collection of sound business practices. He doesn’t disagree with many of the resulting findings. He goes as far as saying they “seem to make sense.” But he stops short of giving them a scientific stamp of endorsement. The reality is, much of what we endorse as sound strategic thinking comes down to luck and the seizing of opportunities. Business is not conducted in a vacuum. It’s conducted in a highly dynamic, competitive environment. In such an environment, there are few absolutes. Everything is relative. And it’s these relative advantages that dictate success or failure.

Rosenzweig’s other point is this: Saying that we just got lucky doesn’t make a very good corporate success story. Humans hate unknowns. We crave identifiable agents for outcomes. We like to assign credit or blame to something we understand. So, we make up stories. We create heroes. We identify villains. We rewrite history to fit into narrative arcs we can identify with. It doesn’t seem right to say that 90% of company performance is due to factors we have no control over. It’s much better to say it came from a well-executed strategy. This is the story that is told by business bestsellers.

So, it caught my eye the other day when I saw that ad agencies might not be very good at creating and executing on brand strategies.

First of all, I’ve never believed that branding should be handled by an agency. Brands are the embodiment of the business. They have to live and breathe at the core of that business.

Secondly, brands are not “created” unilaterally – they emerge from that intersection point where the company and the market meet. We as marketers may go in with a predetermined idea of that brand, but ultimately the brand will become whatever the market interprets it to be. Like business in general, this is a highly dynamic and unpredictable environment.

I suspect that if we ever found a way to quantify the impact of brand strategy on the ultimate performance of the brand, we’d find that the number would be a lot lower than we thought it would be. Most of brand success, I suspect, will come down to luck and the seizing of opportunities when they arise.

I know. That’s probably not the story you wanted to hear.

The Coming Data Marketplace

The stakes are currently being placed in the ground. The next great commodity will be data, and you can already sense the battle beginning to heat up.

Consumer data will be generated by connections. Those connections will fall into two categories: broad and deep. Both will generate data points that will become critical to businesses looking to augment their own internal data.

First, broad data is the domain of Google, Apple, Amazon, eBay and Facebook. Their play is to stretch their online landscape as broadly as possible, generating thousands of new potential connections with the world at large. Google’s new “Buy” button is a perfect example of this. Adding to the reams of conversion data Google already collects, the “Buy” button means that Google will control even more transactional landscape. They’re packaging it with the promise of an improved mobile buying experience, but the truth is that purchases will be consummated on Google-controlled territory, allowing them to harvest the rich data that will be generated from millions of individual transactions across every conceivable industry category. If Google can control a critical mass of connected touch points across the online landscape, they can get an end-to-end view of purchase behavior. The potential of that data is staggering.

In this market, data will be stripped of identity and aggregated to provide a macro but anonymous view of market behaviors. As the market evolves, we’ll be able to subscribe to data services that will provide real time views of emerging trends and broad market intelligence that can be sliced and diced in thousands of ways. Of course, Google (and their competitors) will have a free hand to use all this data to offer advertisers new ways to target ever more precisely.
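
To make that idea concrete, here is a minimal sketch (in Python, with invented field names and numbers) of what “stripped of identity and aggregated” might look like in practice: the user identifier is dropped entirely, and individual transactions are rolled up into a macro view by category and region that could then be sliced, diced or sold as a subscription feed.

```python
from collections import defaultdict

# Hypothetical raw transaction records, as a broad data platform might collect them.
transactions = [
    {"user_id": "u-1041", "category": "electronics", "region": "US-West", "amount": 219.99},
    {"user_id": "u-2210", "category": "electronics", "region": "US-West", "amount": 89.50},
    {"user_id": "u-1041", "category": "groceries",   "region": "US-East", "amount": 42.10},
]

def aggregate_anonymously(records):
    """Strip identity and roll purchases up to a macro view by category and region."""
    totals = defaultdict(lambda: {"transactions": 0, "revenue": 0.0})
    for r in records:
        # The identifying field is simply never used; only aggregate dimensions survive.
        key = (r["category"], r["region"])
        totals[key]["transactions"] += 1
        totals[key]["revenue"] += r["amount"]
    return dict(totals)

for (category, region), stats in aggregate_anonymously(transactions).items():
    print(f"{category} / {region}: {stats['transactions']} sales, ${stats['revenue']:.2f}")
```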

This particular market is an online territory grab. It relies on a broad set of touch points with as many people across as many devices as possible. The more territory that is covered, the more comprehensive the data set.

The other data market will run deep. Consider the new health tracking devices like Fitbit, Garmin’s VivoActive and the Apple Watch. Focused-purpose hardware and apps will rely on deep relationships with users. The more reliant you become on these devices, the more valuable the data collected will become. But this data comes with a caveat – unlike the broad data market, this data should not be stripped of its identity. The value of the data comes from its connection with an individual. Therefore, that individual has to be an active participant in any potential data marketplaces. The data collector will act more as a data middleman – brokering matches between potential customers and vendors. If the customer agrees, they can choose to release the data to the vendor (or at least, a relevant subset of the data) in order to individualize the potential transaction.
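
As a rough illustration of that middleman role (this is a hypothetical sketch, not any vendor’s actual API), the broker keeps identity attached to the data, releases nothing unless the individual has opted in, and even then releases only the relevant subset.

```python
from dataclasses import dataclass, field

@dataclass
class UserDataVault:
    """Hypothetical 'deep data' store held by a device maker acting as middleman."""
    user_id: str
    metrics: dict = field(default_factory=dict)   # e.g. resting heart rate, daily steps
    consents: set = field(default_factory=set)    # vendors the user has explicitly approved

    def grant_consent(self, vendor: str):
        self.consents.add(vendor)

    def release_to(self, vendor: str, requested_fields: list):
        """Release only the relevant subset, and only if the user has opted in."""
        if vendor not in self.consents:
            return None  # the broker refuses: identity-linked data never moves without consent
        return {k: v for k, v in self.metrics.items() if k in requested_fields}

vault = UserDataVault("u-1041", {"resting_hr": 62, "daily_steps": 9400, "sleep_hours": 6.8})
vault.grant_consent("running-shoe-vendor")
print(vault.release_to("running-shoe-vendor", ["daily_steps"]))   # {'daily_steps': 9400}
print(vault.release_to("insurance-vendor", ["resting_hr"]))       # None: no consent given
```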

As the data marketplace evolves, expect an extensive commercial ecosystem to emerge. Soon, there will be a host of services that will take raw data and add value through interpretation, aggregation and filtering. Right now, the onus for data refinement falls on the company that is attempting to embrace Big Data marketing. As we move forward, expect an entire Big Data value chain to emerge. But it will all rely on players like Google, Amazon and Apple who have the front-line access to the data itself. Just as natural resources provided the grist that drove the last industrial revolution, expect data to be the resource that fuels the next one.

A Eulogy for “Kathy” – The First Persona

My column last week on the death of the persona seemed to find a generally agreeable audience. But prior to tossing our cardboard cutouts of “Sally the Soccer Mom” in the trash bin, let’s just take a few minutes to remind ourselves why personas were created in the first place.

Alan Cooper – the father of usability personas – had no particular methodology in mind when he created “Kathy,” his first persona. Kathy was based on a real person that Cooper had talked to during his research for a new project management program. Cooper found himself with a few hours on his hands every day while his early-’80s computer chugged away, compiling the latest version of his program. He would use the time to walk around a golf course close to his office and run through the design in his head. One day, he engaged himself in an imaginary dialogue with “Kathy,” a potential customer who was requesting features based on her needs. Soon, he was deep in his internal discussion with Kathy. His first persona was a way to get away from the computer and cubicle and get into the skin of a customer.

There are a few points here that are important to note. “Kathy” was based on input from a real person. The creation of “Kathy” had no particular goal, other than to give Cooper a way to imagine how a customer might use his program. It was a way to make the abstract real, and to imagine that reality through the eyes of another person. In the end, we realize that the biggest goal of a persona is just that – to imagine the world through someone else’s eyes.

As we transition from personas to data modeling, it’s essential to keep that aspect alive. We have to learn how to live in someone else’s skin. We have to somehow take on the context of their world and be aware of their beliefs, biases and emotions. Until we do this, the holy grail of the “Market of One” is just more marketing hyperbole.

I think the persona started its long decline towards death when it transitioned from a usability tool to a marketing one. Personas were never intended to be a slide deck or a segmentation tool. They were just supposed to be a little mental trick to allow designers to become more empathetic – to slip out of their own reality and into that of a customer. But when marketers got their hands on personas, they did what marketers tend to do. They added the gloss and gutted the authenticity. At that moment, personas started to die.

So, for all the reasons I stated last week, I think personas should be allowed to slip away into oblivion. But if we do so, we have to find a way to understand the reality of our customers on a one-to-one basis. We have to find a better way to accomplish what personas were originally intended to do. We have to be more empathetic.

Because humans are humans, and not spreadsheets, I’m not sure we can get all the way there with data alone. Data analysis forces us to put on another set of lenses – ones that analyze, not empathize. Those lenses help us to see the “what” but not the “why.” It’s the view of the world that Alan Cooper would have had if he had never left his cubicle to walk around the Old Del Monte golf course, waving his arms and carrying on his internal dialogue with “Kathy.” The way to empathize is to make connections with our customers – in the real world – where they live and play. It’s using qualitative methods like ethnographic research to gain insights that can then be verified with data. Personas may be dead, but qualitative research is more important than ever.

The Persona is Dead, Long Live the Person

First, let me go on record as saying that, up to this point, I’ve been a fan of personas. In my past marketing and usability work, I used personas extensively as a tool. But I’m definitely aware that not everyone is equally enamored with personas. And I also understand why.

Personas, like any tool, can be used both correctly and incorrectly. When used correctly, they can help bridge the gap between the left brain and the right brain. They live in the middle ground between instinct and intellectualism. They provide a human face to raw data.

But it’s just this bridging quality that tends to lead to abuse. On the instinct side, personas are often used as a shortcut to avoid quantitative rigor. Data-driven people typically hate personas for this reason. Often, personas end up as fluffy documents and life-sized cardboard cutouts with no real purpose. It seems like a sloppy way to run things.

On the intellectual side, because quant people distrust personas, they also leave themselves squarely on the data side of the marketing divide. They can understand numbers – people, not so much. This is where personas can shine. At their best, they give you a conceptual container with a human face to put data into. They provide a richer but less precise context that allows you to identify, understand and play out potential behaviors that data alone may not pinpoint.

As I said, because personas are intended as a bridging tool, they often remain stranded in no man’s land. To use them effectively, the practitioner should feel comfortable living in this gap between quant and qual. Too far one way or the other and it’s a pretty safe bet that personas will either be used incorrectly or be discarded entirely.

Because of this potential for abuse, maybe it’s time we threw personas in the trash bin. I suspect they may be doing more harm than good to the practice of marketing. Even at their best, personas were meant as a more empathetic tool to allow you to think through interactions with a real live person in mind. But in order to make personas play nice with real data, you have to be very diligent about continually refining your personas based on that data. Personas were never intended to be placed on a shelf. But all too often, this is exactly what happens. Usually, personas are a poor and artificial proxy for real human behaviors. And this is why they typically do more harm than good.

The holy grail of marketing would be to somehow give real-time data a human face. If we could find a way to bridge left-brain logic and right-brain empathy in real time to discover insights that were grounded in data but centered in the context of a real person’s behaviors, marketing would take a huge leap forward. The technology is getting tantalizingly close to this now. It’s certainly close enough that it’s preferable to the much-abused persona. If – and this is a huge if – personas were used absolutely correctly, they could still add value. But I suspect that too much effort is spent on personas that end up as documents on a shelf and pretty graphics. Perhaps that effort would be better spent trying to find the sweet spot between data and human insights.

The Virtuous Cycle and the End of Arm’s Length Marketing

Last week I wrote what should have been an open-and-shut column – looking at why SEO never really lived up to the potential of the business opportunity. Then my friend Scott Brinker had to respond with this comment:

“Seems like Google has long been focused on making SEO a “result” of companies doing good things, rather than a search-specific optimization “cause” to generate good rankings. They seem to have gotten what they wanted. Now as Google starts to do that with paid search, the world gets interesting for those agencies too..”

Steven Aresenault jumped on the bandwagon with this:

“Companies are going to wake up to the reality that part of their marketing is really about creating content. Content is everywhere and everything. Reality is I believe that it is a new way of thinking.”

As they both point out, SEO should be a natural result of a company doing good things, not the outcome of artificial manipulations practiced by a third party. It has to be baked into and permeate through the operating DNA of a company. But, as I started this column, I realized that this doesn’t stop at SEO. This is just the tip of a much bigger iceberg. Marketing, at least the way it’s been done up to now, is fundamentally broken. And it’s because many companies still rely on what I would call “Arm’s Length Marketing.”

Brand Stewardship = B.S.

Here is a quote lifted directly from the Ogilvy & Mather website:

We believe our role as 360 Degree Brand Stewards is this: Creating attention-getting messages that make a promise consistent and true to the brand’s image and identity. And guiding actions, both big and small, that deliver on that brand promise. To every audience that brand has. At every brand intersection point. At all times.

Now, Ogilvy is very good at crafting messages and this one is no exception. Who could possibly argue with their view of brand stewardship? The problem comes when you look at what “stewardship” means. Here’s the Merriam Webster definition:

the conducting, supervising, or managing of something; especially: the careful and responsible management of something entrusted to one’s care

The last five words are the key – “something entrusted to one’s care”. This implies that the agency has functional control of the brand, and with due apologies to David Ogilvy and his cultural legacy, that is simply bullshit.

Brands = Experience


Maybe Arm’s Length Brand Stewardship was possible in the era of David Ogilvy, Don Draper and Darrin Stephens (now, there’s a pop culture trifecta for you), when brand messaging defined the brand. But that era is long gone. Brands used to be crafted from exposure, but now they’re created through experience, amplified through the resonant network of the online community. And an arm’s length third party cannot, nor should they, control that experience. It has to live at the heart of the company. For decades, companies abdicated the responsibility of brand stewardship to the communication experts – or, to do a little word crafting – they “entrusted (it) to (their) care.” That has to change. Marketing has to come back home.

The Virtuous Marketing Cycle

Scott talked about the SEO rewards that come from doing good things. Steven talked about authentic content creation being one of those good things. But this is a much bigger deal. This is about forcefully moving marketing’s place in the strategic chain. Currently, the order is this: Management > Strategy > Marketing > Revenue. Marketing’s current job is to execute on strategy, which comes from management. And, in that scenario, it’s plausible to execute at arm’s length. Also, things like SEO and content management fall well down the chain, typically beneath the threshold of senior management awareness. By the way, usability and other user-centric practices typically suffer the same fate.

But what if we moved our thinking from a chain to a cycle: Marketing > Management > Strategy > Marketing > Revenue > Marketing (and repeat)? Let me explain. To begin with, Marketing is perfectly situated to become the “sensemaking” interface with the market. This goes beyond market research, which very seldom truly informs strategy. Market research in its current form is typically intended to optimize the marketing program.

I’m talking about a much bigger role – Marketing would define the “outside in” view of the company, which would form the context within which strategy would be determined by Management. Sensemaking as it applies to corporate strategy is a huge topic, but for brevity’s sake, let’s suppose that Marketing fills the role of the corporation’s five senses, defining what reality looks (and smells and sounds and tastes and feels) like. Then, when strategy is defined within that context, Marketing is well positioned to execute on it. Finally, execution is not the end – it is the beginning of another cycle. Sensemaking is an iterative process. Marketing then redefines what reality looks like and the cycle starts over again.

Bringing stewardship of marketing back to the very heart of the organization fundamentally changes things like arm’s length agency partnerships. It creates a virtuous cycle that runs through the length and breadth of a company’s activities. Things like SEO, content creation and usability naturally fall into place.

Strategic Planning as though the Future Matters – Strategy and Leadership


Why do organizations get blindsided by market transformations that could have been anticipated? After all, scenario planning has been a widely used strategic planning tool for decades and most managers are familiar with the process of considering how they would operate in alternative futures. The reason most organizations get surprised by game-changing events, in my experience, is not that their planning methods are bad. The problem is that they undertake strategic planning processes like scenario development without seeing them as a unique opportunity for learning about and exploring the future. In some cases this is because management lacks sufficient appreciation for the uncertainty and ambiguity their organizations face. More often, however, management is fully aware of the uncertainty of their situation but is seemingly powerless to prepare to adapt to new business realities, especially unpleasant ones.

To help planners avoid strategic surprise, Monitor 360 has created a five-step strategic planning process that has been tested in interactions with leaders in the military, intelligence community, and corporations. By systematically incorporating plausible but challenging future scenarios into their learning processes, decision makers can both mitigate risk and decrease the likelihood of not being prepared for discontinuities. This approach overcomes the paralysis that sometimes happens when people see all the uncertainty their organization faces, as well as the denial that happens when they don’t.

Multiple futures

When thinking about the future, many strategic planners make the mistake of asking, “What will the future be?” Because the future is the net result of so many complex and interdependent issues, the question is daunting, and perhaps unanswerable.

A more realistic question is, “What are the possible challenging futures?” Exploring multiple ways the future could unfold, especially ways that would require the organization to radically adapt, enables leaders to better prepare for a wide range of contingencies, and to manage the consequences more effectively when surprises do occur.

Scenario analysis can provide planners with a systematic way of imagining the future and identifying winning long-term strategies that respond to the many ways the future could play out. It helps individuals and their organizations identify and challenge their entrenched mental models and assumptions about what the future might hold, while helping bound the uncertainties they face.

Instead of attempting to predict what’s going to happen, the scenario methodology offers a way to see the forces as they are taking shape and not be blindsided when they lead to major changes. Anticipating the future gives decision-makers the ability to look in the right place for game-changing events, to rehearse the appropriate responses and to systematically track indicators of change.

Five Mindsets for Managing Uncertainty

Scenario thinking is the foundation of our five-step toolkit because of the unique ways it allows leaders to explore and exploit the unknown, and because it offers managers a methodology to consider alternatives in the face of uncertainty. To make scenario planning more effective, we’ve identified five discrete steps in the process, each of which should be undertaken with a distinct mindset. It is important to take these steps one at a time and in order, rather than skipping right away to decision-making.

Create Scenarios — Unleash your Imagination

Scenarios are plausible narratives about futures that are distinctly different from the present. If they are well prepared, they allow for a thorough exploration of future risks and opportunities. Scenario thinkers begin at the same place as traditional risk managers, skillfully making an inventory of what is known about the future. After exploring issues such as demographics as well as aspects of industry structure and customer behavior, scenario thinkers turn to the unknown, the unknowable, and the perceptions that should be challenged. Following a rigorous analytical process aimed at articulating the range of uncertainties an organization could face and all of the relevant outcomes, scenario thinkers design a number of cogent narratives about relevant futures.

Scenarios are written as plausible stories — not probable ones. Traditional risk management is based on probabilities, actuary tables, and other known and measurable quantities. But scenarios are intended to provoke the imagination and provide a more comprehensive view of risk, so that the results can shed light on critical strategic decisions.

It is important to note that scenario developers create multiple futures, rather than just one. This allows for a more complete exploration of the future, thus avoiding getting wedded to a specific set of assumptions about how uncertainties will unfold. The process of developing multiple scenarios helps to increase the possibility that leaders will not be surprised, because it allows them to rehearse multiple unique futures. Importantly, it also grounds decision-makers in the reality that, in most circumstances, they cannot accurately predict the future. Rather than falsely assuming one outcome will happen, leaders learn that they must make decisions in light of the true uncertainty they face.

As an example of this process, the U.S. Navy developed a set of scenarios that would help guide the development of the first unified strategy of all the country’s maritime forces, “A Cooperative Strategy for 21st Century Seapower,” released in October 2007. The first step was to develop four working scenarios. These were discussed and refined in a series of eight working sessions around the country with people from the business, government, and academic sectors who could provide valuable insight about issues the Navy needed to address in the future. The participation of these experts, and their feedback, helped to test the validity of the scenarios, which were then refined for publication and dissemination.

The scenarios had a significant impact on the future strategy of the Navy. For example, the scenarios helped to provide a new mission for the Navy in its response to humanitarian crises. The report concluded: “Building on relationships forged in times of calm, we will continue to mitigate human suffering as the vanguard of interagency and multinational efforts, both in a deliberate, proactive fashion and in response to crises. Human suffering moves us to act, and the expeditionary character of maritime forces uniquely positions them to provide assistance.”

Determine Required Capabilities for each Scenario — Give your Creativity Free Rein

The second step of the process is to identify what it takes to be successful in each of the futures identified. After the scenario process has imagined distinctly different future worlds that the organization’s leaders have acknowledged are plausible, relevant, and important, what would a high-performing organization look like in each of these worlds? That is, if an organization were dealt the card of one scenario, what would it need to do in order to be successful?

To answer this question, planners need to make a list of key success factors and capabilities. Key capabilities for militaries or intelligence agencies might be the ability to project force rapidly abroad or the ability to collect and process open-source information. Capabilities often start with “the ability to….” For companies these might be the ability to build brands that address customer needs and inspire loyalty as well as the ability to launch products quickly.

As a case in point, a major software company needed to determine where to invest its limited resources to succeed in a market roiled by new competitors. In one scenario, the company needed good relationships with its value-added resellers and excellent customer service. In another scenario, it needed an entirely different set of capabilities, including low cost and operating system integration.

It’s important to address the capabilities question as if it were a set of independent problems: what would it take to be a winner in a given scenario? Doing so encourages bold, creative thinking, and avoids the trap of limiting the alternatives to those that are doable with current capabilities and resources. By keeping this step separate from the next one, assessing current capabilities, planners are not hobbled by thinking only about what they are good at today, nor do they have to struggle with imagining themselves in four different worlds at the same time.

It is often a wrenching experience for leaders to simply look for the absolute best strategic posture for their organization in each scenario. This is one measure of how hard it is for them to imagine doing business in any future that has totally different success factors from the current environment.

Assess Current Capabilities — Be Painfully Realistic

Separate from the critical examination of the capabilities needed for success in each scenario, planners must ask: What are we good at right now? The answers could be human capital, relationships, or operational efficiency. These capabilities are generally described as competitive assets that cannot be bought and sold on the free market. Organizations can’t just say, “We’ll invest $100 million next month, and then we’ll have that ability” or “We want to do that.” They have to build the capability over time.

Often an outside perspective — for example, one based on detailed discussions with customers — can be helpful in getting an unbiased evaluation of which capabilities an organization excels in.

Identify Gaps — Provide Honest Analysis

Next, the organization should compare its own capabilities with the capabilities needed to succeed in the scenarios. The resulting capability maps will not only highlight what capabilities it needs to develop — the capability gaps — but also what capabilities it has already invested in that may become redundant.

Make Choices — Consider your Options

Once organizations have analyzed the gap between their strengths and the capabilities needed in each scenario, they face some big decisions. There could be capabilities that they need in all the scenarios imagined but that they don’t currently have. As a first step, an organization might safely develop these. We call those “no-regrets moves.”

Other moves are what we call “big bets.” These are capabilities needed in a particular scenario or a small number of scenarios, but not in others. Organizations make these bets consciously, after systematically thinking through the types of capabilities, their relationship to the environment around them, and the futures they feel are likely to occur. They can adjust their decisions as more data is collected or events unfold in the world. This process is based on the theory of real options, which suggests an organization can gain an advantage by making many small bets and, as information accumulates, increasing or decreasing those bets accordingly.

The crucial questions for organizations to ask when making choices are: What would be the risk if a scenario happened and we didn’t have this capability? And what would be the risk if a scenario didn’t happen and we did have it?
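
To make the gap analysis and the “no-regrets moves” versus “big bets” distinction concrete, here is a small illustrative sketch in Python. The scenario and capability names are invented, loosely echoing the software-company example above; the logic simply compares the capabilities each scenario requires against what the organization can do today.

```python
# Hypothetical scenario-to-capability map (names invented for illustration).
required = {
    "channel-led market": {"reseller relationships", "customer service", "usage analytics"},
    "platform price war": {"low cost structure", "operating system integration", "usage analytics"},
    "services shift":     {"customer service", "consulting bench", "usage analytics"},
}
current = {"customer service", "reseller relationships", "legacy licensing desk"}

# Capability gaps: what each future would demand that we cannot do today.
gaps = {scenario: needs - current for scenario, needs in required.items()}

# "No-regrets moves": missing capabilities that every scenario calls for.
no_regrets = set.intersection(*required.values()) - current

# "Big bets": missing capabilities that matter in only one or a few scenarios.
big_bets = set.union(*gaps.values()) - no_regrets

# Capabilities already invested in that no scenario needs: candidates for redundancy.
redundant = current - set.union(*required.values())

print("Gaps by scenario:", gaps)
print("No-regrets moves:", no_regrets)
print("Big bets:", big_bets)
print("Potentially redundant:", redundant)
```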

What’s Clouding the Future?

It’s painfully difficult for individual leaders to keep their minds open to multiple futures and to follow a systematic process like the one described above. IBM’s famous story illustrates this all-too-common tendency.

In 1980, the personal computer represented a tiny niche market. When IBM was considering developing a computer for the masses, it convened a working group to forecast its potential market. The team projected that the market for PCs would total a mere 241,683 units over the next five years, and that demand would peak after two years and then trend downward. They believed that since existing computers had such a small amount of processing power, people would not want to purchase a second one.

As a result, IBM determined that there was no potential in the marketplace and effectively killed its effort to dominate the personal computer market, ceding the operating system to Microsoft and the processor to Intel. IBM lost out on a $240 billion market, one in which nearly every household in the developed world would eventually want one or more of the machines and then would want to upgrade them every few years.

But what if someone in the room had asked, “What if people want a PC on every desktop? What if individuals start carrying PCs in their pockets? What if PCs develop a communications capability? What if they are widely used to play games? Maybe we should think of a different scenario where the market would be more like 20 million units?” These would have been completely off-the-wall, outrageous ideas at the time, but if just one person in the room had explored such different lines of thought, the futures of Microsoft, Intel and IBM might have evolved differently.

The Benefits of a Systematic, Disciplined Approach

Anticipating the future isn’t just about avoiding strategic surprise or minimizing the downside risk. There’s also a huge upside: You are creating the future that you want and making sense of how the world may play out. Understanding your choices can be an empowering process.

When planners follow a process that systematically cuts through the barriers to effective group learning and decision-making, and combine that process with principles that give discipline and robustness to the entire endeavor, the future, and our place in it, comes into much sharper focus.

Learning about Big Data from Big Brother

You may not have heard of ICREACH, but it has probably heard of you. ICREACH is the NSA’s own Google-like search engine. And if Google’s mission is to organize the world’s information, ICREACH’s mission is to snoop on the world. After super-whistleblower Edward Snowden tipped the press off to the existence of ICREACH, the NSA fessed up last month. The amount of data we’re talking about is massive. According to The Intercept website, the tool can handle two to five billion new records every day, including data on emails, phone calls, faxes, Internet chats and text messages. It’s Big Brother meets Big Data.

I’ll leave aside for the moment the ethical aspect of this story. What I’ll focus on is how the NSA deals with this mass of Big Data and what it might mean for companies who are struggling to deal with their own Big Data dilemmas.

Perhaps no one deals with more big data than the Intelligence Community. And Big Data is not new for them. They’ve been digging into data trying to find meaningful signals amongst the noise for decades. And the stakes of successful data analysis are astronomically high here. It’s a matter of life and death – a failure to successfully connect the dots can lead to the kinds of nightmares that will haunt us for the rest of our lives. When the pressure is on to this extent, you can be sure that they’ve learned a thing or two. How the Intelligence community handles data is something I’ve been looking at recently. There are a few lessons to be learned here.

Owned Data vs Environmental Data

The first lesson is that you need different approaches for different types of data. The Intelligence Community has its own files, which include analysts’ reports, suspect files and other internally generated documentation. Then you have what I would call “Environmental” data. This includes raw data gathered from emails, phone calls, social media postings and cellphone locations. Raw data needs to be successfully crunched, screened for signals vs. noise and then interpreted in a way that’s relevant to the objectives of the organization. That’s where…

You Need to Make Sense of the Data – at Scale

Probably the biggest change in the Intelligence community has been to adopt an approach called “sensemaking.” Sensemaking really mimics how we, as humans, make sense of our environment. But while we may crunch a few hundred or a few thousand sensory inputs at any one time, the NSA needs to crunch several billion signals.

Human intuition expert Gary Klein has done much work in the area of sensemaking. His view of sensemaking relies on the existence of a “frame” that represents what we believe to be true about the world around us at any given time. We constantly update that frame based on new environmental inputs. Sometimes they confirm the frame. Sometimes they contradict the frame. If the contradiction is big enough, it may cause us to discard the frame and build a new one. But it’s this frame that allows us to not only connect the dots, but also to determine what counts as a dot. And to do this…
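
As a rough illustration of Klein’s frame idea (this is my own minimal sketch, not his actual model), here is the update loop in miniature: incoming signals either elaborate the current frame or, if the weight of contradiction gets large enough, force us to throw the frame away and start over. The scoring and threshold below are invented for illustration.

```python
def update_frame(frame, signals, contradiction_threshold=0.5):
    """Keep a working hypothesis (the frame), weigh incoming signals against it,
    and discard the frame only when contradiction outweighs confirmation."""
    confirming = [s for s in signals if s["fits_frame"]]
    contradicting = [s for s in signals if not s["fits_frame"]]

    contradiction_weight = sum(s["strength"] for s in contradicting)
    total_weight = contradiction_weight + sum(s["strength"] for s in confirming)

    if total_weight and contradiction_weight / total_weight > contradiction_threshold:
        # Too much contradicts the frame: discard it and rebuild around the anomalies.
        return {"hypothesis": "frame discarded: rebuild from contradicting signals",
                "evidence": contradicting}
    frame["evidence"].extend(confirming)  # otherwise, elaborate the existing frame
    return frame

frame = {"hypothesis": "chatter is routine", "evidence": []}
signals = [
    {"source": "intercept",      "fits_frame": True,  "strength": 0.2},
    {"source": "travel record",  "fits_frame": False, "strength": 0.6},
    {"source": "financial flag", "fits_frame": False, "strength": 0.5},
]
print(update_frame(frame, signals)["hypothesis"])
```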

You Have to Be Constantly Experimenting

Crunching the data may give you the dots, but there will be multiple ways to connect them. A number of hypothetical “frames” will emerge from the raw data. You need to test the validity of these hypotheses. In some cases, they can be tested against your own internally controlled data. Sometimes they will lie beyond the limits of that data. This means adopting a rigorous and objective testing methodology. Objective is the key word here, because…

You Need to Remove Human Limitations from the Equation

When you look at the historic failures of Intelligence gathering, the fault usually doesn’t lie in the “gathering.” The signals are often there. Frequently, they’re even put together into a workable hypothesis by an analyst. The catastrophic failures in intelligence generally arise because someone, somewhere, made an intuitive call to ignore the information because they didn’t agree with the hypothesis. Internal politics in the Intelligence Community has probably been the single biggest point of failure. Finally…

Data Needs to Be Shared

The ICREACH project came about as a way to allow broader access to the information required to identify warning signals and test out hunches. ICREACH opens up this data pool to nearly two dozen U.S. Government agencies.

Big Data shouldn’t replace intuition. It should embrace it. Humans are incredibly proficient at recognizing patterns. In fact, we’re too good at it. False positives are a common occurrence. But, if we build an objective way to validate our hypotheses and remove our irrational adherence to our own pet theories, more is almost always better when it comes to generating testable scenarios.

Why Cognitive Computing is a Big Deal When it comes to Big Data

Watson beating its human opponents at Jeopardy

When IBM’s Watson won against humans playing Jeopardy, most of the world considered it just another man-against-machine novelty act – going back to Deep Blue’s defeat of chess champion Garry Kasparov in 1997. But it’s much more than that. As Josh Dreller reminded us a few Search Insider Summits ago, when Watson trounced Ken Jennings and Brad Rutter in 2011, it ushered in the era of cognitive computing. Unlike chess, where solutions can be determined solely with massive amounts of number crunching, winning Jeopardy requires a very nuanced understanding of the English language as well as an encyclopedic span of knowledge. Computers are naturally suited to chess. They’re also very good at storing knowledge. In both cases, it’s not surprising that they would eventually best humans. But parsing language is another matter. For a machine to best a man here requires something quite extraordinary. It requires a machine that can learn.

The most remarkable thing about Watson is that no human programmer wrote the program that made it a Jeopardy champion. Watson learned as it went. It evolved the winning strategy. And this marks a watershed development in the history of artificial intelligence. Now, computers have mastered some of the key rudiments of human cognition. Cognition is the ability to gather information, judge it, make decisions and problem solve. These are all things that Watson can do.

 

Peter Pirolli – PARC

Peter Pirolli, one of the senior researchers at Xerox’s PARC campus in Palo Alto, has been doing a lot of work in this area. One of the things that has been difficult for machines has been to “make sense” of situations and adapt accordingly. Remember, a few columns ago, I talked about narratives and Big Data: this is where Monitor360 uses a combination of humans and computers – computers to do the data crunching and humans to make sense of the results. But as Watson showed us, computers do have the potential to make sense as well. True, computers have not yet matched humans in the ability to make sense of an unlimited variety of environmental contexts. We humans excel at quick-and-dirty sensemaking no matter what the situation. We’re not always correct in our conclusions, but we’re far more flexible than machines. But computers are constantly narrowing the gap and, as Watson showed, when a computer can grasp a cognitive context, it will usually outperform a human.

Part of the problem machines face when making sense of a new context is that the contextual information needs to be in a format that can be parsed by the computer. Again, this is an area where humans have a natural advantage. We’ve evolved to be very flexible in parsing environmental information to act as inputs for our sensemaking. But this flexibility has required a trade-off. We humans can go broad with our environmental parsing, but we can’t go very deep. We do a surface scan of our environment to pick up cues and then quickly pattern-match against past experiences to make sense of our options. We don’t have the bandwidth to either gather more information or to compute this information. This is Herbert Simon’s Bounded Rationality.

But this is where Big Data comes in. Data is already native to computers, so parsing is not an issue. That handles the breadth issue. But the nature of data is also changing. The Internet of Things will generate a mind-numbing amount of environmental data. This “ambient” data has no schema or context to aid in sense making, especially when several different data sources are combined. It requires an evolutionary cognitive approach to separate potential signal from noise. Given the sheer volume of data involved, humans won’t be a match for this task. We can’t go deep into the data. And traditional computing lacks the flexibility required. But cognitive computing may be able to both handle the volume of environmental Big Data and make sense of it.

If artificial intelligence can crack the code on going both broad and deep into the coming storm of data, amazing things will certainly result from it.

The Human Stories that Lie Within Big Data

If I wanted to impress upon you the fact that texting and driving is dangerous, I could tell you this:

In 2011, at least 23% of auto collisions involved cell phones. That’s 1.3 million crashes, in which 3,331 people were killed. Texting while driving makes it 23 times more likely that you’ll be in a car accident.

Or, I could tell you this:

In 2009, Ashley Zumbrunnen wanted to send her husband a message telling him “I love you, have a good day.” She was driving to work and as she was texting the message, she veered across the centerline into oncoming traffic. She overcorrected and lost control of her vehicle. The car flipped and Ashley broke her neck. She is now completely paralyzed.

After the accident, Zumbrunnen couldn’t sit up, dress herself or bathe. She was completely helpless. Now a divorced single mom, she struggles to look after her young daughter, who recently said to her, “I like to go play with your friends, because they have legs and can do things.”

The first example gave you a lot more information. But the second example probably had more impact. That’s because it’s a story.

We humans are built to respond to stories. Our brains can better grasp messages that are in a narrative arc. We do much less well with numbers. Numbers are an abstraction and so our brains struggle with numbers, especially big numbers.

One company, Monitor360, is bringing the power of narratives to the world of big data. I chatted with CEO Doug Randall recently about Monitor360’s use of narratives to make sense of Big Data.

“We all have filters through which we see the world. And those filters are formed by our experiences, by our values, by our viewpoints. Those are really narratives. Those are really stories that we tell ourselves.”

For example, I suspect the things that resonated with you in Ashley’s story were the reason for the text – telling her husband she loved him – the irony that the marriage eventually failed after her accident, and the pain she undoubtedly felt when her daughter said she likes playing with other moms who can still walk. All of those things, while they don’t really add anything to our knowledge about the incidence rate of texting-and-driving accidents, strike us at a deeply emotional level because we can picture ourselves in Ashley’s situation. We empathize with her. And that’s what a story is: a vehicle to help us understand the experiences of another.

Monitor360 uses narratives to tap into these empathetic hooks that lie in the mountain of information being generated by things like social media. It goes beyond abstract data to try to identify our beliefs and values. And then it uses narratives to help us make sense of our market. Monitor360 does this with a unique combination of humans and machines.

“A computer can collect huge amounts of data, and the computer can even sort that data. But ‘sense making’ is still very, very difficult for computers to do. So human beings go through that information, synthesize that information and pull out what the underlying narrative is.”

Monitor360 detects common stories in the noisy buzz of Big Data. In the stories we tell, we indicate what we care about.

“This is what’s so wonderful about Big Data. The Data actually tells us, by volume, what’s interesting. We’re taking what are the most often talked about subjects…the data is actually telling us what those subjects are. We then go in and determine what the underlying belief system in that is.”
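
As a toy illustration of what “the data telling us, by volume, what’s interesting” might look like mechanically (the posts, topics and tags below are invented, and real topic extraction is far harder than this), the first pass is just a frequency count; the human work of reading the belief system behind the top subjects comes after.

```python
from collections import Counter

# Hypothetical social posts already tagged with topics; in practice the tagging is the hard part.
posts = [
    {"text": "post 1", "topics": ["privacy", "wearables"]},
    {"text": "post 2", "topics": ["privacy"]},
    {"text": "post 3", "topics": ["battery life", "wearables"]},
    {"text": "post 4", "topics": ["privacy", "trust"]},
]

# Let volume surface what the market cares about; analysts then read the top topics
# to recover the underlying narrative and beliefs.
topic_volume = Counter(topic for post in posts for topic in post["topics"])
for topic, mentions in topic_volume.most_common(3):
    print(topic, mentions)
```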

Monitor360’s realization that it’s the narratives that we care about is an interesting approach to Big Data. It’s also encouraging to know that they’re not trying to eliminate human judgment from the equation. Empathy is still something we can trump computers at.

At least for now.

The Bug in Google’s Flu Trend Data

First published March 20, 2014 in Mediapost’s Search Insider

Last year, Google Flu Trends blew it. Even Google admitted it. It overpredicted the occurrence of flu by a factor of almost 2:1. That’s a good thing for the health care system, because if Google’s predictions had been right, we would have had the worst flu season in 10 years.

Here’s how Google Flu Trends works. It monitors a set of approximately 50 million flu-related terms for query volume. It then compares this against data collected from health care providers where Influenza-like Illnesses (ILI) are mentioned during a doctor’s visit. Since the tracking service was first introduced, there has been a remarkably close correlation between the two, with Google’s predictions typically coming within 1 to 2 percent of the number of doctor’s visits where the flu bug is actually mentioned. The advantage of Google Flu Trends is that it is available about two weeks prior to the ILI data, giving a much-needed head start for responsiveness during the height of flu season.
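
As a back-of-the-envelope illustration of the nowcasting idea (this is not Google’s actual model, and every number here is invented), the core move is to fit the query-volume signal you have today against the ILI figures that arrive weeks later, then use that fit to estimate the current week.

```python
# Toy nowcast: fit weekly flu-related query volume (available immediately) against
# ILI visit rates (available ~2 weeks later), then estimate the current week.
weeks_query_index = [1.2, 1.5, 2.1, 2.8, 3.6]   # normalized search volume, past weeks (invented)
weeks_ili_rate    = [1.1, 1.4, 2.0, 2.9, 3.5]   # % of doctor visits mentioning ILI, same weeks (invented)

n = len(weeks_query_index)
mean_x = sum(weeks_query_index) / n
mean_y = sum(weeks_ili_rate) / n

# Ordinary least-squares slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks_query_index, weeks_ili_rate)) \
        / sum((x - mean_x) ** 2 for x in weeks_query_index)
intercept = mean_y - slope * mean_x

this_week_queries = 4.4                                  # we have this today...
estimated_ili = slope * this_week_queries + intercept    # ...the official number arrives later
print(f"Estimated ILI rate this week: {estimated_ili:.2f}%")
```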

But last year, Google’s estimates overshot actual ILI data by a substantial margin, effectively doubling the size of the predicted flu season.

Correlation is not Causation

This highlights a typical trap with big data – we tend to start following the numbers without remembering what is generating the numbers. Google measures what’s on people’s minds. ILI data measures what people are actually going to the doctor about. The two are highly correlated, but one doesn’t necessarily cause the other. In 2013, for instance, Google speculated that increased media coverage might be the cause of the overinflated predictions. More news coverage would have spiked interest, but not actual occurrences of the flu.

Allowing for the Human Variable

In the case of Google Flu Trends, because it’s using a human behavior as a signal – in this case online searching for information – it’s particularly susceptible to network effects and information cascades. The problem with this is that these social signals are difficult to rope into an algorithm. Once they reach a tipping point, they can break out on their own with no sign of a rational foundation. Because Google tracks the human generated network effect data and not the underlying foundational data, it is vulnerable to these weird variables in human behavior.

Predicting the Unexpected

A recent article in Scientific American pointed out another issue with an over-reliance on data models – Google Flu Trends completely missed the non-seasonal H1N1 pandemic in 2009. Why? Algorithmically, Google wasn’t expecting it. In trying to eliminate noise from the model, they actually eliminated signal arriving at an unexpected time. Models don’t do very well at predicting the unexpected.

Big Data Hubris

The author of the Scientific American piece, associate editor Larry Greenemeier, nailed another common symptom of our emerging crush on data analytics – big data hubris. We somehow think the quantitative black box will eliminate the need for more mundane data collection – say – actually tracking doctor’s visits for the flu. As I mentioned before, the biggest problem with this is that the more we rely on data, which often takes the form of arm’s length correlated data, the further we get from exploring causality. We start focusing on “what” and forget to ask “why.”

We should absolutely use all the data we have available. The fact is, Google Flu Trends is a very valuable tool for health care management. It provides a lot of answers to very pertinent questions. We just have to remember that it’s not the only answer.