Strategic Planning as though the Future Matters – Strategy and Leadership

Why do organizations get blindsided by market transformations that could have been anticipated? After all, scenario planning has been a widely used strategic planning tool for decades and most managers are familiar with the process of considering how they would operate in alternative futures. The reason most organizations get surprised by game-changing events, in my experience, is not that their planning methods are bad. The problem is that they undertake strategic planning processes like scenario development without seeing them as a unique opportunity for learning about and exploring the future. In some cases this is because management lacks sufficient appreciation for the uncertainty and ambiguity their organizations face. More often, however, management is fully aware of the uncertainty of their situation but is seemingly powerless to prepare to adapt to new business realities, especially unpleasant ones.

To help planners avoid strategic surprise, Monitor 360 has created a five-step strategic planning process that has been tested in interactions with leaders in the military, intelligence community, and corporations. By systematically incorporating plausible but challenging future scenarios into their learning processes, decision makers can both mitigate risk and decrease the likelihood of being unprepared for discontinuities. This approach overcomes the paralysis that sometimes happens when people see all the uncertainty their organization faces, as well as the denial that happens when they don’t.

Multiple futures

When thinking about the future, many strategic planners make the mistake of asking, “What will the future be?” Because the future is the net result of so many complex and interdependent issues, the question is daunting, and perhaps unanswerable.

A more realistic question is, “What are the possible challenging futures?” Exploring the multiple ways the future could unfold, each of which would require the organization to adapt radically, enables leaders to better prepare for a wide range of contingencies, and to manage the consequences more effectively when surprises do occur.

Scenario analysis can provide planners with a systematic way of imagining the future and identifying winning long-term strategies that respond to the many ways the future could play out. It helps individuals and their organizations identify and challenge their entrenched mental models and assumptions about what the future might hold, while helping bound the uncertainties they face.

Instead of attempting to predict what’s going to happen, the scenario methodology offers a way to see the forces as they are taking shape and not be blindsided when they lead to major changes. Anticipating the future gives decision-makers the ability to look in the right place for game-changing events, to rehearse the appropriate responses and to systematically track indicators of change.

Five Mindsets for Managing Uncertainty

Scenario thinking is the foundation of our five-step toolkit because of the unique ways it allows leaders to explore and exploit the unknown, and because it offers managers a methodology to consider alternatives in the face of uncertainty. To make scenario planning more effective, we’ve identified five discrete steps in the process, each of which should be undertaken with a distinct mindset. It is important to take these steps one at a time and in order, rather than skipping right away to decision-making.

Create Scenarios — Unleash your Imagination

Scenarios are plausible narratives about futures that are distinctly different from the present. If they are well prepared, they allow for a thorough exploration of future risks and opportunities. Scenario thinkers begin at the same place as traditional risk managers, skillfully making an inventory of what is known about the future. After exploring issues such as demographics as well as aspects of industry structure and customer behavior, scenario thinkers turn to the unknown, the unknowable, and the perceptions that should be challenged. Following a rigorous analytical process aimed at articulating the range of uncertainties an organization could face and all of the relevant outcomes, scenario thinkers design a number of cogent narratives about relevant futures.

Scenarios are written as plausible stories — not probable ones. Traditional risk management is based on probabilities, actuarial tables, and other known and measurable quantities. But scenarios are intended to provoke the imagination and provide a more comprehensive view of risk, so that the results can shed light on critical strategic decisions.

It is important to note that scenario developers create multiple futures, rather than just one. This allows for a more complete exploration of the future, thus avoiding getting wedded to a specific set of assumptions about how uncertainties will unfold. The process of developing multiple scenarios helps to increase the possibility that leaders will not be surprised, because it allows them to rehearse multiple unique futures. Importantly, it also grounds decision-makers in the reality that, in most circumstances, they cannot accurately predict the future. Rather than falsely assuming one outcome will happen, leaders learn that they must make decisions in light of the true uncertainty they face.
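One widely used way to structure this scenario-creation step, though not necessarily Monitor 360’s own method, is to pick the two most critical uncertainties and cross their endpoints into a matrix of four candidate worlds. Here is a minimal sketch of those mechanics in Python; the uncertainties and their endpoints are hypothetical placeholders:

```python
from itertools import product

# A minimal sketch of the classic two-axes scenario matrix.
# The uncertainties and endpoints below are hypothetical examples.
uncertainties = {
    "Regulation": ("light-touch", "restrictive"),
    "Customer adoption": ("rapid", "slow"),
}

# Crossing the endpoints of the two critical uncertainties yields
# four distinctly different candidate futures to flesh out as narratives.
axes = list(uncertainties.items())
for i, outcomes in enumerate(product(*(ends for _, ends in axes)), start=1):
    world = ", ".join(f"{name}: {o}" for (name, _), o in zip(axes, outcomes))
    print(f"Scenario {i}: {world}")
```

Each combination then becomes the skeleton of a narrative, which is fleshed out, stress-tested with outside experts, and refined, as the Navy example below illustrates.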

As an example of this process, the U.S. Navy developed a set of scenarios to help guide the development of the first unified strategy of all the country’s maritime forces, “A Cooperative Strategy for 21st Century Seapower,” released in October 2007. The first step was to develop four working scenarios. These were discussed and refined in a series of eight working sessions around the country with people from the business, government, and academic sectors who could provide valuable insight about issues the Navy needed to address in the future. The participation of these experts, and their feedback, helped to test the validity of the scenarios, which were then refined for publication and dissemination.

The scenarios had a significant impact on the future strategy of the Navy. For example, they helped to define a new mission for the Navy in its response to humanitarian crises. The report concluded: “Building on relationships forged in times of calm, we will continue to mitigate human suffering as the vanguard of interagency and multinational efforts, both in a deliberate, proactive fashion and in response to crises. Human suffering moves us to act, and the expeditionary character of maritime forces uniquely positions them to provide assistance.”

Determine Required Capabilities for each Scenario — Give your Creativity Free Rein

The second step of the process is to identify what it takes to be successful in each of the futures identified. After the scenario process has imagined distinctly different future worlds that the organization’s leaders have acknowledged are plausible, relevant, and important, what would a high-performing organization look like in each of these worlds? That is, if an organization were dealt the card of one scenario, what would it need to do in order to be successful?

To answer this question, planners need to make a list of key success factors and capabilities; these often start with “the ability to….” Key capabilities for militaries or intelligence agencies might be the ability to project force rapidly abroad or the ability to collect and process open-source information. For companies, they might be the ability to build brands that address customer needs and inspire loyalty, or the ability to launch products quickly.

As a case in point, a major software company needed to determine where to invest its limited resources to succeed in a market roiled by new competitors. In one scenario, the company needed good relationships with its value-added resellers and excellent customer service. In another scenario, it needed an entirely different set of capabilities, including low costs and operating-system integration.

It’s important to address the capabilities question as if it were a set of independent problems: what would it take to be a winner in a given scenario? Doing so encourages bold, creative thinking, and avoids the trap of limiting the alternatives to those that are doable with current capabilities and resources. By keeping this step separate from the next one, assessing current capabilities, planners are not hobbled by thinking only about what they are good at today, nor do they have to struggle with imagining themselves in four different worlds at the same time.

It is often a wrenching experience for leaders simply to look for the absolute best strategic posture for their organization in each scenario. This is one measure of how hard it is for them to imagine doing business in any future whose success factors are totally different from those of the current environment.

Assess Current Capabilities — Be Painfully Realistic

Separate from the critical examination of the capabilities needed for success in each scenario, planners must ask: What are we good at right now? The answers could be human capital, relationships, or operational efficiency. These capabilities are generally described as competitive assets that cannot be bought and sold on the free market. Organizations can’t just say, “We’ll invest $100 million next month, and then we’ll have that ability” or “We want to do that.” They have to build the capability over time.

Often an outside perspective — for example, one based on detailed discussions with customers — can be helpful in getting an unbiased evaluation of the capabilities in which an organization excels.

Identify Gaps — Provide Honest Analysis

Next, the organization should compare its own capabilities with those needed to succeed in the scenarios. Such capability maps will highlight not only the capabilities it needs to develop — the capability gaps — but also the capabilities it has already invested in that may become redundant.
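To make the notion of a capability map concrete, here is a minimal sketch in Python. The capability names are hypothetical placeholders, and simple set operations are enough to surface both the gaps and the potentially redundant investments:

```python
# A minimal sketch of a capability-gap map. All capability names are
# hypothetical placeholders, not drawn from any real engagement.
current = {"brand building", "channel relationships", "rapid product launch"}

needed_by_scenario = {
    "Scenario A": {"brand building", "low-cost operations"},
    "Scenario B": {"rapid product launch", "open-source integration"},
}

needed_anywhere = set().union(*needed_by_scenario.values())

gaps = needed_anywhere - current        # capabilities to build
redundant = current - needed_anywhere   # investments that may become redundant

for scenario, needed in needed_by_scenario.items():
    print(scenario, "gap:", sorted(needed - current))
print("Overall gaps:", sorted(gaps))
print("Possibly redundant:", sorted(redundant))
```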

Make Choices — Consider your Options

Once organizations have analyzed the gap between their strengths and the capabilities needed in each scenario, they face some big decisions. There could be capabilities that they need in all the scenarios imagined but that they don’t currently have. As a first step, an organization might safely develop these. We call those “no-regrets moves.”

Other moves are what we call “big bets.” These are capabilities needed in a particular scenario or a small number of scenarios, but not in others. Organizations make these bets consciously, after systematically thinking through the types of capabilities, their relationship to the environment around them, and the futures that they feel are likely to occur. They can adjust their decisions as more data is collected or events unfold in the world. This process is based on the theory of real options, which suggests an organization can gain an advantage by making many small bets and, as information accumulates, increasing or decreasing those bets accordingly.
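Here is a toy sketch of that real-options logic in Python. The stakes, beliefs, and thresholds are entirely invented: start small across several capabilities, then scale each bet up or down as evidence accumulates about which scenario is unfolding:

```python
# A toy sketch of real-options staking. All numbers are invented.
bets = {"capability_A": 1.0, "capability_B": 1.0, "capability_C": 1.0}  # $M stakes

# Updated belief that the scenario requiring each capability will occur,
# after some hypothetical new market data arrives.
belief = {"capability_A": 0.7, "capability_B": 0.4, "capability_C": 0.1}

for cap, stake in bets.items():
    p = belief[cap]
    if p > 0.6:
        bets[cap] = stake * 2      # double down: the scenario looks likely
    elif p < 0.2:
        bets[cap] = stake * 0.25   # wind down: the option is unlikely to pay off

print(bets)  # capability_A grows, capability_B holds, capability_C shrinks
```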

The crucial questions for organizations to ask when making choices are: What would be the risk if a scenario happened and we didn’t have this capability? And what would be the risk if a scenario didn’t happen and we did have it?

What’s Clouding the Future?

It’s painfully difficult for individual leaders to keep their minds open to multiple futures and to follow a systematic process like the one described above. IBM’s famous story illustrates this all-too-common tendency.

In 1980, the personal computer represented a tiny niche market. When IBM was considering developing a computer for the masses, it convened a working group to forecast its potential market. The team projected that the market for PCs would total a mere 241,683 units over the next five years, and that demand would peak after two years and then trend downward. They believed that since existing computers had such a small amount of processing power, people would not want to purchase a second one.

As a result, IBM determined that there was no potential in the marketplace and effectively killed its effort to dominate the personal computer market, ceding the operating system to Microsoft and the processor to Intel. IBM lost out on a $240 billion market, one in which nearly every household in the developed world would eventually want one or more of the machines and then would want to upgrade them every few years.

But what if someone in the room had asked, “What if people want a PC on every desktop? What if individuals start carrying PCs in their pockets? What if PCs develop a communications capability? What if they are widely used to play games? Maybe we should think of a different scenario where the market would be more like 20 million units?” These would have been completely off-the-wall, outrageous ideas at the time, but if just one person in the room had explored such different lines of thought, the futures of Microsoft, Intel and IBM might have evolved differently.

The Benefits of a Systematic, Disciplined Approach

Anticipating the future isn’t just about avoiding strategic surprise or minimizing the downside risk. There’s also a huge upside: You are creating the future that you want and making sense of how the world may play out. Understanding your choices can be an empowering process.

When planners follow a process that systematically cuts through the barriers to effective group learning and decision-making, and combine that process with principles that give discipline and robustness to the entire endeavor, the future, and our place in it, comes into a much sharper focus.

The Metaphysical Corporation and The Death of Capitalism?

Something strange is happening to companies. More and more, their business is being conducted in non-physical markets. Businesses used to produce stuff. Now, they produce ideas. A recent op-ed piece from Wharton speculated that companies are working their way up Maslow’s Hierarchy. The traditional business produced things that met the needs of the lowest levels of the pyramid – shelter, food, warmth, security. As consumerism spread, companies worked their way up to the next levels: entertainment, attainment and enjoyment. Now, the things that companies sell sit at the top of the pyramid – fulfillment, creativity, self-actualization.

The post also talks about another significant shift that’s happening on the balance sheets of Corporate America. Not only are the things that corporations sell changing, but the things that make up the value of the company itself are also changing. According to research by Ocean Tomo, a merchant bank that specializes in intellectual property, the asset mix of companies has shifted dramatically in the past 40 years. In 1975, tangible assets (buildings, land, equipment, inventory) made up 83% of the market value of the S&P 500 companies. By 2010, that had flipped: intangible assets (patents, trademarks, goodwill and brand) made up 80% of the market value of the S&P 500.

Chains vs Networks and the Removal of Friction

Barry Libert, Jerry Wind and Megan Beck Finley, the authors of the Wharton piece, focus mainly on the financial aspects of this shift. They point out that generally accepted accounting principles (GAAP) are quickly falling behind this corporate evolution. For example, employees are still classified as an expense, rather than an asset. I’m personally more interested in what this shift means for the very structure of a corporation.

If you built stuff, you needed a supply chain. Vertical integration was the way to remove physical transactional friction from the manufacturing process. Vertical integration bred hierarchical management styles. Over time, technology has removed some of the friction, and some parts of the chain have evolved into open markets. The automotive industry is a good example. Many of the components of your 2015 Fusion are supplied to Ford by independent vendors. Despite this, makers of “stuff” still want to control the entire chain through centralized management.

But if you sell ideas, you need to have a network. Intangible products don’t have any physical friction, so supply chains are not required. And if you try to control a network with a centralized hierarchy, branches of your network soon wither and die.

The New Real Thing

Coke has not been a maker of stuff for quite some time now. Sure, they make beverages, so technically they’re quenching our thirst, but the true value of Coke lies in its brand and our connection to that brand. The “Real Thing” is, ironically and quite literally, a figment of our imagination. If you were to place Coke on Maslow’s Hierarchy, it wouldn’t sit on the bottom level (physiological) but on the third (love/belonging) or even the fourth (esteem).

Coke is very aware of its personal connection with its customers and the intangibles that come with it. That’s why the Coca-Cola Freestyle Vending Machine comes with the marketing tag line: “So many options. Thirst isn’t one of them.” You can customize your own formulation from over 100 choices, and if you have the Freestyle app, you can reorder your brand at any Coke Freestyle machine in the world. Of course, Coke is quietly gathering all the customer data that’s generated, including consumption patterns and regional preferences. Again, this intimate customer insight is just one of the intangibles that is becoming increasingly valuable.

Coke is not only changing how it distributes its product. It’s also grappling with changing its very structure. In a recent conversation I had with CMO Joe Tripodi, he talked a lot about Coke’s move towards becoming a networked corporation. Essentially, Coke wants to make sure that worldwide innovation isn’t choked off by commands coming from Atlanta.

The Turning Point of Capitalism

As corporate America moves away from the making of physical stuff and towards the creation of concepts that it shares with customers, what does that mean for capital markets? In his new book, The Zero Marginal Cost Society, Jeremy Rifkin contends that capitalism is dying a slow death. Eventually, it will be replaced by a new collaborative common market made possible by the increasing shrinkage of marginal costs. As we move from the physical to the metaphysical, the cost of producing consumable services or digital concept-based products (books, music, video, software) drops dramatically. Capital was required to overcome physical transactional friction. If that friction disappears, so does the need for capital. Rifkin doesn’t believe the death of capitalism will come any time soon, but he does see an inevitable trend towards a new type of market he calls the Collaborative Commons.

Get Intimate

My last takeaway is this – if future business depends on connecting with customers and their conceptual needs, it becomes essential to know those customers on a deeply intimate level.  Throw away any preconceptions from the days of mass marketing and start thinking about how to connect with the “Market of One.”

A Prospect Ignored isn’t Really a Prospect

I’ve ranted about this before and – oh yes – I shall rant again!

But first – the back-story.

I needed some work done at a property I own. I found three contractors online and reached out to each of them to get a quote.

Cue crickets.

No response. Nothing! So a few days later I politely followed up with each to prod the process along. Again, nothing. Then, after 4 weeks of repeated e-nagging, one finally coughed up a quote. Most of the details were wrong, but at least someone at the other end was responding with minimal signs of consciousness.

Fast-forward 2 months. The work is still not done. At this point, I’m still trying to convey the specifics of the job and to get an estimated timeline. If I had an option, I’d take it. But the sad fact is, as spotty as the communication is with my contractor of choice, it’s still better than his competitors. One never did respond, even after a number of emails and voicemails. One finally sent a quote, but it was obvious he didn’t want the work. Fair enough. If the laws of supply and demand are imbalanced this much in their favor, who am I to fight it?

But here’s the thing. Market balances can change on a dime. Someday I’ll be in the driver’s seat and they’ll be scrambling to line up work to stay in business. And when they reach out to their contact list, a lot of those contacts will respond with an incredulous WTF. If you didn’t want my business when I needed you, why would you think I’d give it to you when you need me? A prospect spurned has a long memory for the specifics of said spurning. So, Mr. (or Ms.) Contractor, you can go take a flying leap.

If you’re going to use online channels to build your business, don’t treat them like a tap you can turn on and off at your discretion. Your online prospects have to be nurtured. If you can’t take any new business on, that’s fine. But at least have enough respect for them to send a polite response explaining the reason you can’t do the work. As long as we prospects are treated with respect, you’d be amazed at how reasonable we can be. Perhaps we can schedule the job for when you do have time. At the very least, we won’t walk away from the interaction with a bitter taste that will linger for years to come.

In 2005, Benchmark Portal did a study comparing response rates for email requests. The results were discouraging. Over 50% of SMBs never responded at all. Only a small fraction actually managed to respond within 24 hours of the request.

I would encourage you to do a little surreptitious checking on your own response rates. Prospects contacting you need your help, and none of us like to hear our pleas for help go unanswered. 24 hours may seem like a reasonable time frame to you, but if you’re on the other end, it’s more than enough time to see your enthusiasm cool dramatically. Make it someone’s job to field online requests and set a 4-hour response time limit. I’m not talking about an auto-generated generic email here. I’m talking about a personalized response that makes it clear that someone has taken the time to read the prospect’s request and is working on it. Also give a clear indication of how long it will take to follow up with the required information.
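If you want to hold yourself to that 4-hour rule, even a trivial script over an export of your inbox can flag requests that are going stale. This is only a sketch; the field names and records are made up:

```python
from datetime import datetime, timedelta

# A minimal sketch: flag inbound requests approaching a 4-hour
# response deadline. The inquiry records below are made-up examples.
SLA = timedelta(hours=4)
now = datetime(2014, 9, 15, 14, 0)

inquiries = [
    {"from": "prospect_a@example.com", "received": datetime(2014, 9, 15, 9, 30), "answered": False},
    {"from": "prospect_b@example.com", "received": datetime(2014, 9, 15, 12, 45), "answered": False},
]

for inq in inquiries:
    age = now - inq["received"]
    if not inq["answered"] and age > SLA:
        print(f"OVERDUE ({age}): reply personally to {inq['from']}")
    elif not inq["answered"]:
        print(f"Due in {SLA - age}: {inq['from']}")
```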

Why are these initial responses so critical? It’s not just to keep your field of potential prospects green and growing. It’s also because we prospects are using something called “signaling” to judge future interactions with a business. When we reach out to a new business we find online, we have no idea what it will be like to be their customer. We don’t have access to that information. So, we use things we do know as a proxy for that information. These things provide “signals” to help us fill in the blanks in our available information. An example would be hiring new employees. We don’t know how the person we’re interviewing will perform as an employee, so we look for certain things in a resume or an interview to act as signals that would indicate that the candidate will perform well on the job if hired.

If I’m a prospect looking for a business – especially one providing a service that will require an extended relationship between the business and myself – I need signals to show me how reliable the business will be if I choose them. Will they get the work done in a timely manner? Will the quality of the work be acceptable? Will they be responsive and accommodating to my requirements? If problems arise, will they be willing to work through those problems? Those are all questions I don’t have the answer to. All I have are indications based on my current interactions with the business. And if those interactions have required my constant nagging and clarification to avoid incorrect responses, guess what my level of confidence might be with said business?

Learning about Big Data from Big Brother

You may not have heard of ICREACH, but it has probably heard of you. ICREACH is the NSA’s own Google-like search engine. And if Google’s mission is to organize the world’s information, ICREACH’s mission is to snoop on the world. After whistleblower Edward Snowden tipped the press off to the existence of ICREACH, the NSA fessed up last month. The amount of data we’re talking about is massive. According to The Intercept website, the tool can handle two to five billion new records every day, including data on Americans’ emails, phone calls, faxes, Internet chats and text messages. It’s Big Brother meets Big Data.

I’ll leave aside for the moment the ethical aspect of this story.  What I’ll focus on is how the NSA deals with this mass of Big Data and what it might mean for companies who are struggling to deal with their own Big Data dilemmas.

Perhaps no one deals with more Big Data than the Intelligence Community. And Big Data is not new for them. They’ve been digging into data, trying to find meaningful signals amongst the noise, for decades. And the stakes of successful data analysis are astronomically high here: not only is it a matter of life and death, but a failure to connect the dots can lead to the kinds of nightmares that will haunt us for the rest of our lives. When the pressure is on to this extent, you can be sure that they’ve learned a thing or two. How the Intelligence Community handles data is something I’ve been looking at recently. There are a few lessons to be learned here.

Owned Data vs Environmental Data

The first lesson is that you need different approaches for different types of data. The Intelligence Community has its own files, which include analysts’ reports, suspect files and other internally generated documentation. Then you have what I would call “environmental” data. This includes raw data gathered from emails, phone calls, social media postings and cellphone locations. Raw data needs to be successfully crunched, screened for signal vs. noise and then interpreted in a way that’s relevant to the objectives of the organization. That’s where…

You Need to Make Sense of the Data – at Scale

Probably the biggest change in the Intelligence community has been to adopt an approach called “Sense making.”  Sense making really mimics how we, as humans, make sense of our environment. But while we may crunch a few hundred or thousand sensory inputs at any one time, the NSA needs to crunch several billion signals.

Human intuition expert Gary Klein has done much work in the area of sense making. His view of sense making relies on the existence of a “frame” that represents what we believe to be true about the world around us at any given time.  We constantly update that frame based on new environmental inputs.  Sometimes they confirm the frame. Sometimes they contradict the frame. If the contradiction is big enough, it may cause us to discard the frame and build a new one. But it’s this frame that allows us to not only connect the dots, but also to determine what counts as a dot. And to do this…
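To make the frame idea concrete, here is a toy rendering of that loop in Python. The hypotheses, thresholds, and evidence stream are all invented: evidence that fits strengthens the frame, contradiction weakens it, and enough contradiction forces a reframe:

```python
# A toy rendering of the data/frame sensemaking loop.
# Hypotheses, thresholds, and the evidence stream are invented.
frame = {"hypothesis": "routine traffic", "confidence": 0.6}
REFRAME_THRESHOLD = 0.2

def update(frame, fits_frame, weight=0.15):
    """Strengthen the frame when evidence fits it; weaken it otherwise."""
    delta = weight if fits_frame else -weight
    frame["confidence"] = max(0.0, min(1.0, frame["confidence"] + delta))
    if frame["confidence"] < REFRAME_THRESHOLD:
        # Too much contradiction: discard the frame and build a new one.
        frame = {"hypothesis": "anomalous activity", "confidence": 0.5}
    return frame

for fits in [True, False, False, False, False]:
    frame = update(frame, fits)
    print(frame)
```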

You Have to Be Constantly Experimenting

Crunching the data may give you the dots, but there will be multiple ways to connect them. A number of hypothetical “frames” will emerge from the raw data. You need to test the validity of these hypotheses. In some cases, they can be tested against your own internally controlled data. Sometimes they will lie beyond the limits of that data. This means adopting a rigorous and objective testing methodology. Objective is the key word here, because…

You Need to Remove Human Limitations from the Equation

When you look at the historic failures of intelligence gathering, the fault usually doesn’t lie in the “gathering.” The signals are often there. Frequently, they’re even put together into a workable hypothesis by an analyst. The catastrophic failures in intelligence generally arise because someone, somewhere, made an intuitive call to ignore the information because they didn’t agree with the hypothesis. Internal politics in the Intelligence Community has probably been the single biggest point of failure. Finally…

Data Needs to Be Shared

The ICREACH project came about as a way to allow broader access to the information required to identify warning signals and test out hunches. ICREACH opens up this data pool to nearly two dozen U.S. Government agencies.

Big Data shouldn’t replace intuition. It should embrace it. Humans are incredibly proficient at recognizing patterns. In fact, we’re too good at it. False positives are a common occurrence. But, if we build an objective way to validate our hypotheses and remove our irrational adherence to our own pet theories, more is almost always better when it comes to generating testable scenarios.

Twitch – Another Example of a Frictionless Market

Twitch just sold for $1 billion. That’s not really news. We’ve become inured to the never-ending stream of tech acquisitions that instantly transforms entrepreneurial techies into some of the richest people on the planet. No, what’s interesting about Twitch emerges if we slow down long enough to think about how this particular start-up managed to create $1 billion in value.

A billion dollars is a lot of money. If we looked back just 50 years, a billion dollars in assets would make a company number 40 on the Fortune 500. If Twitch were somehow teleported back to 1964, it would rank just eight slots under Procter and Gamble (assets worth $1.15 billion) and three slots above Sunoco (assets of $0.88 billion). Coca-Cola would be left in the dust with a mere $485 million in assets. Today a half billion dollars is chump change in Silicon Valley terms.

This becomes more amazing when you consider that Twitch is only 3 years old. And it really started as an accident.

Remember EDtv? Probably not. It was a pretty forgettable 1999 movie (based on a 1994 Quebec film called Louis 19, King of the Airwaves) starring Matthew McConaughey. The idea was that Ed would be followed by cameras 24 hours a day, 7 days a week, making his life a reality TV show. 1998’s The Truman Show had a similar theme (albeit with better ratings). Anyway, the point made in both movies was that an average life, if televised, could be entertaining enough to make people watch. In 2006, Emmett Shear and Justin Kan decided to test the premise. They launched Justin.tv. Soon they invited others to simulcast their lives as well.

What Kan and Shear did, although they probably weren’t intending to at the time, was create a platform that allowed anyone to be a real-time broadcaster with zero transactional costs. They created a perfect market for live TV. Last week I talked about AirBnB, TripAdvisor and VRBO.com creating a more perfect market for tourism. The key characteristic of a perfect market is that barriers to entry are reduced to zero, turning the market into an emergent sandbox from which new things tend to pop up. And that’s exactly what happened with Twitch.

Shear and Kan found that one group in particular embraced the idea of livecasting – gamers. They could communicate with other gamers, but they could also show off their mad gaming skills. Using the Justin.tv platform, Twitch was launched for the gaming industry in 2011. And thanks to Twitch, gaming has become a spectator sport – at a massive scale.

Twitch’s “stars” – like 30-year-old Tessa Brooks, who goes by “Tessachka” and broadcasts an average of 42 hours of programming a week – post their schedules so that their audiences can tune in. Twitch has about 55 million viewers per month who consume over 16 billion minutes of video programming. According to SocialBlade.com, this month, “Riotgames” is the top ranked Twitch broadcaster, with almost a million followers and over 18 million channel views.

Again, those are big numbers. A network show that pulls in 18 million viewers would be number 5 in the Nielsen ratings. And while Netflix’s House of Cards or Orange is the New Black may have made waves at the Emmys, The Atlantic estimates that only two to three million people watch a newly posted episode in the first week. On a good week, Riotgames could blow that away without twitching a trigger finger.

Twitch not only created a platform that generates audiences, it also generated a marketplace. Where there are eyeballs, there’s revenue potential. Twitch cuts its gamers in on the advertising revenue. I couldn’t find numbers on how lucrative this could be, but I suspect Justin may be able to quit his day job.

Like I said, the Twitch story is interesting, but what is vastly more interesting is the market dynamics that it has unleashed. Amazon’s $1 billion bid is not for the technology. It’s for the community and the market that comes with that community. When it comes to leveraging the potential of zero-transactional-cost markets, Amazon knows a thing or two. And one of the things it knows is that in frictionless markets, if you can navigate the turbulence, tremendous value can be created in an amazingly short time. Say, for instance, $1 billion in just 3 years. It took Procter and Gamble 127 years to be worth that much.

Technology is Moving Us Closer to a Perfect Market

I have two very different travel profiles. When I travel on business, I usually stick with the big chains, like Hilton or Starwood. The experience is less important to me than predictability. I’m not there for pleasure; I’m there to sleep. And, because I travel on business a lot (or used to), I have status with them. If something goes wrong, I can wave my Platinum or Diamond guest card around and act like a jerk until it gets fixed.

But, if I’m traveling for pleasure, I almost never stay in a chain hotel. In fact, more and more, I stay in a vacation rental house or apartment. It’s a little less predictable than your average Sheraton or Hampton Inn, but it’s almost always a better value. For example, if I were planning a last-minute getaway to San Francisco for Labor Day weekend, I’d be shelling out just under $400 for a fairly average hotel room at the Hilton by Union Square. But for about the same price, I could get an entire 4-bedroom house that sleeps 8 just two blocks from Golden Gate Park. And that was with just a quick search on AirBnB.com. I could probably find a better deal with the investment of a few minutes of my time.

Travel is just one of the markets that technology has made more perfect. And when I say “perfect” I use the term in its economic sense. A perfect market has perfect competition, which means that the barriers to entry have been lowered and most of the transactional costs have been eliminated. The increased competition lowers prices to a sustainable minimum. At that point, the market reaches a state called Pareto optimality, in which no further change can make any participant better off without making someone else worse off.
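For the formally inclined, the standard textbook statement of that condition (not specific to the travel example) is:

```latex
% An allocation x* in the feasible set X, with participant utilities u_i,
% is Pareto optimal when no alternative makes someone better off
% without making someone else worse off:
\nexists\, x \in X :\quad
u_i(x) \ge u_i(x^{*}) \;\; \forall i
\quad \text{and} \quad
u_j(x) > u_j(x^{*}) \;\; \text{for some } j
```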

Whether a perfect market is a good thing or not depends on your perspective. If you’re a long-term participant in the market and your goal is to make the biggest profit possible, a perfect market is the last thing you want. If you’re a new entrant to the market, it’s a much rosier story – any shifts that take the market closer to a Pareto Optimal will probably be to your benefit. And if you’re a customer, you’re in the best position of all. Perfect markets lead inevitably to better value.

Since the advent of VRBO.com and, more recently, AirBnB.com, the travel marketplace has moved noticeably closer to being perfect. Sites like these, along with travel review aggregators like TripAdvisor.com, have significantly reduced the transaction costs of the travel industry. The first wave was the reduction of search costs. Property owners were able to publish listings in a directory that made it easy to search and filter options. Then, the publishing of reviews gave us the confidence we needed to stray beyond the predictably safe territory of the big chains.

But, more recently, a second wave has further reduced transaction costs for independent vacation property owners. I was recently talking to a cousin who rents his flat in Dublin through AirBnB, which takes all the headaches of vacation property management away in return for a cut of the action. He was up and running almost immediately and has had no problem renting his flat during the weeks he makes it available. He found the barriers to entry to be essentially zero. A cottage industry of property managers and key exchange services has sprung up around the AirBnB model.

What technology has done to the travel industry is essentially turn it into a Long Tail business model. As Chris Anderson pointed out in his book The Long Tail, Long Tail markets need scale-free networks. Scale-free networks only work when transaction costs are eliminated and entry into the market is free of friction. When this happens, the Power Law distribution still stays in place, but the tail becomes longer. The Long Tail of Tourism now includes millions of individually owned vacation properties. For example, AirBnB has almost 800 rentals available in Dublin alone. According to Booking.com, that’s about 7 times the total number of hotels in the city.

Another thing that happens is, over time, the Tail becomes fatter. More business moves from the head to the tail. The Pareto Principle states that in Power Law distributions, 20% of the businesses get 80% of the business. Online, the ratio is closer to 72/28.
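As a rough illustration of that head-versus-tail arithmetic, you can compute the head’s share directly from a Zipf-like power law. The exponent here is a guess, and real markets vary, but it shows how the split moves:

```python
# A rough sketch: share of total business captured by the top 20% of
# sellers under a Zipf-like power law. The exponent is a guess.
N = 10_000                     # number of businesses in the market
s = 1.0                        # Zipf exponent (hypothetical)

weights = [1 / rank**s for rank in range(1, N + 1)]
total = sum(weights)
head = sum(weights[: N // 5])  # the top 20% of businesses

# With s = 1.0 this prints roughly an 80/20-style split;
# lowering s fattens the tail toward ratios like 72/28.
print(f"Top 20% capture {head / total:.0%} of the market")
```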

These shifts in the market are more than just interesting discussion topics for economists. They mark a fundamental change in the rules of the game. Markets that are moving towards perfection remove the advantages of size and incumbency and reward nimbleness and adaptability. They also, at least in this instance, make life more interesting for customers.

Why Cognitive Computing is a Big Deal When it comes to Big Data

Watson beating its human opponents at Jeopardy

When IBM’s Watson won against humans playing Jeopardy, most of the world considered it just another man-against-machine novelty act, going back to Deep Blue’s defeat of chess champion Garry Kasparov in 1997. But it’s much more than that. As Josh Dreller reminded us a few Search Insider Summits ago, when Watson trounced Ken Jennings and Brad Rutter in 2011, it ushered in the era of cognitive computing. Unlike chess, where solutions can be determined solely with massive amounts of number crunching, winning Jeopardy requires a very nuanced understanding of the English language as well as an encyclopedic span of knowledge. Computers are naturally suited to chess. They’re also very good at storing knowledge. In both cases, it’s not surprising that they would eventually best humans. But parsing language is another matter. For a machine to best a man here requires something quite extraordinary. It requires a machine that can learn.

The most remarkable thing about Watson is that no human programmer wrote the program that made it a Jeopardy champion. Watson learned as it went. It evolved the winning strategy. And this marks a watershed development in the history of artificial intelligence. Now, computers have mastered some of the key rudiments of human cognition. Cognition is the ability to gather information, judge it, make decisions and problem solve. These are all things that Watson can do.

Peter Pirolli – PARC

Peter Pirolli, one of the senior researchers at Xerox’s PARC campus in Palo Alto, has been doing a lot of work in this area. One of the things that has been difficult for machines has been to “make sense” of situations and adapt accordingly. Remember, a few columns ago, I talked about narratives and Big Data; this is where Monitor360 uses a combination of humans and computers – computers to do the data crunching and humans to make sense of the results. But as Watson showed us, computers do have the potential to make sense as well. True, computers have not yet matched humans in the ability to make sense in an unlimited variety of environmental contexts. We humans excel at quick and dirty sense making no matter what the situation. We’re not always correct in our conclusions, but we’re far more flexible than machines. But computers are constantly narrowing the gap and, as Watson showed, when a computer can grasp a cognitive context, it will usually outperform a human.

Part of the problem machines face when making sense of a new context is that the contextual information needs to be in a format that can be parsed by the computer. Again, this is an area where humans have a natural advantage. We’ve evolved to be very flexible in parsing environmental information to act as inputs for our sense making. But this flexibility has required a trade-off. We humans can go broad with our environmental parsing, but we can’t go very deep. We do a surface scan of our environment to pick up cues and then quickly pattern match against past experiences to make sense of our options. We don’t have the bandwidth to either gather more information or to compute this information. This is Herbert Simon’s Bounded Rationality.

But this is where Big Data comes in. Data is already native to computers, so parsing is not an issue. That handles the breadth issue. But the nature of data is also changing. The Internet of Things will generate a mind-numbing amount of environmental data. This “ambient” data has no schema or context to aid in sense making, especially when several different data sources are combined. It requires an evolutionary cognitive approach to separate potential signal from noise. Given the sheer volume of data involved, humans won’t be a match for this task. We can’t go deep into the data. And traditional computing lacks the flexibility required. But cognitive computing may be able to both handle the volume of environmental Big Data and make sense of it.

If artificial intelligence can crack the code on going both broad and deep into the coming storm of data, amazing things will certainly result from it.

The Human Stories that Lie Within Big Data

If I wanted to impress upon you the fact that texting and driving is dangerous, I could tell you this:

In 2011, at least 23% of auto collisions involved cell phones. That’s 1.3 million crashes, in which 3,331 people were killed. Texting while driving makes it 23 times more likely that you’ll be in a car accident.

Or, I could tell you this:

In 2009, Ashley Zumbrunnen wanted to send her husband a message telling him “I love you, have a good day.” She was driving to work and as she was texting the message, she veered across the centerline into oncoming traffic. She overcorrected and lost control of her vehicle. The car flipped and Ashley broke her neck. She is now completely paralyzed.

After the accident, Zumbrunnen couldn’t sit up, dress herself or bathe. She was completely helpless. Now a divorced single mom, she struggles to look after her young daughter, who recently said to her “I like to go play with your friends, because they have legs and can do things.”

The first example gave you a lot more information. But the second example probably had more impact. That’s because it’s a story.

We humans are built to respond to stories. Our brains can better grasp messages that are in a narrative arc. We do much less well with numbers. Numbers are an abstraction and so our brains struggle with numbers, especially big numbers.

One company, Monitor360, is bringing the power of narratives to the world of big data. I chatted with CEO Doug Randall recently about Monitor360’s use of narratives to make sense of Big Data.

“We all have filters through which we see the world. And those filters are formed by our experiences, by our values, by our viewpoints. Those are really narratives. Those are really stories that we tell ourselves.”

For example, I suspect the things that resonated with you in Ashley’s story were the reason for the text – telling her husband she loved him – the irony that the marriage eventually failed after her accident, and the pain she undoubtedly felt when her daughter said she likes playing with other moms who can still walk. All of those things, while they don’t really add anything to our knowledge about the incidence rate of texting-and-driving accidents, strike us at a deeply emotional level because we can picture ourselves in Ashley’s situation. We empathize with her. And that’s what a story is, a vehicle to help us understand the experiences of another.

Monitor360 uses narratives to tap into these empathetic hooks that lie in the mountain of information being generated by things like social media. It goes beyond abstract data to try to identify our beliefs and values. And then it uses narratives to help us make sense of our market. Monitor360 does this with a unique combination of humans and machines.

“A computer can collect huge amounts of data and the computer can even sort that data. But ‘sense making’ is still very, very difficult for computers to do. So human beings go through that information, synthesize that information and pull out what the underlying narrative is.”

Monitor360 detects common stories in the noisy buzz of Big Data. In the stories we tell, we indicate what we care about.

“This is what’s so wonderful about Big Data. The Data actually tells us, by volume, what’s interesting. We’re taking what are the most often talked about subjects…the data is actually telling us what those subjects are. We then go in and determine what the underlying belief system in that is.”

Monitor360’s realization that it’s the narratives that we care about is an interesting approach to Big Data. It’s also encouraging to know that they’re not trying to eliminate human judgment from the equation. Empathy is still something we can trump computers at.

At least for now.

Want to Be More Strategic? Stand Up!

One of the things that always frustrated me in my professional experience was my difficulty in switching from tactical to strategic thinking. For many years, I served on a board that was responsible for the strategic direction of an organization. A friend of mine, Andy Freed, served as an advisor to the board. He constantly lectured us on the difference between strategy and tactics:

“Strategy is your job. Tactics are mine. Stick to your job and I’ll stick to mine.”

Despite this constant reminder, our discussions always seemed to quickly spiral down to the tactical level. We all caught ourselves doing it. It seemed that as soon as we started thinking about what needed to be done and why, we automatically shifted gears and thought about how it should be done.

A recent study may have found the problem. We were sitting down. We should have stood up. Better yet, we should have taken the elevator to the top of the building (we actually did do this at one board retreat in Scottsdale, Arizona). Two researchers at the University of Toronto (home, I should point out, of what was the tallest free standing structure in the world for many years – the CN Tower), Pankaj Aggarwal and Min Zhao, found that a subject’s physical situation impacted how strategic they were. When subjects were physically higher up, say standing on a tall stool, they were more likely to look at the “big picture.”

Our physical context has more than a little impact on how we think. It’s a phenomenon called Mental Construal. And it’s not just restricted to how strategic our thinking is. It can impact things like social judgment as well. In a 2006 paper, University of Michigan professor Norbert Schwarz gave some examples that fall under the category called “situated concepts.” For example, the mental images you retrieve when I say “chair” might be different if we’re standing in a living room rather than an airplane or movie theatre. Another example, which unfortunately speaks to a darker side of human nature, is how you would respond to the face of a young African American when shown in the context of a church scene versus the context of a street corner scene.

Schwarz also talks about levels of construal. We’re more successful staying at strategic levels when our planning is trouble-free. The minute we hit a problem, we tend to revert to finer-grained tactical thinking. Again, in my board experience, the minute we started hitting problems we immediately tried to solve them, which effectively derailed any strategic discussion.

In his book, Creativity: Flow and the Psychology of Discovery and Invention, Mihaly Csikszentmihalyi found that physical contexts can also impact creativity. Physicist Freeman Dyson found that walking was essential to drive the creative process:

“Again, I never went to a class that (Richard) Feynman taught. I never had any official connection with him at all, in fact. But we went for walks. Most of the time that I spent with him was actually walking, like the old style of philosophers who used to walk around under the cloisters.”

In a study where subjects were given pagers and were signaled at random times of the day, they were asked to rate how creative they felt. It turned out the highest level of creativity came while they were walking, driving or swimming. Perhaps it was the physical stimulation, but it may have also been mental construal at work. Perhaps physical movement primed the brain for mental movement.

So, if you need to be strategic, find the highest vantage point possible, with room to walk around, preferably with the smartest person you know.

When Are Crowds Not So Wise?

Since James Surowiecki published his book “The Wisdom of Crowds”, the common wisdom is – well – that we are commonly wise. In other words, if we average the knowledge of many people, we’ll be smarter than any of us would be individually. And that is true – to an extent. But new research suggests that there are group decision dynamics at play where bigger (crowds) may not always be better.

A recent study by Iain Couzin and Albert Kao at Princeton suggests that in real-world situations, where information is more complex and spotty, the benefits of crowd wisdom peak in groups of 5 to 20 participants and then decrease after that. The difference comes in how the group processes the information available to them.

In Surowiecki’s book, he uses the famous example of Sir Francis Galton’s 1907 observation of a contest where villagers were asked to guess the weight of an ox. While no individual correctly guessed the weight, the average of all the guesses came in just one pound short of the correct number. But this example has one unique characteristic that would be rare in the real world – every guesser had access to the same information. They could all see the ox and make their guess. Unless you’re guessing the number of jellybeans in a jar, this is almost never the case in actual decision scenarios.

Couzin and Kao say this information “patchiness” is the reason why accuracy tends to diminish as the crowd gets bigger. In most situations, there is commonly understood and known information, which the researchers refer to as “correlated information.” But there is also information that only some of the members of the group have, which is “uncorrelated information.” To make matters even more complex, the nature of uncorrelated information will be unique to each individual member. In real life, this would be our own experience, expertise and beliefs.  To use a technical term, the correlated information would be the “signal” and the uncorrelated information would be the “noise.” The irony here is that this noise is actually beneficial to the decision process.

In big groups, the collected “noise” gets so noisy it becomes difficult to manage and so it tends to get ignored. It drowns itself out. The collective focuses instead on the correlated information. In engineering terms this higher signal-to-noise ratio would seem to be ideal, but in decision-making, it turns out a certain amount of noise is a good thing. By focusing just on the commonly known information, the bigger crowd over-simplifies the situation.

Smaller groups, in contrast, tend to be more random in their makeup. The differences in experiences, knowledge, beliefs and attitudes, even if not directly correlated to the question at hand, have a better chance of being preserved. They don’t get “averaged out” like they would in a bigger group. And this “noise” leads to better decisions if the situation involves imperfect information. Call it the averaging of intuition, or hunches. In a big group, the power of human intuition gets sacrificed in favor of the commonly knowable. But in a small group, it’s preserved.
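The effect is easy to reproduce in a toy simulation; all numbers below are invented. Give every guesser the same shared (correlated) error plus an independent private (uncorrelated) error: averaging washes out the private noise quickly, but the shared error never averages away, so accuracy plateaus. The decline that Couzin and Kao describe goes a step further, because very large groups also discard the uncorrelated information outright:

```python
import random

# Toy simulation: averaging removes each member's private, uncorrelated
# noise but cannot remove the shared, correlated error, so the benefit
# of adding guessers plateaus. All numbers are invented.
random.seed(1)
TRUTH = 100.0

def mean_group_error(n, trials=2000):
    total = 0.0
    for _ in range(trials):
        shared = random.gauss(0, 5)  # correlated error, identical for everyone
        guesses = [TRUTH + shared + random.gauss(0, 15) for _ in range(n)]
        estimate = sum(guesses) / n
        total += abs(estimate - TRUTH)
    return total / trials

for n in (1, 5, 20, 100, 1000):
    print(f"group of {n:4d}: mean error {mean_group_error(n):.2f}")
```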

In the world of corporate strategy, this has some interesting implications. Business decisions are almost always complex and involve imperfectly distributed information. This research seems to indicate that we should carefully consider our decision-making units. There is a wisdom of crowds benefit as long as the crowd doesn’t get too big. We need to find a balance where we have the advantage of different viewpoints and experiences, but this aggregate “noise” doesn’t become unmanageable.