The Psychology of Usefulness: How Our Brains Judge What is Useful

Did you know that “task” and “tax” have the same linguistic roots? They both come from the Latin “taxare” – meaning to appraise. This could explain the lack of enthusiasm we have for both.

Tasks are what I referred to in the last post as exotelic activities – things we have to do to reach an objective that carries no inherent reward. We do them because we have to do them, not because we want to do them.

When we undertake a task, we want to find the most efficient way to get it done. Usefulness becomes a key criterion. And when we judge usefulness, there are some time-tested procedures the brain uses.

Stored Procedures and Habits

The first question our brain asks when undertaking a task is – have we done this before? Let’s first deal with what happens if the answer is yes:

If we’ve done something before, our brain – very quickly and at a subconscious level – asks a number of qualifying questions:

  • How often have we done this?
  • Does the context in which the task plays out remain fairly consistent (i.e. are we dealing with a stable environment)?
  • How successful have we been in carrying out this task in the past?

If we’ve done a task a number of times in a stable environment with successful outcomes, it’s probably become a habit. The habit chunk is retrieved from the basal ganglia and plays out without much in the way of rational mediation. Our brain handles the task on autopilot.

If we’ve done the task before but are less familiar with it, or the environment is less stable, we probably have stored procedures – sets of procedural alternatives. These require more in the way of conscious guidance, and often have decision points where we have to determine what we do next based on the results of the previous action.

If we’re entering new territory and can’t draw on past experience, our brains have to get ready to go to work. This is the route least preferred by our brain. It only goes here when there’s no alternative.

Judging Expected Utility and Perceived Risk

If a task requires us to go into unfamiliar territory, there are new routines that the brain must perform. Basically, the brain must place a mental bet on the best path to take, balancing a prediction of a satisfactory outcome against the resources required to complete the task. Psychologists call this “Expected Utility.”

Expected Utility is the brain’s attempt to forecast scenarios that require the balancing of risks and rewards where the outcomes are not known.  The amount of processing invested by the brain is usually tied to the size of the potential risk and reward. Low risk/reward scenarios require less rationalization. The brain drives this balance by using either positive or negative emotional valences, interpreted by us as either anticipation or anxiety. Our emotional balance correlates with the degree of risk or reward.
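For reference, the textbook form of the calculation looks like this (standard decision-theory notation – a description of the bet, not a claim about how neurons implement it):

$$EU(a) = \sum_{i} p_i \, u(x_i)$$

Here $a$ is a candidate course of action, the $x_i$ are its possible outcomes, $p_i$ is the probability of each outcome, and $u(x_i)$ is its subjective value. The “mental bet” amounts to favoring the action with the highest expected utility.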

Expected utility is more commonly applied in financial decision-making and game theory. In the case of conducting a task, there is usually no monetary element to risk and reward. What we’re risking is our own resources – time and effort. Because we’ve been budgeting these resources for as long as we’ve been evolving, it’s reasonable to assume that we have developed subconscious routines to determine how much effort to expend in return for a possible gain. These evaluations and calculations, in other words, may happen at a largely subconscious level – or at least a more subconscious level than the processing involved in evaluating financial gambles or other decisions with higher degrees of risk and reward. In that context, it might make sense to look at how we approach another required task – finding food.

Optimal Foraging and Marginal Value

When we balance gain against the expenditure of time and effort, the brain has some highly evolved routines, honed over our evolutionary history. The oldest of these govern how we forage for food. But we also have a knack for borrowing strategies developed for one purpose and using them in new situations.

Pirolli and Card (1999) found, for instance, that we use our food-foraging strategies to navigate digital information. Like food, information online tends to be “patchy” and of varying value to us. Often, just as when looking for a food source, we have to forage for information by judging the quality of the hyperlinks that may take us to those information sources or “patches.” Pirolli and Card called these clues to the quality of the information that may lie at the other end of a link information scent.
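As a toy illustration of the idea (my own sketch – Pirolli and Card’s actual model uses spreading activation over word co-occurrence statistics, not simple word overlap), you could score each link’s “scent” by how well its anchor text matches your information goal:

```python
# A toy "information scent" scorer: rank links by how well their
# anchor text overlaps with the searcher's information goal.
# This is a simplified stand-in for Pirolli and Card's model.

def scent(goal: str, anchor_text: str) -> float:
    """Jaccard overlap between goal words and link words."""
    goal_words = set(goal.lower().split())
    link_words = set(anchor_text.lower().split())
    if not goal_words or not link_words:
        return 0.0
    return len(goal_words & link_words) / len(goal_words | link_words)

links = [
    "Pompeii travel guide and hotel deals",
    "The eruption of Vesuvius and the destruction of Pompeii",
    "Roman recipes for the modern kitchen",
]
goal = "vesuvius eruption pompeii history"

# Forage toward the strongest scent first.
for text in sorted(links, key=lambda t: scent(goal, t), reverse=True):
    print(f"{scent(goal, text):.2f}  {text}")
```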

Tied to this foraging strategy is the concept of Marginal Value. This was first proposed by Eric Charnov in 1976 as an evolved strategy for determining how much time to spend in a food patch before deciding to move on. In a situation with diminishing returns (i.e. depleting food supplies), the brain must balance effort expended against return. If you happen on a berry bush in the wild, with reasonable certainty that there are other bushes nearby (perhaps you can see them just a few steps away), you have to mentally solve the following equation: how many berries can be gathered with a reasonable expenditure of effort, versus how much effort would it take to walk to the next bush, and how many berries would be available there?
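Charnov’s marginal value theorem makes that mental equation precise (the notation here is mine, but this is the standard statement of the result). If $g(t)$ is the cumulative gain after $t$ time units in a patch – with diminishing returns making $g$ level off – and $\tau$ is the travel time between patches, the optimal moment to leave, $t^*$, is when the instantaneous gain rate in the patch drops to the average gain rate for the environment as a whole:

$$g'(t^*) = \frac{g(t^*)}{\tau + t^*}$$

Stay longer and you’re under-earning relative to what the next bush offers; leave earlier and you’re paying travel costs for berries you could have had where you stood.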

This is somewhat analogous to information foraging, with one key difference: information isn’t depleted as you consume it, so the rule of diminishing returns is less relevant. But if, as I suspect, we’ve borrowed these subconscious strategies for judging usefulness – both in terms of information and functionality – in online environments, our brains may not know or care about the subtle differences between the environments.

The reason we may not be entirely rational in applying these strategies to online encounters is that they play out below the threshold of consciousness. We are not constantly and consciously adjusting our marginal value algorithm or quantifiably assessing the value of an information patch. No, our brains use a quicker, more heuristic method to mediate our output of effort – emotions. Frustration and anxiety tell us it’s time to move on to the next site or application. Feelings of reward and satisfaction indicate we should stay right where we are. The remarkable thing is that, as quick and dirty as these emotional guidelines are, if you went to the trouble of rationally quantifying the potential of all possible alternatives – using a Bayesian approach, for instance – you’d probably find you ended up in pretty much the same place. These strategies, simmering below the surface of our consciousness, are pretty damn accurate!

So, to sum up this post, when judging the most useful way to get a task done, we have an evaluation cascade that happens very quickly in our brain (sketched in code just after this list):

  • If a very familiar task needs to be done in a stable environment, our habits will take over and it will be executed with little or no rational thought.
  • If the task is fairly familiar but requires some conscious guidance, we’ll retrieve a stored procedure and look for successful feedback as we work through it.
  • If a task is relatively new to us, we’ll forage through alternatives for the best way to do it, using evolved biological strategies to help balance risk (in terms of expended effort) against reward.
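For the programmatically inclined, here’s that cascade as a minimal Python sketch – my own condensation of the post, with placeholder predicates and thresholds rather than anything from the literature:

```python
# The evaluation cascade, condensed into code. The thresholds are
# illustrative placeholders, not empirical values.

def choose_approach(times_done: int, environment_stable: bool,
                    past_success_rate: float) -> str:
    if times_done > 20 and environment_stable and past_success_rate > 0.9:
        # Basal ganglia territory: the habit chunk runs on autopilot.
        return "habit"
    if times_done > 0:
        # Stored procedure: conscious guidance at each decision point.
        return "stored procedure"
    # New territory: forage through alternatives, balancing expected
    # utility (reward) against effort expended (risk).
    return "forage"

print(choose_approach(50, True, 0.95))   # habit
print(choose_approach(3, False, 0.6))    # stored procedure
print(choose_approach(0, False, 0.0))    # forage
```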

Now, to return to our original question, how does this evaluation cascade impact long and short-term user loyalty? I’ll return to this question in my next post.

Google Holds the Right Cards for a Horizontal Market

First published January 9, 2014 in Mediapost’s Search Insider

Functionality builds up, then across. That was the principle of emerging markets I talked about in last week’s column. Up – then across – breaking down silos into a more open, competitive and transparent market. I’ll come back to this in a moment.

I also talked about how Google+ might be defining a new way of thinking about social networking, one free of dependence on destinations. It could create a social lens through which all our online activity passes, adding functionality and enriching information.

Finally, this week, I read that Google is pushing hard – through the new Open Automotive Alliance – to make Android the default operating system in cars, turning them into really big mobile devices. This builds on Android’s dominance in the smartphone market (where it holds an 82% share).

See a theme here?

For years, I’ve been talking about the day when search transitions from being a destination to a utility, powering apps that provide very specific functionality – far outstripping anything you could do on a “one size fits all” search portal. This was a good news/bad news scenario for Google, which was the obvious choice to provide this search grid. But, in doing so, they would lose their sole right to monetize search traffic, a serious challenge to their primary income source. However, if you piggyback that search functionality onto the de facto operating system that powers all those apps, and then add a highly functional social graph, you have all the makings of a foundation that will support the “horizontalization” of the mobile connected market. Put this in place, and revenue opportunities will begin falling into your lap.

The writing is plainly on the wall here. The future is all about mobile connections. It is the foundation of the Web of Things, wearable technology, mobile commerce – anything and everything we see coming down the pipe.  The stakes are massive. And, as markets turn horizontal in the inevitable maturation phase to come, Google seems to be well on their way to creating the required foundations for that market.

Let’s spend a little time looking at how powerful this position might be for Google. Microsoft is still coasting on their success in creating a foundation for the desktop, 30 years later.  The fact that they still exist at all is testament to the power of Windows. But the desktop expansion that happened was reliant on just one device – the PC. And, the adoption curve for the PC took two decades to materialize, due to two things: the prerequisite of a fairly hefty investment in hardware and a relatively steep learning curve. The mobile adoption curve, already the fastest in history, has no such hurdles to clear. Relative entry price points are a fraction of what was required for PCs. Also, the learning curve is minimal. Mobile connectivity will leave the adoption curve of PCs in the dust.

In addition, an explosion of connected devices will propel the spread of mobile connectivity. This is not just about smart phones. Two of the biggest disruptive waves in the next 10 years will be wearable technologies and the Web of Things. Both of these will rely on the same foundations, an open and standardized operating system and the ability to access and share data. At the user interface level, the enhancements of powerful search technologies and social-graph enabled filters will significantly improve the functionality of these devices as they interface with the “cloud.”

In the hand that will have to inevitably be played, it seems that Google is currently holding all the right cards.

Revisiting Entertainment vs Usefulness

Some time ago, I did an extensive series of posts on the psychology of entertainment. My original goal, however, was to compare entertainment and usefulness in terms of how effective each is at engendering long-term loyalty. How do our brains process both? And, to return to the intent I stated in that first post almost four years ago, how does this impact digital trends and their staying power?

My goal is to find out why some types of entertainment have more staying power than others. And then, once we discover the psychological underpinnings of entertainment, let’s look at how they apply to some of the digital trends I disparaged: things like social networks, micro-blogging, mobile apps and online video. What role does entertainment play in online loyalty? How does it overlap with usefulness? How can digital entertainment fads survive the novelty curse and jump the chasm to become mainstream trends with legs?

In the previous set of posts, I explored the psychology of entertainment extensively, ending up with a discussion of the evolutionary purpose of entertainment. My conclusion was that entertainment lived more in the phenotype than the genotype. To save you going back to that post, I’ll quickly summarize here: the genotype refers to traits actually encoded in our genes through evolution – the hardwired blueprint of our DNA. The phenotype is the “shadow” of these genes – behaviors caused by our genetic blueprints. Genotypes are directly honed by evolution for adaptability and gene survival. Phenotypes are by-products of this process and may confer no evolutionary advantage. Our taste for high-fat foods lives in the genotype – the explosion of obesity in our society lives in the phenotype.

This brings us to the difference between entertainment and usefulness – usefulness relies on mechanisms that predominantly live in the genotype. In the most general terms, it’s the stuff we have to do to get through the day. And to understand how we approach the things on our to-do list, it’s important to understand the difference between autotelic and exotelic activities.

Autotelic activities are the things we do for the sheer pleasure of it. The activity is its own reward. The word autotelic comes from the Greek for “self” + “goal” – “having a purpose in and not apart from itself.” We look forward to doing autotelic things. All things that we find entertaining are autotelic by nature.

Exotelic activities are simply a necessary means to an end. They have no value in and of themselves. They’re simply tasks – stuff on our to-do list.

The brain treats these two types of activities very differently. Autotelic activities fire our reward center – the nucleus accumbens. They come with a corresponding hit of dopamine, building repetitive patterns. We look forward to them because of the anticipation of the reward. They typically also engage the medial prefrontal cortex, which orchestrates complex cognitive behaviors and helps define our sense of self. When we engage in an autotelic activity, there’s a lot happening in our skulls.

Exotelic activities tend to flip the brain into its energy-saving mode. Because there is little or no neurological reward in these types of activities (other than a sense of relief once they’re done), they tend to rely on the brain’s ability to store and retrieve procedures. With enough repetition, they often become habits, skipping the brain’s rational loop altogether.

In the next post, we’ll look at how the brain tends to process exotelic activities, as it provides some clues about the loyalty building abilities of useful sites or tools. We’ll also look at what happens when something is both exotelic and autotelic.

Our Brain on Books

Here’s another neuroscanning study out of Emory University showing the power of a story.

Lead researcher Gregory Berns and his team wanted to “understand how stories get into your brain, and what they do to it.” Their findings seem to indicate that stories – in this case a historical fiction novel about Pompeii – caused a number of changes in the participants’ brains, at least in the short term. Over time, some of these changes decayed, but more research is required to determine how long-lasting they are.

One would expect reading to alter related parts of the brain, and this was true in the Emory study. The left temporal cortex, a section of the brain that handles language reception and interpretation, showed signs of heightened connectivity for a period of time after reading the novel. This is almost like the residual effect of exercise on a muscle, which responds favorably to usage.

What was interesting, however, was that the team also saw increased connectivity in the areas of the brain that control representations of sensation for the body. This relates to Antonio Damasio’s “embodied semantics” theory, in which reading metaphors – especially those with specifically tactile imagery – activates the same parts of the brain that control the corresponding physical activity. The Emory study (and Damasio’s work) seems to show that if you read a novel that depicts physical activity, such as running through the streets of Pompeii as Vesuvius erupts, your brain fires the same neurons it would if you were actually doing it!

There are a number of interesting aspects to consider here, but what struck me is the multi-prong impact a story has on us. Let’s run through them:

Narratives have been shown to be tremendously influential frameworks for learning and updating our sense of the world, including our own belief networks. Books have been a tremendously effective agent for meme transference and propagation. The structure of a story allows us to grasp concepts quickly, but also reinforces those concepts, because it engages our brain in a way that a simple recital of facts could not. We relate to protagonists and see the world through their eyes. All our socially tuned, empathetic abilities kick into action when we read a story, helping to embed new information more fully. Reading a story helps shape our world view.

Reading exercises the language centers of our brain, heightening their neural connectivity and improving their effectiveness. Neurologists call this “shadow activity” – a concept similar to muscle memory.

Reading about physical activity fires the same neurons we would use to do the actual activity. So, if you read an action thriller, even though you’re lying flat on a sofa, your brain thinks you’re the one racing a motorcycle through the streets of Istanbul and battling your arch nemesis on the rooftops of Rome. While it might not do much to improve muscle tone, it does begin to create neural pathways. It’s the same concept of visualization used by Olympic athletes.

For Future Consideration

As we learn more about the underlying neural activity of story reading, I wonder how we can use it to our benefit. The biggest question I have: if a story in written form has the capacity to impact us at all the aforementioned levels, what would more sense-engaged media like television or video games do? If reading about a physical activity tricks the brain into firing the corresponding sensory neurons, what would happen if we were simulating that activity on a motion-controlled gaming system like Microsoft’s Xbox? My guess is that the sensorimotor connections would be much more active (because we’re physically active). Unfortunately, research in the area of embodied semantics is still at an early stage, so many of these questions have yet to be answered.

However, if our stories are conveyed through a more engaging sensory experience, with full visuals and sound, do we lose some opportunity for abstract analysis? The parts of our brain we use to read depend on relatively slow processing loops. I believe much of the power of reading lies in the requirements it places on our imagination to fill in the sensory blanks. When we read about a scene in Pompeii we have to create the visuals, the soundtrack and the tactile responses. In all this required rendering, does it more fully engage our sense-making capabilities, giving us more time to interpret and absorb?

The Death and Rebirth of Google+

Google Executive Chairman Eric Schmidt has come out with his predictions for 2014 for Bloomberg TV. Don’t expect any earth-shaking revelations here. Schmidt plays it pretty safe with his prognostications:

Mobile has won – Schmidt says everyone will have a smartphone. “The trend has been mobile was winning... it’s now won.” Less a prediction than stating the obvious.

Big Data and Machine Intelligence will be the Biggest Disruptor – Again, hardly a leap of intuitive insight. Schmidt foresees the evolution of an entirely new data marketplace and corresponding value chain. Agreed.

Gene Sequencing Has Promise in Cancer Treatments – While a little fuzzier than his other predictions, Schmidt again pounces on the obvious. If you’re looking for someone willing to bet the house on gene sequencing, try LA billionaire Patrick Soon-Shiong.

The one thing that was interesting to me was an admission of failure with Google+:

The biggest mistake that I made was not anticipating the rise of the social networking phenomenon.  Not a mistake we’re going to make again. I guess in our defense we were busy working on many other things, but we should have been in that area and I take responsibility for that.

I always called Google+ a non-starter, despite a deceptively encouraging start. But I think it’s important to point out that we tend to judge Google+ against Facebook or other social destinations. As Google+ Vice President of Product Bradley Horowitz made clear in an interview last year with Dailytech.com, Google never saw this as a “Facebook killer.”

“I think in the early going there was a lot of looking for an alternative [to Facebook, Twitter, etc.],” said Horowitz. “But I think increasingly the people who are using Google+ are the people using Google. They’re not looking for an alternative to anything, they’re looking for a better experience on Google.”

And this highlights a fundamental change in how we think about online social activity – one that I think is more indicative of what the future holds. Social is not a destination; social is a paradigm. It’s a layer of connectedness and shared values that acts as a filter, a lens – a way we view reality. That’s what social is in our physical world. It shapes how we view that world. And Horowitz is telling us that that’s how Google looks at social too. With the layering of social signals into our online experience, Google+ gives us an enhanced version of our online experience. It’s not about a single destination, no matter how big that destination might be. It’s about adding richness to everything we do online.

Because humans are social animals, our connections and our perception of ourselves as part of an extended network literally shape every decision we make and everything we do, whether we’re conscious of the fact or not. We are, by design, part of a greater whole. But because social online originated as distinct destinations, it was unable to touch our entire online experience. Facebook or Pinterest acts as a social gathering place – a type of virtual town square – but social is more than that. Google+ is closer to this more holistic definition of “social.”

I’m not sure Google+ will succeed in becoming our virtual social lens, but I do agree that as our virtual sense of social evolves, it will become less about distinct destinations and more about a dynamic paradigm that stays with us constantly, helping to shape, sharpen, enhance and define what we do online. As such, it becomes part of the new way of thinking about being online – not going to a destination, but being plugged into a network.

What’s Apple’s Plan for 2014?

First published January 2, 2014 in Mediapost’s Search Insider

When new markets open, value chains first build up, then across. Someone first creates a vertically integrated experience, and then the market opens up as free competition drives efficiency. This is the challenge that currently lies ahead of Apple.

Apple has been the acknowledged master at creating seamless vertically integrated experiences. They did it with the personal computer. They did it with music. They did it with mobile. They did it with tablets. The advantage of working within a closed value chain is that you control every aspect of the experience. You can make sure that everyone plays nice with each other.

The challenge is that at some point, as adoption heats up, you simply cannot scale fast enough to meet market demand. Open competition drives horizontal competition, which drives down prices. The lack of control up and down the chain introduces some short-term user pain, but eventually the dynamics of an open market overcome this and the advantages of having several companies working on an opportunity outweigh the disadvantages.

Apple loves early markets. Or, at least, they have in the past. Under Jobs, they had a knack for creating an elegantly integrated experience, carefully crafted from top to bottom within the walls of Cupertino. The vision and obsession with detail that defined the Jobs era was a potent combination when it came to building vertical experiences. Somehow, Apple was able to open new markets over and over again, seemingly at will. They were able to bridge Geoffrey Moore’s “chasm” by making new experiences painless enough for the front end of the adoption bell curve. As adoption rode up the curve, markets turned from vertical to horizontal, driving a decline in margins and prices. This is where Apple tended to kick out and look for the next wave to catch.

But that was then, and this is now. As mentioned, Apple doesn’t do very well when markets turn horizontal. They depend on high margins. Only once, with the Mac, were they able to come back and stake out a respectable claim in a horizontal market. And they almost disappeared in the process. The number of dependent circumstances that would be required to repeat that trick is such that I doubt they’re eager to go down the same path with the iPhone or iPad.

In the year-end summaries, many are talking about a seeming anomaly: despite Android’s massive market-share dominance over iOS (81% vs. 12.9%, according to a recent Forbes article), it’s Apple that’s ringing up the holiday sales with mobile shoppers (23% vs. Android’s paltry 5%). This becomes more understandable when you put it in the context of a vertical market that is becoming horizontal. Shopping experiences are still much less painful on iOS. And you have a user base that is much more comfortable with mobile ecommerce, because they’re on the leading edge of the adoption curve – they’ve had a mobile device for a number of years now. Android users, in general, tend to be further back on the curve. As the benefits of Darwinian competition redefine the mobile marketplace along more horizontal lines, those ecommerce numbers will revert to a more natural balance, but it will take some time.

As this inevitable change in the marketplace happens, the question becomes, “What does Apple do next?” Can they find the next wave? And, if they do, does an Apple without Jobs still have what it takes to create the vertical experience that can open up a new market? There are plenty of opportunities – the two most notable being connected entertainment devices (the much-rumored new generation of Apple TV) and wearable technology (iWatches, etc.).

Apple has always been known for keeping their cards glued against their chest. In 2014, it remains to be seen if they have anything amazing up their sleeve.

A Tale of Two Research Philosophies

First published December 19, 2013 in Mediapost’s Search Insider

They only sit about five miles apart physically. One’s in Palo Alto, the other’s in Mountain View. But when it comes to how R&D is integrated into an organization’s strategy, there is significantly more distance between Xerox’s PARC and Google.

Xerox Alto computer

I recently visited both locations on the same day. PARC, of course, is the legendary research wing that created the graphical user interface, the personal computer, object-oriented programming, the mouse, Ethernet and the laser printer. It was at PARC that Steve Jobs saw the interface that would eventually form the OS foundation for the Macintosh. Every time we touch the technology we take for granted today, we should give thanks to the many people who have called the unassuming campus on Coyote Hill Road home.

But in 1969, when PARC was first created, there was a different attitude towards R&D. Research required isolation and distance from the regular business rhythms of the mother ship. Xerox could not have put more distance between its head office, in Rochester, N.Y., and its new research arm, 3,000 miles away. When it came to innovation, the choice of location was fortuitous. PARC, together with HP and other Silicon Valley pioneers, tapped into the stream of talent that was coming out of Stanford. In fact, PARC is located on land leased from Stanford. It soon became an innovation hotbed, thanks to the visionary leadership of Bob Taylor, who headed up the Computer Science division. But Xerox’s track record of bringing its own innovations to the market was dismal. As great as the physical distance was between PARC and the executive wing of Xerox in upstate New York, the philosophical distance was several times greater.

Google’s research efforts, under the leadership of Peter Norvig, are taking a much different direction, likely due to lessons learned from PARC and others. Research is embedded in the ever-expanding Google campus that currently sprawls along Amphitheatre Parkway and Charleston Road. There is a free flow of traffic and communication between current product engineering teams (many riding brightly colored Google bikes) and those working on longer-term projects. The distance between “today” and “tomorrow” is minimized at every opportunity.

Norvig commented on this in a recent interview with me:

We don’t have a separate research entity whose job is to be isolated from the rest of the company and think about the future. Rather, everybody’s job, regardless of their job title, is to make our products better or invent a new product. So the distinction between being a researcher versus an engineer is not how academic you are, it’s not how forward-thinking you are  — whether you’re looking at this year or next year or the year after. It’s more in terms of the area that you work in. If you work in core search or in core distributed computer systems, then your title’s going to be software engineer, even if you’re a Nobel Prize-winning professor.

Google has taken a hybrid approach to research, in which even long-term projects are developed at production scale, minimizing the risk of projects failing during the technology transfer phase. Norvig touched on this in a recent article:

Elaborate research prototypes are rarely created, since their development delays the launch of improved end-user services. Typically, a single team iteratively explores fundamental research ideas, develops and maintains the software, and helps operate the resulting Google services — all driven by real-world experience and concrete data. This long-term engagement serves to eliminate most risk to technology transfer from research to engineering.

This was exactly the trap that PARC ran into, when some of the most innovative advances in the history of computing failed to significantly contribute to Xerox’s bottom line.  Google has thrown the doors open for internal research teams to access the full power of complete data sets and production scale systems while espousing the practice of agile development. The goal is to ensure that all innovation that happens at Google is not too far removed from the goal of either diversifying Google’s revenue stream with new products, or contributing to existing ones.

The Emerging Data Ecosystem

First published December 13, 2013 in Mediapost’s Search Insider

Data is ubiquitous – and talk about it even more so. That was certainly true at the Search Insider Summit, where every panel and presentation talked about data. And not just any data – this was “Big Data.” But what exactly is Big Data – just more data? Or is there a fundamental shift happening here?

I believe there is. When I think about Big Data, I think about an emerging data ecosystem, where the explosion of available data will exponentially increase the complexity of the ecosystem. This is not just more data, but a different environment that will require different strategies.

Typically, the data we currently use is either first-party data — the data that emerges as part of our business process — or structured third-party data, available from a rapidly growing number of data vendors. This is probably what most people think of when they think of Big Data. But I don’t consider data in this form a departure from the data we’re used to using. There’s more of it, true, but the process is already identified. It just needs to be scaled to deal with increased volumes.

Let me use one example from the recent Search Insider Summit. The Weather Company has recently launched a new division called Weather FX, aimed at taking the vast amount of weather data it has to create predictive models to help companies add weather-based variables to their own data sets. For example, ad targeting can now be weather-sensitive, ramping up campaigns and changing messaging based on predicted changes in weather patterns. While pretty impressive, this is a relatively straightforward use of data. The data feeds are well structured and have been “predigested” by Weather FX to make them easy to implement.
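To make that concrete, here’s a minimal sketch of what a weather-triggered campaign rule might look like. The field names, thresholds and actions are all hypothetical – this is the general pattern, not Weather FX’s actual feed or API:

```python
# Hypothetical weather-triggered campaign rules. The forecast dict
# stands in for a structured feed like Weather FX's; the fields and
# thresholds are invented for illustration.

forecast = {"city": "Boston", "temp_f": 28, "precip_prob": 0.8,
            "condition": "snow"}

campaign_rules = [
    (lambda f: f["condition"] == "snow" and f["precip_prob"] > 0.6,
     "Boost snow-tire creative; raise regional bids 40%."),
    (lambda f: f["temp_f"] > 85,
     "Rotate in iced-coffee messaging; pause hot-drink ads."),
]

# Fire every rule whose weather predicate matches the forecast.
for matches, action in campaign_rules:
    if matches(forecast):
        print(f"{forecast['city']}: {action}")
```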

Big Data, at least in my interpretation, is a different beast altogether. Here, data is messy, often unstructured, hard to find and in raw form. To further complicate matters, it lives in disparate silos that often have no market-facing interface. It’s an organic ecosystem that bears more than a passing similarity to how we think of natural resources. This data needs to be identified, nurtured and harvested (or mined, if you’d prefer).

It’s this data that will lead to a true view of Big Data, a world of vast data nodes that require significant development before they can be used. Think of how the world was a century and a half ago, when a lot of raw stuff — wood, minerals, water, crops, livestock — lay scattered about our planet. At the time, there was little in the way of established manufacturing and distribution chains that transformed that raw stuff into consumable products. Over time, the chain emerged, but a lot of logistical challenges had to be addressed along the way. The same is true, I believe, for data.

But there’s another challenge with Big Data: It’s not always clear how to use it. It needs a framework. You can’t dump a ton of various metals and a couple barrels of oil into a big black box, shake it and expect a Ford Focus to drop out. You need to have a pretty clear idea of what your expected outcome is. And you need to have a long chain that moves your raw material towards your end product. In the early days of creating physical goods, these chains were often verticalized within a single organization, but as the ecosystem evolved, the markets became more horizontal. I would expect the same pattern to emerge in the data ecosystem.

If you create a conceptual framework within which to use data, you can determine which data is required and how that data will be used. You can pick your data sources, identify the gaps, and secure the resources required to address those gaps. Often, because we’re in the earliest stages of this process, we will need to explore, guess and iteratively test before the data will provide value.

This definition of Big Data requires new rules and strategies. It requires a commitment to mining raw data and integrating it in useful ways. It will mean dynamically adapting to the continuing data explosion. It will require blood, sweat and tears. This is not a “plug and play” exercise. When I think of Big Data, that’s what I think about.

360 Degrees of Separation

First published December 5, 2013 in Mediapost’s Search Insider

In the past two decades or so, a lot of marketers have talked about gaining a 360-degree view of their customers. I’m not exactly sure what this means, so I looked it up. Apparently, for most marketers, it means having a comprehensive record of every touch point a customer has had with a company. Originally, it was the promise of CRM vendors: anyone in an organization, at any time, could pull up a complete customer history.

So far, so good.

But like many phrases, it’s been appropriated by marketers and its meaning has become blurred. Today, it’s bandied about in marketing meetings, where everyone nods knowingly, confident in the fact that they are firmly ensconced in the customer’s cranium and have all things completely under control. “We have a 360-degree view of our customers,” the marketing manager beams, and woe to anyone that dares question it.

But there are no standard criteria that you have to meet before you use the term. There is no rubber-meets-the-road threshold you have to climb over. No one knows exactly what the hell it means. It sure sounds good, though!

If a company is truly striving to build as complete a picture of their customers as possible, they probably define 360 degrees as the total scope of a customer’s interaction with their company. This would follow the original CRM definition. In marketing terms, it would mean every marketing touch point, and would ideally extend through the customer’s entire relationship with that company. This would be 360 degrees as defined by Big Data.

But is it actually 360 degrees? If we envision this as a Venn diagram, we have one 360-degree sphere representing the mental model of customers, including all the things they care about. We have another 360-degree sphere representing the footprint of the company and all the things they do. What we’re actually looking at then, even in an ideal world, is where those two spheres intersect. At best, we’re looking at a relatively small chunk of each sphere.

So let’s flip this idea on its head. What if we redefine 360 degrees as understanding the customer’s decision space? I call this the Buyersphere. The traditional view of 360 degrees is from the inside looking out, from the company’s perspective. The Buyersphere moves the perspective to that of the customer, looking from the outside in. It expands the scope to include the events that lead to consideration, the competitive comparisons, the balancing of buying factors, interactions with all potential candidates and the branches of the buying path itself.  What if you decide to become the best at mapping that mental space?  I still wouldn’t call it a 360-degree view, but it would be a view that very few of your competitors would have.

One of the things I believe is holding Big Data back is that we don’t have a frame within which to use it. Peter Norvig, chief researcher for Google, outlined 17 warning signs in experimental design and interpretation. Among them: the lack of a specific hypothesis, and the lack of a theory. You need a conceptual frame from which to construct a theory, and then, from that theory, you can decide on a specific hypothesis for validation. It’s this construct that helps you separate signal from noise. Without the construct, you’re relying on serendipity to identify meaningful patterns – and we humans have a nasty tendency to mistake noise for patterns.

If we look at opportunities for establishing a competitive advantage, redefining what we mean by understanding our customers is a pretty compelling one. This is a construct that can provide a robust and testable space within which to use Big Data and other, more qualitative, approaches. It’s relatively doable for any organization to consolidate its data to provide a fairly comprehensive “inside-out” view of customer’s touch points. Essentially, it’s a logistical exercise. I won’t say it’s easy, but it is doable.  But if we set our goal a little differently, working to achieve a true “outside-in” view of our company, that sets the bar substantially higher.

360 degrees? Maybe not. But it’s a much broader view than most marketers have.

Evolutionary Hotspots in Marketing

First published November 21, 2013 in Mediapost’s Search Insider

The Páramos Ecosystem

The Páramos are remarkable places: grasslands that sit above the tree line in the Andes, some 10,000 feet above sea level. What makes them remarkable are the things that grow and live there – like Espeletia uribei, which looks like a huge palm tree but is actually an overgrown member of the daisy family.

The Páramos just happen to be the place on earth where evolution happens the fastest.  There are other places where species evolve quickly, including Darwin’s Galápagos Islands, but scientists believe the Páramos are the hottest of the evolutionary hot spots.

The reason for this supercharged speciation is the climate, which makes them a very tough place to call home.  They’re located at the equator, so they get sunshine year round. But the elevation introduces harsh temperatures and extreme ultraviolet exposure. Also, the weather can change in a heartbeat. A few minutes can mark the difference between sunshine, mist and full-on storms.  This constant adaptive stress has resulted in biodiversity not seen anywhere else on the planet.

In biology, evolution is measured by the rate of mutation. In the business world, mutation equates to innovation. A new idea introduces a wild card into the competitive environment, just as a genetic mutation introduces a wild card into nature. It disrupts the status quo, either positively or negatively. That’s why it’s important for organizations to embrace failure. Openness to error encourages innovation, driving the competitive evolution of the company. Successful innovations can be game-changers, as long as you create a framework to identify unsuccessful innovations before they do irreparable damage.

So if we accept that corporate evolution is a good thing, and we want to increase our mutation/innovation rate, then it makes sense to seek out our own organizational “Páramos.” These will be departments or divisions where volatility is the norm rather than the exception. Stability is the enemy of innovation. Typically, these will be areas that require rapid reaction to external forces and adaptation to new environmental factors. Much as we like to mythologize the lone genius toiling away in an ivory tower or R&D lab, the history of innovation shows that it most often comes from far messier, more organic sources.

In the Páramos, it’s the harsh, unpredictable climate that drives evolution. In a company, it’s the instability of the competitive marketplace that drives the forces of innovation. So it makes sense that the hotspots will be the areas of the organization with the most exposure to that marketplace. Front-line touch points with customers, head-to-head contact with competitors and real-world usage of your products or services are the externalities you’ll be looking for. That makes sales, marketing and customer service prime candidates for becoming your own Páramos.

The challenge is to enable innovation at this level. Typically, innovation in an organization is constrained (and unfortunately, often choked to death) by bureaucratic frameworks that build in “top-down” governance from executives who are traditionally miles away from the “Páramos” in the org chart. This is exactly the wrong approach. Mechanisms should be developed to encourage “bottom-up” innovation in these identified hotspots, with appropriate guidelines for identifying successful opportunities as quickly as possible, allowing organizations to fast-track the winners and cut their losses on the losers. These hotspots can become the strategic radar of the organization.

Darwin’s “dangerous idea” has completely changed biology. Currently, it’s causing everyone from psychologists to economists to rethink their respective fields. In the future, don’t be surprised if it has a similar impact on marketing and corporate strategy.