We’re Informed. But Are We Thoughtful?

I’m a bit of a jerk when I write. I lock myself behind closed doors in my home office. In the summer, I retreat to the most remote reaches of the back yard. The reason? I don’t want to be interrupted with human contact. If I am interrupted, I stare daggers through the interrupter and answer in short, clipped sentences. The house has to be silent. If conditions are less than ideal, my irritation is palpable. My family knows this. The warning signal is “Dad is writing.” This can be roughly translated as “Dad is currently an asshole.” The more I try to be thoughtful, the bigger the ass I am.

I suspect Henry David Thoreau was the same. He went even further than my own backyard exile. He camped out alone for two years in a cabin he built on Ralph Waldo Emerson’s land at Walden Pond. He said things like,

“I never found a companion that was so companionable as solitude.”

But Thoreau was also a pretty thoughtful guy, who advised us that,

“As a single footstep will not make a path on the earth, so a single thought will not make a pathway in the mind. To make a deep physical path, we walk again and again. To make a deep mental path, we must think over and over the kind of thoughts we wish to dominate our lives.”

But, I ask, how can we be thoughtful when we are constantly distracted by information? Our mental lives are full of single footsteps. Even if we intend to cover the same path more than once, there are a thousand beeps, alerts, messages, prompts, pokes and flags that are beckoning us to start down a new path, in a different direction. We probably cover more ground, but I suspect we barely disturb the fallen leaves on the paths we take.

I happen to do all my reading on a tablet. I do this for three reasons: first, I always have my entire library with me, and I usually have four books on the go at the same time (currently 1491, Reclaiming Conversation, Flash Boys and 50 Places to Bike Before You Die); second, I like to read before I go to sleep, and I don’t need to keep a light on that keeps my wife awake; and third, I like to highlight passages and make notes. But there’s a trade-off I’ve had to make. I don’t read as thoughtfully as I used to. I can’t “escape” with a book anymore. I am often tempted to check email, play a quick game of 2048 or search for something on Google. Maybe the fact that my attention is always divided amongst four books is part of the problem. Or maybe it’s that I’m more attention deficit than I used to be.

There is a big difference between being informed and being thoughtful. And our connected world definitely biases us toward information. Being connected is all about being informed. But being thoughtful requires us to remove distraction. It’s the deep paths that Thoreau was referring to. And it requires a very different mindset. Our brains are single-purpose engines. We can either be informed or be thoughtful. We can’t be both at the same time.

At the University of California, San Francisco, Mattias Karlsson and Loren Frank found that rats need two very different types of cognitive activity when mastering a maze. First, when they explore a maze, certain parts of their brain are active as they’re being “informed” about their new environment. But they don’t master the maze unless they’re allowed downtime to consolidate the information into new persistent memories. During that downtime, different parts of the brain are engaged, including the hippocampus. They need time to be thoughtful and create a “deep path.”

In this instance, we’re not all that different from rats. In his research, MIT’s Alex “Sandy” Pentland found that effective teams tend to cycle through two very different phases: First, they explore, gathering new information. Then, just like the thoughtful rats, they engage as a group, taking that information, digesting it and synthesizing it for future execution. Pentland found that while both are necessary, they don’t exist at the same time,

“Exploration and engagement, while both good, don’t easily coexist, because they require that the energy of team members be put to two different uses. Energy is a finite resource.”

Ironically, research is increasingly showing that our previous definitions of cognitive activity may have been off the mark. We always assumed that “mind-wandering” or “daydreaming” was a non-productive activity. But we’re finding out that it’s an essential part of being thoughtful. We’re actually not “wandering.” It’s just the brain’s way of synthesizing and consolidating information. We’re wearing deeper paths in the byways of our mind. But a constant flow of new information, delivered through digital channels, keeps us from synthesizing the information we already have. Our brain is too busy being informed to be able to make the switch to thoughtfulness. We don’t have enough cognitive energy to do both.

What price might we pay for being “informed” at the expense of being “thoughtful?” It appears that it might be significant. Technology distraction in the classroom could lower grades by close to 20 percent. And you don’t even have to be the one using the device. Just having an open screen in the vicinity might distract you enough to drop your report card from a “B” to a “C.”

Having read this, you now have two choices. You could click off to the next bit of information. Or, you could stare into space for a few minutes and be lost in your thoughts.

Choose wisely.

Consumers in the Wild

Once a Forager, Always a Forager

Your world is a much different place than the African savanna. But more than 100,000 generations of evolution that began on those plains still dictate a remarkable degree of our modern behavior.

Take foraging, for example. We evolved as hunters and gatherers. It was our primary survival instinct. And even though the first hominids are relatively recent additions to the biological family tree, strategies for foraging have been developing for millions and millions of years. They’re hardwired into the deepest and most inflexible parts of our brain. It makes sense, then, that foraging instincts once reserved for food gathering should be applied to a wide range of our activities.

That is, in fact, what Peter Pirolli and Stuart Card discovered two decades ago. When they looked at how we navigated online sources of information, they found that humans used the very same strategy we would have used for berry picking or gathering cassava roots. And one of the critical elements of this was something called Marginal Value.

Bounded Rationality & Foraging

It’s hard work being a forager. Most of your day – and energy – is spent looking for something to eat. The sparser the food sources in your environment, the more time you spend looking for them. It’s not surprising, therefore, that we should have some fairly well-honed calculations for assessing the quality of our food sources. This is what biologist Eric Charnov described in his 1976 marginal value theorem. It’s an instinctual (and therefore largely subconscious) evaluation of food “patches” by most types of foragers, humans included. It’s how our brain decides whether we should stay where we are or find another patch. It would have been a very big deal 2 million – or even 100,000 – years ago.
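For anyone who wants Charnov’s rule in compact form, it reduces to a single stopping condition. Assuming g(t) is the cumulative energy gained after spending time t in the current patch (with diminishing returns), and τ is the average travel time between patches, the forager should leave at the moment t* when the patch’s instantaneous rate of gain falls to the best long-term average rate the environment can offer:

$$g'(t^*) = \frac{g(t^*)}{\tau + t^*}$$

In plain terms: stay until this patch is doing no better than the environment as a whole, travel costs included. The richer the surrounding environment, the sooner it pays to quit, which is worth keeping in mind when we get to information patches below.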

Today, for most of us, food sources are decidedly less “patchy.” But old instincts die hard. So we did what humans do: we borrowed an old instinct and applied it to new situations. We exapted our foraging strategies and started using them for a wide range of activities where we needed a rough-and-ready estimate of the return on our energy investment. More and more of these activities asked for an investment of cognitive processing power. And we did all this without even knowing we were doing it.

This brings us to Herbert Simon’s concept of Bounded Rationality. I believe this is tied directly to Charnov’s Marginal Value theorem. When we calculate how much mental energy we’re going to expend on an information-gathering task, we subconsciously assess the promise of the information “patches” available to us. Then we invest accordingly, based on our own “bounded” rationality.

Brands as Proxies for Foraging

It’s just this subconscious calculation that has turned the world of consumerism on its ear in the last two decades. As Itamar Simonson and Emanuel Rosen explain in their book Absolute Value, the explosion of information available has meant that we are making different marginal value calculations than we would have thirty or forty years ago. We have much richer patches available, so we’re more likely to invest the time to explore them. And, once we do, the way we evaluate our consumer choices changes completely. Our modern concept of branding was a direct result of both bounded rationality and sparse information patches. If a patch of objective and reliable information wasn’t apparent, we would rely on brands as a cognitive shortcut, saving our bounded rationality for more promising tasks.

Google, The Ultimate “Patch”

In understanding modern consumer behavior, I think we have to pay much more attention to this idea of marginal value. What is the nature of the subconscious algorithm that decides whether we’re going to forage for more information or rely on our brand beliefs? We evolved foraging strategies that play a huge part in how we behave today.

For example, the way we navigate our physical environment appears to owe much to how we used to search for food. Women determine where they’re going differently than men do because women used to search for food differently. Men tend to navigate by orientation, maintaining a spatial grid in their minds against which they plot their own location. Women do it by remembering routes. In my own research, I found split-second differences in how men and women navigated websites that seem to go back to those same foundations.

Whether you’re a man or a woman, however, you need some type of mental inventory of the information patches available to you in order to assess the marginal value of those patches. This is the mental landscape Google plays in. For more and more decisions, our marginal value calculation starts with a quick search on Google to see if any promising patches show up in the results. Our need to keep a mental inventory of patches can be outsourced to Google.

It seems ironic that in our current environment, more and more of our behavior can be traced back millions of years to behaviors that evolved in a world where high-tech meant a sharper rock.

How Our Brains Process Price Information

We have a complex psychological relationship with pricing. A new brain scanning study out of Harvard and Stanford starts to pick apart the dynamics of that relationship.

Uma R. Karmarkar, Baba Shiv, and Brian Knutson wanted to see how we evaluate a potential purchase when the price is the first piece of information we get, as opposed to the last. They used both fMRI scanning and behavioral tracking to see how the study participants responded. Participants were given $40 to spend and then presented with a number of sample offers. In all cases, the price represented an attractive bargain on the product featured. But one group was given the price first, and the second group was given the price last.

There was another critical difference in the evaluation process as well. In the first phase of the study, participants were shown products that they would like to buy, and in the second phase, they were shown products that they would have to buy. The difference between the two was how they activated the reward center of our brain – the nucleus accumbens. I’ve been talking for years about the importance of understanding the balance of risk and reward in our purchase decisions. This study provides a little more understanding about how our brain processes those two factors.

In the first phase, participants were shown a variety of products that they would consider rewarding. These would fall into the first quadrant of the risk/reward matrix I introduced in my column from 5 years ago. The researchers were paying particular attention to two different parts of the brain – the nucleus accumbens and the medial prefrontal cortex. For a layman’s analogy, think of you and a five-year-old walking down the toy aisle in a department store. The nucleus accumbens is the five-year-old who starts chanting, “I want it. I want it. I want it.” The medial prefrontal cortex is the adult who decides if they’re actually going to buy it. In the study, the researchers found that the sequence in which these two parts of the brain “lit up” depended on whether or not you saw the price first. If you saw the product first, the nucleus accumbens started its chant – “I want it.” If you saw the price first, the medial prefrontal cortex kicked into action and started evaluating whether the offer represented a good bargain. In the case of the reward products, although the sequence varied, the actual purchase process didn’t. In most cases, participants still ended up making the purchase, whether price was presented first or last.

But things changed when the researchers tried a variety of products that fell into the second quadrant of the risk/reward matrix – low risk and low reward. These are the everyday items we have to buy. In the study, they included things like a water filtration pitcher, a pack of AA batteries, a USB drive, and a flashlight. There was nothing here likely to start the nucleus accumbens chanting.

Now, it should be noted that this follow-up study did not include the fMRI scanning, but by tracking purchasing behaviors we can make some pretty educated guesses as to what’s happening in the respective brains of our participants. Here, presenting prices first resulted in a significant increase in actual purchases over instances when price was presented last. If price comes first, we can imagine that the prefrontal cortex is indicating that it’s a good bargain on a needed product. But if a relatively boring product is presented first for evaluation to the nucleus accumbens, there’s little to excite the reward center.

An important caveat to this part of the study comes with knowing that the prices presented represented significant savings on the products. After the simulated purchases, participants were asked to indicate a price they would be willing to pay for the product. When the price was the lead, the named prices tended to be a little lower, indicating that if you are going to lead with price, especially for quadrant two products, you’d better make sure you’re offering a true bargain.

If anything, this study provides further proof of the value of knowing a prospect’s mental landscape. What are the risk and reward factors that will be motivating them? Will the medial prefrontal cortex or the nucleus accumbens be calling the shots? What priming effects might an early introduction of price introduce into the process?

When I wrote about the risk/reward matrix five years ago, one commenter said “a simple low-high risk/low-high reward graph is not very useful for driving just in time and location based offers, discounts, etc.” I respectfully disagree. While more sophisticated models are certainly possible, I think even a simple 2X2 matrix that helps map out the decision factors that are in play with purchases would be a significant step forward. And this isn’t about driving real time variations on offers. It’s about understanding the fundamentals of the buyer’s decision process. There’s nothing wrong with simplicity, especially if it drives greater usage.
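To show how little machinery even the simple version needs, here is a minimal sketch in Python of the 2X2 matrix mapped onto the study’s findings. The function name, the quadrant labels and the fallback case are my own illustrative choices, not anything from the researchers:

```python
# A toy decision helper based on the risk/reward quadrants discussed above.
# Grounded only in the two cases the study tested; everything else is untested.
def presentation_order(risk: str, reward: str) -> str:
    """Suggest whether to lead with price or product for a purchase offer."""
    if reward == "high":
        # Quadrant one ("want to buy"): presentation order changed which brain
        # region fired first, but purchase rates stayed roughly the same.
        return "either; product-first is the safe default"
    if risk == "low" and reward == "low":
        # Quadrant two ("have to buy"): leading with price lifted purchases,
        # provided the price was a genuine bargain.
        return "price-first, but only with a true bargain"
    return "untested; the study did not cover this quadrant"

print(presentation_order("low", "low"))   # price-first, but only with a true bargain
print(presentation_order("low", "high"))  # either; product-first is the safe default
```

Crude, yes. But even a lookup this simple forces you to ask which quadrant a purchase sits in before you decide how to present it, which is the whole point.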

The Persona is Dead, Long Live the Person

First, let me go on record as saying that, up to this point, I’ve been a fan of personas. In my past marketing and usability work, I used personas extensively as a tool. But I’m definitely aware that not everyone is equally enamored with personas. And I also understand why.

Personas, like any tool, can be used both correctly and incorrectly. When used correctly, they can help bridge the gap between the left brain and the right brain. They live in the middle ground between instinct and intellectualism. They provide a human face to raw data.

But it’s just this bridging quality that tends to lead to abuse. On the instinct side, personas are often used as a short cut to avoid quantitative rigor. Data-driven people typically hate personas for this reason. Often, personas end up as fluffy documents and life-sized cardboard cutouts with no real purpose. It seems like a sloppy way to run things.

On the intellectual side, because quant people distrust personas, they leave themselves squarely on the data side of the marketing divide. They can understand numbers – people, not so much. This is where personas can shine. At their best, they give you a conceptual container with a human face to put data into. They provide a richer but less precise context that allows you to identify, understand and play out potential behaviors that data alone may not pinpoint.

As I said, because personas are intended as a bridging tool, they often remain stranded in no man’s land. To use them effectively, the practitioner should feel comfortable living in this gap between quant and qual. Too far one way or the other and it’s a pretty safe bet that personas will either be used incorrectly or be discarded entirely.

Because of this potential for abuse, maybe it’s time we threw personas in the trash bin. I suspect they may be doing more harm than good to the practice of marketing. Even at their best, personas were meant as a more empathetic tool to allow you to think through interactions with a real live person in mind. But in order to make personas play nice with real data, you have to be very diligent about continually refining your personas based on that data. Personas were never intended to be placed on a shelf. But all too often, this is exactly what happens. Usually, personas are a poor and artificial proxy for real human behaviors. And this is why they typically do more harm than good.

The holy grail of marketing would be to somehow give real-time data a human face. If we could find a way to bridge left-brain logic and right-brain empathy in real time to discover insights that were grounded in data but centered in the context of a real person’s behaviors, marketing would take a huge leap forward. The technology is getting tantalizingly close to this now. It’s certainly close enough that it’s preferable to the much-abused persona. If – and this is a huge if – personas are used absolutely correctly, they can still add value. But I suspect that too much effort is spent on personas that end up as documents on a shelf and pretty graphics. Perhaps that effort would be better spent trying to find the sweet spot between data and human insights.

The Secret of Successful Marketing Lies in Split Seconds

The other day, I was having lunch in a deli. I was also watching the front door, which you had to push to get in. Almost everyone who came to the door pulled, even though there was a fairly big sign over the handle which said “Push.” The problem? The door had the wrong kind of handle. It was a pull handle, not a push. The door had been mounted backwards. In usability terms, the door handle presented a misleading affordance.

I suspect the door had been there for many years. I was at the deli for about 30 minutes. In that time, about 70% of the people (probably close to 50) pulled rather than pushed. Extrapolating this to the whole, that means over the years, thousands and thousands of people have had to try twice to enter this particular place of business. Yet, the only acknowledgement of this instance of customer pain was the sign that had been taped to the door – “Push” – and I suspect there was an implied “(You Idiot)” following that.
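The “thousands and thousands” figure is easy to sanity-check. Here’s the back-of-the-envelope arithmetic in Python; the traffic and pull rates come from my half hour of observation, while the opening hours and operating days are assumptions of mine:

```python
# Observed over one lunch half hour:
people_per_half_hour = 50   # roughly 50 people came to the door
pull_rate = 0.70            # about 70% pulled instead of pushing

# Assumed (and generous; lunch is probably the rush):
hours_open_per_day = 8
days_per_year = 360

failed_first_tries = (people_per_half_hour * 2    # people per hour
                      * hours_open_per_day
                      * days_per_year
                      * pull_rate)
print(f"{failed_first_tries:,.0f} failed first tries per year")  # 201,600
```

Even if the real number is a tenth of that, “thousands and thousands” holds up within a single year.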

I suspect most marketing falls in the same category as that sign. It’s an attempt to fight the intuitive actions that customers take – those split-second actions that happen before our brain has a chance to kick in. And we have to counteract those split-second decisions because the path we have created for our customers was built without an understanding of those intuitive actions. Once we realize that our path runs counter to our customers’ natural behaviors, do we rebuild the path? Does the deli owner pay a contractor to remount the door? No, we post a sign asking customers to push rather than pull. After all, all they have to do is think for a moment. It seems like a reasonable request.

But here’s the problem with that. You don’t want your customers to think. You want them to act. And you want them to act as quickly and naturally as possible. The battles of marketing are won in those split seconds before the brain kicks in.

Let me give you one example. A few years ago I did a study with Simon Fraser University in Canada. We wanted to know how the brain responded in those same split seconds to brands we like versus brands we have no particular affinity to. What we found was fascinating. In about 150 milliseconds (roughly a sixth of a second) our brain responds to a well-loved brand the same way we respond to a smiling face. This all happens before any rational part of the brain can kick in. This positive reaction sets the stage for a much different subsequent mental processing of the brand (which starts at about 450 milliseconds, or half a second). And the power of this alignment can be startling. As Dr. Read Montague discovered, it can literally alter your perception of the world.

If you can rebuild your path to purchase to align with your customers’ intuitive behaviors, you don’t need to put up “push” signs when they stray off course. You don’t have to make your customers think. Here’s why that is important. As long as we operate at the intuitive level, humans are a fairly predictable lot. Evolution has wired in a number of behaviors that are universal across the population. You would not be risking your vacation fund if you placed a bet that the majority of people would try to pull a door with a handle that suggested they should pull it, even if there was a sign that said “push.” As long as we operate on autopilot, we can plot a predicted behavioral course with a fair degree of confidence (assuming, of course, we’ve taken the time to understand those behaviors).

But the minute we start to think, all bets are off. The miracle of the human brain is that it has two loops of activity – one fast and one slow. The fast loop relies on instinct and evolved behavioral habits. It’s incredibly efficient but stubbornly rigid. The slow loop brings the full power of human rationality to bear on the problem. It’s what happens when we think. And once the prefrontal cortex kicks in, we are amazingly flexible, but we pay the price in efficiency. It takes time to think. It also brings a massive amount of variability into the equation. If we start thinking, behaviors become much more difficult to predict.

The longer you can keep your customers on the fast path, the closer you’ll be to a successful outcome. Plan that path carefully and remove any signs telling them to “push.”

Why More Connectivity is Not Just More – Why More is Different

Eric Schmidt is predicting from Davos that the Internet will disappear. I agree. I’ve always said that Search will go under the hood, changing from a destination to a utility. Not that Mr. Schmidt or the Davos crew needs my validation. My invitation seems to have got lost in the mail.

Laurie Sullivan’s recent post goes into some of the specifics of how search will become an implicit rather than an explicit utility. Underlying this is a pretty big implication that we should be aware of – the very nature of connectivity will change. Right now, the Internet is a tool, or resource. We access it through conscious effort. It’s a “task at hand.” Our attention is focused on the Internet when we engage with it. The world described by Eric Schmidt and the rest of the panel is much, much different. In this world, the “Internet of Things” creates a connected environment that we exist in. And this has some pretty important considerations for us.

First of all, when something becomes an environment, it surrounds us. It becomes our world as we interpret it through our assorted sensory inputs. These inputs have evolved to interpret a physical world – an environment of things. We will need help interpreting a digital world – an environment of data. Our reality, or what we perceive our reality to be, will change significantly as we introduce technologically mediated inputs into it.

Our brains were built to parse information from a physical world. We have cognitive mechanisms that evolved to do things like keep us away from physical harm. Our brains were never intended to crunch endless reams of digital data. So, we will have to rely on technology to do that for us. Right now we have an uneasy alliance between our instincts and the capabilities of machines. We are highly suspicious of technology. There is every rational reason in the world to believe that a self-driving Google car will be far safer than a two-ton chunk of accelerating metal under the control of a fundamentally flawed human, but who among us is willing to give up the wheel? The fact is, however, that if we want to function in the world Schmidt hints at, we’re going to have to learn not only to trust machines, but to rely totally on them.

The other implication is one of bandwidth. Our brains have bottlenecks. Right now, our brain, together with our senses, subconsciously monitors our environment and, if the situation warrants, wakes up our conscious mind for some focused and deliberate processing. The busier our environment gets, the bigger this challenge becomes. A digitally connected environment will soon exceed our brain’s ability to comprehend and process information. We will have to determine some pretty stringent filtering thresholds. And we will rely on technology to do the filtering. As I said, our physical senses were not built to filter a digital world.

The relationship we will have to develop with technology will be an odd one. Even if we lower our guard and let machines do much of our “thinking” (in terms of processing environmental inputs for us), we still have to learn how to give machines guidelines so they know what our intentions are. This raises the question, “How smart do we want machines to become?” Do we want machines that can learn about us over time, without explicit guidance from us? Are we ready for technology that guesses what we want?

One of the comments on Laurie’s post was from Jay Fredrickson, “Sign me up for this world, please. When will this happen and be fully rolled out? Ten years? 20 years?” Perhaps we should be careful what we wish for.  While this world may seem to be a step forward, we will actually be stepping over a threshold into a significantly different reality. As we step over that threshold, we will change what it means to be human. And there will be no stepping back.

Learning about Big Data from Big Brother

You may not have heard of ICREACH, but it has probably heard of you. ICREACH is the NSA’s own Google-like search engine. And if Google’s mission is to organize the world’s information, ICREACH’s mission is to snoop on the world. After super-whistleblower Edward Snowden tipped the press off to the existence of ICREACH, the NSA fessed up last month. The amount of data we’re talking about is massive. According to The Intercept website, the tool can handle two to five billion new records every day, including data on Americans’ emails, phone calls, faxes, Internet chats and text messages. It’s Big Brother meets Big Data.

I’ll leave aside for the moment the ethical aspect of this story.  What I’ll focus on is how the NSA deals with this mass of Big Data and what it might mean for companies who are struggling to deal with their own Big Data dilemmas.

Perhaps no one deals with more big data than the Intelligence Community. And Big Data is not new for them. They’ve been digging into data, trying to find meaningful signals amongst the noise, for decades. And the stakes of successful data analysis are astronomically high here. It is literally a matter of life and death – a failure to successfully connect the dots can lead to the kinds of nightmares that haunt us for the rest of our lives. When the pressure is on to this extent, you can be sure that they’ve learned a thing or two. How the Intelligence Community handles data is something I’ve been looking at recently. There are a few lessons to be learned here.

Owned Data vs Environmental Data

The first lesson is that you need different approaches for different types of data. The Intelligence Community has its own files, which include analysts’ reports, suspect files and other internally generated documentation. Then you have what I would call “environmental” data. This includes raw data gathered from emails, phone calls, social media postings and cellphone locations. Raw data needs to be successfully crunched, screened for signal vs. noise and then interpreted in a way that’s relevant to the objectives of the organization. That’s where…

You Need to Make Sense of the Data – at Scale

Probably the biggest change in the Intelligence Community has been the adoption of an approach called “sense making.” Sense making really mimics how we, as humans, make sense of our environment. But while we may crunch a few hundred or thousand sensory inputs at any one time, the NSA needs to crunch several billion signals.

Human intuition expert Gary Klein has done much work in the area of sense making. His view of sense making relies on the existence of a “frame” that represents what we believe to be true about the world around us at any given time.  We constantly update that frame based on new environmental inputs.  Sometimes they confirm the frame. Sometimes they contradict the frame. If the contradiction is big enough, it may cause us to discard the frame and build a new one. But it’s this frame that allows us to not only connect the dots, but also to determine what counts as a dot. And to do this…
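As a thought experiment, Klein’s frame loop is simple enough to sketch in code. Everything below, from the numeric “surprise” score to the discard threshold, is my own illustrative scaffolding rather than anything from Klein or the NSA:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """What we currently believe to be true about the world."""
    beliefs: dict = field(default_factory=dict)

def surprise(frame: Frame, signal: dict) -> float:
    """Fraction of the signal's overlapping fields that contradict the frame."""
    overlapping = [k for k in signal if k in frame.beliefs]
    if not overlapping:
        return 0.0
    conflicts = sum(1 for k in overlapping if frame.beliefs[k] != signal[k])
    return conflicts / len(overlapping)

def sense_make(frame: Frame, signal: dict, discard_threshold: float = 0.5) -> Frame:
    """Confirm, elaborate, or discard-and-rebuild the frame, Klein-style."""
    if surprise(frame, signal) > discard_threshold:
        # The contradiction is big enough: throw the frame away, start over.
        return Frame(beliefs=dict(signal))
    # Otherwise the signal confirms or extends what we already believe.
    frame.beliefs.update(signal)
    return frame
```

The point isn’t the code; it’s that the frame decides what counts as a dot. Which is exactly why the frames themselves have to be tested, as the next lesson describes.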

You Have to Be Constantly Experimenting

Crunching the data may give you the dots, but there will be multiple ways to connect them. A number of hypothetical “frames” will emerge from the raw data. You need to test the validity of these hypotheses. In some cases, they can be tested against your own internally controlled data. Sometimes they will lie beyond the limits of that data. This means adopting a rigorous and objective testing methodology. Objective is the key word here, because…

You Need to Remove Human Limitations from the Equation

When you look at the historic failures of intelligence gathering, the fault usually doesn’t lie in the “gathering.” The signals are often there. Frequently, they’re even put together into a workable hypothesis by an analyst. The catastrophic failures in intelligence generally arise because someone, somewhere, made an intuitive call to ignore the information because they didn’t agree with the hypothesis. Internal politics in the Intelligence Community has probably been the single biggest point of failure. Finally…

Data Needs to Be Shared

The ICREACH project came about as a way to allow broader access to the information required to identify warning signals and test out hunches. ICREACH opens up this data pool to nearly two dozen U.S. Government agencies.

Big Data shouldn’t replace intuition. It should embrace it. Humans are incredibly proficient at recognizing patterns. In fact, we’re too good at it. False positives are a common occurrence. But, if we build an objective way to validate our hypotheses and remove our irrational adherence to our own pet theories, more is almost always better when it comes to generating testable scenarios.

Why Cognitive Computing is a Big Deal When It Comes to Big Data

Watson beating its human opponents at Jeopardy

When IBM’s Watson won against humans playing Jeopardy, most of the world considered it just another man-against-machine novelty act – going back to Deep Blue’s defeat of chess champion Garry Kasparov in 1997. But it’s much more than that. As Josh Dreller reminded us a few Search Insider Summits ago, when Watson trounced Ken Jennings and Brad Rutter in 2011, it ushered in the era of cognitive computing. Unlike chess, where solutions can be determined solely with massive amounts of number crunching, winning Jeopardy requires a very nuanced understanding of the English language as well as an encyclopedic span of knowledge. Computers are naturally suited to chess. They’re also very good at storing knowledge. In both cases, it’s not surprising that they would eventually best humans. But parsing language is another matter. For a machine to best a man here requires something quite extraordinary. It requires a machine that can learn.

The most remarkable thing about Watson is that no human programmer wrote the program that made it a Jeopardy champion. Watson learned as it went. It evolved the winning strategy. And this marks a watershed development in the history of artificial intelligence. Now, computers have mastered some of the key rudiments of human cognition. Cognition is the ability to gather information, judge it, make decisions and problem solve. These are all things that Watson can do.

 

Peter Pirolli – PARC

Peter Pirolli, one of the senior researchers at Xerox’s PARC campus in Palo Alto, has been doing a lot of work in this area. One of the things that has been difficult for machines is to “make sense” of situations and adapt accordingly. Remember a few columns ago, when I talked about narratives and Big Data? That’s where Monitor360 uses a combination of humans and computers – computers to do the data crunching and humans to make sense of the results. But as Watson showed us, computers do have the potential to make sense as well. True, computers have not yet matched humans in the ability to make sense of an unlimited variety of environmental contexts. We humans excel at quick and dirty sense making no matter what the situation. We’re not always correct in our conclusions, but we’re far more flexible than machines. Computers are constantly narrowing the gap, though, and as Watson showed, when a computer can grasp a cognitive context, it will usually outperform a human.

Part of the problem machines face when making sense of a new context is that the contextual information needs to be in a format that can be parsed by the computer. Again, this is an area where humans have a natural advantage. We’ve evolved to be very flexible in parsing environmental information to act as inputs for our sense making. But this flexibility has required a trade-off. We humans can go broad with our environmental parsing, but we can’t go very deep. We do a surface scan of our environment to pick up cues and then quickly pattern match against past experiences to make sense of our options. We don’t have the bandwidth to either gather more information or to compute this information. This is Herbert Simon’s Bounded Rationality.

But this is where Big Data comes in. Data is already native to computers, so parsing is not an issue. That handles the breadth issue. But the nature of data is also changing. The Internet of Things will generate a mind-numbing amount of environmental data. This “ambient” data has no schema or context to aid in sense making, especially when several different data sources are combined. It requires an evolutionary cognitive approach to separate potential signal from noise. Given the sheer volume of data involved, humans won’t be a match for this task. We can’t go deep into the data. And traditional computing lacks the flexibility required. But cognitive computing may be able to both handle the volume of environmental Big Data and make sense of it.

If artificial intelligence can crack the code on going both broad and deep into the coming storm of data, amazing things will certainly result from it.

Want to Be More Strategic? Stand Up!

One of the things that always frustrated me in my professional experience was my difficulty in switching from tactical to strategic thinking. For many years, I served on a board that was responsible for the strategic direction of an organization. A friend of mine, Andy Freed, served as an advisor to the board. He constantly lectured us on the difference between strategy and tactics:

“Strategy is your job. Tactics are mine. Stick to your job and I’ll stick to mine.”

Despite this constant reminder, our discussions always seemed to quickly spiral down to the tactical level. We all caught ourselves doing it. It seemed that as soon as we started thinking about what needed to be done and why, we automatically shifted gears and thought about how it should be done.

A recent study may have found the problem. We were sitting down. We should have stood up. Better yet, we should have taken the elevator to the top of the building (we actually did do this at one board retreat in Scottsdale, Arizona). Two researchers at the University of Toronto (home, I should point out, of what was for many years the tallest free-standing structure in the world – the CN Tower), Pankaj Aggarwal and Min Zhao, found that a subject’s physical situation impacted how strategic their thinking was. When subjects were physically higher up, say standing on a tall stool, they were more likely to look at the “big picture.”

Our physical context has more than a little impact on how we think. It’s a phenomenon called Mental Construal. And it’s not just restricted to how strategic our thinking is. It can impact things like social judgment as well. In a 2006 paper, University of Michigan professor Norbert Schwarz gave some examples that fall under the category called “situated concepts.” For example, the mental images you retrieve when I say “chair” might be different if we’re standing in a living room rather than an airplane or movie theatre. Another example, which unfortunately speaks to a darker side of human nature, is how you would respond to the face of a young African American when shown in the context of a church scene versus the context of a street corner scene.

Schwarz also talks about levels of construal. We’re more successful at staying at strategic levels when our planning is trouble-free. The minute we hit a problem, we tend to revert to finer-grained tactical thinking. Again, in my board experience, the minute we started hitting problems, we immediately tried to solve them, which effectively derailed any strategic discussion.

In his book, Creativity: Flow and the Psychology of Discovery and Invention, Mihaly Csikszentmihalyi found that physical contexts can also impact creativity. Physicist Freeman Dyson found that walking was essential to drive the creative process,

“Again, I never went to a class that (Richard) Feynman taught. I never had any official connection with him at all, in fact. But we went for walks. Most of the time that I spent with him was actually walking, like the old style of philosophers who used to walk around under the cloisters.”

In a study where subjects were given pagers and were signaled at random times of the day, they were asked to rate how creative they felt. It turned out the highest level of creativity came while they were walking, driving or swimming. Perhaps it was the physical stimulation, but it may have also been mental construal at work. Perhaps physical movement primed the brain for mental movement.

So, if you need to be strategic, find the highest vantage point possible, with room to walk around, preferably with the smartest person you know.

Are Our Brains Trading Breadth for Depth?

In last week’s column, I looked at how efficient our brains are. Essentially, if the brain can find a short cut to an end goal, it will take it. I explained how Google is eliminating the need for us to remember easily retrievable information. I also speculated about how our brains may be defaulting to an easier form of communication, such as texting rather than face-to-face communication.

Personally, I am not entirely pessimistic about the “Google Effect,” where we put less effort into memorizing information that can be easily retrieved on demand. This is an extension of Daniel Wegner’s “transactive memory,” and I would put it in the category of coping mechanisms. It makes no sense to expend brainpower on something that technology can do more easily, quickly and reliably. As John Mallin commented, this is like using a calculator rather than memorizing times tables.

Reams of research have shown that our memories can be notoriously inaccurate. In this case, I partially disagree with Nicholas Carr. I don’t think Google is necessarily making us stupid. It may be freeing up the incredibly flexible power of our minds, giving us the opportunity to redefine what it means to be knowledgeable. Rather than a storehouse of random information, our minds may have the opportunity to become more creative integrators of available information. We may be able to expand our “meta-memory,” Wegner’s term for the layer of memory that keeps track of where to turn for certain kinds of knowledge. Our memory could become an index of interesting concepts and useful resources, rather than ad-hoc scraps of knowledge.

Of course, this positive evolution of our brains is far from a given. And here Carr may have a point. There is a difference between “lazy” and “efficient.” Technology’s freeing up of the processing power of our brain is only a good thing if that power is then put to a higher purpose. Carr’s title, “The Shallows,” is a warning that rather than freeing up our brains to dive deeper into new territory, technology may just give us the ability to skip across the surface of the titillating. Will we waste our extra time and cognitive power going from one piece of brain candy to the next, or will we invest it by sinking our teeth into something important and meaningful?

A historical perspective gives us little reason to be optimistic. We evolved to balance the efforts required to find food with the nutritional value we got from that food. It used to be damned hard to feed ourselves, so we developed preferences for high calorie, high fat foods that would go a long way once we found them. Thanks to technology, the only effort required today to get these foods is to pick them off the shelf and pay for them. We could have used technology to produce healthier and more nutritious foods, but market demands determined that we’d become an obese nation of junk food eaters. Will the same thing happen to our brains?

I am even more concerned with the short cuts that seem to be developing in our social networking activities. Typically, our social networks are built from both strong ties and weak ties. Mark Granovetter identified these two types of social ties in the 1970s. Strong ties bind us to family and close friends. Weak ties connect us with acquaintances. When we hit rough patches, as we inevitably do, we treat those ties very differently. Strong ties are typically much more resilient to adversity. When we hit the lowest points in our lives, it’s the strong ties we depend on to pull us through. Our lifelines are made up of strong ties. If we have a disagreement with someone with whom we have a strong tie, we work harder to resolve it. We have made large investments in these relationships, so we are reluctant to let them go. When there are disruptions in our strong tie network, there is a strong motivation to eliminate the disruption, rather than sacrifice the network.

Weak ties are a whole different matter. We have minimal emotional investments in these relationships. Typically, we connect with these people either through serendipity or when we need something that only they can offer. For example, we typically re-engage our weak tie network when we’re on the hunt for a job. LinkedIn is the virtual embodiment of a weak tie network. And if we have a difference of opinion with someone to whom we’re weakly tied, we just shut down the connection. We have plenty of them, so one more or less won’t make that much of a difference. When there are disruptions in our weak tie network, we just change the network, deactivating parts of it and reactivating others.

Weak ties are easily built. All we need is one thing in common at one point in our lives. It could be working in the same company, serving on the same committee, living in the same neighborhood or attending the same convention. Then, we just need some way to remember each other in the future. Strong ties are different. Strong ties develop over time, evolving through shared experiences, both positive and negative. They also demand consistent communication, including painful communication that sometimes requires us to say we were wrong and we’re sorry. The type of conversation that leaves you either emotionally drained or supercharged is the stuff of strong ties. And a healthy percentage of these conversations should happen face-to-face. Could you build a strong tie relationship without ever meeting face-to-face? We’ve all heard examples, but I’d always place my bets on face-to-face – every time.

It’s the hard work of building strong ties that I fear we may miss as we build our relationships through online channels. I worry that the brain, given an easy choice and a hard choice, will naturally opt for the easy one. Online, our network of weak ties can grow beyond the inherent limits of our social inventory, known as Dunbar’s Number (which is 150, by the way). We could always find someone with whom to spend a few minutes texting or chatting online. Then we can run off to the next one. We will skip across the surface of our social network, rather than invest the effort and time required to build strong ties. Just like our brains, our social connections may trade breadth for depth.