Ode to a Grecian Eurozone

I’d like to comment on the Greek debt crisis. But I don’t know anything about it. Zip. Or, as they say in Athens – μηδέν. I do, however, know how to say zero in Greek, thanks to Google Translate. At least for the next few minutes. I also happen to know rather a lot right now about the Tour de France, how to wire RV batteries, how to balance pool chemicals, how to write obituaries and most of the plotlines for the Showtime series Homeland. I certainly know more about all those things than the average person. Tomorrow, I’ll probably know different stuff. And I will retain almost nothing. But if you ask me what in the world is happening right now, I’ll likely draw a blank. I’d say it’s all Greek to me, but a certain MediaPost columnist already stole that line. Damn you, Bob Garfield!

I’m not really sure if I’m concerned about this. After all, I’m the one who has chosen not to watch the news for a long time. My various information sources feed me a steady diet of information, but it’s all been predetermined based on my interests. I’m in what they call a “filter bubble.” I’ve become my own news curator and somewhere along the line, I’ve completely filtered out anything to do with the Greek economy. It’s because I’m not really interested in the Greek economy, but I’m thinking maybe I should be.

(Incidentally, am I the only one who finds it a bit ironic that the word “economy” comes from – you guessed it – the Greek words for “house” and “management”?)

The problem is that I have a limited attention span. My memory capacity is a little more voluminous, but there are definite limits to that, as well. To make matters worse, Google is making me intellectually lethargic. I don’t try as hard to remember stuff because I don’t have to. Why learn how to count to 10 in Greek when I can just look it up when I need to? I’m not alone in this. We’re all going down the same blind-cornered path together. Sooner or later, we’ll all run into a major crisis we never saw coming. And it’s because we’ve all been looking in different places.

40 years ago, to be well informed, you had to pay attention to mainstream news sources. It was the only option we had. We all got fed the same diet of information. Some of us retained more than others, but we all dined at the same table. Our knowledge capacity was first filled from these common news sources. Then, after that, we’d fill whatever nooks and crannies were left with whatever our unique interests might be. But we all, to some extent, shared a common context. Knowledge may not have been deep, but it was definitely broad.

Now, if I choose to learn more about the Greek economy, I certainly have plenty of opportunities to do so. But I’d be starting with a blank slate. It would take some work to get up to speed. So I have to decide whether it’s worth the effort for me to inform myself. Is the return worth the investment? Something has to tip the balance to make it important enough to learn more about whatever it is the Greeks are referendumming (referendering?) about. And in the meantime, there will be a lot of other things competing for that same limited supply of information gathering attention. Tomorrow, for instance, it might become really important for me to find out how close BC is to legalizing pot, or what the wildfire hazard is in Northern Saskatchewan, or what July’s weather is like in Chiang Mai. All of these things are relatively easy to find, but I have to reserve enough retention capacity to use the information once I find it. Information may want to be free, but the resources required to utilize it deplete our limited stores of cognitive ability.

Perhaps we’re saving more of our attention for on-demand information requirements. Or maybe we’re just filtering out more of what we used to call news. Whatever the cause, I think we’re losing our common cultural context, bit by byte. A community is defined by what it has in common, and the more technology allows us to pursue our individual interests, the more we surrender the common narratives that used to bind us.

A Few Words on Memories

It’s been a highly emotional weekend for me. After a long battle, my Mom slipped quietly away in the middle of the night last Thursday. My Dad, my sisters Laurel and Heather and I held her hand and stroked her forehead for most of the night as we watched her increasingly shallow breaths. We spent a lot of time reminiscing. It was all horribly beautiful. It was life – and death.

When a parent passes away, you feel like a large chunk of your life has been suddenly ripped away. In my almost 54 years on the planet, my Mom has been one of the constants that has connected the ever changing dots from my birth to today. As I started for the door of the hospital room for the last time and looked back at the tiny still figure on the bed – also for the last time – I realized that constant is gone. I’m adrift. I’m an orphan (my Dad is actually my stepdad). I’m at a loss for words.

Maybe that accounts for the huge wave of nostalgia that hit my sisters and me this weekend. We realized that a significant piece of our lives was teetering on the edge of a precipice – slipping away from our grasp. We were desperate to freeze it in our memories, securing it for the future. When so much was slipping away, we needed to hang on to what we could. So, we wandered the streets of the small Alberta town we all grew up in. We snuck into our old high school, looking for the lockers we had in grade 8 and sitting in the desks in the classroom. We tried to find our grad photos. We were even going to knock on the door of the house we all grew up in, 30-some years ago, and see if the new owners would let us take a quick look inside. But then we decided that was just a little too creepy.

The more past you accumulate, the more important it becomes. This is especially true for your childhood. We all need to know where we came from. One of the most touching discussions I ever had with my Mom took place a month and a half ago. I was asking her about her childhood. As she remembered, her face transformed. A small smile fixed to her lips, she sank into a warm remembrance of a postwar childhood in Southern Ontario, safe in the embrace of an idyllic home town (that has since become a sprawling suburb of Toronto), family card games with the laughter of the grown-ups fueled by gin and tonics and summers spent “up north” in cottage country.

“It sounds like a good childhood,” I remarked.

“It was a good childhood. A really good childhood.”

She then closed her eyes and napped for a bit. I watched her sleep. Life is short. Life is sad. Life is good.

My Mom, as a little girl in Ontario

So, I have some fundamental questions. For those of us of my generation or older, childhoods are reconstructed from mostly bad photographs, old letters and our memories. None of these are terribly high fidelity representations of reality. But this can be a good thing. It allows us to fill in the blanks, emphasizing the highs and forgetting the lows. For most of us, it gives us the childhood we wished we had, which can be very comforting 78 years hence as we lie in a bed, slipping towards the end of our journey. But what will the essence of childhood remembrance be for the person who was born today? They will leave a huge digital dust trail. How much of it will be available in 8 decades? As they try to construct a refuge in their memories, will there be digital evidence to call bullshit on them? Will indelible fidelity be a good thing or bad?

Daniel Kahneman has discovered that our remembered lives usually bear little resemblance to our actual experiences. Was my Mom’s childhood really as good as she remembered? I know there was pain. I know there was heartache. I suspect life was much harder than she recalled. But that’s not the point. At that moment, when she needed it most, her past was what she wanted it to be. And that’s exactly what my sisters and I found this past weekend, when we needed it. It was a very human thing we did – highly inaccurate, totally implausible and completely indispensable. I wonder how technology might screw that up for my kids and grand kids.

Goodbye Mom. Dream beautifully.

Feed Up with Feedback Requests

Sorry, Google. I realize this is my last chance to tell you about my experience. But you see, you’re in a long line of companies that are also desperate for the juicy details of my various consumer escapades. Best Western, Ford, Kia, Home Depot, Apple, Samsung – my inbox is completely clogged with pleas for the “dets” of my transactional interactions with them. I’ve never been more popular – or frustrated.

I appreciate the idea of customer follow up. I really do. But as company after company jumps on the customer feedback bandwagon, poor ordinary mortals like myself don’t have a hope in hell of keeping up. It could be a full time job just filling out surveys and rating every aspect of my life on a scale that runs from “abysmal” to “awesome.” The irony is, these customer feedback requests are actually having the opposite effect. Even if my interactions with the brand are satisfactory, the incessant nagging to find out if I “like them, I really like them” is beginning to piss me off. In the quest to quantify brand affinity, these companies are actually eroding it. Oops! Talk about unintended consequences.

So, if we accept the fact that knowing what our customers think about us is a good thing, and we also accept the fact that our customers have better things to do with their lives than fill out post-purchase surveys, we have to find a more elegant way to get the job done.

First of all, customer feedback should be part of a full customer relationship continuum. It should be just one customer touch point, not the customer touch point. You have to earn the credibility that gives you the right to ask for my feedback, but too many companies jam the request on their customers without doing the spadework required to build a relationship first. They don’t worry about gauging satisfaction “in the moment.” If you don’t care enough to ask if I’m happy when I’m right in front of you, why should I believe that you’ll pay any attention to my survey?

Worse, because compensation is increasingly being tied to feedback results, you get the “please say you’ll love me” pleading on the sales floor. See if this sounds familiar: “You’ll be receiving a survey from head office asking me how I’ve done. I don’t get a bonus unless you give me top marks in each category. So if there’s anything I can do better, please tell me now.” There are so many things that are just plain wrong with this that I don’t know where to start. It’s smarmy and disingenuous. It also puts the customer in a very awkward position. When it’s happened to me, I just murmur something like, “No, you’ve been great,” and run with all speed to the nearest exit.

The next thing we have to realize is that not all purchases are created equal. Remember the Risk/Reward matrix I talked about in last week’s column about how our brains process pricing information? While this applies to our motivational balance going into a purchase, it also provides some clues to the emotional landscape that exists post-purchase. If the purchase was in the low risk/low reward quadrant, like the home improvement supplies I picked up at Home Depot this weekend, it’s a task that has been crossed off my to-do list. It’s done. It’s over. The last thing I want to do is prolong that task by filling out a survey about said task. But, if it’s something that falls into the high risk/high reward quadrant, such as a major vacation, then I am probably more apt to invest some time to give you some feedback. The rule of thumb is: the higher the degree of risk or reward, the more likely I am to fill out a survey.

The final thing to remember about customer surveys is that you’re capturing extremes. The people who fill out surveys are usually the ones who either hate you or love you. So you get a very skewed perspective on how you’re doing. What you’re missing is the vast middle of your market that may not be sufficiently motivated to toss you either a brick or a bouquet.

I’m all for getting to know your customers better. But it has to be part of a total approach. It begins with simple things, like actually listening to them when you’re engaging with them.

How Our Brains Process Price Information

We have a complex psychological relationship with pricing. A new brain scanning study out of Harvard and Stanford starts to pick apart the dynamics of that relationship.

Uma R. Karmarkar, Baba Shiv, and Brian Knutson wanted to see how we evaluate a potential purchase when the price is the first piece of information we get as opposed to the last piece of information. They used both fMRI scanning and behavioral tracking to see how the study participants responded. Participants were given $40 to spend and then were presented with a number of sample offers. In all cases, the price represented an attractive bargain on the product featured. But one group was given the price first, and the second group was given the price last.

There was another critical difference in the evaluation process as well. In the first phase of the study, participants were shown products that they would like to buy, and in the second phase, they were shown products that they would have to buy. The difference between the two was how they activated the reward center of our brain – the nucleus accumbens. I’ve been talking for years about the importance of understanding the balance of risk and reward in our purchase decisions. This study provides a little more understanding about how our brain processes those two factors.

In the first phase, participants were shown a variety of products that they would consider rewarding. These would fall into the first quadrant of the risk/reward matrix I introduced in my column from 5 years ago. The researchers were paying particular attention to two different parts of the brain – the nucleus accumbens and the medial prefrontal cortex. For a layman’s analogy, think of you and a five-year-old walking down the toy aisle in a department store. The nucleus accumbens is the five-year-old who starts chanting, “I want it. I want it. I want it.” The medial prefrontal cortex is the adult who decides if they’re actually going to buy it. In the study, the researchers found that the sequence in which these two parts of the brain “lit up” depended on whether or not you saw the price first. If you saw the product first, the nucleus accumbens started its chant – “I want it.” If you saw the price first, the medial prefrontal cortex kicked into action and started evaluating whether the offer represented a good bargain. In the case of the reward products, although the sequence varied, the actual purchase outcome didn’t. In most cases, participants still ended up making the purchase, whether price was presented first or last.

But things changed when the researchers tried a variety of products that fell into the second quadrant of the risk/reward matrix – low risk and low reward. These are the everyday items we have to buy. In the study, they included things like a water filtration pitcher, a pack of AA batteries, a USB drive, and a flashlight. There was nothing here that was likely to get the nucleus accumbens starting to chant.

Now, it should be noted that this follow-up study did not include the fMRI scanning, but by tracking purchasing behaviors we can make some pretty educated guesses as to what’s happening in the respective brains of our participants. Here, presenting prices first resulted in a significant increase in actual purchases over instances when price was presented last. If price comes first, we can imagine that the prefrontal cortex is indicating that it’s a good bargain on a needed product. But if a relatively boring product is presented first for evaluation to the nucleus accumbens, there’s little to excite the reward center.

An important caveat to this part of the study comes with knowing that the prices presented represented significant savings on the products. After the simulated purchases, participants were asked to indicate a price they would be willing to pay for the product. When the price was the lead, the named prices tended to be a little lower, indicating that if you are going to lead with price, especially for quadrant two products, you’d better make sure you’re offering a true bargain.

If anything, this study provides further proof of the value of knowing a prospect’s mental landscape. What are the risk and reward factors that will be motivating them? Will the medial prefrontal cortex or the nucleus accumbens be calling the shots? What priming effects might an early introduction of price introduce into the process?

When I wrote about the risk/reward matrix five years ago, one commenter said “a simple low-high risk/low-high reward graph is not very useful for driving just in time and location based offers, discounts, etc.” I respectfully disagree. While more sophisticated models are certainly possible, I think even a simple 2X2 matrix that helps map out the decision factors that are in play with purchases would be a significant step forward. And this isn’t about driving real time variations on offers. It’s about understanding the fundamentals of the buyer’s decision process. There’s nothing wrong with simplicity, especially if it drives greater usage.

The Coming Data Marketplace

The stakes are being driven into the ground. The next great commodity will be data, and you can already sense the battle beginning to heat up.

Consumer data will be generated by connections. Those connections will fall into two categories: broad and deep. Both will generate data points that will become critical to businesses looking to augment their own internal data.

First, broad data is the domain of Google, Apple, Amazon, eBay and Facebook. Their play is to stretch their online landscape as broadly as possible, generating thousands of new potential connections with the world at large. Google’s new “Buy” button is a perfect example of this. Adding to the reams of conversion data Google already collects, the “Buy” button means that Google will control even more transactional landscape. They’re packaging it with the promise of an improved mobile buying experience, but the truth is that purchases will be consummated on Google controlled territory, allowing them to harvest the rich data that will be generated from millions of individual transactions across every conceivable industry category. If Google can control a critical mass of connected touch points across the online landscape, they can get an end-to-end view of purchase behavior. The potential of that data is staggering.

In this market, data will be stripped of identity and aggregated to provide a macro but anonymous view of market behaviors. As the market evolves, we’ll be able to subscribe to data services that will provide real time views of emerging trends and broad market intelligence that can be sliced and diced in thousands of ways. Of course, Google (and their competitors) will have a free hand to use all this data to offer advertisers new ways to target ever more precisely.

This particular market is an online territory grab. It relies on a broad set of touch points with as many people across as many devices as possible. The more territory that is covered, the more comprehensive the data set.

The other data market will run deep. Consider the new health tracking devices like Fitbit, Garmin’s VivoActive and Apple’s iWatch. Focused-purpose hardware and apps will rely on deep relationships with users. The more reliant you become on these devices, the more valuable the data collected will become. But this data comes with a caveat – unlike the broad data market, this data should not be stripped of its identity. The value of the data comes from its connection with an individual. Therefore, that individual has to be an active participant in any potential data marketplace. The data collector will act more as a data middleman – brokering matches between potential customers and vendors. If the customer agrees, they can choose to release the data to the vendor (or at least, a relevant subset of the data) in order to individualize the potential transaction.

As the data marketplace evolves, expect an extensive commercial ecosystem to emerge. Soon, there will be a host of services that will take raw data and add value through interpretation, aggregation and filtering. Right now, the onus for data refinement falls on the company that is attempting to embrace Big Data marketing. As we move forward, expect an entire Big Data value chain to emerge. But it will all rely on players like Google, Amazon and Apple who have the front line access to the data itself. Just as natural resources provided the grist that drove the last industrial revolution, expect data to be the resource that fuels the next one.

A Eulogy for “Kathy” – The First Persona

My column last week on the death of the persona seemed to find a generally agreeable audience. But prior to tossing our cardboard cutouts of “Sally the Soccer Mom” in the trash bin, let’s just take a few minutes to remind ourselves why personas were created in the first place.

Alan Cooper – the father of usability personas – had no particular methodology in mind when he created “Kathy,” his first persona. Kathy was based on a real person that Cooper had talked to during his research for a new project management program. Cooper found himself with a few hours on his hands every day while his early-’80s computer chugged away, compiling the latest version of his program. He would use the time to walk around a golf course close to his office and run through the design in his head. One day, he engaged himself in an imaginary dialogue with “Kathy,” a potential customer who was requesting features based on her needs. Soon, he was deep in his internal discussion with Kathy. His first persona was a way to get away from the computer and cubicle and get into the skin of a customer.

There are a few points here that are important to note. “Kathy” was based on input from a real person. The creation of “Kathy” had no particular goal, other than to give Cooper a way to imagine how a customer might use his program. It was a way to make the abstract real, and to imagine that reality through the eyes of another person. In the end, we realize that the biggest goal of a persona is just that – to imagine the world through someone else’s eyes.

As we transition from personas to data modeling, it’s essential to keep that aspect alive. We have to learn how to live in someone else’s skin. We have to somehow take on the context of their world and be aware of their beliefs, biases and emotions. Until we do this, the holy grail of the “Market of One” is just more marketing hyperbole.

I think the persona started its long decline towards death when it transitioned from a usability tool to a marketing one. Personas were never intended to be a slide deck or a segmentation tool. They were just supposed to be a little mental trick to allow designers to become more empathetic – to slip out of their own reality and into that of a customer. But when marketers got their hands on personas, they did what marketers tend to do. They added the gloss and gutted the authenticity. At that moment, personas started to die.

So, for all the reasons I stated last week, I think personas should be allowed to slip away into oblivion. But if we do so, we have to find a way to understand the reality of our customers on a one to one basis. We have to find a better way to accomplish what personas were originally intended to do. We have to be more empathetic.

Because humans are humans, and not spreadsheets, I’m not sure we can get all the way there with data alone. Data analysis forces us to put on another set of lenses – ones that analyze, not empathize. Those lenses help us to see the “what” but not the “why.” It’s the view of the world that Alan Cooper would have had if he never left his cubicle to walk around the Old Del Monte golf course, waving his arms and carrying on his internal dialogue with “Kathy.” The way to empathize is to make connections with our customers – in the real world – where they live and play. It’s using qualitative methods like ethnographic research to gain insights that can then be verified with data. Personas may be dead, but qualitative research is more important than ever.

The Persona is Dead, Long Live the Person

First, let me go on record as saying up to this point, I’ve been a fan of personas. In my past marketing and usability work, I used personas extensively as a tool. But I’m definitely aware that not everyone is equally enamored with personas. And I also understand why.

Personas, like any tool, can be used both correctly and incorrectly. When used correctly, they can help bridge the gap between the left brain and the right brain. They live in the middle ground between instinct and intellectualism. They provide a human face to raw data.

But it’s just this bridging quality that tends to lead to abuse. On the instinct side, personas are often used as a shortcut to avoid quantitative rigor. Data-driven people typically hate personas for this reason. Often, personas end up as fluffy documents and life-sized cardboard cutouts with no real purpose. It seems like a sloppy way to run things.

On the intellectual side, because quant people distrust personas, they also leave themselves squarely on the data side of the marketing divide. They can understand numbers – people not so much. This is where personas can shine. At their best, they give you a conceptual container with a human face to put data into. This provides a richer but less precise context that allows you to identify, understand and play out potential behaviors that data alone may not pinpoint.

As I said, because personas are intended as a bridging tool, they often remain stranded in no man’s land. To use them effectively, the practitioner should feel comfortable living in this gap between quant and qual. Too far one way or the other and it’s a pretty safe bet that personas will either be used incorrectly or be discarded entirely.

Because of this potential for abuse, maybe it’s time we threw personas in the trash bin. I suspect they may be doing more harm than good to the practice of marketing. Even at their best, personas were meant as a more empathetic tool to allow you to think through interactions with a real live person in mind. But in order to make personas play nice with real data, you have to be very diligent about continually refining your personas based on that data. Personas were never intended to be placed on a shelf. But all too often, this is exactly what happens. Usually, personas are a poor and artificial proxy for real human behaviors. And this is why they typically do more harm than good.

The holy grail of marketing would be to somehow give real time data a human face. If we could find a way to bridge left brain logic and right brain empathy in real time to discover insights that were grounded in data but centered in the context of a real person’s behaviors, marketing would take a huge leap forward. The technology is getting tantalizingly close to this now. It’s certainly close enough that it’s preferable to the much abused persona. If – and this is a huge if – personas were used absolutely correctly they can still add value. But I suspect that too much effort is spent on personas that end up as documents on a shelf and pretty graphics. Perhaps that effort would be better spent trying to find the sweet spot between data and human insights.

Mad Men: 2065

So, Don Draper is now history. Well, actually, he’s always been history. He started and finished as a half-century look back at what advertising was. Part of the appeal of Mad Men was the anthropological quaintness of the whole thing – “Can you believe they used to do that?” We, smug in our political correctness, can watch an episode secure in the knowledge that the misogynistic, substance-abusive, racist world of Sterling Cooper and Partners is long gone. The world, and with it, advertising, have come a long way!

But, one wonders, what would happen if a similar premise was launched in 2065? What about advertising now would look similarly unacceptable to viewers then?

Draper’s world was the world of the creative spark igniting the big idea. It was the world of the catchy jingle and meme-worthy slogans. The Don Drapers of the world could do no wrong great enough to tarnish the glow of their ability to blow away a client in a pitch or snag a Clio. Creative gods stood firmly astride their kingdoms on Madison Avenue.

Now, of course, we know better. Those were simpler times. Clients, and consumers, are not nearly that naïve. Today, we demand quantitative data and testing to back up our creative inspirations. It’s not just about Big Ideas. Today, advertising is also about Big Data.

But, 50 years from now, will our current preoccupation with data look anachronistic or prescient to that future audience? Are we exhibiting some equally entertaining naiveté? Will the pendulum swing back to the big idea – or will some other alternative present itself? Will data profiling, targeting and programmatic buying look as quaint then as a corny jingle and a three-martini lunch look to us now?

Advertising in the era of Don Draper had gone through its own evolution. At the turn of the century, thanks to the Industrial Revolution, a flood of new products entered the market. Advertising’s first job was to make consumers aware of new offerings, opening new markets in the process. Its primary goal was to inform.

But, by the 50’s and 60’s, mass media had made consumers aware of most product categories. Advertising’s job became to persuade consumers to purchase products they already knew existed. Its primary goal was to persuade. Market share, rather than market expansion, became the end goal. Hence the era of the big idea. You don’t need a big idea to inform, but you do need one to persuade.

Today, however, with the expanding capabilities of technology and micro-manufacturing fueling a new revolution of innovation, we may be coming back to a time where awareness is the primary concern. Advertising’s job seems to be to navigate increasingly complex filters to create awareness in increasingly targeted audiences. The era of branding that found its legs in the era of Don Draper already seems to be morphing into something much different than what we’ve known previously. Who knows what that will look like in another 50 years?

The thing about history is that it gives you the intellectual distance required to recognize how silly we once were. The greater the distance, the safer we feel in laughing at ourselves. In the case of advertising, 50 years seems to be an adequate buffer to feel pretty smug with our historical hindsight. Of course, if somehow you could be transported back to 1965 and talk to the average creative director at a big agency, it’s doubtful they would appreciate being enlightened about their ignorance.

So, if we project that forward to today, it makes you wonder. What are the things we do now that our grandchildren will be laughing at in 50 years?

The Mother of All Disruption

Once again, fellow Online Spin author Tom Goodwin has piqued my interest. He starts to unwrap a tremendously thorny problem in his column of last Thursday, “Time to Think about Regulation for Disruption.” Today, I’d like to take this question up one level – do we have to rethink government entirely?

Government is almost entirely a reactionary business. Even far-sighted, historic documents such as the Constitution of the United States and the Magna Carta were reactions to the untenable circumstances that preceded them. And these are the exceptions. The vast majority of governing involves a highly bureaucratic and excruciatingly slow process that attempts to respond to emerging breaches in the unspoken code of fairness that our society tries to live by. Realistically, from the time the need for a new law is recognized to the time a bill is passed, months or even years can pass.

Months or years were, practically speaking, adequate in the world we once knew. But today, that is no longer the case. In that time, complex ecosystems can form around the breach in question, and, as Tom points out, entire industries may have been decimated in the process. This is the reality of disruption.

In a world that seeks order and governance, this is a bad thing. But, now that we have unleashed the technological Kraken, is that a world we can reasonably expect? Slowly but surely we are dismantling every aspect of our hierarchical society and replacing it with a horizontal network. Hierarchies can’t work horizontally. Something has to give.

Disruptions are a characteristic of networked structures. In order for networks to work, each component of that network has to be given the freedom to act. If the action of an individual resonates with other parts of the network, the actions are picked up and amplified. Each individual act has the potential to become a disruption – with corresponding consequences. Everything becomes accelerated in a network.

Government is built on the ideological foundation of a hierarchy. The word “government” derives from a root meaning “to steer.” The assumption is that our society is capable of being steered. This, in turn, assumes that our society all wants to go in the same direction. But if we enforce these restrictions on a network, the network ceases to work. Yes, we quell the negative disruptions, but we also eliminate the positive ones.

The United States of America is one of the least restrictive societies on the planet. The founding fathers drafted their articles to enshrine that freedom. You (as a Canadian, I have to say “you”) have managed to balance the practical necessities of government with the lack of restrictions typical of a market economy. Markets naturally emerge from networks. Because the U.S. treasures freedom and innovation, it was inevitable that it would emerge as the testing ground for the impacts of technological advances. You are the canary in the coal mine of massive disruption.

Tom urges lawmakers to become more proactive. But historically speaking, that’s just not the way government works. It’s like riding a cow in the Kentucky Derby and wondering why you can’t keep up. I just don’t think that our current hierarchical system of government is up to the job. It’s a great system, with a ton of democratic checks and balances, but it was built for a different era – one built along vertical lines.

The final issue is one of enforcement. Even if laws are passed to deal with emerging disruptions, it’s becoming almost impossible to enforce them. If lawmakers are scrambling to keep up with society, law enforcers have capitulated entirely. We just can’t afford to enforce the laws we already have on the books.

So, if this is the problem, what is the answer? I think, perhaps, it lies in the very same properties of networks. Government and laws became necessary to avoid abuses of power. Power comes from hierarchies. As societies level out, the old dictates of fairness become increasingly relevant. We all have universal concepts of fairness. Abuses of what we consider to be fair are generally dealt with quickly and effectively at the network level. Networks tend to police themselves, as long as there is a common understanding of what is acceptable and what is not. In short, we have to think of regulation in terms of market and network dynamics, not hierarchical governance.

I admit this is tough to wrap your head around. In a world of disruptions, this is the Mother of all Disruption. But symptomatically speaking, it appears that our historic notion of government is ailing. As frightening as it may be to contemplate, we should start thinking about what may replace it.

Some Second Thoughts on Mindless Media

When I read Tom Goodwin’s Online Spin last week, I immediately jumped on his bandwagon. How could I not? He played the evolutionary psychology card and then trumped that by applying it to the consumption of media. This was right up my ideological alley.

Here’s a quick recap: Humans evolved to crave high calorie foods because these were historically scarce. In the last century, however, processed food manufacturing has ensured that high calorie foods are abundantly available. The result? We got fat. Really fat. Tom worries that the same thing is happening to our consumption of media. As traditional publishing channels break down, will we become a society of information snackers?

“We’re rewarding pieces that are most-clickable or most easily digested, and our news diet shifts from good-for-us to snackable.”

Goodwin also mourns the death of serendipitous discovery – which was traditionally brought to us by our loyalty to a channel and the editorial control exercised by that channel. If we were loyal to the New York Times, then we were introduced to content they thought we should see. But in the age of “filter bubbles” our content becomes increasingly homogenized based on algorithms, which are drawing an ever-narrowing circle bounded by our explicit requests and our implicit behavior patterns. We become further insulated from quality by mindless social media sharing – which tends to favor content pandering to the lowest common denominator.

But the more I thought about it, the more I wondered whether this wasn’t a little paradoxical. Tom’s very thoughtful column, which hardly qualifies as intellectual fast food, didn’t come to us through traditional journalism. Tom, like me, is not a professional journalist. And while MediaPost does provide some editorial curation, its purpose is to provide a fairly transparent connection between industry experts like Tom and other experts like you. Tom’s piece came to us through a much more transparent information marketplace – the very same marketplace that Tom worries is turning us into an audience of mindless media junkies. And I should add that Tom’s piece was shared through social circles over 200 times.

So where is the disconnect here? The problem is that when it comes to human behaviors, there are no universal truths. How we act in almost any given situation will eventually distribute itself across a bell curve. Let’s take obesity, for instance. If we talk trends, Tom is absolutely correct. The introduction of fast food in North America coincided with an explosion of obesity, which as a percentage of the US population rose from about 10% in the 1950s to almost 35% in 2013. But if we accept the premise that we all mindlessly crave calories, we should all be obese. Obesity rates should also keep climbing until they reach 100% of the population. But neither of those things is true. Obesity rates have plateaued in the last few years, and there are indications that they are starting to decline among children. Also, although fast food is now available around the world, obesity rates vary greatly. Japan has one of the highest concentrations of McDonald’s outlets per capita (25 per million) in the world but has an obesity rate of 3.2%, the lowest of all OECD countries. The US has a higher concentration of McDonald’s (45 per million) but an obesity rate 10 times that of Japan. And my own country, Canada, almost matches the US McDonald for McDonald (41 per million) but has an obesity rate half that of the US (14.3%).

My point is not to debate whether we’re getting fatter. We are. But there’s more to it than just the prevalence of fast food. And these factors apply to our consumption of media as well. For example, there is a strong negative correlation between obesity levels and education. There is also a strong negative correlation between obesity and income. Cultural norms have a huge impact on the prevalence of obesity. There are no universal truths here, just a lot of nebulous factors at play. So, if we want to be honest when we draw behavioral comparisons, we have to account for those factors.

Much as I believe evolution drives many of our behaviors, I also believe that more open markets are better than more restrictive ones. As the mentality of abundance takes hold, our behaviors take time to adjust. Yes, we do snack on crap. But we also have access to high-quality choices we could never have dreamed of before. And the ratio of consumption between those two extremes will be different for each of us. Consider the explosion of TV programming that has happened over the last three decades. Yes, there is an overabundance of mindless dreck, but there is also more quality programming than ever to choose from. The same is true of music and pretty much any other category where markets have opened up through technology.

The way to increase the quality of what we consume, whether it be food, information or entertainment, is not to limit the production and distribution of those consumables through more restrictive markets, but to improve education and access and to build a culture of considered consumption. Some of us will choose crap. But some of us will choose the cream that rises to the top. The choice will be ours. The answer is not to take those choices away, but rather to create a culture that encourages wiser choices.