Is Google Slipping, Or Is It Just Our Imagination?

Recently, I’ve noticed a few articles speculating about whether Google might be slipping:

Last month, the American Customer Satisfaction Index reported that our confidence in search is on the decline. Google’s score dropped 2% to 82. The culprit was the amount of advertising found on the search results page. To be fair, both Google and search in general have had lower scores. Back in 2015, Google scored 77, its lowest score ever.

This erosion of customer satisfaction may be leading to a drop in advertising ROI. According to a recent report from Analytic Partners, the return on investment from paid search dropped 27% from 2010 to 2016. Search wasn’t alone. All digital ROI seems to be in decline. Analytic Partners’ VP of Marketing, Joe LaSala, predicts that ROI from digital will continue to decline until it converges with ROI from traditional media.

In April of this year, Forbes ran an article asking the question: “Is Google’s Search Quality Starting to Decline?” Contributors to this decline, according to the article, included the introduction of rich snippets and featured news, the use of popularity as a ranking factor, and ongoing black hat SEO manipulation.

But the biggest factor in the drop of Google’s perceived quality was actually in the perception itself. As the Forbes article’s author, Jayson DeMers, stated:

It’s important to realize just how sophisticated Google is, and how far it’s come from its early stages, as well as the impossibility of having a “perfect” search platform. Humans are flawed creatures, and our actions are what are dictating the shape of search.

Google is almost 20 years old. The domain Google.com was registered on September 15, 1997. Given that 20 years is an eternity in internet years, it’s actually amazing that it’s stood up as well as it has for the past two decades. Whether Google’s naysayers care to admit it or not, that’s due to Google’s almost religious devotion to the quality of their search results. That devotion extends to advertising. The balance between user experience and monetization has always been one that Google has paid a lot of attention to.

But it’s not the presence of ads that has led to this perceived decline of quality. It’s a change in our expectations of what a search experience should be. I would argue that for any given search, using objective measures of result relevance, the results Google shows today are far more relevant than the results it showed in 2008, the year it got its highest customer satisfaction score (86). Since then, Google has made great strides in deciphering user intent and providing a results page that’s a good match for that intent. Sometimes it will get it wrong, but when it gets it right, it puts together a page that’s a huge improvement over the vanilla, one-size-fits-all results page of 2008.

The biggest thing that’s changed in the past 10 years is the context from which we’re launching those searches. In 2008, it was almost always the desktop. But today, chances are we’re searching from a mobile device – or our car – or our home through Amazon Echo. This has changed our expectations of search. We are task-focused, rather than “browsing” for information. This creates an entirely different mental framework within which we receive the results. We apply a new yardstick of acceptable relevance. Here, we’re not looking for a list of 20 possible answers – we’re looking for one answer. And it had better be the right one. Context-based search must be hyper-relevant.

Compounding this trend is the increasing number of circumstances where search is going “under the hood” – something I’ve been forecasting for a long time now. For example, if you use Siri to launch a search through your CarPlay-connected device when you’re driving, the results actually come from Bing, but they’re stripped of the context of the Bing search results page. Here, the presentation of search results is just one step in a multi-step task flow. It’s important that the result on top is the one you’re probably looking for.

Unfortunately for Google – and the other search providers – this expectation stays in place even when the context shifts. When we launch a search from our desktop, we are increasingly intolerant of results that are even a little off base from our intent. Ads become the most easily identified culprit. A results set that would have seemed almost frighteningly prescient even a few years ago now seems subpar. Google has come a long way in the past 20 years, but it’s still losing ground to our expectations.

Is Busy the New Alpha?

Imagine you’ve just been introduced into a new social situation. Your brain immediately starts creating a social hierarchy. That’s what we do. We try to identify the power players. The process by which we do this is interesting. The first thing we do is look for obvious cues. In a new job, that would be titles and positions. Then, the process becomes very Bayesian – we form a base understanding of the hierarchy almost immediately and then constantly update it as we gain more knowledge. We watch power struggles and update our hierarchy based on the winners and losers. We start assigning values to the people in this particular social network and, more importantly, start assessing our place in the network and our odds of ascending in the hierarchy.
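
To make that “very Bayesian” process concrete, here’s a minimal sketch (my own illustration, not something from the studies mentioned here): start with a rough prior about who outranks whom, then revise it with Bayes’ rule each time you observe a power struggle. The “reliability” number – the assumed chance that the genuinely higher-status person wins any given struggle – is a made-up parameter, purely for illustration.

```python
# A hypothetical sketch of Bayesian updating of a social hierarchy:
# we hold a belief that Alice outranks Bob, and revise it after each
# observed "power struggle" between them.

def update(p_a_dominant: float, a_won: bool, reliability: float = 0.75) -> float:
    """Bayes' rule: revise P(Alice outranks Bob) after one contest.

    `reliability` is the assumed probability that the higher-status
    person wins any given struggle (an illustrative assumption).
    """
    p_win_if_dominant = reliability if a_won else 1 - reliability
    p_win_if_not = 1 - p_win_if_dominant
    numerator = p_win_if_dominant * p_a_dominant
    return numerator / (numerator + p_win_if_not * (1 - p_a_dominant))

# First impression from titles and positions: a weak hunch that Alice outranks Bob.
belief = 0.6
for alice_won in [True, True, False, True]:  # outcomes of observed struggles
    belief = update(belief, alice_won)
    print(f"P(Alice outranks Bob) = {belief:.2f}")
```

The point isn’t the arithmetic; it’s that each win or loss nudges the estimated hierarchy rather than rebuilding it from scratch – exactly the kind of constant updating described above.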

All of that probably makes sense to you as you read it. There’s nothing really earth-shaking or counterintuitive. But what is interesting is that the cues we use to assign standings are context-dependent. They can also change over time. What’s more, they can vary from person to person or generation to generation.

In other words, like most things, our understanding of social hierarchy is in the midst of disruption.

An understanding of hierarchy appears to be hardwired into us. A recent study found that humans can determine social standing and the accumulation of power pretty much as soon as they can walk. Toddlers as young as 17 months could identify the alphas in a group. One of the authors of the study, University of Washington psychology professor Jessica Sommerville, said that even the very young can “see that someone who is more dominant gets more stuff.” That certainly squares with our understanding of how the world works. “More stuff” has been how we’ve determined social status for hundreds of years. In sociology, it’s called conspicuous consumption, a term coined by sociologist Thorstein Veblen. And it’s a signaling strategy that has evolved in humans over our recorded history. The more stuff we had, and the less we had to do to get that stuff, the more status we had. Just over a hundred years ago, Veblen called this the Leisure Class.

But today that appears to be changing. A recent study seems to indicate that we now associate busyness with status. Here, it’s time – not stuff – that is the scarce commodity. Social status signaling is more apt to involve complaining about how we never go on a vacation than about our “summer on the continent”.

At least, this seems to be true in the U.S. The researchers also ran their study in Italy and there the situation was reversed. Italians still love their lives of leisure. The U.S. is the only developed country in the world without a single legally required paid vacation day or holiday. In Italy, every employee is entitled to at least 32 paid days off per year.

In our world of marketing – which is acutely aware of social signaling – this could create some interesting shifts in messaging. I think we’re already seeing this. Campaigns aimed at busy people seem to equate scarcity of time with success. The one thing missing in all this social scrambling – whether it be conspicuous consumption or working yourself to death – might be happiness. Last year, a study out of the University of British Columbia found a strong link between happiness and valuing time more than money.

Maybe those Italians are on to something.

Curmudgeon, Chicken Little or Cognoscenti?

Apparently I’m old and out of step. Curmudgeonly, even. And this is from people of my own generation. My previous column about the potential shallowness encouraged by social media drew a few comments that indicated I was just being a grumpy old man. One was from an old industry friend – Brett Tabke:

“The rest of the article is like out of the 70’s in that it is devoid of the reality that is the uber-me generation. The selfie is only a reflection of their inward focus.”

The other was from Monica Emrich, whom I’ve never had the pleasure of meeting:

“’Social Media Is Barely Skin-Deep.’ ho hum. History shows: when new medium hits, civilization as we know it is over.”

These comments seem to be telling me, “Relax. You just don’t understand because you’re too old. Everything will be great.” And, if that’s true, I’d be okay with that. I’m more than willing to be proven a doddering old fool if it means technology is ushering us into a new era of human greatness.

But what if this time is different? What if Monica’s facetious comment actually nailed it? Maybe civilization as we know it will be over. The important part of this is “as we know it.” Every technological disruption unleashes a wave of creative destruction that pushes civilization in a new direction. We seem to blindly assume it will always go in the right direction. And it is true that technology has generally elevated the human race. But not uniformly – and not consistently. What if this shift is different? What if we become less than what we were? It can happen. Brexit – Xenophobia – Trump – Populism, all these things are surfing on the tides of new technology.

Here’s the problem. There are some aspects of technology that we’ve never had to deal with before – at least, not at this scale. One of these aspects (other aspects will no doubt be the topic of a future Media Insider) is that technology is now immersive and ubiquitous. It creates an alternate reality for us, and it has done so in a few short decades. Why is this dangerous? It’s dangerous because evolution has not equipped us to deal with this new reality. In the past, when there has been a shift in our physical reality, it has taken place over several generations. Natural selection had the time to reshape the human genome to survive and eventually thrive in the new reality. Along the way, we acquired checks and balances that allowed us to deal with the potentially negative impacts of the environment.

But our new reality is different. It has happened in the space of a single generation. There is no way we could have acquired natural defenses against it. We are operating in an environment we have never been tested for. The consequences are yet to be discovered.

Now, your response might be to say, “Yes, evolution doesn’t move this quickly, but our brains can. They are elastic and malleable.” This is true, but there’s a big “but” hidden in this approach. Our brains rewire themselves to better match their environment. This is one of the things humans excel at. But this rewiring happens on top of a primitive platform with some built-in limitations. The assumption is that a better match with our environment provides a better chance for survival of the species.

But what if technology is throwing us a curve ball in this case? No matter what the environment we have adapted to, there has been one constant: The history of humans depends on our success in living together. We have evolved to be social animals but that evolution is predicated on the assumption that our socializing would take place face-to-face. Technology is artificially decoupling our social interactions from the very definition of society that we have evolved to be able to handle. A recent Wharton interview with Eden Collinsworth sounds the same alarm bells.

“The frontal lobes, which are the part of the brain that puts things in perspective and allows you to be empathetic, are constantly evolving. But it is less likely to evolve and develop those skills if you are in front of a screen. In other words, those skills come into play when you have a face-to-face interaction with someone. You can observe facial gestures. You can hear the intonation of a voice. You’re more likely to behave moderately in that exchange, unless it’s a just a knock-down, drag-out fight.”

Collinsworth’s premise – which is covered in her new book, Behaving Badly – is that this artificial reality is changing our concepts of morality and ethics. She reminds us the two are interlinked, but they are not the same thing. Morality is our own personal code of conduct. Ethics are a shared code that society depends on to instill a general sense of fairness. Collinsworth believes both are largely learned from the context of our culture. And she worries that a culture that is decoupled from the physical reality we have evolved to operate in may have dire consequences.

The fact is that if our morality and ethics are intended to keep us socially more cohesive, this works best in a face-to-face context. In an extreme example of this, Lt. Col. Dave Grossman, a former paratrooper and professor of psychology at West Point, showed how our resistance to killing another human in combat is inversely related to our physical distance from them. The closer we are to them, the more resistant we are to the idea of killing them. This makes sense in an evolutionary environment where all combat was hand-to-hand. But today, the killer could be in a drone flight control center thousands of miles from his or her intended target.

This evolved constraint on unethical behavior – the social check and balance of being physically close to the people we’re engaging with – is important. And while the two examples I’ve cited – the self-absorbed behavior on social networks and the moral landscape of a drone strike operator – may seem magnitudes apart in terms of culpability, the underlying neural machinery is related. What we believe is right and wrong is determined by a moral compass set to the bearings of our environment. The fundamental workings of that compass assumed we would be face-to-face with the people we have to deal with. But thanks to technology, that’s no longer the case.

Maybe Brett and Monica are right. Maybe I’m just being alarmist. But if not, we’d better start paying more attention. Because civilization “as we know it” may be ending.

Live, From Inside the Gale of Creative Destruction

Talk about cognitive dissonance…

First, MediaPost’s Jack Loechner writes about a Forrester report, The End of Advertising as We Know It, which was published earlier this year. Seeing as last week I started ringing the death knell for advertising agencies, I thought I should check the report out.

Problem One: The report was only available on Forrester if I was willing to plunk down $499. American. Which is – I don’t know – about 14 zillion Canadian. Much as I love and respect you, my readers, there’s no friggin’ way that’s going to happen. So, I go to Google to see if I can find a free source to get the highlights.

Problem Two: Everyone and Sergio Zyman’s dog has apparently decided to write a book or white paper entitled “The End of Advertising as We Know It.” Where to begin researching the end? Well, here’s one deliciously ironic option – one of those white papers was published by none other than WPP. You know I have to check that out! As it turns out – no surprise here – it’s a sales pitch for the leading-edge cool stuff that one of WPP’s agencies, AKQA, can do for you. I tried to sift through the dense text but gave up after continually bumping into buzz-laden phrases like “365 ideas”, “Business Invention” and “People Stories.” I return to the search results page and follow a Forbes link that looks more promising.

Problem Three: Yep! This is it. It’s Forbes’ summation of the Forrester report. I start reading and learn that the biggest problem with advertising is that we hate to be interrupted by advertising. Well, I could have told you that. Oh – wait – I did (for free, I might add). But here’s the cognitively dissonant part. As I’m trying to read the article, an autoplay video ad keeps playing on the Forbes page, interrupting me. And you know what? I hated it! The report was right. At least, I think it was, as I stopped reading the article.

I’m guessing you’re going through something similar right now. As you’re trying to glean my pearls of wisdom, you’re tiptoeing around advertising on the page. That’s not MediaPost’s fault. They have a business to run, and right now there’s no viable business model other than interruptive advertising to keep the lights on. So you have the uniquely dissonant experience of reading about the end of advertising while being subjected to advertising.

My experience – which is hardly unique – is a painful reminder about the inconvenient truth of innovative disruption: it’s messy in the middle of it. When Joseph Schumpeter called it a “gale of creative destruction” it made it sound revolutionary and noble in the way that the Ride of the Valkyries or the Starks retaking Winterfell is noble. But this stuff gets messy, especially if you’re trying to hang on to the things being destroyed when the gale hits in full force.

Here’s the problem, in a nutshell. The tension goes back to a comment made back in 1984 by Stewart Brand to Steve Wozniak:

“On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.”

In publishing, we not only have the value of the information itself, but we have the cost of wrapping insight around that information. Forrester’s business is industry analysis. Someone has to do the analyzing and there are costs associated with that. So they charge $499 for a report on the end of advertising.

Which brings us to the second part of the tension. Because so much information is now free – and Google gives me, the information consumer, the expectation that I can find it for free, or at least highlights of it for free – I expect all information to be free. I believe I have an alternative to paying Forrester. In today’s age, information tends to seep through the cracks in paywalls, as it did when Forbes and MediaPost published articles on the report. Forrester is okay with that, because it hopes the coverage will make more of the people willing to pay $499 aware of the report.

For their part, Forbes – or MediaPost – relies on advertising to keep the information available to you for free, matching our expectations. But they have their own expenses. Whether we like it or not, interruptive advertising is the only option currently available to them.

So there we have it, a very shaky house of cards built on a rapidly crumbling foundation. Welcome to the Edge of Chaos. A new model will be created from this destruction. That is inevitable. But in the meantime, there’s going to be a lot of pain and WTF moments. Just like the one I had this week.

Social Media is Barely Skin Deep

Here’s a troubling fact. According to a study from the Georgia Institute of Technology, half of all selfies taken have one purpose: to show how good the subject looks. They are intended to show the world how attractive we are: our makeup, our clothes, our shoes, our lips, our hair. The category accounts for more selfies than all other categories combined. More than selfies taken with people or pets we love, more than us doing the things we love, more than being in the places we love, more than eating the food we love. It appears that the one thing we love the most is ourselves. The selfies have spoken.

In this study, the authors reference a 1956 work from sociologist Erving Goffman – The Presentation of Self in Everyday Life. Goffman took Shakespeare’s line – “All the world’s a stage, and all the men and women merely players” – quite literally. His theory was that we are all playing the part of the person we want to be perceived as. Our lives are divided into two parts – the “front,” when we’re on stage and playing our part, and the “back,” when we prepare for our role. The roles we play depend on the context we’re in.

Goffman’s theory introduces an interesting variable into consideration. The way we play these roles and the importance we place on them will vary with the individual. For some of us, it will be all about the role and less about the actual person who inhabits that role. These people are obsessed with how they are perceived by others. They’re the ones snapping selfies to show the world just how marvelous they look.

For others, they care little about what the world thinks of them. They are internally centered and are focused on living their lives, rather than acting their way through their lives for the entertainment of – and validation from – others. In between the two extremes is the ubiquitous bell curve of normal distribution. Most of us live somewhere on that curve.

Goffman’s theory was created specifically to provide insight into face-to-face encounters. Technology has again thrown a gigantic wrinkle into things – and that wrinkle may explain why we keep taking those narcissistic selfies.

Humans are pretty damned good at judging authenticity in a face-to-face setting. We pick up subtle cues from across a wide swath of interpersonal communication channels: vocal intonations, body language, eye-to-eye contact, micro-expressions. Together, these inputs give us a pretty accurate “bullshit detector.” If someone comes across as an inauthentic “phony,” the majority of us will just roll our eyes and simply start avoiding that person. In face-to-face encounters there is a social feedback mechanism that keeps the “actors” amongst us at least somewhat honest in order to remain part of the social network that forms their audience.

But social media platforms provide the ideal incubator for inauthentic presentation of our own personas. There are three factors in particular that allow shallow “actors” to flourish – even to the point of going viral.

False Intimacy and Social Distance

In his blog on Psychology Today, counselor Michael Formica talks about two of these factors – social distance and false intimacy. I’ve talked about false intimacy before in another context – the “labelability” of celebrities. Social media removes the transactional costs of retaining a relationship. This has the unfortunate side effect of screwing up the brain’s natural defenses against inauthentic relationships. When we’re physically close to a person, there are no filters for the bad stuff. We get it all. Our brains have evolved to do a cost/benefit analysis of each relationship we have and decide whether it’s worth the effort to maintain it. This works well when we depend on physically proximate relationships for our own well-being.

But social media introduces a whole new context for maintaining social relationships. When the transactional costs are reduced to scanning a newsfeed and hitting the “Like” button, the brain says, “What the hell, let’s add them to our mental friends list. It’s not costing me anything.” In evolutionary terms, intimacy is the highest status we can give to a relationship, and it typically only comes with a thorough understanding of the good and the bad involved in that relationship – gained by being close to the person, both physically and figuratively. With zero relational friction, we’re more apt to afford intimacy, whether or not it’s been earned.

The Illusion of Acceptance

The previous two factors perfectly set the “stage” for false personas to flourish, but it’s the third factor that allows them to go viral. Every actor craves acceptance from his or her audience. Social exclusion is the worst fate imaginable for them. In a face-to-face world, our mental cost/benefit algorithm quickly weeds out false relationships that are not worth the investment of our social resources. But that’s not true online. If it costs us nothing, we may be rolling our eyes – safely removed behind our screen – as we’re also hitting the “Like” button. And shallow people are quite content with shallow forms of acceptance. A Facebook like is more than sufficient to encourage them to continue their act. To make it even more seductive, social acceptance is now measurable – there are hard numbers assigned to popularity.

This is pure catnip to the socially needy. Their need to craft a popular – but entirely inauthentic – persona goes into overdrive. Their lives are not lived so much as manufactured to create a veneer just thick enough to capture a quick click of approval. Increasingly, they retreat to an online world that follows the script they’ve written for themselves.

Suddenly it makes sense why we keep taking all those selfies of ourselves. When all the world’s a stage, you need a good head shot.

The Medium is the Message, Mr. President

Every day that Barack Obama was in the White House, he read 10 letters. Why letters? Because form matters. There’s still something about a letter. It’s so intimate. It uses a tactile medium. Emotions seem to flow more easily through cursive loops and the sound of pen on paper. Letters balance between raw and reflective. As such, they may be an unusually honest glimpse into the soul of the writer. Obama seemed to get that. There was an entire team of hundreds of people at the White House that reviewed 10,000 letters a day and chose the 10 that made it to Obama, but the intent was to give an unfiltered snapshot of the nation at any given time. It was a mosaic of personal stories that – together – created a much bigger narrative.

Donald Trump doesn’t read letters. He doesn’t read much of anything. The daily presidential briefing has been dumbed down to media more fitting of the President’s 140-character attention span. Trump likes to be briefed with pictures and videos. His information medium of choice? Cable TV. He has turned Twitter into his official policy platform.

Today, technology has exponentially multiplied the number of communication media available to us. And in that multiplicity, Marshall McLuhan’s 50-year-old trope about the medium being the message seems truer than ever. The channels we choose – whether we’re on the sending or receiving end – carry their own inherent message. They say who we are, what we value, how we think. They intertwine with the message, determining how it will be interpreted.

I’m sad that letter writing is a dying art, but I’m also contributing to its demise. It’s been years since I’ve written a letter. I do write this column, which is another medium. But even here I’m mislabeling it. Technically, this is a blog post. A column is a concept embedded in the medium of print – with its accompanying physical restriction of column inches. But I like to call it a column, because in my mind that carries its own message. A column comes with an implicit promise between you – the readers – and myself, the author. Columns are meant to be regularly recurring statements of opinion. I have to respect the fact that I remain accountable for this Tuesday slot that MediaPost has graciously given me. Week after week, I try to present something that I hope you’ll find interesting and useful enough to keep reading. I feel I owe that to you. To me, a “post” feels more ethereal – with less of an ongoing commitment between author and reader. It’s more akin to drive-by writing.

So that brings me to one of the most interesting things about letters and President Obama’s respect for them. They are meant to be a thoughtful medium between two people. The thoughts captured within are important enough to the writer that they’re put in print but they are intended just for the recipient. They are one of the most effective media ever created to ask for empathetic understanding from one person in particular. And that’s how Obama’s Office of Presidential Correspondence treated them. Each letter represented a person who felt strongly enough about something that they wanted to share it with the President personally. Obama used to read his ten letters at the end of the day, when he had time to digest and reflect. He often made notations in the margins asking pointed questions of his staff or requesting more investigation into the circumstances chronicled in a letter. He chose to set aside a good portion of each day to read letters because he believed in the message carried by the medium: Individuals – no matter who they are – deserve to be heard.

The Death of Sears and the Edge of Chaos

So, here’s the question: Could Sears – the retail giant that has become the poster child for the death of mall-based retail shopping – have saved itself? It’s an important question, because I don’t think Sears is an isolated case.

In 2006, historian Richard Longstreth explored the rise and fall of Sears. The rise is well chronicled. From their beginnings in 1886, Richard Sears and Alvah Roebuck grew to dominate the catalog mail-order landscape. They prospered by creating a new way of shopping that catered specifically to the rural market of America, a rapidly expanding opportunity created by the Homestead Act of 1862. The spread of railroads across the continent through the 1860s and ’70s allowed Sears to distribute physical goods across the nation. This, combined with their quality guarantee and free return policy, allowed Sears to rapidly grow to a position of dominance.

In the 1920s and ’30s, Robert E. Wood, the fourth president of Sears, took the company in a new direction. He reimagined the concept of a physical retail store, convincing the reluctant company to expand from its very lucrative catalog business. This direction was driven by Sears’ foundation as a mail-order business. In essence, Wood was hedging his bet. He built his stores far from downtown business centers, where land was cheap. And, if they failed as retail destinations, they could always be repurposed as mail-order distribution and fulfillment centers. But Wood got lucky. Just about the time he made this call, America fell in love with the automobile. People didn’t mind driving a little to get to a store where they could save some money. This was followed by the suburbanization of America. When America moved to the suburbs, Sears was already there.

So, you could say Sears was amazingly smart with its strategy, presciently predicting two massive disruptions in the history of consumerism in America. Or you could also say that Sears got lucky and the market happened to reward them – twice. In the language of evolution, two fortuitous mutations of Sears led to them being naturally selected by the marketplace. But, as Longstreth showed, their luck ran out on the third disruption, the move to online shopping.

A recent article looking back at Longstreth’s paper is titled “Could Sears Have Avoided Becoming Obsolete?”

I believe the answer is no. The article points to one critical strategic flaw as the reason for Sears’ irrelevance: doubling down on their mall-anchor strategy as the world stopped going to malls. In hindsight, this seems correct, but the fact is, it was no longer in Sears’ DNA to pivot into new retail opportunities. They couldn’t have jumped on the e-commerce bandwagon, just as a whale can’t learn how to fly. It’s easy for historians to cast a gaze backwards and find reasons for organizational failure, just as it’s easy to ascribe past business success to a brilliant strategy or a visionary CEO. But the fact is, as business academic Phil Rosenzweig shows in his masterful book The Halo Effect, we’re just trying to jam history into a satisfying narrative. And narratives crave cause and effect. We look for mistakes that lead to obsolescence. This gives us the illusion that we could avoid the same fate, if only we were smarter. But it’s not that simple. There are bigger forces at play here. And they can be found at the Edge of Chaos.

Edge of Chaos Theory

In his book, Complexity: Life at the Edge of Chaos, Roger Lewin chronicles the growth of the Santa Fe Institute, an academic think tank that has been dedicated to exploring complexity for the last 33 years now. But the “big idea” in Lewin’s book is the Edge of Chaos Theory, a term coined by mathematician Doyne Farmer to describe a discovery by computer scientist Christopher Langton.

The theory, in its simplest form, is this: On one side you have chaos, where there is just too much dynamic activity and instability for anything sustainable to emerge. On the other side you have order, where rules and processes are locked in and things become frozen solid. These are two very different states that can apply to biology, sociology, chemistry, physics, economics – pretty much any field you can think of.

To go from one state to the other – in either direction – is a phase transition. Everything changes when you move from one to the other. On one side, turmoil crushes survivability. On the other, inertia smothers change. But in between there is a razor-thin interface, balanced precariously on the edge of chaos. Theorists believe it’s in this delicate interface where life forms, where creativity happens and where new orders are born.
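
If you want to see that interface rather than just read about it, here’s a minimal sketch (my own illustration – Langton’s original work was done with cellular automata, though not necessarily these particular rules): three elementary cellular automaton rules that land on different sides of the divide. Rule 250 freezes into rigid order, Rule 30 dissolves into apparent randomness, and Rule 110 – often cited as an edge-of-chaos rule – produces long-lived, interacting structures.

```python
# A sketch of order, chaos and the edge of chaos using elementary
# cellular automata. Each rule number encodes how a cell's next state
# depends on itself and its two neighbours (standard Wolfram encoding).

def step(cells, rule):
    """Apply one update of an elementary cellular automaton rule."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=64, steps=32):
    """Evolve from a single live cell and print each generation."""
    cells = [0] * width
    cells[width // 2] = 1
    print(f"\nRule {rule}:")
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

run(250)  # order: a rigid, repetitive pattern
run(30)   # chaos: structureless-looking noise
run(110)  # edge of chaos: persistent, interacting structures
```

The interesting behavior – the stuff that looks almost alive – shows up only in that narrow middle band, which is exactly the point the theorists are making.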

For any single player, it’s almost impossible to maintain this delicate balance. As organizations grow, I think they naturally move from chaos to order, at some point moving through this exceptional interface where the magic happens. Some companies manage to move through this space a few times. Apple is such a company. Sears probably moved through the space twice: once in setting up its mail-order business and once with its move to suburban retail. But sooner or later, organizations go through their typical life cycle and inevitably choose order over chaos. At this point, their DNA solidifies to the point where they can no longer rediscover the delicate interface between the two.

It’s at the market level where we truly see the Edge of Chaos theory play out. The theory contends that adaptive systems with feedback continually adapt toward the edge of chaos. But, as in any balancing act, it’s a very dynamic process. In the case of sociological evolution, it’s often a force (or convergence of forces) of technology that catalyzes the phase transition from order back to chaos. This is especially true when we look at markets. Here we see an oscillation between order and chaos, with the market switching from phases of consolidation and verticalization to phases of chaos and sweeping horizontal activation. Markets will swing back and forth, but they will constantly reward the winners that live closest to the edge between the two states.

We all love to believe that immortality can be captured in our corporate form, whether it be our company or our own body. But history shows that we all have a natural life cycle. We may be lucky enough to extend our duration in that interface on the edge of chaos, but sooner or later our time there will end. Just as it did with Sears.