The Case for Bringing Marketing In-House

“We were just writing a lot of checks to agencies, but digital marketing is now in our brand DNA.”
Blake Cahill – Philips Global Head of Digital

When we talk about disruptions in marketing, one of the elephants in the room is the increasing demand to bring marketing in-house. Companies like Philips are bringing more and more marketing functions in-house. Coming from an ex-agency guy, this will sound either blasphemous or disingenuous, but I suspect it might be the right way to go. I’ll tell you why. It has a lot to do with the evolution of strategy.

In the past, we did two things when we planned strategy: we planned in relatively straight lines, and we planned over long time frames – a minimum of five years was not unusual.

Here’s how it would play out. Executives would go through their strategic planning exercise, which may or may not have included input from internal and external marketers. Strategic plans would be formed and then broken down into departmental directives. Department heads – including marketing – would then execute against the plan, with periodic progress reviews scheduled. The entire loop, from input to executional plans, could easily span several months or even a year or more.

The extended timeline is just one of the issues with this approach to strategy. The other problem is that it assumes strategic planning is something only executives can do. The strategic frame is set only at the highest levels of the organization. And it’s the executives’ prerogative to either consider or completely ignore any input from their direct reports. Even if they do consider it, this feedback is likely several steps removed from the source – namely, the market.

I’ve written before about the concept of Bayesian Strategy. There are three basic foundations to this approach:

  • Strategic planning is a continuous and iterative process
  • Strategic plans are nothing more than hypotheses that are then subject to validation through empirical data
  • The span of the loop between the setting of the strategic frame and the data that validates it should be kept as short as possible.

With Bayesian strategy, the corporation needs to maintain a number of acutely aware “sensing” interfaces that provide constant data about the corporation’s current “reality”:

  • The internal “reality” – especially in more qualitative areas that might fall outside typical KPIs, like morale, satisfaction, communication effectiveness, etc.
  • The external “reality” – What’s happening in the market? What are customers’ perceptions? What is the competition doing?

These “sensing” interfaces create the frame for the organization. As such, they’re integral to the setting and updating of strategy. Just as our brain depends on our senses to define our sense of what’s real, the organization depends on these interfaces. And when it comes to the “external” reality, no department is in a better position to make sense of the world than marketing. The distance between marketing and the management of the company should be as short as possible. This is very difficult to achieve when you rely on external partners for that marketing.
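To make the loop concrete, here’s a minimal sketch of strategy-as-hypothesis in Python. The Beta-Bernoulli framing, the numbers and the 0.45 trigger are all hypothetical, chosen only to show the cadence of the loop, not to model any real planning process:

```python
# Strategy as hypothesis: a Beta-Bernoulli belief that "the strategy is
# working," updated as each batch of market sensing data arrives.

def update_belief(alpha, beta, wins, losses):
    """Bayesian update: prior Beta(alpha, beta) plus observed wins/losses
    yields posterior Beta(alpha + wins, beta + losses)."""
    return alpha + wins, beta + losses

# A weak prior: leadership thinks the strategy is slightly more likely
# than not to be working.
alpha, beta = 3.0, 2.0

# Each tuple is one short sensing cycle: (purchase decisions won, lost).
sensing_cycles = [(12, 8), (9, 11), (4, 16)]

for cycle, (wins, losses) in enumerate(sensing_cycles, start=1):
    alpha, beta = update_belief(alpha, beta, wins, losses)
    confidence = alpha / (alpha + beta)  # posterior mean
    print(f"cycle {cycle}: P(strategy is working) = {confidence:.2f}")
    if confidence < 0.45:
        print("  -> revisit the strategic frame now, not at next year's retreat")
```

The arithmetic isn’t the point; the cadence is. The belief gets revisited every sensing cycle, not once a year at the planning retreat.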

When a company like Philips brings marketing in-house, it’s more than just a cost-saving or consolidation effort. It’s bringing the function of marketing as close as possible to the core brand. It’s not only giving it a seat at the strategic table but making it one of the key contributors to that strategy. Like I said, it’s a move that makes a lot of sense.

But there is another side to this story, and that has to do with perspective. I’ll look at the flip side of this argument next Tuesday.


Is Amazon Creating a Personalized Store?

There was a brief Amazon-related flurry of speculation last week. According to a podcast posted by Wharton, Amazon is planning to open 300 to 400 brick-and-mortar stores.

That’s right. Stores – actual buildings – with stuff in them.

What’s more, this has been “on the books” at Amazon for a while. Amazon CEO Jeff Bezos was asked by Charlie Rose in 2012 if they would ever open physical stores. “We would love to, but only if we can have a truly differentiated idea,” Bezos replied. “We want to do something that is uniquely Amazon. We haven’t found it yet, but if we can find that idea … we would love to open physical stores.”

With that background, the speculation makes sense. If Amazon is pulling the trigger, they must have “found the idea.” So what might that idea be?

Amazon does have a test store in its own backyard of Seattle. What it has chosen to do there, in a footprint about a tenth the size of the former Barnes & Noble store that was there, is present a “highly curated” store that caters to “local interests.”

Most of the speculation about Amazon’s new experiment in “back-to-the-future” retail centers on potential new supply chain management technology or payment methods. But one quote from Amanda Nicholson, professor of retail practice at Syracuse University’s Whitman School of Management, caught my attention: she said the space represents “a test” to see if Amazon can create “a new kind of experience” using data analytics about customers’ preferences.

This becomes interesting if we spend some time thinking about the purchase journey we typically take. What Amazon has done brilliantly online is remove friction from two steps in that journey: filtering options and conducting the actual transaction. For certain kinds of purchases, this is all we need. If we’re buying a product that doesn’t rely on tactile feedback, like a digital file or a book, Amazon has connected all the dots required to take us from awareness to purchase.

But that certainly doesn’t represent all potential purchases. That could be why online purchases represent only about 9% of all retail. There are many products that require an “experience” between the filtering of available options and the actual purchase. These things still require the human “touch” – literally. Up to now, Amazon has remained emotionally distant from these types of purchases. But perhaps a new type of retail location could change that.

Let me give you an example. If you’re a cyclist (like me), you probably have a favorite bike shop. Bike shops are not simply retail outlets. They are temples of bike worship. They are usually independent businesses run by people who love to talk about their favorite rides, the latest bikes or pretty much anything to do with cycling. Going to a bike shop is an experience.

But Trek, one of the largest bike manufacturers in the world, also recognized the efficiency of the online model. In 2015, they announced the introduction of Trek Connect, their attempt to find a happy middle ground between practical efficiency and emotional experience. Through Trek Connect, you can configure and order your bike online, but pick it up and have it serviced at your local bike shop.

However, what Amazon may be proposing is not simply about the tactile requirements of certain types of purchases. What if Amazon could create a personalized real world shopping experience?

Right now, there is a gap between our online research and filtering activity and our real-world experiential activity. Typically, we shortlist our candidates, gather the required information – often in the form of a page printed off from a website – and head down to the nearest retail location. There, the hand-off typically leaves a lot to be desired. We have to navigate a store layout that was certainly not designed with our immediate needs in mind. We have to explain what we want to a floor clerk who seems to have at least a thousand other things they’d rather be doing. And there’s no guarantee that what we’re looking for will even be in stock.

But what if Amazon could make the transition seamless? What if they could pick up all the signals from our online activity and create a physical “experiential bubble” for us when we visited the nearest Amazon retail outlet?

Let me go back to my bike-purchasing analogy by way of example. Let’s say I need a new bike because I’m taking up triathlons. Amazon knows this because my online activity has flagged me as an aspiring triathlete. It knows where I live, and it has a rich data set on my other interests, including my favored travel destinations. Amazon could take this data and, under the pretext of my picking up my bike, create a personalized in-store experience for me, including a rich selection of potential add-on sales. With Amazon’s inventory and fulfillment prowess, it would be possible to merchandise a store especially for me.

I have no idea if this is what Amazon has “in store” for the future, but the possibility is tantalizing.

It may even make me like shopping.


A New Definition of Order

The first time you see the University of Texas at Austin’s AIM traffic management simulator in action, you can’t believe it would work. It shows the intersection of two heavily trafficked 12-lane roads. There are no traffic lights, no stop signs, none of the traffic control systems we’re familiar with. Yet traffic zips through with an efficiency that’s astounding. It appears to be total chaos, but no car has to wait more than a few seconds to get through the intersection, and there’s nary a collision in sight. Not even a minor fender bender.

Oh, one more thing. The model depends on there being no humans to screw things up. All the vehicles are driverless. In fact, if just one of the vehicles had a human behind the wheel, the whole system would slow dramatically. The probability of an accident would also soar.

The thing about the simulation is that there is no order – or, at least, no order that is apparent to the human eye. The programmers at UT Austin seem to recognize this with a tongue-in-cheek nod to our need for rationality: this particular video clip is called “insanity.” There are other simulation videos available at the project’s website, including ones where humans drive cars at intersections controlled by stoplights. These seem much saner and more controlled. They’re also much less efficient. And likely more dangerous. No simulation that includes a human factor comes even close to matching the efficiency of the 100% autonomous option.

The AIM simulation is complex, but it isn’t complicated. It’s actually quite simple. As cars approach the intersection, they signal to a central “manager” whether they want to turn or go straight ahead. The manager predicts whether the vehicle’s path will intersect another vehicle’s predicted path. If it does, it delays the vehicle slightly until the path is clear. That’s it.
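That rule is simple enough to sketch in a few lines of code. Here’s a toy reservation manager in the spirit of what AIM describes – the cell names, tick lengths and paths are hypothetical simplifications, not the project’s actual protocol:

```python
# Toy intersection manager: each vehicle asks to reserve the space-time
# cells its path will occupy; the manager grants the earliest
# conflict-free start time, delaying the vehicle if necessary.

reserved = set()  # (cell, tick) pairs already granted to earlier vehicles

def footprint(path, start_tick):
    """The space-time cells a path occupies, one cell per tick."""
    return {(cell, start_tick + i) for i, cell in enumerate(path)}

def request(path, desired_tick, max_delay=20):
    """Return the earliest start tick with no cell conflicts."""
    for delay in range(max_delay + 1):
        cells = footprint(path, desired_tick + delay)
        if not cells & reserved:  # no clash with any prior reservation
            reserved.update(cells)
            return desired_tick + delay
    return None  # nothing fits; the vehicle slows down and asks again

# Two crossing paths that both want the shared middle cell "C" at tick 1:
print(request(["A", "C", "E"], desired_tick=0))  # -> 0 (grid is empty)
print(request(["B", "C", "D"], desired_tick=0))  # -> 1 (delayed one tick)
```

Everything else – the apparent chaos, the ballet of near misses – falls out of running that one check for hundreds of vehicles at once.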

The complexity comes in trying to coordinate hundreds of these paths at any given moment. The advantage the automated solution has is that it is in communication with all the vehicles. What appears chaotic to us is actually highly connected and coordinated. It’s fluid and organic. It has a lot in common with things like beehives, ant colonies and even the rhythms of our own bodies. It may not be orderly in our rational sense, but it is natural.

Humans don’t deal very well with complexity. We can’t keep track of more than a dozen or so variables at any one time. We categorize and “chunk” data into easily managed sets that don’t overwhelm our working memory. We try to simplify things by imposing order. We use heuristics when things get too complex. We make gut calls and guesses. Most of the time it works pretty well, but the system gets bogged down quickly. If we pulled the family SUV into the intersection shown in the AIM simulation, we’d probably jam on the brakes and have a minor mental meltdown as driverless cars zipped by us.

Artificial intelligence, on the other hand, loves complexity. It can juggle amounts of disparate data that humans could never dream of managing. This is not to say that computers are more powerful than humans. It’s just that they’re better at different things. It’s referred to as Moravec’s Paradox: It’s relatively easy to program a computer to do what a human finds hard, but it’s really difficult to get it to do what humans find easy. Tracking the trajectories and coordinating the flow of hundreds of autonomous cars would fall into the first category. Understanding emotions would fall into the second category.

This matters because, increasingly, technology is creating a world that is more dynamic, fluid and organic. Order, from our human perspective, will yield to efficiency. And the fact is that – in data-rich environments – machines will be much better at this than humans. Just like our perspectives on driving, our notions of order and efficiency will have to change.


Giving Thanks for The Law of Accelerating Returns

For the past few months, I’ve been diving into the world of show programming again, helping MediaPost put together the upcoming Email Insider Summit in Park City. One of the keynotes for the Summit, delivered by Charles W. Swift, VP of Strategy and Marketing Operations for Hearst Magazines, will tackle a big question: how do companies keep up with the ever-accelerating rate of change in our culture?

After an initial call with Swift, I did some homework and reacquainted myself with Ray Kurzweil’s Law of Accelerating Returns. Shortly after, I had to stop because my brain hurt. Now, I would like to pass that unique experience along to you.

In an interview that is now 12 years old, Kurzweil explained the concept using biological evolution as an analogy. I’ll try to make this fast. Earth is about 4.6 billion years old. The very first life appeared about 3.8 billion years ago. It took another 1.7 billion years for multicellular life to appear. Then, about 1.2 billion years later, we had something called the Cambrian Explosion. This was really when the diversity of life we recognize today started. If you’ve been keeping track, you know that it took the earth 4.1 of its 4.6-billion-year history, or about 90% of the time since the earth was formed, to produce complex life forms of any kind.

Things started to move much quicker at that point. Amphibians and reptiles appeared about 350 million years ago, dinosaurs appeared 225 million years ago, mammals 200 million years ago, dinosaurs disappeared about 70 million years ago, the first great apes appeared about 15 million years ago, and we homo sapiens have only been around for 200,000 years or so. And, as a species, we have really only made much of a dent in the world in the last 10,000 years of our history. In the entire history of the world, that represents a very tiny 0.00022% slice. But consider how much the world has changed in those 10,000 years.

Accelerating Returns

Kurzweil’s Law says that, like biology, technology also evolves exponentially. It took us a very long time to do much of anything at all. The wheel, stone tools and fire took us tens of thousands of years to figure out. But now, technological paradigm shifts happen in decades or less. And the pace keeps accelerating. The Law of Accelerating Returns states that in the first 20 years of the 21st century, we’ll have progressed as much as we did during the entire 20th century. Then we’ll double that progress again by 2034, and double it once more by 2041.

Let me put this in perspective. At this rate, if my youngest daughter – born in 1995 – lives to be 100 (not an unlikely forecast), she will see more technological change in her life than in the previous 20,000 years of human history!
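If you want to see why the math makes your brain hurt, here’s a back-of-the-envelope version in Python. It assumes Kurzweil’s canonical figure that the rate of progress doubles roughly every decade – the doubling period is the assumption to argue with – and counts progress in “year-2000-equivalent years”:

```python
from math import exp, log

def equivalent_years(start, end, base_year=2000, doubling_period=10.0):
    """Integrate 2**((t - base_year) / doubling_period) dt from start to end:
    cumulative progress measured in years-of-progress at the base-year rate."""
    k = log(2) / doubling_period
    return (exp(k * (end - base_year)) - exp(k * (start - base_year))) / k

print(round(equivalent_years(1995, 2095)))  # a 1995-2095 lifespan: ~10,400
print(round(equivalent_years(1900, 2000)))  # the entire 20th century: ~14
```

By that yardstick, the whole 20th century delivered about 14 year-2000-equivalent years of progress, while a 1995–2095 lifespan delivers roughly ten thousand – the same order of magnitude as the claim above.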

This is one of those things we probably don’t think about because, frankly, it’s really hard to wrap your head around. The math shows why predictability is flying out the window and why we have to get comfortable reacting to the unexpected. It would also be easy to dismiss, but Kurzweil’s concepts are sound. Evolution does accelerate exponentially, as has our rate of technological advancement. Unless the latter shows a dramatic reversal or slowdown, the future will move much, much faster than we can possibly imagine.

The reason change accelerates is that the technology we develop today builds the foundations required for the technological leaps that will happen tomorrow. Agriculture set the stage for industry. Industry enabled electricity. Electricity made digital technology possible. Digital technology enables nanotechnology. And so on. Each advancement sets the stage for the next, and we progress from stage to stage more rapidly each time.

So, for your extended long weekend, if you’re sitting in a turkey-induced tryptophan daze and there’s no game on, try wrapping your head around The Law of Accelerating Returns.

Happy Thanksgiving. You’re welcome.

Consumers in the Wild

Once a Forager, Always a Forager

Your world is a much different place than the African savanna. But the more than 100,000 generations of evolution that started on those plains still dictate a remarkable degree of our modern behavior.

Take foraging, for example. We evolved as hunters and gatherers. It was our primary survival instinct. And even though the first hominids are relatively recent additions to the biological family tree, strategies for foraging have been developing for millions and millions of years. It’s hardwired into the deepest and most inflexible parts of our brain. It makes sense, then, that foraging instincts that were once reserved for food gathering should be applied to a wide range of our activities.

That is, in fact, what Peter Pirolli and Stuart Card discovered two decades ago. When they looked at how we navigated online sources of information, they found that humans used the very same strategy we would have used for berry picking or gathering cassava roots. And one of the critical elements of this was something called Marginal Value.

Bounded Rationality & Foraging

It’s hard work being a forager. Most of your day – and energy – is spent looking for something to eat. The sparser the food sources in your environment, the more time you spend looking for them. It’s not surprising, therefore, that we should have some fairly well-honed calculations for assessing the quality of our food sources. This is what biologist Eric Charnov called Marginal Value in 1976. It’s an instinctual (and therefore largely subconscious) evaluation of food “patches” performed by most types of foragers, humans included. It’s how our brain decides whether we should stay where we are or find another patch. It would have been a very big deal 2 million – or even 100,000 – years ago.
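Charnov’s result has a surprisingly compact formal statement. In a standard formulation (my notation – the column doesn’t give the math): let $g(t)$ be the cumulative gain after spending time $t$ in the current patch, with diminishing returns, and let $T$ be the average travel time between patches. The optimal moment $t^{*}$ to leave satisfies

\[
g'(t^{*}) = \frac{g(t^{*})}{T + t^{*}},
\]

that is: abandon the patch when its instantaneous rate of return drops to the average rate you could earn across the whole environment, travel time included.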

Today, for most of us, food sources are decidedly less “patchy.” But old instincts die hard. So we did what humans do: we borrowed an old instinct and applied it to new situations. We exapted our foraging strategies and started using them for a wide range of activities where we needed a rough-and-ready estimate of the return on our energy investment. Increasingly, these activities asked for an investment of cognitive processing power. And we did all this without even knowing we were doing it.

This brings us to Herbert Simon’s concept of Bounded Rationality, which I believe is tied directly to Charnov’s Marginal Value theorem. When we calculate how much mental energy we’re going to expend on an information-gathering task, we subconsciously assess the promise of the information “patches” available to us. Then we decide to invest accordingly, based on our own “bounded” rationality.
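As a sketch of how the same leave-or-stay rule might cash out for information foraging – toy numbers and a made-up diminishing-returns curve, not Pirolli and Card’s actual model:

```python
# Charnov-style patch leaving applied to an information "patch" (a site,
# a report, a search-results page). All curves and numbers are illustrative.

def gain(t, scale=10.0, halflife=3.0):
    """Cumulative useful information after t minutes in one patch;
    flattens out as the patch is exhausted."""
    return scale * (1 - 0.5 ** (t / halflife))

def should_leave(t, travel_minutes=2.0, dt=0.1):
    """Leave once the marginal gain rate falls below the average rate,
    counting the cost of having reached the patch in the first place."""
    marginal = (gain(t + dt) - gain(t)) / dt
    average = gain(t) / (travel_minutes + t)
    return marginal < average

t = 0.0
while not should_leave(t):
    t += 0.1
print(f"leave this patch after ~{t:.1f} minutes")  # ~3.6 with these numbers
```

The subconscious version of this calculation runs constantly; the only thing that has changed since the savanna is what counts as a patch.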

Brands as Proxies for Foraging

It’s just this subconscious calculation that has turned the world of consumerism on its ear in the last two decades. As Itamar Simonson and Emanuel Rosen explain in their book Absolute Value, the explosion of available information means we are making different marginal value calculations than we would have thirty or forty years ago. We have much richer patches available, so we’re more likely to invest the time to explore them. And once we do, the way we evaluate our consumer choices changes completely. Our modern concept of branding was a direct result of both bounded rationality and sparse information patches: if a patch of objective and reliable information wasn’t apparent, we would rely on brands as a cognitive shortcut, saving our bounded rationality for more promising tasks.

Google, The Ultimate “Patch”

In understanding modern consumer behavior, I think we have to pay much more attention to this idea of marginal value. What is the nature of the subconscious algorithm that decides whether we’re going to forage for more information or rely on our brand beliefs? We evolved foraging strategies that play a huge part in how we behave today.

For example, the way we navigate our physical environment appears to owe much to how we used to search for food. Women determine where they’re going differently than men do because women used to search for food differently. Men tend to navigate by orientation, mentally maintaining a spatial grid against which they plot their own location. Women do it by remembering routes. In my own research, I found split-second differences in how men and women navigated websites that seem to trace back to those same foundations.

Whether you’re a man or a woman, however, you need some type of mental inventory of the information patches available to you in order to assess their marginal value. This is the mental landscape Google plays in. For more and more decisions, our marginal value calculation starts with a quick search on Google to see if any promising patches show up in the results. Our need to keep a mental inventory of patches can be delegated to Google.

It seems ironic that in our current environment, more and more of our behavior can be traced back millions of years to behaviors that evolved in a world where high-tech meant a sharper rock.

Google’s New Brand Launch – Function Driving Form

What would happen if you created an advertising agency run by engineers?

You’d have Google. That’s what.

Last week, I was on the road. I went to Google something on my smartphone and noticed the logo had changed. I thought at first it was a Doodle commemorating some famous typeface designer, so I didn’t spend too much time digging into it. But the next day, when the new Google wordmark was still there, I decided to see if this was deliberate and permanent. Sure enough, Google had quietly swapped out their brand identity. And they did it in classic Google style.

I wasn’t a fan – at first. But I was looking at it from a purely aesthetic perspective. I prefer classic serif faces. I love the elegance of the curvatures and strokes. Sans serif faces always seem to me to be trying too hard to be accessible. They’re like the puppies of the design world, constantly licking your face. Serif faces are like cats – stretching luxuriously and challenging you to love them on their terms.

But the more I thought – and read – about the branding change, the more I realized the move was driven by function over form. Google was creating a visual and iconic language with the change, driven by the realities of maintaining an identity across a fragmented landscape of platforms and contexts. One can almost imagine the requirements document put forth to the design team by the various Google engineers who decide these things: a logo that minimizes visual friction and cognitive load, scales well on all screens from nano to peta configurations (and eventually yocto to yotta), acts as a visual wayfinder no matter where you are in the Google universe, and looks just a little whimsical (the last of these being a concession to the fine arts intern who was getting lattes and Red Bull for the group).

In the last month, Google has announced a massive amount of corporate change. Any other company would have taken the opportunity to mount a publicity event roughly the size of the Summer Olympics. But Google just quietly slipped these things into their weekly to-do list. The logo dropped on a Tuesday. A Tuesday! Who the hell rebrands themselves on a Tuesday? There was no corporate push from Google other than a fairly muted blog post, but in researching this column, I found commentary on the change in pretty much every major media outlet. And they weren’t just reporting the change. They were debating it, commenting on it, engaging with it. People gave a damn, either for or against.

That’s when I realized the significance of Google’s move. Because function was driving form – because engineers were dictating to designers – the branding had to be closer to its market. The rebranding was done to make our lives easier. It wasn’t there to launch some misguided, agency-driven interpretation of an envisioned future, or to slide Google into some strategic position in the marketplace. It was done because if it wasn’t done, Google couldn’t do all the rest of the stuff it had to do. Google didn’t tell us what we should think of the move. They just did it and let us decide.

If function determines branding, then it’s living in the right place – the intersection between the market and the marketer. I’ve previously chastised Google for their lack of design thinking, but in this case, maybe they got it right. And maybe there’s a lesson there we all need to learn about the new rules of branding.

Who’s Who on the Adoption Curve

For me, the Adoption Curve of the Internet of Things is fascinating to observe. Take the PoloTech shirt from Ralph Lauren, for example. It’s a “smart shirt”: the skintight garment measures your heart rate, how deeply you’re breathing, how stable you are and a host of other key biometrics, all of which is sent to your smartphone. One will set you back a cool 300 bucks. But it’s probably not the price that will separate the adopters from the laggards in this case. With the PoloTech shirt, as with many new pieces of wearable tech, it’s likely to be your level of fitness that determines which slope of the adoption curve you’ll end up on.

If you look at the advertising for the PoloTech, it’s clear who the target is: dudes with 0.3% body fat and ridiculously sculpted torsos who live on protein drinks and four-hour workouts. Me? Not so much. The same is true, I suspect, for the vast majority of us. Unless we’re looking for a high-tech girdle to both hold back and monitor the rate of expansion of our guts, I don’t think this particular smart shirt is in the immediate future for me.

As I said, much of the current generation of wearable technology is designed to tell us just how fit we are. Logic predicts that these devices should offer the greatest benefits to those who are least fit. They, after all, have the most to gain. But that’s not who’s jumping on the adoption curve. In my world, which is recreational cycling, the ones religiously tracking a zillion metrics are the ones already on top of the statistical heap. The reason? Technology has created an open market for bragging rights. Humans are naturally competitive. We like to know how we stack up against others. But we don’t bother keeping track until we’re reasonably sure we’re well above average. So, if you log onto Strava, where many cyclists upload their tech-tracked rides, you can find out just who is the “King of the Mountain” at your local version of Alpe d’Huez.

This brings about an interesting variation on Rogers’ Technology Adoption Curve. Wearable technology often means the generation of personal data, so an appetite for that data will accelerate the adoption of the technologies that produce it. We don’t mind being quantified, as long as that quantification paints us in a good light. We want to live in Lake Wobegon, where all the women are strong, all the men are good-looking and all the children are above average.

Adoption of new technologies, according to Rogers, depends on five factors: Relative Advantage, Compatibility, Complexity, Trialability and Observability. To this, Rogers added a sixth factor – the status-conferring potential of a new innovation. Physical fitness, by its nature, begs to be quantified. Athletic ability and rankings go hand in hand. Status is literally the name of the game. There is, therefore, a natural affinity between wearable technologies that track physical performance and the status-hungry world of fitness.

This introduces some interesting patterns of adoption for new additions to the Internet of Things. Adoption will rapidly saturate certain niches of the population but may take much longer to cross the chasm to the general masses. And the defining characteristics of the early adopters could be completely different in each case. As more and more things become “smart,” the factors of adoption will become more fragmented and diverse. Early adopters of Coke’s Freestyle vending machine will have little in common with early adopters of the PoloTech shirt.

The absorption rate of technology into our lives has been increasing exponentially, seemingly in lockstep with Moore’s Law. Every day, we are introduced to more and more things with technology embedded in them. The advantages that technology offers will depend on who is judging it. For some, a given technology will be a perfect fit. For others, it will be like trying to squeeze into a high-tech shirt that makes us look like an overstuffed sausage.

Can Alphabet Spark Corporate Innovation?

As I was reading Walter Isaacson’s new book, The Innovators, which chronicles the rise of the digital revolution, something struck me. From Charles Babbage to Sergey Brin, the arc of digital innovation has gone through three very distinct stages.

At the beginning of the digital revolution, some 150 years ago, the innovator was the inventor and the gentleman scientist. They maintained and nurtured academic networks but often worked alone. The primary way they spread ideas was by publishing them in journals. If, as in the case of Charles Babbage and his Difference Engine, prototyping was required, they would find a patron and then hire the people needed to fabricate the prototype. They did this because they could: at the time, innovation was not a particularly resource-intensive endeavor.

But, as we moved into the 20th century, things changed. For the next six decades, Isaacson’s innovators tended to be found in one of three places: a university, a government-funded lab or a corporate lab. Innovators were generally cogs in much bigger machines. Why? Because the scope of innovation had changed. It had become much more resource-hungry. You needed the bulk of a Bell Labs to turn out a prototypical transistor.

One also gets the sense that many of the innovators Isaacson profiles were barely tolerated within these more corporate environments. Brilliance often comes with abrasiveness as its dance partner. Many of the forebears of the digital revolution seem to have been – not to put too fine a point on it – assholes. If you read between the lines, you get the sense that both the innovator and their place of innovation would have been immeasurably happier had their paths diverged. But, given the realities of the world at the time, they needed each other.

Starting in the Sixties, a new breed of innovator emerged – the innovative entrepreneur. Almost without exception, they started within a larger organizational context, but soon found a way to break free and build a company around their innovativeness. Gordon Moore, Robert Noyce, Bill Hewlett, David Packard, Bill Gates, Paul Allen, Steve Jobs, Steve Wozniak, Larry Page and Sergey Brin – all took a new path to innovation. Thanks to the introduction of venture capital, innovation could become the road to riches.

This all becomes more than academically interesting in the light of Google’s announced corporate re-org. Essentially, they’re trying to buck the trend of innovative evolution. Page and Brin feel that innovation can still be contained within the boundaries of a corporate structure, as long as that structure is – well – innovative enough.

In theory, their logic looks sound. The biggest complaint I hear from current Googlers is a feeling of inconsequentiality within a massive organization. Breaking the big boat into a bunch of smaller life rafts could solve that problem. If you could somehow give innovators enough room to stretch their mental muscles while supporting them with the enormous resources Google/Alphabet has at its disposal, it seems like a no-lose scenario. Essentially, Alphabet should be able to provide a steroid-powered incubator for innovation.

Yet, I remain skeptical. I suspect innovation may defy the best-laid corporate logic. You can sketch out an org-chart that seems like a stable platform for entrepreneurialism, but I think the entrepreneurs may still squeeze out through the cracks. Even if they’re not egotistical jerks, they are, by their very nature, individualistic. They defy authority. Their dreams are tough to contain. Where you see a supportive incubator, they see a restrictive cage. Corporations tend to excel at incremental innovation, but disruptive innovation comes from individuals who don’t play nice at company picnics. And that’s the type of innovation that Alphabet is betting on.

Alphabet is an interesting development in corporate structures. I hope it works. But I’m not sure you can harness entrepreneurialism because it, like information and the human spirit, yearns to be free.

Why Disruptive Change is Disruptive

There were a lot of responses to my last column, which looked at why agencies and clients have hit the point of irreconcilable differences. Many of those responses were in agreement. In fact, none were in outright disagreement. This surprised me. A lot of Online Spin readers are people who work for very big agencies. I can only conclude that you elected to show your dissent through your silence.

But there were many that fell in the “Yeah-but” category:

Tiffany Lyman Otten wrote,

“This, like anything, is a sign simply that agencies must evolve – again.”

Jill Montaigne adds,

“Yet, our own ongoing advertiser conversations confirm that rather than walking away from their traditional agency relationships, clients desperately need and want their agencies to evolve.”

David Vawter chimes in,

“As long as there is something to sell, people will be needed to create and produce the ideas that sell it.”

Agreed. But…

All of the above comments point to a new trend in the marketing ecosystem: a network of specialists, often in the form of micro-agencies, that appear to be finding niches to hang on to amid the tidal wave of change sweeping over our industry.

I used to head one of these agencies. Our specialty was user behavior with search interfaces. We did well in this niche – so well, in fact, that we were eventually acquired by a bigger agency. Bigger agencies are always vertically integrated. As such, they offer clients the one-stop-shop model. They move to that model because it is the model they know. It is the model they are programmed to create. It is an organizational form dictated by their P&L targets. There is no operational wiggle room here. They simply can’t become anything else.

Tiffany, Jill and several others all used the word “evolve,” as if it were a magical formula for survival. But evolution is like a tree: once your branch has been determined, you have to evolve outward from that branch. You can’t suddenly leap to another branch. If you’re a chimpanzee, you can’t decide one day to evolve into a budgie. You can evolve into a new type of chimpanzee, but you’re still a chimpanzee.

What does happen in evolution, however, is that the environment changes so drastically that the tree is dramatically pruned. Some branches are lopped off so that new branches can sprout. This is called punctuated equilibrium, and, as I’ve said before, it’s what I believe we’re going through right now in marketing. Yes, as David rightly notes, “As long as there is something to sell, people will be needed to create and produce the ideas that sell it.” It’s just that the form this takes may be dramatically different from what we currently know. It could be – correction – will be a marketing ecosystem dominated by new species of marketers.

We tend to equate evolution with change – but evolution is a very specific kind of change: change in response to environmental pressures. And while individual species evolve, so do entire ecosystems. In that bigger picture, some species will emerge and thrive and others will disappear. What is happening to agencies now is just a ripple effect from a much bigger environmental change – analogous to a planet-size asteroid slamming into the business and marketing ecosystem that evolved over the past two centuries.

Big agencies are the result of corporate evolution in the previous ecosystem. We are quick to take them to task for being slow, or dumb, or oblivious to client needs. And perhaps, in the new ecosystem, those things are true. But those are the characteristics of the species. No agency intends to be dumb or unresponsive. It’s just an evolutionary mismatch caused by massive disruption in the environment.

These things happen. It’s actually a good thing. Joseph Schumpeter called it Creative Destruction. But, as the name implies, it’s a zero-sum game: for something to be created, something has to be destroyed.

Why Agencies and Clients are Calling It Quits

“Love on the Rocks – ain’t no surprise.”

Neil Diamond

In yesterday’s Online Spin, Maarten Albarda signaled the imminent breakup of agencies and clients. Communication is close to zero. Fingers are being pointed. The whisper campaign has turned into outright hostility.

When relationships end, it can be because one of the parties just isn’t trying. But that isn’t the case here. I believe agencies are truly trying to patch things up. They are trying to understand their one-time life partner. They are desperately gobbling up niche shops and investing in technology in order to rekindle the flame. And the same is true, I believe, on the client side. Clients want to feel loved again by their agency of record.

I think what’s happening here is more akin to a breakup that happens because circumstances have changed and the respective parties haven’t been able to keep up. This is more like high school sweethearts looking at each other 20 years later and realizing that what once bonded them is long gone. And, if that’s true, it might be helpful to look back and see what happened.

The problem here is that the agency is a child of a marketplace that is rapidly disappearing. It is a product of the “Visible Hand” market. In his book of the same name, Alfred Chandler went to great lengths (over 600 pages) to chronicle the rise of the modern organization; the modern concept of an advertising agency was a by-product of that rise. Vertically integrated organizations came about to overcome some inherent inefficiencies in the market – notably the problem of geography and the lack of a functional marketplace network that came with rapid expansions in production and transportation capabilities. Essentially, markets grew too rapidly for Adam Smith’s “Invisible Hand” to balance effectively through market dynamics. Organizations grew to mammoth size in order to capture internal efficiencies that allowed for greater profitability. You had to be big to be competitive. Agents of all types filled the gaps that were inevitable in a rapidly expanding marketplace. Essentially, an agent bridged the gap between two or more separate nodes in a market network. Agents were the business equivalent of Mark Granovetter’s “weak ties.”

Through the 20th century, advertising agents evolved into creative houses – which is where they hit their golden period. But why was this creativity needed? Essentially, agencies evolved when advances in production and distribution technologies were no longer enough to expand markets. Suddenly, companies needed agencies to create demand in existing, identified markets through the sparking of desire. This was the final hurrah of the “Visible Hand” marketplace.

But the explosion of networking technologies and the reduction of transactional friction is turning the “visible hand” market back into the “invisible hand” market of Adam Smith – driven by the natural laws of marketplaces. The networks of the marketplace are becoming more connected than ever.

This is a highly dynamic, cyclical market. Straight-line strategic planning doesn’t work here. And straight-line strategic planning is a fundamental requirement of an agency relationship. That level of stasis is needed to overcome the inherent gaps in a third-party relationship. Even under the best of circumstances, an arm’s-length relationship can’t effectively “make sense” of the market environment and react quickly enough to maneuver in this marketplace. And, as Albarda points out, the client-agency relationship is far from healthy.

The ironic part in all of this is that what was once an agency’s strength – its position as a bridge between existing networks – has turned into its greatest vulnerability. Technology has essentially removed the gaps in the market itself, allowing clients to link directly to natural networks of customers through emerging channels that are themselves increasingly mediated by technology. Middlemen are no longer needed. Those gaps have disappeared. But the gap that has always been there, between the agent and the client, not only still exists but is widening with the breakdown of the relationship. Agencies are like bridges without a river to span.
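The bridge metaphor is literal in network terms. Here’s a throwaway sketch (Python with networkx; the six-node market is entirely hypothetical) of what happens to a broker’s betweenness centrality – roughly, the share of shortest paths that must pass through it – once the two sides of a market can connect directly:

```python
# How much does the market depend on the agent? Betweenness centrality
# of a broker node, before and after direct client-customer ties appear.

import networkx as nx

def agent_importance(direct_links):
    g = nx.Graph()
    clients = ["c1", "c2", "c3"]
    customers = ["m1", "m2", "m3"]
    g.add_edges_from(("agent", n) for n in clients + customers)  # agent bridges all
    g.add_edges_from(direct_links)  # technology-mediated direct channels
    return nx.betweenness_centrality(g)["agent"]

print(agent_importance([]))  # no direct ties: every path crosses the bridge
print(agent_importance([("c1", "m1"), ("c2", "m2"),
                        ("c3", "m3"), ("c1", "m2")]))  # direct ties: it collapses
```

With no direct channels, every client-customer path runs through the agent; add a handful of direct ties and the bridge’s importance collapses – which is the column’s point in one number.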

If you read the common complaints from both sides in the presentations Albarda references, they all stem from the ever-widening schism caused by a drastic change in the market itself. Simply put, the market has evolved to the point where agency relationships are no longer tenable. We on the agency side keep saying we need to reinvent ourselves, but that’s like saying a dog has to reinvent itself to become a fish – it’s just not in our DNA.