The World in Bite Sized Pieces

It’s hard to see the big picture when your perspective is limited to 160 characters.

Or when we keep getting distracted from said big picture by that other picture that always seems to be lurking over there on the right side of our screen – the one of Kate Upton tilting forward wearing a wet bikini.

Two things are at work here obscuring our view of the whole: Our preoccupation with the attention economy and a frantic scrambling for a new revenue model. The net result is that we’re being spoon-fed stuff that’s way too easy to digest. We’re being pandered to in the worst possible way. The world is becoming a staircase of really small steps, each of which has a bright shiny object on it urging us to scale just a little bit higher. And we, like idiots, stumble our way up the stairs.

This cannot be good for us. We become better people when we have to chew through some gristle. Or when we’re forced to eat our broccoli. The world should not be the cognitive equivalent of Cap’n Crunch cereal.

It’s here where human nature gets the best of us. We’re wired to prefer scintillation to substance. Our intellectual laziness and willingness to follow whatever herd seems to be heading in our direction have conspired to create a world where Donald Trump can be a viable candidate for president of the United States – where our attention span is measured in fractions of a second – where the content we consume is dictated by a popularity contest.

Our news is increasingly coming to us in smaller and smaller chunks. The exploding complexity of our world, which begs to be understood in depth, is increasingly parceled out to us in pre-digested little tidbits, pushed to our smartphone. We spend scant seconds scanning headlines to stay “up to date.” And an algorithm that is trying to understand where our interests lie usually determines the stories we see.

This algorithmic curation creates both “Filter” and “Agreement” Bubbles. The homogeneity of our social network leads to a homogeneity of content. But if we spend all our time with others who think like us, we end up with an intellectually polarized society in which the factions that sit at opposite ends of any given spectrum are openly hostile to each other. The gaps between our respective ideas of what is right are simply too big and no one has any interest in building a bridge across them. We’re losing our ideological interface areas, those opportunities to encounter ideas that force us to rethink and reframe, broadening our worldview in the process. We sacrifice empathy and we look for news that “sounds right” to us, no matter what “right” might be.

This is a crying shame, because there is more thought-provoking, intellectually rich content being produced than ever before. But there is also more sugar-coated crap whose sole purpose is to get us to click.

I’ve often talked about the elimination of friction. Usually, I think this is a good thing. Bob Garfield, in a column a few months ago, called for a whoop-ass can of WD-40 to remove all transactional friction. But if we make things too easy to access, will we also remove those cognitive barriers that force us to slow down and think, giving our rationality a chance to catch up with impulse? And it’s not just on the consumption side where a little bit of friction might bring benefits. The upside of production friction was that it did slow down streams of content just long enough to introduce an editorial voice. Someone somewhere had to give some thought as to what might actually be good for us.

In other words, it was someone’s job to make sure we ate our vegetables.

Disruption 101

We Online Spinners are talking a lot about disruption. Dave Morgan has been talking about disruption in the Advertising and Marketing Technology space. I’ve been looking at disruption in other areas, including academia. Cory Treffiletti, Kaila Colbin and Maarten Albarda have all looked at various aspects of disruption. A quick look back at the past few months’ Spin columns shows that well over half of them deal with disruption in one way or another.

Maybe it’s time we did a primer on the idea of disruption.

Disruption is what happens when something stable becomes unstable. That’s kind of a “duh..obviously” statement, but there are some very important concepts lurking in there.

When an environment is stable, it allows for the development of extensive but fragile ecosystems. In a corporate sense, this allows for the development of very complicated supply chains, with several “value niches” emerging along that chain. The more complicated the chain, the higher the potential for profit. Each link adds another level of complication, allowing someone to squeeze a little more profit from the end consumer.

In addition to extensive ecosystems, stable environments also allow some members of those ecosystems to achieve significant scale. Things are predictable and this allows organizations to grow, embed processes and systems, thereby improving efficiency and profitability. Often, one organization can establish itself at several levels along the supply chain, maximizing its profit potential.

In our physical world, stability is generally a by-product of friction. The higher the degree of friction – or what economist Ronald Coase called “transaction costs” – the more stable the market becomes. Barriers to entry are higher. Competitive factors are dampened. Capital becomes the main predictor of success.

Then – everything changes. We get hit with instability.

In our current case, the disruption we’re experiencing is caused by the removal of friction: technology is reducing transaction costs in a huge swath of industries.

Technology is an interesting catalyst. We think that technology changes behaviors. I don’t believe so. I think technology enables behaviors to change, in that it allows its users to do something they already wanted to do, but couldn’t because of some obstacle. It allows for an attractive alternative that didn’t previously exist. That technology is usually offered to the broadest base of users available and this triggers the disruption, which starts from the ground up. Typically, technology also removes the friction that enabled those delicate hierarchical supply chains to form and flourish.

When the disruption begins and the incumbent ecosystem is threatened, the first casualties are the most fragile members of that ecosystem. These are usually the smaller niche players that rely on the bigger hosts that make up the ecosystem. The bigger hosts can survive longer and often swallow up the first casualties in an attempt to shore up their defenses. They will also often make a half-hearted attempt to respond to the disruption by adopting the technology and going after the disruptors. This never works. Disruption is not in their genetic makeup. Their priority is always protecting the status quo, because that’s where their profit lies.

As disruption forever alters the environment, eventually the previous ecosystem withers and dies. A new (temporary) stability emerges – along with a new ecosystem – built on the foundation of the previous disruption and the entire cycle starts again.

Is Amazon Creating a Personalized Store?

There was a brief Amazon-related flurry of speculation last week. Apparently, according to a podcast posted by Wharton, Amazon is planning on opening 300 to 400 brick-and-mortar stores.

That’s right. Stores – actual buildings – with stuff in them.

What’s more, this has been “on the books” at Amazon for a while. Amazon CEO Jeff Bezos was asked by Charlie Rose in 2012 if the company would ever open physical stores. “We would love to, but only if we can have a truly differentiated idea,” Bezos replied. “We want to do something that is uniquely Amazon. We haven’t found it yet, but if we can find that idea … we would love to open physical stores.”

With that background, the speculation makes sense. If Amazon is pulling the trigger, they must have “found the idea.” So what might that idea be?

Amazon does have a test store in their own backyard of Seattle. What they have chosen to do there, in a footprint about a tenth the size of the former Barnes and Noble store it replaced, is present a “highly curated” store that caters to “local interests.”

Most of the speculation about the new Amazon experiment in “back-to-the-future” retail centers around potential new supply chain management technology or payment methods. But there was one quote from Amanda Nicholson, professor of retail practice at Syracuse University’s Whitman School of Management, that caught my attention: she said the space represents “a test” to see if Amazon can create “a new kind of experience” using data analytics about customers’ preferences.

This becomes interesting if we spend some time thinking about the purchase journey we typically take. What Amazon has done online brilliantly is remove friction from two steps in that journey: filtering options and conducting the actual transaction. For certain kinds of purchases, this is all we need. If we’re buying a product that doesn’t rely on tactile feedback, like a digital file or a book, Amazon has connected all the dots required to take us from awareness to purchase.

But that certainly doesn’t represent all potential purchases. That could be the reason that online purchases only represent 9% of all retail. There are many products that require an “experience” between the filtering of options available to us and the actual purchase. These things still require the human “touch” – literally. Up to now, Amazon has remained emotionally distant from these types of purchases. But perhaps a new type of retail location could change that.

Let me give you an example. If you’re a cyclist (like me) you probably have a favorite bike shop. Bike stores are not simply retail outlets. They are temples of bike worship. Bike shops are usually an independent business run by people who love to talk about their favorite rides, the latest bikes or pretty much anything to do with cycling. Going to a bike store is an experience.

But Trek, one of the largest bike manufacturers in the world, also recognized the efficiency of the online model. In 2015, they announced the introduction of Trek Connect, their attempt to find a happy middle ground between practical efficiency and emotional experience. Through Trek Connect, you can configure and order your bike online, but pick it up and have it serviced at your local bike shop.

However, what Amazon may be proposing is not simply about the tactile requirements of certain types of purchases. What if Amazon could create a personalized real world shopping experience?

Right now, there is a gap between our online research and filtering activity and our real world experiential activity. Typically, we shortlist our candidates, gather required information, often in the form of a page printed off from a website, and head down to the nearest retail location. There, the handoff typically leaves a lot to be desired. We have to navigate a store layout that was certainly not designed with our immediate needs in mind. We have to explain what we want to a floor clerk who seems to have at least a thousand other things they’d rather be doing. And we are not guaranteed that what we’re looking for will even be in stock.

But what if Amazon could make the transition seamless? What if they could pick up all the signals from our online activity and create a physical “experiential bubble” for us when we visited the nearest Amazon retail outlet?

Let me go back to my bike purchasing analogy by way of example. Let’s say I need a new bike because I’m taking up triathlons. Amazon knows this because my online activity has flagged me as an aspiring triathlete. They know where I live and they have a rich data set on my other interests, which includes my favored travel destinations. Amazon could take this data and, under the pretext of my picking up my bike, create a personalized in-store experience for me, including a rich selection of potential add-on sales. With Amazon’s inventory and fulfillment prowess, it would be possible to merchandise a store especially for me.

I have no idea if this is what Amazon has “in store” for the future, but the possibility is tantalizing.

It may even make me like shopping.


A New Way to Determine Corporate Value

Last week, I talked about the trend of “hyper” expectations and corporate valuations. Peter Fader, a marketing professor at the Wharton School, commented, “This is why we need to replace the guesswork of tech valuation with the more rigorous, valid, and operational notion of ‘customer-based corporate valuation.’”

I had a chance to look at Professor Fader’s paper. Essentially, he proposes a new model for the valuation of subscription-based businesses based on a calculation of customer lifetime value that uses publicly available information. While interesting in its own right, there is a fundamental shift of thinking here that I believe should be explored further.

There are a few standard equations that are used to calculate the value of a firm. If the firm is public, essentially its value is determined by its share price. And that share price is determined by activity in the market – the activity of shareholders. And that activity is dependent on analysts who pass judgment on companies based on projected return to shareholders. At every turn, our entire system of business finance is very heavily weighted towards ownership, which makes sense in a market-based economy. Buyers and sellers determine value.

But what Fader et al. are proposing brings another essential stakeholder into the equation – the customer. It’s amazing to me that all the valuation equations we use to determine the value of a corporation don’t involve any direct measure of that corporation’s customers. Sure, we include things like profit, revenue and free cash flow, and none of these things would exist without customers, but we never actually attempt to determine the value of a customer. Fader starts the process with the estimation of that value. That simple paradigmatic shift yields a very different view of the world.
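To make the shift concrete, here’s a minimal sketch of what a customer-based valuation might look like. To be clear, this is not Fader’s actual model – the formula below is just the textbook lifetime-value calculation with geometric retention, and every input number is a made-up placeholder:

```python
# A toy customer-based valuation. Not Fader's model; the CLV formula is
# the standard geometric-retention version and all inputs are hypothetical.

def customer_lifetime_value(annual_margin, retention_rate, discount_rate):
    """Discounted margin from one customer over an infinite horizon,
    assuming a constant yearly retention rate."""
    return annual_margin * retention_rate / (1 + discount_rate - retention_rate)

def customer_based_valuation(num_customers, clv_per_customer, future_customer_value=0.0):
    """Firm value proxy: value of the existing customer base plus an
    estimate for customers yet to be acquired."""
    return num_customers * clv_per_customer + future_customer_value

clv = customer_lifetime_value(annual_margin=100.0, retention_rate=0.8, discount_rate=0.10)
print(round(clv, 2))  # ≈ 266.67 per customer
print(customer_based_valuation(1_000_000, clv))
```

Even this crude version forces you to ask customer-centric questions – what is retention, really? what is a customer actually worth? – that a share-price multiple never does.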

For example, if we are to determine the value of a company through the lifetime value of its customers, we have to look at that company in a much different way than the typical financial analyst does. We have to look at things like customer loyalty, brand affinity and the likelihood that a company will gain new market share through the disruption of markets. Last week, I used Amazon as an example. Here is a company that has been tremendously disruptive. It has essentially created a new marketplace and, in the process, upended retail as we know it. One would expect this to be taken into account when trying to determine the value of Amazon.

The problem is that things like customer loyalty and brand affinity are emotions. Emotions are not things that are easily quantified. It’s much easier to measure things like quarterly earnings and discounted free cash flow. Most of these things depend on using the past to predict the future. They also rely on the firm’s ability to prognosticate. Typically, all the heavy lifting of factoring in the fuzziness of things like future customer value is left to the company. If a company misses its projections, it is penalized by the analysts, resulting in a decrease of share price.

Ultimately, the gap between how we have historically determined the value of companies and how we might in the future comes down to a matter of our ability to determine what may come to pass. We strive for perfect predictability. We want to place our bets based on solid information and analysis. But, in a disruptive marketplace, this desire for predictability may ultimately sink us. Customers will always determine the value of a company and in a marketplace where transactional and switching costs are both plunging, those customers have the ability to switch buying behaviors instantly. The old saying, “No one ever got fired for buying IBM” has not been true for at least three decades.

Like it or not, if we want to get a true picture of the value of a company, we’re going to have to use some guesswork. And, most importantly, we’re going to have to make sure we include customers in whatever equation we’re using.
Living in the Age of “Hyper”

Amazon is a disappointment.

In the fourth quarter of 2015, it made a measly $482 million profit on sales of $35.7 billion. That’s a 22% gain in revenue from a year ago, and more than a 100% gain in profit. In that year, Amazon also doubled its market value to over $300 billion.

Bunch of deadbeats…

Last week, Amazon’s share price took a beating in after-hours trading, dropping 15%.

Serves you right, slackers…

And this all happened because, despite Amazon’s healthy performance, it “didn’t meet analysts’ expectations.”

Maybe it’s time to look at those expectations.

Amazon is what those analysts call a “growth” stock. If you compare it against the rest of the Fortune 500, it might even be called a “hyper-growth” stock. Its doubling of market value outperformed other growth stocks like Apple, which has had its own history of disappointment. We expect great things from anything prefaced with “hyper.”

You all know what hyper means. It means “above” – as in “above” normal. In terms of growth of revenue and market value, Amazon would certainly qualify. It’s in the top few percent of all companies of the Fortune 500 in both categories.

But we expect more. We expect “hyper” performance. And if you don’t measure up, you disappoint us. It’s like kicking your kid out of the house when they come home with a straight-A report card in grade 10 because they didn’t qualify for early admission to Harvard.

Here’s the thing about “hyper.” Not everything can be “hyper.” Something needs to be the opposite of hyper. Do you know what the opposite of “hyper” is? It’s “hypo.” Everyone knows what hyper means, but I bet it’s been a long time since you used “hypo” in a sentence.


That’s because we’re fixated on “hyper”. But the way we use “hyper” makes it an outlier. It’s a statistical anomaly on the far right of the normal distribution curve. It doesn’t represent reality. But we think it does. We expect everything to measure up to some unrealistic measure of performance. When we start a business, we expect to be as successful as Google. When we look at our bank account, we expect it to be as big as Kanye West’s. When we buy a stock, we want it to outperform every other stock in the market.

We have over-hyped “hyper.”

This tendency is starting to impact other aspects of our lives. As we quantify more of who we are, we tend to measure ourselves against the “hyper” end of the yardstick. It’s becoming a real problem. Even our friendships are now quantified, thanks to Facebook, Twitter and Instagram. The result is that it’s now almost impossible to measure up to expectations.

We, like Amazon, are disappointing. The difference is that Amazon disappoints analysts. We disappoint ourselves.

This can be a real bummer. Tom Magliozzi, co-host of NPR’s Car Talk show, summarized the problem in five words:

“Happiness Equals Reality Minus Expectations.”

If our expectations keep moving to the “hyper” end of the scale, reality will never match up. We’ll never be happy. According to this blog post by Tim Urban, it’s a big problem for Generation Y. And Tim should know. He’s a 31-year-old Harvard grad who owns a couple of tutoring businesses and has started a blog that grew virally to over 300,000 subscribers.

Slacker.


Giving Thanks for The Law of Accelerating Returns

For the past few months, I’ve been diving into the world of show programming again, helping MediaPost put together the upcoming Email Insider Summit up in Park City. One of the keynotes for the Summit, delivered by Charles W. Swift, VP of Strategy and Marketing Operations for Hearst Magazines, is going to tackle a big question, “How do companies keep up with the ever accelerating rate of change of our culture?”

After an initial call with Swift, I did some homework and reacquainted myself with Ray Kurzweil’s Law of Accelerating Returns. Shortly after, I had to stop because my brain hurt. Now, I would like to pass that unique experience along to you.

In an interview that is now 12 years old, Kurzweil explained the concept, using biological evolution as an analogy. I’ll try to make this fast. Earth is about 4.6 billion years old. The very first life appeared about 3.8 billion years ago. It took another 1.7 billion years for multicellular life to appear. Then, about 1.6 billion years later, we had something called the Cambrian Explosion. This was really when the diversity of life we recognize today started. If you’ve been keeping track, you know that it took the earth 4.1 of its 4.6 billion years, or about 90% of the time since the earth was formed, to produce complex life forms of any kind.

Things started to move much quicker at that point. Amphibians and reptiles appeared about 350 million years ago, dinosaurs appeared 225 million years ago, mammals 200 million years ago, dinosaurs disappeared about 70 million years ago, the first great apes appeared about 15 million years ago and we homo sapiens have only been around for 200,000 years or so. And, as a species, we really have only made much of a dent in the world in the last 10,000 years of our history. In the entire history of the world, that represents a very tiny 0.00022% slice. But consider how much the world has changed in that 10,000 years.
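Since we’re dealing with numbers this lopsided, it’s worth checking the arithmetic. This little sketch uses the rounded figures from the timeline above:

```python
# Sanity-checking the timeline arithmetic, using ages in billions of years.
earth_age_by = 4.6
cambrian_by = 0.5   # Cambrian Explosion, roughly half a billion years ago

# Fraction of Earth's history that passed before complex life appeared
print(round((earth_age_by - cambrian_by) / earth_age_by, 2))  # → 0.89, "about 90%"

# Humanity's consequential 10,000 years as a share of Earth's history, in percent
print(round(10_000 / (earth_age_by * 1e9) * 100, 5))  # → 0.00022
```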

Accelerating Returns

Kurzweil’s Law says that, like biology, technology also evolves exponentially. It took us a very long time to do much of anything at all. The wheel, stone tools and fire took us tens of thousands of years to figure out. But now, technological paradigm shifts happen in decades or less. And the pace keeps accelerating. The Law of Accelerating Returns states that in the first 20 years of the 21st century, we’ll have progressed as much as we did during the entire 20th century. Then we’ll double that progress again by 2034, and double it once more by 2041.

Let me put this in perspective. At this rate, if my youngest daughter – born in 1995 – lives to be 100 (not an unlikely forecast), she will see more technological change in her life than in the previous 20,000 years of human history!
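If you want to see how quickly those doublings stack up, here’s a back-of-envelope sketch. The only inputs taken from above are the first two intervals (20 years, then 14); the assumption that each subsequent “20th century’s worth” of progress takes half as long as the one before is my simplification, not Kurzweil’s exact formulation:

```python
# Counting "20th-century units" of progress under an accelerating schedule.
# Assumption (mine): after the quoted intervals, each unit takes half as
# long as the previous one.

def progress_units(until_year, start_year=2000.0, quoted_intervals=(20.0, 14.0)):
    """How many 20th-century-equivalents of progress accumulate by until_year."""
    year, units = start_year, 0
    remaining = list(quoted_intervals)
    interval = remaining.pop(0)
    while year + interval <= until_year and units < 50:  # cap to avoid runaway
        year += interval
        units += 1
        interval = remaining.pop(0) if remaining else interval / 2.0
    return units

print(progress_units(2020))  # 1: one 20th century's worth by 2020
print(progress_units(2041))  # 3 "centuries" of progress under this schedule
```

Run it out to 2095 and the count slams into the 50-unit cap almost immediately – which is exactly the point about predictability flying out the window.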

This is one of those things we probably don’t think about because, frankly, it’s really hard to wrap your head around this. The math shows why predictability is flying out the window and why we have to get comfortable reacting to the unexpected. It would also be easy to dismiss it, but Kurzweil’s concepts are sound. Evolution does accelerate exponentially, as has our rate of technological advancement. Unless the latter shows a dramatic reversal or slowdown, the future will move much, much faster than we can possibly imagine.

The reason change accelerates is that the technology we develop today builds the foundations required for the technological leaps that will happen tomorrow. Agriculture set the stage for industry. Industry enabled electricity. Electricity made digital technology possible. Digital technology enables nanotechnology. And so on. Each advancement sets the stage for the next, and we progress from stage to stage more rapidly each time.

So, for your extended long weekend, if you’re sitting in a turkey-induced tryptophan daze and there’s no game on, try wrapping your head around The Law of Accelerating Returns.

Happy Thanksgiving. You’re welcome.

Consumers in the Wild

Once a Forager, Always a Forager

Your world is a much different place than the African savanna. But over 100,000 generations of evolution that started on those plains still dictate a remarkable degree of our modern behavior.

Take foraging, for example. We evolved as hunters and gatherers. It was our primary survival instinct. And even though the first hominids are relatively recent additions to the biological family tree, strategies for foraging have been developing for millions and millions of years. It’s hardwired into the deepest and most inflexible parts of our brain. It makes sense, then, that foraging instincts that were once reserved for food gathering should be applied to a wide range of our activities.

That is, in fact, what Peter Pirolli and Stuart Card discovered two decades ago. When they looked at how we navigated online sources of information, they found that humans used the very same strategy we would have used for berry picking or gathering cassava roots. And one of the critical elements of this was something called Marginal Value.

Bounded Rationality & Foraging

It’s hard work being a forager. Most of your day – and energy – is spent looking for something to eat. The sparser the food sources in your environment, the more time you spend looking for them. It’s not surprising, therefore, that we should have some fairly well-honed calculations for assessing the quality of our food sources. This is what biologist Eric Charnov called Marginal Value in 1976. It’s an instinctual (and therefore largely subconscious) evaluation of food “patches” by most types of foragers, humans included. It’s how our brain decides whether we should stay where we are or find another patch. It would have been a very big deal 2 million – or even 100,000 – years ago.
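Charnov’s idea can actually be computed. Here’s an illustrative sketch – the gain curve and all the numbers are invented for the example, not taken from Charnov’s paper. A forager in a patch with diminishing returns should leave when its overall intake rate, travel time included, stops improving, and sparser environments should make it stay in each patch longer:

```python
import math

# Illustrative marginal value calculation. The gain curve and numbers are
# hypothetical, chosen only to show the shape of the result.

def gain(minutes_in_patch, max_food=100.0, tau=5.0):
    """Food gathered in one patch: fast at first, then diminishing returns."""
    return max_food * (1.0 - math.exp(-minutes_in_patch / tau))

def best_leaving_time(travel_minutes, step=0.01, horizon=60.0):
    """Grid-search the residence time that maximizes the overall intake rate:
    food per total minute, counting the travel between patches."""
    best_t, best_rate = 0.0, 0.0
    steps = int(horizon / step)
    for i in range(1, steps + 1):
        t = i * step
        rate = gain(t) / (travel_minutes + t)
        if rate > best_rate:
            best_t, best_rate = t, rate
    return best_t

# Sparser environment (longer travel between patches) → stay in each patch longer.
print(round(best_leaving_time(2.0), 1))   # nearby patches: leave early
print(round(best_leaving_time(10.0), 1))  # distant patches: stay longer
```

That second line is the whole instinct in one number: when good patches are scarce, you squeeze each one harder before moving on.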

Today, for most of us, food sources are decidedly less “patchy.” But old instincts die hard. So we did what humans do. We borrowed an old instinct and applied it to new situations. We exapted our foraging strategies and started using them for a wide range of activities where we had to have a rough and ready estimation of our return on our energy investment. Increasingly, more and more of these activities asked for an investment of cognitive processing power. And we did all this without knowing we were even doing it.

This brings us to Herbert Simon’s concept of Bounded Rationality. I believe this is tied directly to Charnov’s theorem of Marginal Value. When we calculate how much mental energy we’re going to expend on an information-gathering task, we subconsciously determine the promise of the information “patches” available to us. Then we decide to invest accordingly, based on our own “bounded” rationality.

Brands as Proxies for Foraging

It’s just this subconscious calculation that has turned the world of consumerism on its ear in the last two decades. As Itamar Simonson and Emanuel Rosen explain in their book Absolute Value, the explosion of information available has meant that we are making different marginal value calculations than we would have thirty or forty years ago. We have much richer patches available, so we’re more likely to invest the time to explore them. And, once we do, the way we evaluate our consumer choices changes completely. Our modern concept of branding was a direct result of both bounded rationality and sparse information patches. If a patch of objective and reliable information wasn’t apparent, we would rely on brands as a cognitive shortcut, saving our bounded rationality for more promising tasks.

Google, The Ultimate “Patch”

In understanding modern consumer behavior, I think we have to pay much more attention to this idea of marginal value. What is the nature of the subconscious algorithm that decides whether we’re going to forage for more information or rely on our brand beliefs? We evolved foraging strategies that play a huge part in how we behave today.

For example, the way we navigate our physical environment appears to owe much to how we used to search for food. Women determine where they’re going differently than men because women used to search for food differently. Men tend to do this by orientation, mentally maintaining a spatial grid in their minds against which they plot their own location. Women do it by remembering routes. In my own research, I found split-second differences in how men and women navigated websites that seem to go back to those same foundations.

Whether you’re a man or a woman, however, you need to have some type of mental inventory of the information patches available to you in order to assess the marginal value of those patches. This is the mental landscape Google plays in. For more and more decisions, our marginal value calculation starts with a quick search on Google to see if any promising patches show up in the results. Our need to keep a mental inventory of patches can now be delegated to Google.

It seems ironic that in our current environment, more and more of our behavior can be traced back millions of years to behaviors that evolved in a world where high-tech meant a sharper rock.

Talking Back to Technology

The tech world seems to be leaning heavily towards voice-activated devices. Siri – Amazon Echo – Facebook M – “OK Google” – as well as pretty much every vehicle in existence. It should make sense that we would want to speak to our digital assistants. After all, that’s how we communicate with each other. So why – then – do I feel like such a dork when I say “Siri, find me an Indian restaurant”?

I almost never use Siri as my interface to my iPhone. On the very rare occasions when I do, it’s when I’m driving. By myself. With no one to judge me. And even then, I feel unusually self-conscious.

I don’t think I’m alone. No one I know uses Siri, except on the same occasions and in the same way I do. This should be the most natural thing in the world. We’ve been talking to each other for several millennia. It’s so much more elegant than hammering away on a keyboard. But I keep seeing the same scenario play out over and over again. We give voice navigation a try. It sometimes works. When it does, it seems very cool. We try it again. And then, we don’t do it any more. I base this on admittedly anecdotal evidence. I’m sure there are those that continually chat merrily away to the nearest device. But not me. And not anyone I know either. So, given that voice activation seems to be the way devices are going, I have to ask why we’re dragging our heels on adoption.

In trying to judge the adoption of voice-activated interfaces, we have to account for mismatches in our expected utility. Every time we ask for something – “Play Bruno Mars,” for instance – and get the response, “I’m sorry, I can’t find Brutal Cars,” some frustration is natural. This is certainly part of it. But that’s an adoption threshold that will eventually yield to sheer processing brute strength. I suspect our reluctance to talk to an object is found in the fact that we’re talking to an object. It doesn’t feel right. It makes us look addle-minded. We make fun of people who speak when there’s no one else in the room.

Our relationship with language is an intimately nuanced one. It’s a relatively newly acquired skill, in evolutionary terms, so it takes up a fair amount of cognitive processing. Granted, no matter what the interface, we currently have to translate desire into language, and speaking is certainly more efficient than typing, so it should be a natural step forward in our relationship with machines. But we also have to remember that verbal communication is the most social of things. In our minds, we have created a well-worn slot for speaking, and it’s something to be done when sitting across from another human.

Mental associations are critical for how we make sense of things. We are natural categorizers. And, if we haven’t found an appropriate category when we encounter something new, we adapt an existing one. I think vocal activation may be creating cognitive dissonance in our mental categorization schema. Interaction with devices is a generally solitary endeavor. Talking is a group activity. Something here just doesn’t seem to fit. We’re finding it hard to reconcile our usage of language and our interaction with machines.

I have no idea if I’m right about this. Perhaps I’m just being a Luddite. But given that my entire family, and most of my friends, have had voice activation capable phones for several years now and none of them use that feature except on very rare occasions, I thought it was worth mentioning.

By the way, let’s just keep this between you and me. Don’t tell Siri.

Who’s Who on the Adoption Curve

For me, the Adoption Curve of the Internet of Things is fascinating to observe. Take the PoloTech shirt from Ralph Lauren, for example. It’s a “smart shirt.” The skintight shirt measures your heart rate, how deeply you’re breathing, how stable you are and a host of other key biometrics. All this is sent to your smartphone. One will set you back a cool 300 bucks. But it’s probably not the price that will separate the adopters from the laggards in this case. In the case of the PoloTech shirt, as with many of the new pieces of wearable tech, it’s likely to be your level of fitness that determines which slope of the adoption curve you’ll end up on.

If you look at the advertising of the PoloTech, it’s clear who the target is: dudes with 0.3% body fat and ridiculously sculpted torsos who live on protein drinks and four-hour workouts. Me? Not so much. The same is true, I suspect, for the vast majority of us. Unless we’re looking for a high-tech girdle to both hold back and monitor the rate of expansion of our guts, I don’t think this particular smart shirt is in my immediate future.

As I said, much of the current generation of wearable technology is designed to tell us just how fit we are. Logic predicts that these devices should offer the greatest benefits to those who are the least fit. They, after all, have the most to gain. But that’s not who’s jumping the adoption curve. In my world, which is recreational cycling, the ones who are religiously tracking a zillion metrics are the ones who are already on top of the statistical heap. The reason? Technology has created an open market of bragging rights. Humans are naturally competitive. We like to know how we stack up against others. But we don’t bother keeping track until we’re reasonably sure we’re well above average. So, if you log onto Strava, where many cyclists upload their tech-tracked rides, you can find out just who is the “King of the Mountain” at your local version of the Alpe d’Huez.

This brings about an interesting variation on Rogers’ Technology Adoption Curve. Wearable technology often means the generation of personal data. Therefore, an appetite for that data will accelerate the adoption of those respective technologies. We don’t mind being quantified, as long as that quantification paints us in a good light. We want to live in Lake Wobegon, where all the women are strong, all the men are good-looking and all the children are above average.

Adoption of new technologies, according to Rogers, depends on five factors: Relative Advantage, Compatibility, Complexity, Trialability and Observability. To this, Rogers added a sixth factor – the status-conferring potential of a new innovation. Physical fitness, by its nature, begs to be quantified. Athletic ability and rankings go hand in hand. Status is literally the name of the game. Therefore, there is a natural affinity between wearable technologies that track physical performance and the world of fitness.

This introduces some interesting patterns of adoption for new additions to the Internet of Things. Adoption will rapidly saturate certain niches of the population, but may take much longer to cross the chasm to the general masses. And the defining characteristics of the early adopters could be completely different in each case. As more and more things become “smart” the factors of adoption will become more fragmented and diverse. Early adopters of Coke’s Freestyle vending machine will have little in common with early adopters of the PoloTech shirt.

The absorption rate of technology into our lives has been increasing exponentially, seemingly in lockstep with Moore’s Law. Every day, we are introduced to more and more things that have technology embedded in them. The advantages that this technology offers will depend on who is judging it. For some, a given technology will be a perfect fit. For others, it will be like trying to squeeze into a high-tech shirt that makes us look like an overstuffed sausage.

Is Brand Strategy a Myth?

On one side of the bookshelf, you have an ever-growing pile of historic business best sellers, with promising titles like In Search of Excellence, 4+2: What Really Works, Good to Great and Built to Last. Essentially, they’re all recipes for building a highly effective company. They are strategic blueprints for success.

On the other side of the bookshelf, you have books like Phil Rosenzweig’s “The Halo Effect.” He trots out a couple of sobering facts: In a rigorous study conducted by Marianne Bertrand at the University of Chicago and Antoinette Schoar at MIT, they isolated and quantified the impact of a leader on the performance of a company. The answer, as it turned out, was 4%. That’s right, on average, even if you have a Jack Welch at the helm, it will only make about a 4% difference to the performance of your company. Four percent is not insignificant, but it’s hardly the earth-shaking importance we tend to credit to leadership.

The other fact? What if you followed the instructions of a Jim Collins or Tom Peters? What if you transformed your company’s management practices to emulate those of the winning case studies in these books? Surely, that would make a difference? Well, yes – kind of. Here, the number is 10%. In a study done by Nick Bloom of the London School of Economics and Stephen Dorgan at McKinsey, the goal was to test the association between specific management practices and company performance. There was an association. In fact, it explained about 10% of the total variation in company performance.

These are hard numbers for me to swallow. I’ve always been a huge believer in strategy. But I’m also a big believer in good research. Rosenzweig’s entire book is dedicated to poking holes in much of the “exhaustive” research we’ve come to rely on as the canonical collection of sound business practices. He doesn’t disagree with many of the resulting findings. He goes as far as saying they “seem to make sense.” But he stops short of giving them a scientific stamp of endorsement. The reality is, much of what we endorse as sound strategic thinking comes down to luck and the seizing of opportunities. Business is not conducted in a vacuum. It’s conducted in a highly dynamic, competitive environment. In such an environment, there are few absolutes. Everything is relative. And it’s these relative advantages that dictate success or failure.

Rosenzweig’s other point is this: Saying that we just got lucky doesn’t make a very good corporate success story. Humans hate unknowns. We crave identifiable agents for outcomes. We like to assign credit or blame to something we understand. So, we make up stories. We create heroes. We identify villains. We rewrite history to fit into narrative arcs we can identify with. It doesn’t seem right to say that 90% of company performance is due to factors we have no control over. It’s much better to say it came from a well-executed strategy. This is the story that is told by business best sellers.

So, it caught my eye the other day when I saw that ad agencies might not be very good at creating and executing on brand strategies.

First of all, I’ve never believed that branding should be handled by an agency. Brands are the embodiment of the business. They have to live and breathe at the core of that business.

Secondly, brands are not “created” unilaterally – they emerge from that intersection point where the company and the market meet. We as marketers may go in with a predetermined idea of that brand, but ultimately the brand will become whatever the market interprets it to be. Like business in general, this is a highly dynamic and unpredictable environment.

I suspect that if we ever found a way to quantify the impact of brand strategy on the ultimate performance of the brand, we’d find that the number would be a lot lower than we thought it would be. Most of brand success, I suspect, will come down to luck and the seizing of opportunities when they arise.

I know. That’s probably not the story you wanted to hear.