The Apple Watch – More Than Just a Pretty Face

I just caught Tim Cook’s live-streamed introduction of the Apple Watch (I guess they’ve given up the long-running “i” naming theme). What struck me most is how assiduously Apple has stuck with traditional touch points in introducing a totally new product category (well, new for Apple anyway).

If you glanced quickly across the room at someone wearing Apple’s new wonder, you probably wouldn’t even know they were wearing technology. The Apple Watch looks a lot like an analog watch. There is even a Mickey Mouse face you can choose. The interchangeable bands smack of tradition. Jony Ive confirmed this point in the video that ran at the introduction, saying the team borrowed heavily from the “watchmaker’s vocabulary” in the design process. They even consulted “horological experts from around the world” to provide a timekeeping experience rooted in cultural nuance. The primary interface to the watch is a modified version of the very old-fashioned watch-winding crown.

Now, appearances can be deceiving. As Cook, Ive and Kevin Lynch put the watch through its paces, it was clear that this is an impressive little piece of technology. Particular attention has been paid to making this an intimate device, with new advances in touch technology, biometric and motion sensors and the ability to personalize interfaces and hardware to make it uniquely yours. Watching, I couldn’t help but compare this to Google’s introduction of Google Glass. In many ways, Glass is the more revolutionary device. But the Apple Watch will have a much faster adoption path.

Google impresses first with sheer brute-force technological effort. Design is an afterthought. Google uses UI testing and design to try to corral a Pandora’s box full of raw innovation into a usable package. Apple takes a much different approach. They look first at the user experience, and then they pick and choose the technologies required to deliver the intended experience. They lavish ridiculous amounts of time on seemingly minuscule design details, but the end result is typically nothing less than breathtaking. We’re impressed with the technology, sure, but the overriding emotion is one of lust. We just have to have whatever the hell it is that is being introduced on the main stage of the Flint Center.

Despite the many who have said otherwise, including the late Steve Jobs, Apple has never really made a revolutionary device. Others have always been there first. What they have done, however, is take raw innovation and package it in a way that resonates with its audience at a deep and hormonal level. Apple products are stylish and sexy – the Gisele Bündchen of technology – yet attainable to mere mortals. They take the “next big thing” and push it past the tipping point by kindling lust in the hearts and wallets of the market. Google products, despite their geeky technical prowess, have a nasty habit of getting stuck on the wrong side of the adoption curve. They are the – well, let’s face it – they are the Larry Page of technology – smart, but considerably less sexy.

Apple times its entrance to the adoption curve to near perfection. They have a knack for positioning just ahead of the masses. Google’s target is much further down the road. They release betas well ahead of any market demand. That’s why most of us can’t wait to wear an Apple Watch, but wouldn’t be caught dead wearing Google Glass.

One last thought on this week’s introduction of the Apple Watch. Wearable technology is following an interesting path. Your smartphone now acts as a connected home base for more intimate pieces of tech like the Apple Watch or Google Glass. Increasingly, the actual user interfaces will be on these types of devices, but the heavy lifting will happen on a smartphone tucked into a pocket, purse or backpack. Expect special-purpose devices to proliferate, all connected to increasingly powerful MPUs (Mobile Processing Units) that will orchestrate the symphony of tech that you’re wearing.

Learning about Big Data from Big Brother

You may not have heard of ICREACH, but it has probably heard of you. ICREACH is the NSA’s own Google-like search engine. And if Google’s mission is to organize the world’s information, ICREACH’s mission is to snoop on the world. After super-whistleblower Edward Snowden tipped the press off to its existence, the NSA fessed up last month. The amount of data we’re talking about is massive. According to The Intercept, the tool can handle two to five billion new records every day, including data on Americans’ emails, phone calls, faxes, Internet chats and text messages. It’s Big Brother meets Big Data.

I’ll leave aside for the moment the ethical aspects of this story. What I’ll focus on is how the NSA deals with this mass of Big Data and what it might mean for companies that are struggling with their own Big Data dilemmas.

Perhaps no one deals with more Big Data than the Intelligence Community. And Big Data is not new for them. They’ve been digging into data, trying to find meaningful signals amongst the noise, for decades. And the stakes of successful data analysis are astronomically high here. It’s literally a matter of life and death – a failure to connect the dots can lead to the kinds of nightmares that haunt us for the rest of our lives. When the pressure is on to this extent, you can be sure that they’ve learned a thing or two. How the Intelligence Community handles data is something I’ve been looking at recently. There are a few lessons to be learned here.

Owned Data vs Environmental Data

The first lesson is that you need different approaches for different types of data. The Intelligence Community has its own files, which include analysts’ reports, suspect files and other internally generated documentation. Then you have what I would call “Environmental” data. This includes raw data gathered from emails, phone calls, social media postings and cellphone locations. Raw data needs to be successfully crunched, screened for signals vs. noise and then interpreted in a way that’s relevant to the objectives of the organization. That’s where…

You Need to Make Sense of the Data – at Scale

Probably the biggest change in the Intelligence Community has been the adoption of an approach called “sense making.” Sense making mimics how we, as humans, make sense of our environment. But while we may crunch a few hundred or thousand sensory inputs at any one time, the NSA needs to crunch several billion signals.

Human intuition expert Gary Klein has done much work in the area of sense making. His view of sense making relies on the existence of a “frame” that represents what we believe to be true about the world around us at any given time. We constantly update that frame based on new environmental inputs. Sometimes they confirm the frame. Sometimes they contradict it. If the contradiction is big enough, it may cause us to discard the frame and build a new one. But it’s this frame that allows us not only to connect the dots, but also to determine what counts as a dot. And to do this…
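Klein’s frame isn’t a formal algorithm, but its update-or-discard dynamic can be sketched as a toy Bayesian belief update. This is purely illustrative – the probabilities below are invented, not drawn from Klein’s work:

```python
def update_frame(prior, p_obs_if_frame, p_obs_otherwise):
    """One Bayesian update: revise belief in the current frame
    given a new observation, via Bayes' rule."""
    numerator = prior * p_obs_if_frame
    evidence = numerator + (1 - prior) * p_obs_otherwise
    return numerator / evidence

belief = 0.9  # we start out fairly confident in our frame
# A run of observations, each of which fits a rival frame better:
for _ in range(5):
    belief = update_frame(belief, p_obs_if_frame=0.2, p_obs_otherwise=0.8)
    if belief < 0.05:  # the contradictions pile up: discard the frame
        print("frame discarded; time to build a new one")
        break
```

A confirming input would push belief up instead; only a sustained run of contradictions drives it low enough to abandon the frame, which matches the observation that we don’t discard frames over a single anomaly.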

You Have to Be Constantly Experimenting

Crunching the data may give you the dots, but there will be multiple ways to connect them. A number of hypothetical “frames” will emerge from the raw data. You need to test the validity of these hypotheses. In some cases, they can be tested against your own internally controlled data. Sometimes they will lie beyond the limits of that data. Either way, this means adopting a rigorous and objective testing methodology. Objective is the key word here, because…

You Need to Remove Human Limitations from the Equation

When you look at the historic failures of Intelligence gathering, the fault usually doesn’t lie in the “gathering.” The signals are often there. Frequently, they’re even put together into a workable hypothesis by an analyst. The catastrophic failures in intelligence generally arise because someone, somewhere, made an intuitive call to ignore the information because they didn’t agree with the hypothesis. Internal politics in the Intelligence Community has probably been the single biggest point of failure. Finally…

Data Needs to Be Shared

The ICREACH project came about as a way to allow broader access to the information required to identify warning signals and test out hunches. ICREACH opens up this data pool to nearly two dozen U.S. Government agencies.

Big Data shouldn’t replace intuition. It should embrace it. Humans are incredibly proficient at recognizing patterns. In fact, we’re too good at it: false positives are a common occurrence. But if we build an objective way to validate our hypotheses and remove our irrational adherence to our own pet theories, more is almost always better when it comes to generating testable scenarios.
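To make the false-positive problem concrete, here’s a toy sketch (not anything the NSA actually runs): feed an eager pattern detector nothing but random noise and count how many “signals” it finds anyway.

```python
import random

random.seed(0)

def looks_like_signal(xs, ys, threshold=0.6):
    """Naive detector: a correlation above the threshold counts as a pattern."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (sx * sy)) > threshold

# 200 pairs of streams of pure noise -- any "signal" found is a false positive
false_positives = sum(
    looks_like_signal([random.random() for _ in range(10)],
                      [random.random() for _ in range(10)])
    for _ in range(200)
)
print(false_positives, "false positives out of 200 noise-only trials")
```

With small samples and an eager threshold, the detector reliably finds a handful of “patterns” in data that contains none – which is exactly why hypotheses need objective validation before anyone acts on them.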

Twitch – Another Example of a Frictionless Market

Twitch just sold for $1 billion. That’s not really news. We’ve become inured to the never-ending stream of tech acquisitions that instantly transforms entrepreneurial techies into some of the richest people on the planet. No, what’s interesting about Twitch is what we find if we slow down long enough to think about how this particular startup managed to create $1 billion in value.

A billion dollars is a lot of money. If we looked back just 50 years, a billion dollars in assets would make a company number 40 on the Fortune 500. If Twitch were somehow teleported back to 1964, it would rank just eight slots under Procter and Gamble (assets worth $1.15 billion) and three slots above Sunoco (assets of $0.88 billion). Coca-Cola would be left in the dust with a mere $485 million in assets. Today a half billion dollars is chump change in Silicon Valley terms.

This becomes more amazing when you consider that Twitch is only 3 years old. And it really started as an accident.

Remember EDtv? Probably not. It was a pretty forgettable 1999 movie (based on a 1994 Quebec film called Louis 19, King of the Airwaves) starring Matthew McConaughey. The idea was that Ed would be followed by cameras 24 hours a day, 7 days a week, making his life a reality TV show. 1998’s The Truman Show had a similar theme (albeit with better ratings). Anyway, the point made in both movies was that an average life, if televised, could be entertaining enough to make people watch. In 2007, Emmett Shear and Justin Kan decided to test the premise. They launched Justin.tv. Soon they invited others to simulcast their lives as well.

What Kan and Shear did, although they probably weren’t intending to at the time, was create a platform that allowed anyone to be a real-time broadcaster with zero transactional costs. They created a perfect market for live TV. Last week I talked about AirBnB, TripAdvisor and VRBO.com creating a more perfect market for tourism. The key characteristic of a perfect market is that barriers to entry are reduced to zero, turning the market into an emergent sandbox from which new things tend to pop up. And that’s exactly what happened with Twitch.

Shear and Kan found that one group in particular embraced the idea of livecasting – gamers. They could communicate with other gamers, but they could also show off their mad gaming skills. Using the Justin.tv platform, Twitch was launched for the gaming industry in 2011. And thanks to Twitch, gaming has become a spectator sport – at a massive scale.

Twitch’s “stars” – like 30-year-old Tessa Brooks, who goes by “Tessachka” and broadcasts an average of 42 hours of programming a week – post their schedules so that their audiences can tune in. Twitch has about 55 million viewers per month who consume over 16 billion minutes of video programming. According to SocialBlade.com, this month, “Riotgames” is the top ranked Twitch broadcaster, with almost a million followers and over 18 million channel views.

Again, those are big numbers. A network show that pulls in 18 million viewers would be number 5 in the Nielsen ratings. And while Netflix’s House of Cards or Orange is the New Black may have made waves at the Emmys, The Atlantic estimates that only 2 to 3 million people watch a newly posted episode in the first week. On a good week, Riotgames could blow that away without twitching a trigger finger.

Twitch not only created a platform that generates audiences, it also generated a marketplace. Where there are eyeballs, there’s revenue potential. Twitch gives its gamers a cut of the advertising revenue. I couldn’t find numbers on how lucrative this could be, but I suspect Justin may be able to quit his day job.

Like I said, the Twitch story is interesting, but what is vastly more interesting is the market dynamics it has unleashed. Amazon’s $1 billion bid is not for the technology. It’s for the community and the market that comes with that community. When it comes to leveraging the potential of zero-transactional-cost markets, Amazon knows a thing or two. And one of the things it knows is that in frictionless markets, if you can navigate the turbulence, tremendous value can be created in an amazingly short time. Say, for instance, $1 billion in just 3 years. It took Procter and Gamble 127 years to be worth that much.

Technology is Moving Us Closer to a Perfect Market

I have two very different travel profiles. When I travel on business, I usually stick with the big chains, like Hilton or Starwood. The experience is less important to me than predictability. I’m not there for pleasure; I’m there to sleep. And, because I travel on business a lot (or used to), I have status with them. If something goes wrong, I can wave my Platinum or Diamond guest card around and act like a jerk until it gets fixed.

But, if I’m traveling for pleasure, I almost never stay in a chain hotel. In fact, more and more, I stay in a vacation rental house or apartment. It’s a little less predictable than your average Sheraton or Hampton Inn, but it’s almost always a better value. For example, if I were planning a last-minute getaway to San Francisco for Labor Day weekend, I’d be shelling out just under $400 for a fairly average hotel room at the Hilton by Union Square. But for about the same price, I could get an entire four-bedroom house that sleeps eight just two blocks from Golden Gate Park. And that was with just a quick search on AirBnB.com. I could probably find a better deal with the investment of a few minutes of my time.

Travel is just one of the markets that technology has made more perfect. And when I say “perfect,” I use the term in its economic sense. A perfect market has perfect competition, which means that the barriers to entry have been lowered and most of the transactional costs have been eliminated. The increased competition lowers prices to a sustainable minimum. At that point, the market approaches a state called Pareto optimality, in which no participant can be made better off without making another worse off.

Whether a perfect market is a good thing or not depends on your perspective. If you’re a long-term participant in the market and your goal is to make the biggest profit possible, a perfect market is the last thing you want. If you’re a new entrant to the market, it’s a much rosier story – any shifts that take the market closer to a Pareto Optimal will probably be to your benefit. And if you’re a customer, you’re in the best position of all. Perfect markets lead inevitably to better value.

Since the advent of VRBO.com and, more recently, AirBnB.com, the travel marketplace has moved noticeably closer to being perfect. Sites like these, along with travel review aggregators like TripAdvisor.com, have significantly reduced the transaction costs of the travel industry. The first wave was the reduction of search costs. Property owners were able to publish listings in a directory that made it easy to search and filter options. Then, the publishing of reviews gave us the confidence we needed to stray beyond the predictably safe territory of the big chains.

But, more recently, a second wave has further reduced transaction costs for independent vacation property owners. I was recently talking to a cousin who rents his flat in Dublin through AirBnB, which takes all the headaches of vacation property management away in return for a cut of the action. He was up and running almost immediately and has had no problem renting his flat during the weeks he makes it available. He found the barriers to entry to be essentially zero. A cottage industry of property managers and key exchange services has sprung up around the AirBnB model.

What technology has done to the travel industry is essentially turn it into a Long Tail business model. As Chris Anderson pointed out in his book, Long Tail markets need scale-free networks. Scale-free networks only work when transaction costs are eliminated and entry into the market is free of friction. When this happens, the Power Law distribution still stays in place, but the tail becomes longer. The Long Tail of Tourism now includes millions of individually owned vacation properties. For example, AirBnB has almost 800 rentals available in Dublin alone. According to Booking.com, that’s about 7 times the total number of hotels in the city.

Another thing that happens is that, over time, the tail becomes fatter. More business moves from the head to the tail. The Pareto Principle states that in Power Law distributions, 20% of the participants get 80% of the business. Online, the ratio is closer to 72/28.
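The 80/20-versus-72/28 shift can be sketched numerically. Assuming sales fall off with rank as a Zipf-style Power Law (an assumption for illustration, not a claim about any real marketplace), flattening the curve’s exponent moves business from the head to the tail:

```python
def head_share(n_sellers, exponent, head_frac=0.2):
    """Fraction of total business captured by the top head_frac of sellers,
    assuming sales fall off with rank as a power law: sales ~ rank**-exponent."""
    sales = [rank ** -exponent for rank in range(1, n_sellers + 1)]
    head = int(n_sellers * head_frac)
    return sum(sales[:head]) / sum(sales)

# A steeper curve gives the classic offline ratio; a flatter one, the online ratio
print(round(head_share(1000, 1.0), 2))  # roughly 0.79 -- close to 80/20
print(round(head_share(1000, 0.9), 2))  # roughly 0.72 -- close to 72/28
```

The exponent values here were chosen to reproduce the two ratios; the point is simply that a modest flattening of the Power Law curve is enough to fatten the tail.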

These shifts in the market are more than just interesting discussion topics for economists. They mark a fundamental change in the rules of the game. Markets that are moving towards perfection remove the advantages of size and incumbency and reward nimbleness and adaptability. They also, at least in this instance, make life more interesting for customers.

Why Cognitive Computing is a Big Deal When it comes to Big Data


Watson beating its human opponents at Jeopardy

When IBM’s Watson won against humans playing Jeopardy, most of the world considered it just another man-against-machine novelty act – going back to Deep Blue’s defeat of chess champion Garry Kasparov in 1997. But it’s much more than that. As Josh Dreller reminded us a few Search Insider Summits ago, when Watson trounced Ken Jennings and Brad Rutter in 2011, it ushered in the era of cognitive computing. Unlike chess, where solutions can be determined solely with massive amounts of number crunching, winning Jeopardy requires a very nuanced understanding of the English language as well as an encyclopedic span of knowledge. Computers are naturally suited to chess. They’re also very good at storing knowledge. In both cases, it’s not surprising that they would eventually best humans. But parsing language is another matter. For a machine to best a man here requires something quite extraordinary. It requires a machine that can learn.

The most remarkable thing about Watson is that no human programmer wrote the program that made it a Jeopardy champion. Watson learned as it went. It evolved the winning strategy. And this marks a watershed development in the history of artificial intelligence. Now, computers have mastered some of the key rudiments of human cognition. Cognition is the ability to gather information, judge it, make decisions and problem solve. These are all things that Watson can do.


Peter Pirolli – PARC

Peter Pirolli, one of the senior researchers at Xerox’s PARC campus in Palo Alto, has been doing a lot of work in this area. One of the things that has been difficult for machines is to “make sense” of situations and adapt accordingly. Remember a few columns ago when I talked about narratives and Big Data? This is where Monitor360 uses a combination of humans and computers – computers to do the data crunching and humans to make sense of the results. But as Watson showed us, computers do have the potential to make sense as well. True, computers have not yet matched humans in the ability to make sense of an unlimited variety of environmental contexts. We humans excel at quick and dirty sense making no matter what the situation. We’re not always correct in our conclusions, but we’re far more flexible than machines. Computers are constantly narrowing the gap, though, and as Watson showed, when a computer can grasp a cognitive context, it will usually outperform a human.

Part of the problem machines face when making sense of a new context is that the contextual information needs to be in a format that can be parsed by the computer. Again, this is an area where humans have a natural advantage. We’ve evolved to be very flexible in parsing environmental information to act as inputs for our sense making. But this flexibility has required a trade-off. We humans can go broad with our environmental parsing, but we can’t go very deep. We do a surface scan of our environment to pick up cues and then quickly pattern match against past experiences to make sense of our options. We don’t have the bandwidth to either gather more information or to compute this information. This is Herbert Simon’s Bounded Rationality.

But this is where Big Data comes in. Data is already native to computers, so parsing is not an issue. That handles the breadth issue. But the nature of data is also changing. The Internet of Things will generate a mind-numbing amount of environmental data. This “ambient” data has no schema or context to aid in sense making, especially when several different data sources are combined. It requires an evolutionary cognitive approach to separate potential signal from noise. Given the sheer volume of data involved, humans won’t be a match for this task. We can’t go deep into the data. And traditional computing lacks the flexibility required. But cognitive computing may be able to both handle the volume of environmental Big Data and make sense of it.

If artificial intelligence can crack the code on going both broad and deep into the coming storm of data, amazing things will certainly result from it.

Rethinking the Channelization of Advertising

Anybody who has been a regular reader of my column knows I very seldom write a column exclusively about search, even though it runs every Thursday under the masthead of “Search Insider.” I’ve been fortunate in that Ken Fadner and the editorial staff of Mediapost have never restricted my choice of subject matter. But the eclecticism of my column isn’t simply because I’m attention deficit. It’s because the subject that interests me most is the intersection between human behavior and technology. Although that often involves search, it also includes mobile, social, email and a number of other channels. I simply couldn’t write about what interests me if I were restricted to a single channel.

So why is Mediapost divided into the subject areas it is? Why, when you go to navigate the site, do you choose from email marketing, search marketing, mobile marketing, real time marketing, video marketing or social media marketing? Mediapost is structured this way because it’s a reflection of the industry it serves. Online marketing is divvied up in exactly the same way. We are an industry of channels.

The problem here is one of perspective – the industry perspective vs. the customer perspective. Let me use another example to make my point. One of the best things about cruising the Rhine is that there is a stunning medieval castle or fortress around every bend. From Rüdesheim to Koblenz (the Middle Rhine) there are over 40 of these fortifications sprinkled along 40 miles of the river. As picturesque as they are, they were not put there to enhance the views for generations of sightseers yet to come. They were put there because the river was one of the major thoroughfares of Europe and anyone who owned land along the river had the opportunity to make some money. They exacted tolls from travellers to guarantee safe passage.

While this build up along the Rhine probably made sense for the German land barons, it did nothing to make life easier for the poor souls who had to get up the Rhine to reach their eventual destination. Unfortunately, they had few alternatives. They were stuck with paying the tolls.

The advertising business is divided up into channels for exactly the same reason the Rhine has a castle every mile. Channels are there to show ownership of property. Advertising is a way to generate revenue from that ownership. It is a toll that customers have to pay. Mediapost is divided up the way it is because its readers are the modern-day equivalent of medieval land barons, and that’s the way they think. If it were published in 1224, its sections might have been labeled Pfalzgrafenstein, Sterrenberg and Reichenstein (three of the Rhine castles).

But if you’re like me, you’re not as interested in the castles as in the journey itself. And, in this way, I think we have built our industry in exactly the wrong way. We should all be more interested in the journey than in ownership of individual destinations along that journey. If you asked a traveller from Rüdesheim to Koblenz in 1205 which they would prefer – paying 40 separate tolls or paying one guide to safely escort them to their destination – I’m pretty sure they would choose the latter. That is what our industry should aspire to.

The reason our industry is channel-obsessed is that we had no option previously. In a pre-digital world, all we could do was own or control a channel. But technology is rapidly giving us an alternative. Today, it is possible for us to map a customer’s journey and act as a guide along the way. All that is required is a change of perspective.

I believe it’s time to consider it.

The Human Stories that Lie Within Big Data

If I wanted to impress upon you the fact that texting and driving is dangerous, I could tell you this:

In 2011, at least 23% of auto collisions involved cell phones. That’s 1.3 million crashes, in which 3,331 people were killed. Texting while driving makes it 23 times more likely that you’ll be in a car accident.

Or, I could tell you this:

In 2009, Ashley Zumbrunnen wanted to send her husband a message telling him “I love you, have a good day.” She was driving to work and as she was texting the message, she veered across the centerline into oncoming traffic. She overcorrected and lost control of her vehicle. The car flipped and Ashley broke her neck. She is now completely paralyzed.

After the accident, Zumbrunnen couldn’t sit up, dress herself or bathe. She was completely helpless. Now a divorced single mom, she struggles to look after her young daughter, who recently said to her, “I like to go play with your friends, because they have legs and can do things.”

The first example gave you a lot more information. But the second example probably had more impact. That’s because it’s a story.

We humans are built to respond to stories. Our brains can better grasp messages that are in a narrative arc. We do much less well with numbers. Numbers are an abstraction and so our brains struggle with numbers, especially big numbers.

One company, Monitor360, is bringing the power of narratives to the world of big data. I chatted with CEO Doug Randall recently about Monitor360’s use of narratives to make sense of Big Data.

“We all have filters through which we see the world. And those filters are formed by our experiences, by our values, by our viewpoints. Those are really narratives. Those are really stories that we tell ourselves.”

For example, I suspect the things that resonated with you about Ashley’s story were the reason for the text – telling her husband she loved him – the irony that the marriage eventually failed after her accident, and the pain she undoubtedly felt when her daughter said she likes playing with other moms who can still walk. None of those things adds anything to our knowledge about the incidence rate of texting-and-driving accidents, but all of them strike us at a deeply emotional level, because we can picture ourselves in Ashley’s situation. We empathize with her. And that’s what a story is: a vehicle to help us understand the experiences of another.

Monitor360 uses narratives to tap into these empathetic hooks that lie in the mountain of information being generated by things like social media. It goes beyond abstract data to try to identify our beliefs and values. And then it uses narratives to help us make sense of our market. Monitor360 does this with a unique combination of humans and machines.

“A computer can collect huge amounts of data, and the computer can even sort that data. But ‘sense making’ is still very, very difficult for computers to do. So human beings go through that information, synthesize that information and pull out what the underlying narrative is.”

Monitor360 detects common stories in the noisy buzz of Big Data. In the stories we tell, we indicate what we care about.

“This is what’s so wonderful about Big Data. The Data actually tells us, by volume, what’s interesting. We’re taking what are the most often talked about subjects…the data is actually telling us what those subjects are. We then go in and determine what the underlying belief system in that is.”

Monitor360’s realization that it’s the narratives that we care about is an interesting approach to Big Data. It’s also encouraging to know that they’re not trying to eliminate human judgment from the equation. Empathy is still something we can trump computers at.

At least for now.

Want to Be More Strategic? Stand Up!

One of the things that always frustrated me in my professional experience was my difficulty in switching from tactical to strategic thinking. For many years, I served on a board that was responsible for the strategic direction of an organization. A friend of mine, Andy Freed, served as an advisor to the board. He constantly lectured us on the difference between strategy and tactics:

“Strategy is your job. Tactics are mine. Stick to your job and I’ll stick to mine.”

Despite this constant reminder, our discussions always seemed to quickly spiral down to the tactical level. We all caught ourselves doing it. It seemed that as soon as we started thinking about what needed to be done and why, we automatically shifted gears and thought about how it should be done.

A recent study may have found the problem. We were sitting down. We should have stood up. Better yet, we should have taken the elevator to the top of the building (we actually did do this at one board retreat in Scottsdale, Arizona). Two researchers at the University of Toronto (home, I should point out, of what was for many years the tallest free-standing structure in the world – the CN Tower), Pankaj Aggarwal and Min Zhao, found that a subject’s physical situation impacted how strategically they thought. When subjects were physically higher up, say standing on a tall stool, they were more likely to look at the “big picture.”

Our physical context has more than a little impact on how we think. It’s a phenomenon called Mental Construal. And it’s not just restricted to how strategic our thinking is. It can impact things like social judgment as well. In a 2006 paper, University of Michigan professor Norbert Schwarz gave some examples that fall under the category called “situated concepts.” For example, the mental images you retrieve when I say “chair” might be different if we’re standing in a living room rather than an airplane or movie theatre. Another example, which unfortunately speaks to a darker side of human nature, is how you would respond to the face of a young African American when shown in the context of a church scene versus the context of a street corner scene.

Schwarz also talks about levels of construal. We’re more successful staying at strategic levels when our planning is trouble-free. The minute we hit a problem, we tend to revert to finer-grained tactical thinking. Again, in my board experience, the minute we started hitting problems, we immediately tried to solve them, which effectively derailed any strategic discussion.

In his book Creativity: Flow and the Psychology of Discovery and Invention, Mihaly Csikszentmihalyi found that physical contexts can also impact creativity. Physicist Freeman Dyson, for example, said walking was essential to his creative process:

“Again, I never went to a class that (Richard) Feynman taught. I never had any official connection with him at all, in fact. But we went for walks. Most of the time that I spent with him was actually walking, like the old style of philosophers who used to walk around under the cloisters.”

In another study, subjects carried pagers and were signaled at random times of the day to rate how creative they felt at that moment. It turned out the highest levels of creativity came while they were walking, driving or swimming. Perhaps it was the physical stimulation, but it may have also been mental construal at work. Perhaps physical movement primed the brain for mental movement.

So, if you need to be strategic, find the highest vantage point possible, with room to walk around, preferably with the smartest person you know.

Social Media: Matching Maturity to the Right Business Model

Last week, I talked about the maturity continuum of social media. This week, I’d like to recap and look at the business model implications of each phase.

Phase One – It’s a Fad. Here, we use a new social media tool simply because it is new. This is a classic early adopter model. The business goal here is to drive adoption as fast and far as possible, hoping that acceptance will go viral. There is no revenue opportunity at this point, as you don’t want to do anything to slow adoption. It’s all about getting it into as many hands as possible.

Phase Two – It’s a Statement. You use the tool because it says something about who you are. Revenue opportunities are still limited, but this is the time for cross-promotion with brands that make a similar statement. Messaging and branding become essential at this point. You have to carve out a unique niche for yourself and hope that it resonates with segments of your market. The goal is to create an emotional connection with your audience to help shore up loyalty in the next phase. This is the time to start laying the foundations of a user community.

Phase Three – It’s a Tool. You use it because it offers the best functionality for a particular task. Here, things have to get more practical. This is where user testing and new feature development have to move as quickly as possible. Revenue opportunities at this point are possible, depending on the usage profile of your app. If there’s a high frequency of usage, advertising sponsorship is a possibility. But be aware that this will bring inevitable pushback from your users, especially if there has been no advertising up to this point. It shakes the loyalty of the “Statement” users, who feel you’re selling out. The functionality will have to be rock solid to prevent attrition of your user base during this phase. Essentially, it will have to be good enough to “lock out” the competition. But there’s another goal here as well. Introducing new functionality allows you to move beyond being a one-trick pony. This is where you have to start moving from being a tool to the next phase…

Phase Four – It’s a Platform. If you’ve successfully transitioned to being a social media platform, you should have the opportunity to finally turn a profit. The stability of the revenue model will be wholly dependent on how high you’ve been able to raise the cost of switching. The more “sticky” your platform is, the more stable your revenue will be. But, be aware that using advertising as your revenue channel is fraught with issues in the world of social media. Unlike search, where we are used to dealing with a crystal clear indication of consumer interest, social media usage seldom comes tied to clear buyer intent. You have to worry about modality and social norms, along with the erosion of your “cool” factor.

In the last two phases, the best revenue opportunities should be directly tied to functionality and intent. The closer you can align your advertising message to the intent of the users “in the moment,” the more stable your revenue model will be. In fact, if you can introduce tools that are focused on users when they are in social modes where commercial messaging is appropriate, you will find revenue opportunities dropping into your lap. For example, if people use LinkedIn to crowdsource opinions on B2B purchases, you have a natural monetization opportunity. If they’re using your app to post pictures of their cat playing a xylophone, you’re going to find it much harder to make a buck. Not impossible, but pretty damned difficult.

The Maturity Continuum of Social Media

Social channels will come and go. Why are we still surprised by this? Just last week, Catharine Taylor talked about the ennui that’s threatening to silence Twitter. Frankly, the only thing surprising about this is that Twitter has had as long a run as it has. Let’s face it: if ever there was a social media one-trick pony, it’s Twitter.

The fact is, if you are a player in the social media space, you have to accept that there’s a unique maturity evolution in usage patterns. It’s a much more fickle audience than you would find in something like content publishing or search. The channels we use to express ourselves socially are subject to an extraordinary amount of irrational behavior. We project onto them our own beliefs about who we are and how we fit into our social networks. This leaves them vulnerable to sudden shifts in usage, simply because large chunks of the audience may suddenly change their minds about what is socially acceptable. And this is what’s currently happening to Twitter.

This is compounded by the fact that we’re talking about technology here, so where we perceive ourselves to be on the technology acceptance curve will have an impact on the social channels we find acceptable to us. If we think we’re early adopters, we’ll be quicker to move to whatever is new. Not only this, we’ll be unduly influenced by what we see other early adopters doing.

The Maturity Continuum for Social is as follows:

It’s a Fad – You use it because everyone else (in your circle of influence) is doing it. Early adopters are particularly susceptible to this effect. They’ll be the ones to test out new channels and tools, simply because they are new. But that momentum doesn’t last long. New entrants will also have to prove that they have at least a certain amount of functionality and, more importantly, something unique that users can identify with. If this is the case, they will transition to the second phase:

It’s a Statement – You use it because it makes a statement about who you are. And with technology, it’s usually about how cutting edge you are. This makes it particularly prone to abandonment. But there are other factors at play here. Is it all business (LinkedIn) or all fun (Snapchat)? A small percentage of the user base will stick in this phase, becoming brand loyalists. The majority, however, will move on to the third phase:

It’s a Tool – You use it because it’s the best tool for the job. Here, functionality trumps all. It’s in these last two phases that rationality finally takes hold. The sheen of the BSOS (Bright Shiny Object Syndrome) has faded, and we’ll only continue using it if it provides better functionality for the task at hand than any of the other alternatives. The problem here is that functional supremacy is a never-ending arms race. Sooner or later, something better will come along (if it successfully navigates through the first two phases). This is typically the end of the road for most social media one-trick ponies, and this is what is currently staring Twitter in the face.

It’s a Platform – You use it because the landscape is familiar. Here you rely on habitual “stickiness” with users and something called UI cognitive lock-in. Essentially, this is an online real estate play. If you’ve had a long run as a single-purpose tool and have developed a large user base, you have to expand that into a familiar landscape before a new contender unseats you as the tool of choice. This is what Facebook and LinkedIn are currently trying to do. And, to survive, it’s what Twitter must do as well. By assembling a number of tools, you increase the cost of switching to the point where it doesn’t make sense for most users.

Each of these phases has different usage profiles, which directly impact their respective business models. More on that next week.