The Trouble with Trying to Stand on The Shoulders of Giants

It has long been thought that academia provided a refuge from the sordid world of business. But when a Nobel Prize-winning academic says that if he had to do it all over again, he wouldn’t publish, you know something is rotten in the state of Denmark. Laureate Peter Higgs (of Higgs boson fame) told the Guardian:

“Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.”

The whole point of publishing is to share knowledge. But academic publishers don’t seem to have received that memo. For the past two decades, publishers like Reed Elsevier, John Wiley and Springer, who got in on a good gig early, have propped up ridiculous profit margins by slowly squeezing non-profit publishers out of the picture. In the process, they’ve turned academic publishing into a hamster wheel that stresses quantity over quality. Most academic research is rushed out to a limited audience designated as the ones who “count,” while the rest of us have to pony up exorbitant sums to access an article that lies on the far side of a barricaded pay wall. Academic publishing is one of the few bastions that has managed to resist the digital tide of declining transaction costs.

I love academic research. I am a big believer in scientific inquiry. I am an avid reader of blogs like Science Daily and Big Think. But 9 times out of 10 (or 99 times out of 100), when you actually read an academic paper (if you can get your hands on one), it’s hopelessly mired in academic jargon and the actual findings fall disappointingly short of remarkable. What should be a reflection of the best of who we are has turned into a sordid little business run by shortsighted people who are only in it for a quick buck. If one of the pre-eminent physicists of our generation would rather become a used car salesman – or, worse yet, a marketer – than follow his passion, we know something is seriously wrong.

Google tried to remain true to the spirit of academic publishing when it introduced Google Scholar. I use Scholar a lot, and have found it very useful for accessing landmark papers from a few decades back that have managed to seep into the public domain. But if you use it to try to access more recent papers, you typically run headlong into one of the aforementioned pay walls. I tried to see how academics feel about Google Scholar and was amazed to find this quote from the McKinney Engineering Library blog at the University of Texas:

“Google Scholar has an ambiguous status in the library and research world. Obviously, it is powered by the Google, which is kind of a dirty word in academic research. Also, the fact that it is free throws further suspicion on its quality, particularly when libraries pay lots of money for database access.”

WTF? Forget for a moment that Google is referred to as “the Google” – which I hope is a joke aimed at fellow Texan George W. Bush. Since when should knowledge be judged by the size of its price tag? Stewart Brand identified the disconnect 30 years ago when he said,

“On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.”

The rest of the world seems to have moved in the right direction. What the hell is the problem with academia?

If you’re not mad about this, you should be. The vast majority of academic research is funded directly by your tax dollars. Academic publishers don’t pay anyone for content. They have done nothing but agree to publish, which, in today’s world, costs virtually nothing. But somehow they still feel entitled to charge $50 to access an electronic version of an article. Reasonable profits are the right of an honest businessperson, but academic publishing doesn’t even come close to passing the “smell-test.”

One of the big academic publishers, Macmillan, is at least considering loosening the drawstrings a touch. It is lowering the drawbridge of its pay wall just a smidge by offering the ability to read and annotate articles online. But academic publishing still has a long way to go before it approaches the accessibility that marks almost every other form of publishing in the digital world. So far, for most researchers, the draw of being published in a prestigious journal has outweighed the idealism of openly publishing their work for all to see on a digital platform.

I suspect this is an area just waiting for disruption. I hope that the academics who are creating the content agree. It seems that academic publishing has been hiding in a previously overlooked nook that has escaped the relentless liberation of information driven by technology. But if Macmillan is feeling threatened enough to lower its defenses, however slightly, I suspect the tide is beginning to turn. I, for one, think that day can’t come soon enough.

Publishers as Matchmakers

I’m a content creator. And, in this particular case, I’ve chosen MediaPost as the distribution point for that content. If we’re exploring the role of publishing in the future, the important question to ask here is: why? After all, I could publish this post to my blog in a couple of clicks. And, thanks to my blogging software, it will automatically notify my followers that there’s a new post. So, what value does MediaPost add to that?

Again, we come back to signal and noise. I generate content primarily to reach a wide and interested audience. As a digital marketing consultant, I have a financial incentive to grow my personal brand, but to be honest, my reward is probably more tied up in the concepts of social capital and my own ego. I publish because I want to be heard. And I want to be heard by people who find my content valuable. I have almost 2,000 followers between my blog, Twitter feed and other social networks, but those people already know me. Hopefully, MediaPost will introduce me to new people who don’t know me. I want MediaPost to be my matchmaker.

Now, the second question to ask is: why are you reading this post on MediaPost? While I don’t presume to know your personal intentions, I can take a pretty good shot at generalizing – you are a MediaPost reader because you find the collection of content they publish interesting. It’s certainly not the only place online you can find content about marketing and media. And, if they chose to, any of the MediaPost writers could easily publish their content on their own blogs. You have chosen MediaPost because it acts as both a convenient access point and an effective filter.

This connection between content and audience is where publishers like MediaPost add value. Because you trust MediaPost to deliver content you find interesting, it passes the first level of your filtering threshold. I, as a content creator, get the benefit of MediaPost’s halo effect. The odds of my connecting with new readers are better under the MediaPost banner than if you’re introduced to me through a random, unfiltered tweet or an alert in your newsfeed. And here we have a potential clue to the future of revenue generation for publishers. If publishing is potentially a matchmaking service, perhaps we need to look at other matchmakers to see how they generate revenue.

In the traditional publishing world, it would be blasphemous to suggest that content creators should be charged for access to an audience. After all, we used to get paid to generate content by the publishers. But that was then and this is now. Understand, I’m not talking about native advertising or advertorials here. In fact, it would be the publisher’s responsibility to filter out unacceptably commercial editorials. I’m talking about creating an audience market for true content generators. In this day of personal branding, audiences have value. The better the audience, the higher the value. It should be worth something to me to reach new audiences. Publishers, in turn, act as the reader’s filter, ensuring the content they provide matches the user’s interest. Again, if the match is good enough, that has value for the reader.

Of course, the problem here is quantifying value on both sides of the relationship. I would imagine that both the content creators and the content consumers reading my suggestions are probably saying, “There is no way I would pay for that!” And, in the current state of online publishing, I wouldn’t either – as a creator or a consumer. The value isn’t there because the match isn’t strong enough. But if publishers focused on building the best possible audience and on presenting the best possible content, it might be a different story. More importantly, it would be a revenue model that realigns publishers with their audience, rather than pitting them against it.

From the reader’s perspective, if a publisher was acting as your own private information filter, and not as a platform for poorly targeted advertising, you would probably be more willing to indicate your preferences and share information. If the publisher was discriminating enough, you might even be willing to allow them to introduce very carefully targeted offers from advertisers, filtered down to only the offers you’re highly likely to be interested in. This provides three potential revenue sources for the publisher: content creators looking for an audience, readers looking for an effective filtering service, and advertisers looking for highly targeted introductions to prospects. In the last case, the revenue should be split with the prospect, with the publisher taking a percentage for handling the introduction and the rest going to the prospect in return for agreeing to accept the advertiser’s introduction.

While radically different from today’s model, what I’ve proposed is not a new idea. It was first introduced in 1999 in the book Net Worth, by John Hagel and Marc Singer. Granted, my take is less involved than theirs, but the basic idea is the same – a shift from relentlessly battering prospects with increasingly overt advertising messages to carefully filtering and matching interests with appropriate content. And, when you think about it, the matching of intent and content is what Google has been doing for two decades.

Disruptive innovations tend to change the ways that value is determined. They take previous areas of scarcity and change them to ones of abundance. They upend markets and alter existing balances between forces. When the markets shift to this extent, trying to stick to the old paradigm guarantees failure. The challenge is that there is no new paradigm to follow. Experimentation is the only option. And to experiment you have to be willing to explore the boundaries. The answer won’t be found in the old, familiar territory.

Same Conversation. Different Location.

Note: This is my first OnlineSpin column for MediaPost.

First of all, let’s get the pleasantries out of the way. I’m Gord. I’m new to Online Spin, but not to MediaPost. If you don’t know me, I have been writing over on the Search Insider side of the house for the past 10 and a half years.

Nice to meet you.

Now, on to business. Just before the switch, I took online publishing to task for sacrificing its ability to communicate for the sake of advertising revenue. The user experience on most online publications is so littered with intrusive ads and misleading clickbait that it becomes almost impossible to actually read the content. My point, which is probably obvious, is that the short-term quest for revenue is jeopardizing the long-term health of the business model.

Among the comments posted were a few asking for guidance rather than just criticism. Fair enough. It’s much easier to criticize than it is to create. So, where does the future of publishing lie?

The problem, as in so many other cases, is that technology has upset the proverbial publishing apple cart. Publishing as an industry began because of the high transactional cost of publicizing information. Information began to be stacked vertically, because that was the only cost-effective way to do it. These vertical stacks of information attracted audiences because they were the only place people could get this information. Limited access points created large and loyal audiences, which in turn allowed ad-supported revenue models. Because transactional costs were high, information was scarce. Scarcity enabled profit.

Today, technology is, one by one, leveling the vertical stacks of information. The transactional costs of publishing have dropped to essentially zero. Yes, I’m publishing this post through a “publisher,” but it would be just as easy for me to publish to my own blog. And while MediaPost’s audience is probably larger than my own blog’s, the gap between the two grows smaller every day. The lower transactional costs of publishing have erased the scarcity of information.

This disruptive change has flipped the publishing model on its head. The problem with information used to be that we had too little access. The problem today is that we have too much. What we need now are filters. We need a way to separate the signal from the ever-increasing noise.

Now, think of what this reversal does to publishers’ revenue models. When the problem was access, we valued any source of information that provided it. We were loyal to it. We spent a significant amount of time with it. But if the problem becomes one of filtering, our loyalty level drops significantly. We just want to get to the information that is most interesting to us as quickly and efficiently as possible. If we have any allegiance to publishers at all, it is as a content filter. This is exactly why publishing empires are fragmenting into more and more specific vertical niches. We don’t need access points – we need effective filters.

Now, back to my original point. If the only way to make revenue from publishing is to introduce more noise – in the form of intrusive advertising – we quickly see the problem. We want publishers to eliminate extraneous noise and they add more. And to compound the problem, they intentionally blur the line between signal and noise in an attempt to generate more click-throughs. And, as Joe Marchese rightly points out, this vicious cycle is exacerbated by the bogus metric of “impressions” that publishers seem to have latched on to. The reader’s intent and the publisher’s intent are on a collision course with each other.

Given this, is there a way to save publishing? Perhaps, but it will be in a form much different from any we currently see. Publishing’s role may be in serving as both a filter and a matchmaker. More to come next Tuesday.

The Sorry State of Online Publishing

Dynamic tension can be a good thing. There are plenty of examples of when this is so. Online publishing isn’t one of them. The plunging transaction costs of publishing and the increasingly desperate attempts to shore up some sort of sustainable revenue model are creating a tug-of-war that’s threatening to tear apart the one person this whole sorry mess revolves around – the reader. Somebody had better get their act together soon, because I’m one reader who’s getting sick of it.

Trying to read an article on most online publications is like trying to tiptoe through a cognitive minefield. Publishers have squeezed every possible advertising opportunity onto the page, and in doing so have sacrificed credibility, cohesiveness and clarity. The job of publishing is communication, but these publishers seem willing to sacrifice communication for revenue. Methinks if you have to attack your own business model to make a profit, you should be taking a long, hard look at said model.

Either Fish or Cut Click Bait

The problem has grown so pervasive that even academia is piling on. In the past few months, a number of studies have looked at the dismal state of online publishing.

In the quest for page views, publishers have mastered the trick of pushing our subconscious BSO (Bright Shiny Object) buttons with clickbait. Clickbait is essentially brain porn – headlines, often misleading, that you can’t resist clicking on. The theory: more page views means more advertising opportunities. The problem is that clickbait essentially derails the mind from its predetermined focus. And worse, clickbait often distracts the brain with a misleading headline that the subsequent article fails to deliver on. As Jon Stewart recently told New York Magazine, “It’s like carnival barkers, and they all sit out there and go, ‘Come on in here and see a three-legged man!’ So you walk in and it’s a guy with a crutch.”

A recent study from The Journal of Experimental Psychology showed that misleading headlines and something called “false balance” – where publishers give equal airtime to sources with very different levels of credibility – can negatively impact the reader’s ability to remember the story, create a cohesive understanding of the story and cognitively process the information. In other words, the publisher’s desperate desire to grab eyeballs gets in the way of their ability to communicate effectively.

BuzzFeed Editor-in-Chief Ben Smith has publicly gone on the record about why he doesn’t use clickbait headlines: “Here is a trade secret I’d decided a few years ago we’d be better off not revealing — clickbait stopped working around 2009.” He references Facebook engineer Khalid El-Arini in the post, saying “readers don’t want to be tricked by headlines; instead, they want to be informed by them.”

Now You Read Me, Now You Don’t

If you ever wanted to test your resolve, try getting to the end of an online article. What content there is gets shoehorned into a format littered with ads and clickbait of every description. Many publishers even try to squeeze revenue from the content itself by using Text Enhance, an ad-serving platform that hyperlinks keywords in the copy and shows ads if your cursor strays anywhere near these links. Readers like me often use the cursor both as a place marker and as a quick way to vet the sources of embedded links. Text Enhance makes reading this way an incredibly frustrating experience, continually popping up poorly targeted ads while you tiptoe through the advertising landmines to piece together what the writer was originally trying to say. It turns reading content into a virtual game of “Whac-a-Mole.”

Of course, this is assuming you’ve made it past the page take-over and auto-play video ads that litter the “mind-field” between you and the content you want to access on a site like Forbes or The Atlantic. These interruptions in our intent create a negative mental framework that is compounded by having to weave through increasingly garish ad formats in order to piece together the content we’re trying to access.

A new study from Microsoft and Northwestern University shows that aggressive and annoying advertising may prop up short-term revenues, but at a long-term price that publishers should think twice about paying: “The practice of running annoying ads can cost more money than it earns, as people are more likely to abandon sites on which they are present. In addition, in the presence of annoying ads, people were less accurate in remembering what they had read. None of these effects on users is desirable from the publisher’s perspective.”

Again, we have this recurring theme about revenue getting in the way of user experience. This is a conflict from which there can be no long-term benefit. When you frustrate users, you slowly kill your revenue source. You engage in a vicious cycle from which there is no escape.

I understand that online publishers are desperate. I get that. They should be. I suspect the ad-supported business platform they’re trying to prop up is hopelessly damaged. Another will emerge to take its place. But the more they frustrate us, the faster that will happen.


Five Years Later – An Answer to Lance’s Question (kind of)

It never ceases to amaze me how writing can take you down the most unexpected paths, if you let it. Over five years ago now, I wrote a post called “Chasing Digital Fluff – Who Cares about What’s Hot?” It was a rant, aimed at marketers’ preoccupation with whatever the latest bright shiny object was. At the time, it was social. My point was that true loyalty needs stable habits to emerge. If you’re constantly chasing the latest thing, your audience will be in a constant state of churn. You’d be practicing “drive-by” marketing. If you want to find stability, target what your audience finds useful.

This post caused my friend Lance Loveday to ask a very valid question: “What about entertainment?” Do we develop loyalty to things that are entertaining? So I started a series of posts on the Psychology of Entertainment. What types of things do we find entertaining? How do we react to stories, or humor, or violence? And how do audiences build around entertainment? As I explored the research on the topic, I came to the conclusion that entertainment is a by-product of several human needs – the need to bond socially, the need to be special, our appreciation for others whom we believe to be special, a quest for social status and artificially stimulated tweaks to our oldest instincts – to survive and to procreate. In other words, after a long and exhausting journey, I concluded that entertainment lives in our phenotype, not our genotype. Entertainment serves no direct evolutionary purpose, but it lives in the shadows of many things that do.

So, what does this mean for stability of an audience for entertainment? Here, there is good news, and bad news. The good news is that the raw elements of entertainment haven’t really changed that much in the last several thousand years. We can still be entertained by a story that the ancient Romans might have told. Shakespeare still plays well to a modern audience. Dickens is my favorite author and it’s been 144 years since his last novel was published. We haven’t lost our evolved tastes for the basic building blocks of entertainment. But, on the bad news side, we do have a pretty fickle history when it comes to the platforms we use to consume our entertainment.

This introduces a conundrum for the marketer. Typically, our marketing channels are linked to platforms, not content. And technology has made this an increasingly difficult challenge. While we may connect to, and develop a loyalty for, specific entertainment content, it’s hard for marketers to know which platform we may consume that content on. Take Dickens, for example. Even if you, the marketer, know there’s a high likelihood that I may enjoy something by Dickens in the next year, you won’t know whether I’ll read a book on my iPad, pick up an actual book or watch a movie on any one of several screens. I’m loyal to Dickens, but I’m agnostic as to which platform I use to connect with his work. As long as marketing is tied to entertainment channels, and not entertainment content, we are restricted to targeting our audience in an ad hoc and transitory manner. This is one reason why brands have rushed to use product placement and other types of embedded advertising, where the message is set free from the fickleness of platform delivery challenges. If you happen to be a fan of American Idol, you’re going to see the Coke and Ford brands displayed prominently whether you watch on TV, your laptop, your tablet or your smartphone.

It’s interesting to reflect on the evolution of electronic media advertising and how it’s come full circle in this one regard. In the beginning, brands sponsored specific shows. Advertising messages were embedded in the content. Soon, however, networks, which controlled the only consumption choice available, realized it was far more profitable to decouple advertising from the content and run it in freestanding blocks during breaks in their programming. This decoupling was fine as long as there was no fragmentation in the channels available to consume the content, but obviously this is no longer the case. We now watch TV on our schedule, at our convenience, through the device of our choice. Content has been decoupled from the platform, leaving the owners of those platforms scrambling to evolve their revenue models.

So – we’re back to the beginning. If we want to stabilize our audience to allow for longer-term relationship building, what are our options? Obviously, entertainment offers some significant challenges in this regard, due mainly to the fragmentation of the platforms we use to consume that content. If we use usefulness as a measure, the main factors in determining loyalty are frequency and stability. If you provide a platform that becomes a habit, as Google has, then you’ll have a fairly stable audience. It won’t destabilize until there is a significant enough resetting of user expectations, forcing the audience to abandon its habits (always very tough to do) and start searching for another useful tool that better matches the reset expectations. If this happens, you’ll be continually following your audience through multiple technology adoption curves. Still, it seems that usefulness offers a better shot at a stable audience than entertainment does.

But there’s still one factor we haven’t explored – what part does social connection play? Obviously, this is a huge question that the revenue models of Facebook, Twitter, Snapchat and others will depend on. So, with entertainment and usefulness explored ad nauseam in this series of posts, I’ll start tracking down the Psychology of Social Connection.

Letting the Foxes into Journalism’s Hen (Hedgehog) House

First published March 27, 2014 in Mediapost’s Search Insider

I am rooting for Nate Silver and FiveThirtyEight.com, his latest attempt to introduce a little data-driven veracity into the murky and anecdotal world of journalism. But I may be one of the few, at least if we take the current backlash as a non-scientific, non-quantitative sample:

I have long been a fan of Nate Silver, but so far I don’t think this is working. – Tyler Cowen, Marginal Revolution

Nate Silver’s new venture may become yet another outlet for misinformation when it comes to the issue of human-caused climate change. – Michael Mann, director of the Earth System Science Center at Pennsylvania State University

Here’s hoping that Nate Silver and company up their game, soon. – Paul Krugman, NY Times

Krugman also states:

You can’t be an effective fox just by letting the data speak for itself — because it never does. You use data to inform your analysis, you let it tell you that your pet hypothesis is wrong, but data are never a substitute for hard thinking.

Now, Nate Silver doesn’t disagree with this. In fact, he says pretty much the same thing in his book, The Signal and the Noise:

The numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning.

But he goes on,

Like Caesar, we may construe them in self-serving ways that are detached from their objective reality.

And it’s this construal that Silver is hoping to nip in the bud with FiveThirtyEight. In essence, he wants to do it by being a Fox, to borrow from Isaiah Berlin’s analogy.

‘The fox knows many things, but the hedgehog knows one big thing.’ We take a pluralistic approach and we hope to contribute to your understanding of the news in a variety of ways.

Silver thinks the media’s preoccupation with punditry is a dangerous thing. Pundits, whether they’re coming from the right or left, are Hedgehogs. They get paid for their expertise on “one big thing.” And the more controversial their stand, the more attention they get. This can lead to a dangerous spiral, as researcher Philip Tetlock found out:

What experts think matters far less than how they think. If we want realistic odds on what will happen next, coupled to a willingness to admit mistakes, we are better off turning to experts who embody the intellectual traits of Isaiah Berlin’s prototypical fox—those who “know many little things,” draw from an eclectic array of traditions, and accept ambiguity and contradiction as inevitable features of life.

Tetlock was researching how expertise correlated with the ability to make good predictions. What he found was actually an inverse relationship: the higher the degree of expertise, the more likely the person in question was a hedgehog. Media pundits are usually extreme versions of hedgehogs, who not only have one worldview, but also love to talk about it. Nate Silver believes that to get an objective view of world events, you need to be a fox first; but second, you should be a fox that’s good at sifting through data:

Conventional news organizations on the whole are lacking in data journalism skills, in my view. Some of this is a matter of self-selection. Students who enter college with the intent to major in journalism or communications have above-average test scores in reading and writing, but below-average scores in mathematics.

So, all this makes sense. The problem with Silver’s approach is that journalism is the way it is because that’s the way humans want it. While I applaud Silver’s determination to change it, he may be trying to push water uphill. Pundits exist not just because the media keeps pushing them in front of us – they exist because we keep listening. Humans like opinions and anecdotes. We’re not hardwired to process data and rationalize objectively. We connect with stories, and we’re drawn to decisive opinion leaders. Silver will have to find some middle ground here, and that seems to be where the problems arise. The minute writers add commentary to data, they have to impose an ideological viewpoint. It’s impossible not to. And when you do that, you introduce a degree of abstraction.

The backlash against FiveThirtyEight.com generally falls into two camps: Foxes like Silver who have no problem with the approach but disagree with the specific data put forward, and Hedgehogs who just don’t like the entire concept. The first camp may come onside as Silver and his team work out the inevitable hiccups in their approach. The second – which, it should be noted, has a large number of pundits in its midst – will never become fans of Silver and his foxlike approach.

In the end though, it really doesn’t matter what columnists and journalists think. It’s up to the consumers of news media. We’ll decide what we like better – hedgehogs or foxes.

Losing My Google Glass Virginity

Originally published October 17, 2013 in Mediapost’s Search Insider

Rob, I took your advice.

A few columns back, when I said Google Glass might not be ready for mass adoption, fellow Search Insider Rob Garner gave me this advice: “Don’t knock it until you try it.” So, when a fellow presenter at a conference last week brought along his Glass and offered me a chance to try them (or “it”? Does anyone else find Google’s messing around with plural forms confusing and irritating?), I took him up on it. To say I jumped at it may be overstating the case – let’s just say I enthusiastically ambled to it.

I get Google Glass. I truly do. To be honest, the actual experience of using them came up a little short of my expectations, but not much. It’s impressive technology.

But here’s the problem. I’m a classic early adopter. I always look at what things will be, overlooking the limitations of what currently “is.” I can see the dots of potential extending toward a horizon of unlimited possibility, and don’t sweat the fact that those dots still have to be connected.

On that level, Google Glass is tremendously exciting, for two reasons that I’ll get to in a second. For many technologies, I’ll even connect a few dots myself, willing to trade off pain for gain. That’s what early adopters do. But not everyone is an early adopter. Even given my proclivity for nerdiness, I felt a bit like a jerk standing in a hotel lobby, wearing Glass, staring into space, my hand cupped over the built-in mike, repeating instructions until Glass understood me. I learned there’s a new label for this; for a few minutes I became a “Glasshole.”

Sorry Rob, I still can’t see the mainstream going down this road in the near future.

But there are two massive reasons why I’m still tremendously bullish on wearable technology as a concept. One, it leverages the importance of use case in a way no previous technology has ever done. And two, it has the potential to overcome what I’ll call “rational lag time.”

The importance of use case in technology can be summed up in one word: iPad. There is absolutely no technological reason why tablets, and iPads in particular, should be as popular as they are. There is nothing in an iPad that did not exist in another form before. It’s a big iPhone, without the phone. The magic of an iPad lies in the fact that it’s a brilliant compromise: the functionality of a smartphone in a form factor that makes it just a little bit more user-friendly. And because of that, it introduced a new use case and became the “lounge” device. Unlike a smartphone, where size limits the user experience in some critical ways (primarily in input and output), tablets offer acceptable functionality in a more enjoyable form. And that is why almost 120 million tablets were sold last year, a number projected (by Gartner) to triple by 2016.

The use case of wearable technology still needs to be refined by the market, but the potential to create an addictive user experience is exceptional. Even with Glass’ current quirks, it’s a very cool interface. Use case alone leads me to think the recent $19 billion by 2018 estimate of the size of the wearable technology market is, if anything, a bit on the conservative side.

But it’s the “rational lag time” factor that truly makes wearable technology a game changer. Currently, all our connected technologies can’t keep up with our brains. When we decide to do something, our brains register subconscious activity in about 100 milliseconds, or about one tenth of a second. However, it takes another 500 milliseconds (half a second) before our conscious brain catches up and we become aware of our decision to act. In more complex actions, a further lag happens when we rationalize our decision and think through our possible alternatives. Finally, there’s the action lag, where we have to physically do something to act on our intention. At each stage, our brains can shut down impulses that seem to require too much effort. Humans are, neurologically speaking, rather lazy (or energy-efficient, depending on how you look at it).

So we have a sequence of potential lags before we act on our intent: Unconscious Stimulation > Conscious Awareness > Rational Deliberation > Possible Action. Our current interactions with technology live at the end of this chain. Even if we have a smartphone in our pocket, it takes several seconds before we’re actively engaging with it. While that might not seem like much, when the brain measures action in split seconds, that’s an eternity of time.
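To make the chain concrete, here is a toy model of the cumulative lag along it. Only the first two figures (100 ms, then a further 500 ms) come from the column; the deliberation and action figures are purely illustrative assumptions, not measured values.

```python
# Illustrative model of the "rational lag" chain:
# Unconscious Stimulation > Conscious Awareness > Rational Deliberation > Possible Action.
# The last two durations are placeholder assumptions (e.g., pulling out a phone).
STAGES_MS = [
    ("unconscious stimulation", 100),
    ("conscious awareness", 500),
    ("rational deliberation", 1500),  # assumed
    ("physical action", 2000),        # assumed
]

def cumulative_lag(stages):
    """Return the running total of lag, in milliseconds, at each stage."""
    total = 0
    timeline = []
    for name, ms in stages:
        total += ms
        timeline.append((name, total))
    return timeline

for name, total in cumulative_lag(STAGES_MS):
    print(f"{name}: {total} ms after intent forms")
```

Even with generous assumptions, the technology we reach for lives seconds behind the intent, which is the "eternity" the column describes.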

But technology has the potential to work backward along this chain. Let’s move just one step back, to rational deliberation. If we had an “always on” link where we could engage in less than one second, we could utilize technology to help us deliberate. We still have to go through the messiness of framing a request and interpreting results, but it’s a quantum step forward from where we currently are.

The greatest potential (and the greatest fear) lies one step further back – at conscious awareness. Now we’re moving from wearable technology to implantable technology. Imagine if technology could be activated at the speed of conscious thought, so the unconscious stimulation is detected and parsed and by the time our conscious brain kicks into gear, relevant information and potential actions are already gathered and waiting for us. At this point, any artifice of the interface is gone, and technology has eliminated the rational lag. This is the beginning of Kurzweil’s Singularity: the destination on a path that devices like Google Glass are starting down.

As I said, I like to look at the dots. Someone else can worry about how to connect them.

What is this “Online” You Speak Of?

First published September 12, 2013 in Mediapost’s Search Insider.

I was in an airport yesterday, and I was eavesdropping. That’s what I do in airports. It’s much more entertaining than watching the monitors. In this particular case, I was listening to a conversation between a well-dressed elderly gentleman, probably in his late 80s, and what appeared to be his son. They were waiting for pre-boarding. The son was making that awkward small talk — you know, the conversation you have when you don’t really know your parent well enough anymore to be able to talk about what they’re really interested in, but you still feel the need to fill the silence. In this case, the son was talking to his dad about a magazine: “I used to get a copy every time I flew to London,” he said. “But they don’t publish it anymore. It’s all done online.”

The father, who had the look and appearance of a retired university professor, looked at his son quizzically for a few moments. It was as if the son had suddenly switched from English to Swahili midstream in his conversation.

“What’s ‘online’?”

“Online — on the Internet. It’s published electronically. There’s no print version anymore.”

The father grappled with the impact of this statement, then shook his head slowly and sadly. “That’s very sad. I suppose the mail service’s days are numbered too.”

The son replied, “Oh yes, I’m sure. No one mails things anymore.”

“But what will I do? I still buy things from catalogs.” It was as if the entire weight of the last two-and-a-half decades had suddenly settled on the frail gentleman’s shoulders.

At first, I couldn’t believe that anyone still alive didn’t know what “online” was. Isn’t that pretty much equivalent to oxygen or gravity now? Hasn’t it reached the point of ubiquity at which we all just take it for granted, no longer needing to think about it?

But then, because in the big countdown of life, I’m also on the downhill slope, closer to the end than to the beginning, I started thinking about how wrenching technological change has become. If you don’t keep up, the world you know is swept away, to be replaced with a world where your mail carrier’s days are numbered, the catalogs you depend on are within a few years of disappearing, and everything seems to be headed for the mysterious destination known as “online.”

As luck would have it, my seat on the airplane was close enough to this gentleman’s that I was able to continue my eavesdropping (if you see me at an airport, I advise you to move well out of earshot). You might have thought, as I first did, that he was in danger of losing his marbles. I assure you, nothing could be further from the truth. For over four hours, he carried on intelligent, informed conversations on multiple topics, made some amazing sketches in pencil, and generally showed every sign of being the man I hope to be when I’m approaching 90. This was not a man who had lost touch with reality; this was a man who is continually surprised (and, I would assume, somewhat frustrated) to find that reality seems to be a moving target.

We, the innovatively smug, may currently feel secure in our own technophilia, but our ability to keep up with the times may slip a little in the coming years. It’s human to feel secure with the world we grew up and functioned in. Our evolutionary environment was substantially more stable than the one we know today. As we step back from the hectic pace, we shouldn’t be surprised if we lose a little ground. Someday, when our children speak to us of the realities of their world, some of the terms they use may sound a little foreign to our ears.

The Swapping of the Old “Middle” for the New

First published November 8, 2012 in Mediapost’s Search Insider

For the past several columns, I’ve been talking about disintermediation. My hypothesis is that technology is driving a general disintermediation of the marketplace (well, it’s not really my hypothesis — it’s a pretty commonly held view) and is eliminating a vast “middle” infrastructure that has accounted for much of the economic activity of the past several decades. It’s a massive shift (read “disruption”) in the market that will play out over the next several years.

But every good hypothesis must stand up to challenge, and an interesting one came from a recent article in Slate, which talks about the growth of a brand new kind of “gatekeeper”: the new “bots” that crawl the Web and filter (or, in some cases, generate) content based on a preset algorithm. These bots can crawl blog posts, pinpointing spam and malicious posts so they can be removed. The sophistication is impressive, as the most advanced of these tap into the social graph to learn, in real time, the context of posts so they can make nuanced judgment calls about what is and isn’t spam.
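As a rough illustration only, rule-based filtering of the kind the Slate article describes can be thought of as scoring a post against a handful of signals. Everything below (the phrases, the weights, the `author_known` stand-in for a social-graph signal, and the threshold) is an invented assumption for the sketch, not how any real bot works.

```python
# Toy sketch of signal-based spam scoring. Real filters are far more
# sophisticated and learn their signals; these rules are hard-coded stand-ins.
SPAM_PHRASES = {"buy now", "free money", "click here"}

def spam_score(post_text, author_known=False):
    """Score a post: higher means more spam-like. Weights are arbitrary."""
    text = post_text.lower()
    score = sum(2 for phrase in SPAM_PHRASES if phrase in text)
    if text.count("http") > 2:   # link-stuffed posts look spammy
        score += 3
    if not author_known:         # stand-in for a social-graph trust signal
        score += 1
    return score

def is_spam(post_text, author_known=False, threshold=3):
    return spam_score(post_text, author_known) >= threshold
```

The point of the sketch is only that a filter and a generator share the same machinery: once a bot can score content against a model, it can just as easily produce content that scores well, which is exactly the censor-and-propagandist duality the column worries about.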

But these bots don’t simply patrol the online frontier; they also contribute to it. They can generate automated social content based on pre-identified themes. In other words, they can become propaganda generators. So now we have a new layer of “middle” that acts both as censor and propagandist. Have we gained anything here?

The key concept here is one of control. The “middle” used to control both ends of the market. It did so because it controlled the bridge between the producers and consumers.  This was control in every sense: control of the flow of finance, control of the physical market itself, and control of communication.

With disintermediation, direct connections are being built between producers and consumers. With this comes a redefinition of control. In terms of financial control, disintermediation should (theoretically) produce a more efficient marketplace, resulting in more profit for producers and better prices for consumers. That drastically oversimplifies the pain involved in getting to a more efficient marketplace, but you get the idea.  In this case, the only loser is the middle, so there’s no real incentive for the producers or consumers to ensure its survival.

Disintermediation of the physical market essentially works itself out. If the product needs a face-to-face representative, the middle will survive. If not, then we’ll figure out how to facilitate the sale online, and you can expect to see a lot of UPS vans in your neighborhood. We consumers may mourn the loss of a “face” in some segments of our marketplace, but we’ll get over it.

When it comes to control of communication, it’s more difficult to crystal-ball what might happen in the future. This area is also where new gatekeepers are most likely to appear.

Communication between marketers and the market used to be tightly channeled and controlled by the “middle.” It also used to flow in essentially one direction – from the marketer to the market. It was always very difficult for true communication to flow the other way.

But now, content is sprouting everywhere and becoming publicly accessible through a multitude of online touch points. It could soon become overwhelming to navigate through, both for consumers and producers. In this case, arguably, the middle provided a very real service to both producers and consumers. The middle could edit communication, saving us from wading through a mountain of content to get what we were looking for. It could also ensure that the messages producers wanted to get to the market were effectively delivered. The channels were under the control of the marketplace. For this reason, both marketers and the market may be reluctant to see disintermediation when it comes to communication.

The new gatekeepers, such as those featured in the Slate article, seem to serve both ends of the market. They help consumers access higher quality information by weeding out spam and objectionable content. And they help producers exercise some degree of control over negative content generated by the marketplace. In the absence of tight control of channels, a concept that’s gone the way of the dodo, this scalable, automated gatekeeper seems to serve a purpose.

If the need is great enough on both sides of the market, we are likely to find a new “middle” emerge: an “infomediary,” to use the term coined by John Hagel, Marc Singer and Jeffrey Rayport. According to this definition of the middle, Google emerges as the biggest of the “infomediaries.”

The question is, how much control are we willing to give this new evolution of the middle? In return for hacking some semblance of sanity out of the chaos that is an unmediated information marketplace, how much are we willing to pay? And where does this control (and with it, the associated power) now live? Who owns the new gatekeepers? And who are those gatekeepers accountable to?

Living a B-Rated Life

First published August 16, 2012 in Mediapost’s Search Insider

I love ratings and reviews — and I’m not alone. 4.7 out of 5 people love reviews. We give them two thumbs up. They rate 96.5% on the Tomatometer. I find it hard to imagine what my life would be without those ubiquitous 5 stars to guide me.

This past weekend, I was in Banff, Alberta for my sister’s wedding. My family decided to find a place to go for breakfast. The first thing I did was check with Yelp, and soon we were stacking up the Eggs Benny at a passable breakfast buffet less than two miles from our hotel. I never knew said buffet existed before checking the reviews — but once I found it, I trusted the wisdom of crowds. It seldom steers me wrong.

Now, you do have to learn how to read between the lines of a typical review site. Just before heading to my sister’s wedding, I spent the day in Seattle at the Bazaarvoice user event, and was fascinated to learn that their user research shows people typically scan about seven reviews. Once people hit seven reviews, they feel they have a good handle on the overall tone, even if there are 1,000 reviews in total. This seems right to me; it’s about the number of reviews I scan when I can.

But we also rely on the average rating summaries that typically show above the individual reviews and comments. When I read a review, I tend to follow these rules of thumb:

  • Look for the entry with the most reviews.
  • Find one that has a high average, but be suspicious of ones that have absolutely no negative reviews (unusual if you follow Rule One).
  • Scan the top six or seven reviews to get an overall sense of what people like and dislike.
  • Sort by the most negative reviews and read at least one to see what people hate.
  • Decide whether the negative reviews are the result of a one-off bad experience, or possibly an impossible-to-please customer (you can usually pick them out by their comments).
  • Do the “sniff test” to see if there are planted reviews (again, they’re not that hard to pick out).
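For what it’s worth, the first few rules of thumb above can be sketched as a small helper. The review format (a list of rating/text pairs), the thresholds, and the function name are all assumptions made for illustration, not any review site’s actual logic.

```python
# A minimal sketch of the review-reading rules of thumb, applied to a list
# of (rating, text) tuples where ratings run 1-5. Thresholds are arbitrary.
def assess(reviews, scan_count=7):
    """Summarize reviews roughly the way the rules above describe reading them."""
    if not reviews:
        return None
    ratings = [rating for rating, _ in reviews]
    negatives = [(rating, text) for rating, text in reviews if rating <= 2]
    return {
        "count": len(reviews),                        # Rule 1: prefer many reviews
        "average": round(sum(ratings) / len(ratings), 2),
        # Rule 2: a well-reviewed entry with zero negatives is suspicious
        "suspicious": len(reviews) >= 20 and not negatives,
        "top_scan": [text for _, text in reviews[:scan_count]],  # Rule 3
        "worst": min(reviews)[1] if negatives else None,         # Rule 4
    }
```

The last two rules (one-off bad experiences, planted reviews) resist automation; they’re the human “sniff test” part, which is probably why we still read the reviews instead of just the star average.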

I’ve used the same approach for restaurants, hotels, consumer electronics, cars, movies, books, hot tubs – pretty much anything I’ve had to open my wallet for in the past five or six years. It’s made buying so much easier. Ratings and reviews are like the Coles Notes of word of mouth. They condense the opinions of the marketplace down to the bare essentials.

It’s little wonder that Google is starting to invest heavily in this area, with recent acquisitions of Zagat and Frommer’s. These are companies that built entire businesses on eliminating risk through reviews. The aggregation and organization of opinion is a natural extension for search engines. Of course, we should give it a fancy name, like “social graph,” so we can sound really smart at industry conferences, but the foundations are built on plain common sense. Our attraction to reviews is hardwired into our noggins. We are social animals and like to travel in packs. Language evolved so we could point each other to the best cassava root patch and pass along the finer points of mastodon hunting.

As Google acquires more and more socially informed content, it will be integrated into Google’s algorithms. This is why Google had to launch its own social network. Unfortunately, Google+ hasn’t gained the critical mass needed to provide the signals Google is looking for. I personally haven’t had a Google+ invite in months. Despite Larry Page’s insistence that it’s a roaring success, others have pointed out that Google+ seems to be a network of tire kickers, with little in the way of ongoing engagement. Contrast that with Pinterest, which is all the various women in my life seem to talk about — and is outperforming even Twitter when it comes to driving referrals.

I personally love the proliferation of structured word-of-mouth. Some say it negates serendipity, but I actually believe I will be more apt to explore if there is some reassurance I won’t have a horrible experience. Otherwise, this weekend my family and I would have been having Egg McMuffins at the Banff McDonald’s — and really, is that the life you want?