Talking Back to Technology

The tech world seems to be leaning heavily towards voice-activated devices. Siri – Amazon Echo – Facebook M – “OK Google” – as well as pretty much every vehicle in existence. It should make sense that we would want to speak to our digital assistants. After all, that’s how we communicate with each other. So why, then, do I feel like such a dork when I say “Siri, find me an Indian restaurant”?

I almost never use Siri as my interface to my iPhone. On the very rare occasions when I do, it’s when I’m driving. By myself. With no one to judge me. And even then, I feel unusually self-conscious.

I don’t think I’m alone. No one I know uses Siri, except on the same occasions and in the same way I do. This should be the most natural thing in the world. We’ve been talking to each other for several millennia. It’s so much more elegant than hammering away on a keyboard. But I keep seeing the same scenario play out over and over again. We give voice navigation a try. It sometimes works. When it does, it seems very cool. We try it again. And then, we don’t do it anymore. I base this on admittedly anecdotal evidence. I’m sure there are those who continually chat merrily away to the nearest device. But not me. And not anyone I know either. So, given that voice activation seems to be the way devices are going, I have to ask: why are we dragging our heels on adoption?

In trying to judge the adoption of voice-activated interfaces, we have to account for mismatches in our expected utility. Every time we ask for something – “Play Bruno Mars,” for instance – and get the response, “I’m sorry, I can’t find Brutal Cars,” some frustration is natural. This is certainly part of it. But that’s an adoption threshold that will eventually yield to sheer brute processing power. I suspect our reluctance to talk to an object is found in the fact that we’re talking to an object. It doesn’t feel right. It makes us look addle-minded. We make fun of people who speak when there’s no one else in the room.

Our relationship with language is an intimately nuanced one. It’s a relatively newly acquired skill, in evolutionary terms, so it takes up a fair amount of cognitive processing. Granted, no matter what the interface, we currently have to translate desire into language, and speaking is certainly more efficient than typing, so it should be a natural step forward in our relationship with machines. But we also have to remember that verbal communication is the most social of things. In our minds, we have created a well-worn slot for speaking, and it’s something to be done when sitting across from another human.

Mental associations are critical for how we make sense of things. We are natural categorizers. And, if we haven’t found an appropriate category when we encounter something new, we adapt an existing one. I think voice activation may be creating cognitive dissonance in our mental categorization schema. Interaction with devices is a generally solitary endeavor. Talking is a group activity. Something here just doesn’t seem to fit. We’re finding it hard to reconcile our usage of language and our interaction with machines.

I have no idea if I’m right about this. Perhaps I’m just being a Luddite. But given that my entire family, and most of my friends, have had phones capable of voice activation for several years now and none of them use that feature except on very rare occasions, I thought it was worth mentioning.

By the way, let’s just keep this between you and me. Don’t tell Siri.

Ode to a Grecian Eurozone

I’d like to comment on the Greek debt crisis. But I don’t know anything about it. Zip. Or, as they say in Athens, μηδέν. I do, however, know how to say zero in Greek, thanks to Google Translate. At least for the next few minutes. I also happen to know rather a lot right now about the Tour de France, how to wire RV batteries, how to balance pool chemicals, how to write obituaries and most of the plotlines for the Showtime series Homeland. I certainly know more about all those things than the average person. Tomorrow, I’ll probably know different stuff. And I will retain almost nothing. But if you ask me what’s happening in Greece right now, I’ll likely draw a blank. I’d say it’s all Greek to me, but a certain MediaPost columnist already stole that line. Damn you, Bob Garfield!

I’m not really sure if I’m concerned about this. After all, I’m the one who has chosen not to watch the news for a long time. My various information sources feed me a steady diet of information, but it’s all been predetermined based on my interests. I’m in what they call a “filter bubble.” I’ve become my own news curator and somewhere along the line, I’ve completely filtered out anything to do with the Greek economy. It’s because I’m not really interested in the Greek economy, but I’m thinking maybe I should be.

(Incidentally, am I the only one who finds it a bit ironic that the word “economy” comes from – you guessed it – the Greek words for “house” and “management”?)

The problem is that I have a limited attention span. My memory capacity is a little more voluminous, but there are definite limits to that, as well. To make matters worse, Google is making me intellectually lethargic. I don’t try as hard to remember stuff because I don’t have to. Why learn how to count to 10 in Greek when I can just look it up when I need to? I’m not alone in this. We’re all going down the same blind-cornered path together. Sooner or later, we’ll all run into a major crisis we never saw coming. And it’s because we’ve all been looking in different places.

Forty years ago, to be well informed, you had to pay attention to mainstream news sources. It was the only option we had. We all got fed the same diet of information. Some of us retained more than others, but we all dined at the same table. Our knowledge capacity was first filled from these common news sources. Then, after that, we’d fill whatever nooks and crannies were left with whatever our unique interests might be. But we all, to some extent, shared a common context. Knowledge may not have been deep, but it was definitely broad.

Now, if I choose to learn more about the Greek economy, I certainly have plenty of opportunities to do so. But I’d be starting with a blank slate. It would take some work to get up to speed. So I have to decide whether it’s worth the effort to inform myself. Is the return worth the investment? Something has to tip the balance to make it important enough to learn more about whatever it is the Greeks are referendumming (referendering?) about. And in the meantime, there will be a lot of other things competing for that same limited supply of information-gathering attention. Tomorrow, for instance, it might become really important for me to find out how close BC is to legalizing pot, or what the wildfire hazard is in northern Saskatchewan, or what July’s weather is like in Chiang Mai. All of these things are relatively easy to find, but I have to reserve enough retention capacity to use the information once I find it. Information may want to be free, but the resources required to utilize it deplete our limited stores of cognitive ability.

Perhaps we’re saving more of our attention for on-demand information requirements. Or maybe we’re just filtering out more of what we used to call news. Whatever the cause, I think we’re losing our common cultural context, bit by byte. A community is defined by what it has in common, and the more technology allows us to pursue our individual interests, the more we surrender the common narratives that used to bind us.

The Messy Part of Marketing

Marketing is hard. It’s hard because marketing reflects real life. And real life is hard. But here’s the thing – it’s just going to get harder. It’s messy and squishy and filled with nasty little organic things like emotions and human beings.

For the past several weeks, I’ve been filing things away as possible topics for this column. For instance, I’ve got a pretty big file of contradicting research on what works in B2B marketing. Videos work. They don’t work. Referrals are the bomb. No, it’s content. Okay, maybe it’s both. Hmmm… pretty sure it’s not Facebook, though.

The integration of marketing technology was another promising avenue. Companies are struggling with data. They’re drowning in data. They have no idea what to do with all the data that’s pouring in from smart watches and smart phones and smart bracelets and smart bangles and smart suppositories and – okay, maybe not suppositories, but that’s just because no one thought of it till I just mentioned it.

Then there’s the new Google tool that predicts the path to purchase. That sounds pretty cool. Marketers love things that predict things. That would make life easier. But life isn’t easy. So marketing isn’t easy. Marketing is all about trying to decipher the mangled mess of living just long enough to shoehorn in a message that maybe, just maybe, will catch the right person at the right time. And that mangled mess is just getting messier.

Personally, the thing that attracted me to marketing was its messiness. I love organic, gritty problems with no clear-cut solutions. Scientists call these ill-defined problems. And that’s why marketing is hard. It’s an ill-defined problem. It defies programmatic solutions. You can’t write an algorithm that will spit out perfect marketing. You can attack little slivers of marketing that lend themselves to clearer solutions, which is why you have the current explosion of ad-tech tools. But the challenge is trying to bring all these solutions together into some type of cohesive package that actually helps you relate to a living, breathing human.

One of the things that has always amazed me is how blissfully ignorant most marketers are about concepts that I think should be fundamental to understanding customer behavior: things like bounded rationality, cognitive biases, decision theory and sense-making. Mention any of these things in a conference room full of marketers and watch eyes glaze over as fingers nervously thumb through the conference program, looking for any session that has “Top Ten” or “Surefire” in its title.

Take Information Foraging Theory, for instance. Anytime I speak about a topic that touches on how humans find information (which is almost always), I ask my audience of marketers if they’ve ever heard of I.F.T. Generally, not one hand goes up. Sometimes I think Jakob Nielsen and I are the only two people in the world who recognize I.F.T. for what it is: “the most important concept to emerge from Human-Computer Interaction research since 1993” (Jakob’s words). If you take the time to understand this one concept, I promise it will fundamentally and forever change how you look at web design, search marketing, creative and ad placement. Web marketers should be building a shrine to Peter Pirolli and Stuart Card. Their names should be on the tips of every marketer’s tongue. But I venture to guess that most of you reading this column had never heard of them until today.

None of these fundamental concepts about human behavior are easy to grasp. Like all great ideas, they are simple to state but difficult to understand. They cover a lot of territory – much of it ill defined. I’ve spent most of my professional life trying to spread awareness of things like Information Foraging Theory. Can I always predict human behavior? Not by a long shot. But I hope that by taking the time to learn more about the classic theories of how we humans tick, I have also learned a little more about marketing. It’s not easy. It’s not perfect. It’s a lot like being human. But I’ve always believed that to be an effective marketer, you first need to understand humans.

Justine Sacco, Twitter and the End of Irony

Justine Sacco is in the news again. Not that she wants to be. She’d like nothing more than to fade from the spotlight. As she recently said in an interview, “Someday you’ll Google me and my LinkedIn will be the first thing that pops up.” But today, over 15 months after she launched the tweet that just won’t go away, she’s still the poster child for career ruination via social media. The recent revival of Justine’s story comes ahead of the release of a new book by Jon Ronson, “So You’ve Been Publicly Shamed.”

If you’ve never heard of Justine Sacco, I’ll recap quickly. Just before boarding an 11-hour flight to South Africa, in what can only be called a monumental meltdown of discretion, she tweeted this: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” This touched off a social media feeding frenzy baying for Sacco’s blood. The world waited for her to land (#HasJustineLandedYet became the top trending hashtag) and meet her righteous retribution.

Oh, did I mention that Justine was IAC’s Corporate Head of Communications? Yeah, I know. WTF – right?

But the point here is not whether or not Justine Sacco was wrong. I think even she’ll admit that it was a momentary, brain-dead blurb of 64-character stupidity. The point here is whether or not Sacco was a racist, cold-hearted bitch. And to that, the answer is no. Justine meant the comment to be ironic – a satirical poke at white privilege and comfort. She never intended for it to be taken seriously. And that was where the wheels came off.

Satire has been around for a long time. The Greeks and Romans invented it, but it was the British who perfected it. The satirical essay became an art form in the hands of Alexander Pope, John Gay and the greatest of the satirists, Jonathan Swift. Through them, irony was honed into a razor-sharp scythe for social change. Swift’s A Modest Proposal is perhaps the greatest satirical piece ever written. In it, he proposed a solution for the starving beggars of Ireland – they should sell their children, of which there was an abundant supply, to the upper classes as a food source.

Now, did the pamphlet-reading public of 1729 call for Swift’s head? Did they think he was serious when he wrote:

“A young healthy child well nursed, is, at a year old, a most delicious nourishing and wholesome food, whether stewed, roasted, baked, or boiled; and I make no doubt that it will equally serve in a fricassee, or a ragout.”

Well, perhaps a few missed the irony, but for the vast majority of Swift’s audience, the pamphlet helped make his reputation rather than ruin it. There was no #HasSwiftReturnedFromLilliputYet trending on Twitter. People got it.

There is no way Sacco’s work should be compared to Swift’s in terms of literary merit, but there are some other fundamental differences we should pay attention to.

First of all, Swift was known as a satirist. Satire was an established literary form in the Age of Enlightenment. The context was in place for the audience. They were able to manage the flip of perspective required to understand the irony. But before December 20, 2013, we had never heard of Justine Sacco. The tweet was stripped of any context. There was nothing to tell us that she wasn’t being serious. Twitter fragments our view of the world into tiny missives that float unconnected and unsupported.  Twitter, by its very nature, forces us to take its messages out of context. This is not the place to hope for a nuanced understanding.

Also, Sacco’s entire tweet totaled 64 characters. Swift’s essay comes in at 3405 words, or 19,373 characters. That’s about 300 times the literary volume of Sacco’s tweet. Swift had ample opportunity to expound on his irony and make sure readers got his point.  Even Swift’s title, at a hefty 169 characters, couldn’t have squeezed into the limits of a tweet.  Tweets beg to be taken at face value, because there’s no room to aim for anything other than that.

And that brings us to the biggest difference here – the death of thoughtfulness. You can’t get irony or satire unless you’re thoughtful. You have to spend some time thinking about what you’ve read. To use Daniel Kahneman’s terminology, you have to use System 2, which specializes in slow thinking. Sacco’s tweet takes about 2 seconds to read, from beginning to end. There is no time for thought there. But there is time for visceral reaction. That’s all System 1, and System 1 doesn’t understand irony.

At the average reading speed of 300 words a minute, you’d have to invest 11.3 minutes to get through Swift’s essay. That’s plenty of time for System 2 to digest what it’s read and to look for meaning beyond face value. You have to read it in a thoughtful manner. But it’s not only in our reading where we don’t have to be thoughtful. We can also abandon thoughtfulness in our response. We can retweet in a matter of seconds and add our own invectives. This sets off a chain reaction of indignation that ignites a social media brush fire. Careful consideration is not part of the equation.

Sacco’s sin wasn’t that she was being racist. Her sin was trying to be ironic in a medium that couldn’t support it. By her own admission, she had been experimenting with Twitter to see if edgy tweets got retweeted more often. The answer, as it turned out, was yes, but the experiment damn near killed her. As a communication expert, she should have known better. Justine Sacco painfully discovered that in the split-second, sound-bite world of social media, thoughtful reading is extinct. And with it, irony and satire have died as well.

Mourning Becomes Electric

Last Friday was a sad day. A very dear and lifelong friend of mine, my Uncle Al, passed away. And so I did what I’ve done before on these occasions. I expressed my feelings by writing about it. The post went live on my blog around 10:30 in the morning. By mid-afternoon, it had been shared and posted through Facebook, Twitter and many other online channels. Many were kind enough to send comments. The family, in the midst of their grief, forwarded my post to their family and friends. Soon, there was an extended network of mourning that sought to heal each other, all through channels that didn’t exist just a few years ago. Mourning had moved online.

As you probably know, I’m fascinated by how we express our innate human needs through digital technologies. And death, together with birth, is the most universal of human experiences. It was inevitable that we would use online channels to grieve. So I, as I always do, asked the question – why?

First of all – why do we mourn? Well, we mourn because we are social animals. We are probably the most social of animals, so we grieve accordingly. We miss the departed terribly. It is natural to try to fill the hole a death tears inside of us by reaching out to others who may share the same grief. James R. Averill believed we mourn communally because it cements the social bonds that make it more likely that we will survive as a species. When it comes to dealing with death, misery loves company.

Secondly, why do we grieve online? Well, here, I think it has something to do with Granovetter’s weak ties. Death is one of those life events where we reach beyond the strong ties that define our day-to-day social existence. Certainly we seek comfort from those closest to us, but a death also triggers the formation of a virtual community – defined and united by grief for the one who has passed away. Our digital networks allow us to eliminate the six degrees of separation in one fell swoop. We can share our grief almost instantaneously and simultaneously with family, friends, acquaintances and even people we have never met.

There are two other aspects of grief that I believe lend themselves well to online channels: the need to chronicle and the comfort of emotional distance.

Part of the healing process is sharing memories of the departed loved one. And, for those like myself, just writing about our feelings helps overcome the pain. Online channels provide a perfect platform for chronicling. We can share our own thoughts and, in expressing them, start the healing process.

The comfort of emotional distance seems a contradictory idea, but almost everyone I know who has gone through a deep loss has one common dread – dealing with a never-ending stream of condolences over the coming weeks and months, triggered by each new physical encounter.

When you’ve been in the middle of the storm, you are typically a few days ahead of everyone else in dealing with your grief. Your mind has been occupied with nothing else as you have sat vigil by the hospital bed. While the condolences are given with the best of intentions, you feel compelled to give a response. The problem is, each new expression of grief forces you to replay your loop of very painful memories. The amplitude of this pain increases when it’s a face-to-face encounter. Condolences that reach you through a more detached channel, such as online, can be dealt with at your discretion. You can wait until you marshal the emotional reserves necessary to respond. You can also respond to several people at a time. How many times have you heard this from a grieving loved one: “I just wish I could record my message and play it whenever I meet someone who wants to tell me how sorry they are for my loss”? It may seem callous, but no one wants to relive that pain over and over again. And let’s face it – almost no one knows the right things to say at a moment like this.

By the end of last Friday, my online social connections had helped me ease a very deep pain. I hope I was able to return the favor for others that were dealing with their own grief. There are many things about technology that I treat with suspicion, but in this case, turning online seemed like the most natural thing in the world.

Consuming in Context

It was interesting watching my family watch the Oscars Sunday night. Given that I’m the father of two millennials, who have paired with their own respective millennials, you can bet that it was a multi-screen affair. But to be fair, they weren’t the only ones splitting their attention between the TV and various mobile devices. I was also screen hopping.

As Dave Morgan pointed out last week, media usage no longer equates to media opportunity. And it’s because the nature of our engagement has changed significantly in the last decade. Unfortunately, our ad models have been unable to keep up. What is interesting is the way our consumption has evolved. Not surprisingly, technology is allowing our entertainment consumption to evolve back to its roots. We are watching our various content streams in much the same way that we interact with our world. We are consuming in context.

The old way of watching TV was very linear in nature. It was also divorced from context. We suspended engagement with our worlds so that we could focus on the flickering screen in front of us. This, of course, allowed advertisers to buy our attention in little 30-second blocks. It was the classic bait-and-switch technique: get our attention with something we care about, and then slip in something the advertiser cares about.

The reason we were willing to suspend engagement with the world was that there was nothing in that world that was relevant to our current task at hand. If we were watching Three’s Company, or the Moon Landing, or a streaker running behind David Niven at the 1974 Oscar ceremony, there was nothing in our everyday world that related to any of those TV events. Nothing competed for the spotlight of our attention. We had no choice but to keep watching the TV to see what happened next.

But imagine if a nude man suddenly appeared behind Matthew McConaughey at the 2015 Oscars. We would immediately want to know more about the context of what just happened. Who was it? Why did it happen? What’s the backstory? The difference is now, we have channels at our disposal to try to find answers to those questions. Our world now includes an extended digital nervous system that allows us to gain context for the things that happen on our TV screens. And because TV no longer has exclusive control of our attention, we switch to the channel that is the best bet to find the answers we seek.

That’s how humans operate. Our lives are a constant quest to fill gaps in our knowledge and, by doing so, make sense of the world around us. When we become aware of one of these gaps, we immediately scan our environment to find cues as to where we might find answers. Then, our senses are focused on the most promising cues. We forage for information to satiate our curiosity. A single-minded focus on one particular cue, especially one over which we have no control, is not something we evolved to do. The way we watched TV in the 60s and 70s was not natural. It was something we did because we had no option.

Our current mode of splitting attention across several screens is much closer to how humans naturally operate. We continually scan our environment, which, in this case, included various electronic interfaces to the extended virtual world, for things of interest to us. When we find one, our natural need to make sense sends us on a quest for context. As we consume, we look for this context. The diligence of our quest for that context will depend on the degree of our engagement with the task at hand. If it is slight, we’ll soon move on to the next thing. If it’s deep, we’ll dig further.

On Sunday night, the Hotchkiss family quest for context continually skipped around: looking up what other movies J.K. Simmons had acted in, watching the trailer for Whiplash, reliving the infamous Adele Dazeem moment from last year and seeing just how old Benedict Cumberbatch is (I have two daughters who are hopelessly in love, much to the chagrin of their boyfriends). As much as the advertisers on the 87th Oscars might wish otherwise, all of this was perfectly natural. Technology has finally evolved to give our brain choices in our consumption.


Why More Connectivity Isn’t Just More – It’s Different

Eric Schmidt is predicting from Davos that the Internet will disappear. I agree. I’ve always said that search will go under the hood, changing from a destination to a utility. Not that Mr. Schmidt or the Davos crew needs my validation. My invitation seems to have gotten lost in the mail.

Laurie Sullivan’s recent post goes into some of the specifics of how search will become an implicit rather than an explicit utility. Underlying this is a pretty big implication that we should be aware of – the very nature of connectivity will change. Right now, the Internet is a tool, or a resource. We access it through conscious effort. It’s a “task at hand.” Our attention is focused on the Internet when we engage with it. The world described by Eric Schmidt and the rest of the panel is much, much different. In this world, the “Internet of Things” creates a connected environment that we exist in. And this has some pretty important implications for us.

First of all, when something becomes an environment, it surrounds us. It becomes our world as we interpret it through our assorted sensory inputs. These inputs have evolved to interpret a physical world – an environment of things. We will need help interpreting a digital world – an environment of data. Our reality, or what we perceive our reality to be, will change significantly as we introduce technologically mediated inputs into it.

Our brains were built to parse information from a physical world. We have cognitive mechanisms that evolved to do things like keep us away from physical harm. Our brains were never intended to crunch endless reams of digital data. So, we will have to rely on technology to do that for us. Right now we have an uneasy alliance between our instincts and the capabilities of machines. We are highly suspicious of technology. There is every rational reason in the world to believe that a self-driving Google car will be far safer than a two-ton chunk of accelerating metal under the control of a fundamentally flawed human, but which of us is willing to give up the wheel? The fact is, however, that if we want to function in the world Schmidt hints at, we’re going to have to learn not only to trust machines, but also to rely totally on them.

The other implication is one of bandwidth. Our brains have bottlenecks. Right now, our brain, together with our senses, subconsciously monitors our environment and, if the situation warrants, wakes up our conscious mind for some focused and deliberate processing. The busier our environment gets, the bigger this challenge becomes. A digitally connected environment will soon exceed our brain’s ability to comprehend and process information. We will have to determine some pretty stringent filtering thresholds. And we will rely on technology to do the filtering. As I said, our physical senses were not built to filter a digital world.

An odd relationship with technology will have to develop. Even if we lower our guard on letting machines do much of our “thinking” (in terms of processing environmental inputs for us), we still have to learn how to give machines guidelines so they know what our intentions are. This raises the question, “How smart do we want machines to become?” Do we want machines that can learn about us over time, without explicit guidance from us? Are we ready for technology that guesses what we want?

One of the comments on Laurie’s post was from Jay Fredrickson, “Sign me up for this world, please. When will this happen and be fully rolled out? Ten years? 20 years?” Perhaps we should be careful what we wish for.  While this world may seem to be a step forward, we will actually be stepping over a threshold into a significantly different reality. As we step over that threshold, we will change what it means to be human. And there will be no stepping back.

Publishers as Matchmakers

I’m a content creator. And, in this particular case, I’ve chosen MediaPost as the distribution point for that content. If we’re exploring the role of publishing in the future, the important question to ask here is: why? After all, I could publish this post to my blog in a couple of clicks. And, thanks to my blogging software, it will automatically notify my followers that there’s a new post. So, what value does MediaPost add to that?

Again, we come back to signal and noise. I generate content primarily to reach an audience that is both wide and interested. As a digital marketing consultant, I have a financial incentive to grow my own personal brand, but to be honest, my reward is probably more tied up in the concepts of social capital and my own ego. I publish because I want to be heard. And I want to be heard by people who find my content valuable. I have almost 2,000 followers between my blog, Twitter feed and other social networks, but those people already know me. Hopefully, MediaPost will introduce me to people who don’t know me yet. I want MediaPost to be my matchmaker.

Now, the second question to ask is: why are you reading this post on MediaPost? While I don’t presume to know your personal intentions, I can take a pretty good shot at generalizing – you are a MediaPost reader because you find the collection of content they publish interesting. It’s certainly not the only place online you can find content about marketing and media. And, if they chose to, any of the MediaPost writers could easily publish their content on their own blogs. You have chosen MediaPost because it acts as both a convenient access point and an effective filter.

This connection between content and audience is where publishers like MediaPost add value. Because you trust MediaPost to deliver content you find interesting, it passes the first level of your filtering threshold. I, as a content creator, get the benefit of MediaPost’s halo effect. The odds are better that I can connect with new readers under the MediaPost banner than if you’re introduced to me through a random, unfiltered tweet or alert in your newsfeed. And here we have a potential clue to the future of revenue generation for publishers. If publishing is potentially a matchmaking service, perhaps we need to look at other matchmakers to see how they generate revenue.

In the traditional publishing world, it would be blasphemous to suggest that content creators should be charged for access to an audience. After all, we used to get paid to generate content by the publishers. But that was then and this is now. Understand, I’m not talking about native advertising or advertorials here. In fact, it would be the publisher’s responsibility to filter out unacceptably commercial editorials. I’m talking about creating an audience market for true content generators. In this day of personal branding, audiences have value. The better the audience, the higher the value. It should be worth something to me to reach new audiences. Publishers, in turn, act as the reader’s filter, ensuring the content they provide matches the user’s interest. Again, if the match is good enough, that has value for the reader.

Of course, the problem here is quantifying value on both sides of the relationship. I would imagine that both the content creators and the content consumers reading my suggestions are probably saying, “There is no way I would pay for that!” And, in the current state of online publishing, I wouldn’t either – as a creator or as a consumer. The value isn’t there because the match isn’t strong enough. But if publishers focused on building the best possible audience and on presenting the best possible content, it might be a different story. More importantly, it would be a revenue model that would realign publishers with their audience, rather than pit them against it.

From the reader’s perspective, if a publisher was acting as your own private information filter, and not as a platform for poorly targeted advertising, you would probably be more willing to indicate your preferences and share information. If the publisher was discriminating enough, you might even be willing to allow them to introduce very carefully targeted offers from advertisers, filtered down to only the offers you’re highly likely to be interested in. This provides three potential revenue sources for the publisher: content creators looking for an audience, readers looking for an effective filtering service, and advertisers looking for highly targeted introductions to prospects. In the last case, the revenue should be split with the prospect, with the publisher taking a percentage for handling the introduction and the rest going to the prospect in return for agreeing to accept the advertiser’s introduction.

While radically different from today’s model, what I’ve proposed is not a new idea. It was first introduced in 1999 in the book Net Worth, by John Hagel and Marc Singer. Granted, my take is less involved than theirs, but the basic idea is the same – a shift from relentlessly battering prospects with increasingly overt advertising messages to carefully filtering and matching interests with appropriate content. And, when you think about it, matching intent with content is what Google has been doing for two decades.

Disruptive innovations tend to change the ways that value is determined. They take previous areas of scarcity and change them to ones of abundance. They upend markets and alter existing balances between forces. When the markets shift to this extent, trying to stick to the old paradigm guarantees failure. The challenge is that there is no new paradigm to follow. Experimentation is the only option. And to experiment you have to be willing to explore the boundaries. The answer won’t be found in the old, familiar territory.

Same Conversation. Different Location.

Note: This is my first Online Spin column for MediaPost.

First of all, let’s get the pleasantries out of the way. I’m Gord. I’m new to Online Spin, but not to MediaPost. If you don’t know me, I have been writing over on the Search Insider side of the house for the past 10 and a half years.

Nice to meet you.

Now, on to business. Just before the switch, I took online publishing to task for sacrificing its ability to communicate for the sake of advertising revenue. The user experience on most online publications is so littered with intrusive ads and misleading clickbait that it becomes almost impossible to actually read the content. My point, which is probably obvious, is that the short-term quest for revenue is jeopardizing the long-term health of the business model.

Among the comments posted were a few asking for guidance rather than just criticism. Fair enough. It’s much easier to criticize than it is to create. So, where does the future of publishing lie?

The problem, as it is in so many other cases, is that technology has annihilated the proverbial publishing apple cart. Publishing as an industry began because of the high transactional cost of publicizing information. Information began to be stacked vertically, because that was the only cost effective way to do it. These vertical stacks of information attracted audiences because it was the only place they could get this information. Limited access points created large and loyal audiences which in turn allowed ad supported revenue models. Because transactional costs were high, information was scarce. Scarcity enabled profit.

Today, technology is, one by one, leveling the vertical stacks of information. The transactional costs of publishing have dropped to essentially zero. Yes, I’m publishing this post through a “publisher,” but it would be just as easy for me to publish to my own blog. And while MediaPost’s audience is probably larger than my own blog’s, the gap between the two narrows every day. The lower transactional costs of publishing have erased the scarcity of information.

This disruptive change has flipped the publishing model on its head. The problem with information used to be that we had too little access. The problem today is that we have too much. What we need now are filters. We need a way to separate the signal from the ever-increasing noise.

Now, think of what this reversal does to the revenue models of publishers. If the problem before was access, we would value any source of information that provided that access. We would be loyal to it. We would spend a significant amount of time with it. But if the problem becomes one of filtering, our loyalty drops significantly. We just want to get to the information that is most interesting to us as quickly and efficiently as possible. If we have any allegiance to publishers at all, it is as a content filter. This is exactly why publishing empires are fragmenting into more and more specific vertical niches. We don’t need access points – we need effective filters.

Now, back to my original point. If the only way to make revenue from publishing is to introduce more noise – in the form of intrusive advertising – we quickly see the problem. We want publishers to eliminate extraneous noise and they add more. And to compound the problem, they intentionally blur the line between signal and noise in an attempt to generate more click-throughs. And, as Joe Marchese rightly points out, this vicious cycle is exacerbated by the bogus metric of “impressions” that publishers seem to have latched on to. The reader’s intent and the publisher’s intent are on a collision course with each other.

Given this, is there a way to save publishing? Perhaps, but it will be in a form much different from any we currently see. Publishing’s role may be in serving as both a filter and a matchmaker. More to come next Tuesday.

The Sorry State of Online Publishing

Dynamic tension can be a good thing. There are plenty of examples of when this is so. Online publishing isn’t one of them. The plunging transaction costs of publishing and the increasingly desperate attempts to shore up some sort of sustainable revenue model are creating a tug-of-war that threatens to tear apart the one person this whole sorry mess revolves around – the reader. Somebody had better get their act together soon, because I’m one reader who’s getting sick of it.

Trying to read an article on most online publications is like trying to tiptoe through a cognitive minefield. Publishers have squeezed every possible advertising opportunity onto the page and, in doing so, have sacrificed credibility, cohesiveness and clarity. The job of publishing is communication, but these publishers seem all too willing to sacrifice communication for revenue. Methinks if you have to attack your own business model to make a profit, you should be taking a long, hard look at said model.

Either Fish or Cut Click Bait

The problem has grown so pervasive that academia is even piling on. In the past few months, a number of studies have looked at the dismal state of online publishing.

In the quest for page views, publishers have mastered the trick of pushing our subconscious BSO (Bright Shiny Object) buttons with clickbait. Clickbait is essentially brain porn – headlines, often misleading, that you can’t resist clicking on. The theory: more page views mean more advertising opportunities. The problem is that clickbait essentially derails the mind from its predetermined focus. And worse, clickbait often distracts the brain with a misleading headline that the subsequent article fails to deliver on. As Jon Stewart recently told New York Magazine, “It’s like carnival barkers, and they all sit out there and go, ‘Come on in here and see a three-legged man!’ So you walk in and it’s a guy with a crutch.”

A recent study from the Journal of Experimental Psychology showed that misleading headlines and something called “false balance” – where publishers give equal airtime to sources with very different levels of credibility – can negatively impact the reader’s ability to remember the story, form a cohesive understanding of it and cognitively process the information. In other words, the publisher’s desperate desire to grab eyeballs gets in the way of its ability to communicate effectively.

BuzzFeed Editor-in-Chief Ben Smith has gone on record about why he doesn’t use clickbait headlines: “Here is a trade secret I’d decided a few years ago we’d be better off not revealing — clickbait stopped working around 2009.” He references Facebook engineer Khalid El-Arini in the post, saying “readers don’t want to be tricked by headlines; instead, they want to be informed by them.”

Now You Read Me, Now You Don’t

If you ever wanted to test your resolve, try getting to the end of an online article. What content there is gets shoehorned into a format littered with ads and clickbait of every description. Many publishers even try to squeeze revenue from the content itself by using Text Enhance, an ad-serving platform that hyperlinks keywords in the copy and shows ads if your cursor strays anywhere near these links. Readers like me often use the cursor both as a place marker and as a quick way to vet the sources of embedded links. Text Enhance makes reading in this way an incredibly frustrating experience, continually popping up poorly targeted ads while you tiptoe through the advertising landmines to piece together what the writer was originally trying to say. It turns reading content into a virtual game of Whac-a-Mole.

Of course, this is assuming you’ve made it past the page take-over and auto-play video ads that litter the “mind-field” between you and the content you want to access on a site like Forbes or The Atlantic. These interruptions in our intent create a negative mental framework that is compounded by having to weave through increasingly garish ad formats in order to piece together the content we’re trying to access.

A new study from Microsoft and Northwestern University shows that aggressive and annoying advertising may prop up short-term revenues, but at a long-term price that publishers should think twice about paying: “The practice of running annoying ads can cost more money than it earns, as people are more likely to abandon sites on which they are present. In addition, in the presence of annoying ads, people were less accurate in remembering what they had read. None of these effects on users is desirable from the publisher’s perspective.”

Again, we have this recurring theme about revenue getting in the way of user experience. This is a conflict from which there can be no long-term benefit. When you frustrate users, you slowly kill your revenue source. You engage in a vicious cycle from which there is no escape.

I understand that online publishers are desperate. I get that. They should be. I suspect the ad-supported business platform they’re trying to prop up is hopelessly damaged. Another will emerge to take its place. But the more they frustrate us, the faster that will happen.