The Magic of the Internet Through My Dad’s Eyes


“Would you rather lose a limb or never be able to access the Internet?” My daughter looked at me, waiting for my answer.

“Well?”

We were playing the game “Would You Rather” during a lull in the Christmas festivities. The whole point of the game is to pose two random and usually bizarre alternatives to choose from. Once you do, you see how others have answered. It’s a hard game to take seriously.

Except for this question. This one hit me like a hammer blow.

“I have to say I’d rather lose a limb.”

Wow. I would rather lose an arm or a leg than lose something I didn’t even know existed 20 years ago. That’s a pretty sobering thought. I am so dependent on this technical artifact that I value it more than parts of my own body.

During the same holiday season, my stepdad came to visit. He has two cherished possessions that are always with him. One is a pocketknife his father gave him. The other is an iPhone 3 that my sister gave him when she upgraded. Dad doesn’t do much on his phone. But what he does do is critically important to him. He texts his kids and he checks the weather. If you grew up on a farm on the Canadian prairies during the 1930s, you literally lived and died according to the weather. So, for Dad, it’s magic of the highest sort to be able to know what the temperature is in the places where his favorite people live. We kids have added all our home locations to his weather app, along with his sister-in-law’s. Dad checks the weather in Edmonton (Alberta), Calgary (Alberta), Kelowna (BC), Orillia (Ontario) and his hometown of Sundre (Alberta) constantly. It’s his way of keeping tabs on us when he can’t be with us.

I wonder what Dad would say if I asked him to choose between his iPhone and his right arm. I suspect he’d have to think about it. I do know the first thing I have to do when he comes to our place is set him up on our home wifi network.

It’s easy to talk about how Millennials or Gen-Xers are dependent on technology. But for me, it really strikes home when I watch people of my parents’ generation hold on to some aspect of technology for dear life because it enables them to do something so fundamentally important to them. They understand something we don’t. They understand what Arthur C. Clarke meant when he said,

“Any sufficiently advanced technology is indistinguishable from magic.”

To understand this, look for a moment through the eyes of my Dad when he was a child. He rode a horse to school – a tiny one-room building that was heated with a wood stove. Its library consisted of two bookshelves on the back wall. The world he knew was bounded by a circle whose radius was how far you could drive a wagon in a single day. That world consisted of several farms, the Eagle Hill Co-op store, the tiny town of Sundre, his school and the post office. The last was particularly important, because that’s where the packages you ordered from the Eaton’s catalogue (the Canadian equivalent of Sears Roebuck) would come.

It’s to this post office that my step-dad dragged his sleigh about 75 years ago. He didn’t know it at the time, but he was picking up his Christmas present. His mother, whose own paternal grandfather was a contemporary and friend of Charles Darwin, had saved milk money for several months to purchase a three-volume encyclopaedia for the home. Nobody else they knew had an encyclopaedia. Books were rare enough. But for Isobel (Buckman) Leckie, knowledge was an investment worth making. Those three books became the gift of a much bigger world for my Dad.

It’s easy to make fun of seniors for their simultaneous amazement at and bewilderment by technology. We chuckle when Dad does his third “weather round-up” of the day. We get frustrated when he can’t seem to understand how wifi works. But let’s put this in the context of the change he has seen in his life on this earth. This is not just an obsolete iPhone 3 that he holds in his hand. This is something for which the adjective “magical” seems apt.

Perhaps it’s even magic you’d pay an arm and a leg for.

Back to the Coffee House: Has Journalism Gone Full Circle?


First, let’s consider two facts about Facebook reported in Mediapost in the last two weeks. The first:

“A full 65% of people find their next destination through friends and family on Facebook.”

Let’s take this out of the context of just looking for your next travel destination. Let’s think about it in terms of a risky decision. Choosing somewhere to go on a vacation is a big decision. There’s a lot riding on it. Other than the expense, there’s also your personal experience. The fact that 2 out of 3 people chose Facebook as the platform upon which to make that decision is rather amazing when you think about it. It shows just how pervasive and influential Facebook has become.

Now, the next fact:

“Facebook users are two-and-a-half times more likely to read fake news fed through the social network than news from reputable news publishers.”

There’s really no reason to elaborate on the above – ‘nuff said. It’s pretty clear that Facebook has emerged as the dominant public space in our lives. It is perhaps the most important platform in our culture today for forming beliefs and opinions.

Sorry, Mark Zuckerberg, but no matter what you may have said in the past about not being a media outlet, you can’t duck this responsibility. If our public opinions are formed on your private property – an unimaginably powerful platform – then, as Spidey’s Uncle Ben said (or the French National Convention of 1793, depending on whom you prefer to quote as a source), “With great power comes great responsibility.” If you provide a platform and an audience to news providers – fake or real – you are, ipso facto, a media outlet.

But Facebook is more than just an outlet. It is also the forum where news is digested and shared. It is both a gristmill and a cauldron where beliefs are formed and opinions expressed. This isn’t the first time something like this has happened, although the previous occurrence was in a different time and a very different place. It actually contributed directly to the birth of modern journalism – which is, ironically, under threat from this latest evolution of news.

If you were an average citizen of London in 1700, your sources for news were limited. First of all, there was a very good chance that you were illiterate, so reading the news wasn’t an option. The official channel for the news of the realm was royal proclamations read out by town criers. Unfortunately, this wasn’t so much news as whatever the ruling monarch felt like proclaiming.

There was another reality of life in London – if you drank the water, it could possibly kill you. You could drink beer in a pub – which most did – or if you preferred to stay sober you could drink coffee. Starting in the mid-1600s, coffee houses began to pop up all over London. It wasn’t the quality of the coffee that made these public spaces all the rage. It was the forum they provided for the sharing of news. Each new arrival was greeted with, “Your servant, sir. What news have you?” Pamphlets, journals, broadsheets and newsletters from independent (a.k.a. “non-royal”) publishers were read aloud, digested and debated. Given the class-bound society of London, coffee houses were remarkably democratic. “Pre-eminence of place none here should mind,” proclaimed the Rules and Orders of the Coffee-House (1674), “but take the next fit seat he can find.” Lords, fishmongers, baronets, barristers, butchers and shoe-blacks could and did all share the same table. The coffee houses of London made a huge contribution to our current notion of media as a public trust, with all that entails.

In a 2011 article, the Economist drew the same parallel between coffee houses and digitally mediated news. In it, they foreshadowed a dramatic shift in our concept of news:

“The internet is making news more participatory, social, diverse and partisan, reviving the discursive ethos of the era before mass media. That will have profound effects on society and politics.”

The last line was prescient. Seismic disruption has fundamentally torn the political and societal landscape asunder. But I have a different take on the “discursive ethos” of news consumption. I assume the Economist used this phrase to mean a verbal interchange of thought related to the news. But that doesn’t happen on Facebook. There is no thought and there is little discourse. The share button is hit before there is any chance to digest the news, let alone vet it for accuracy. This is a much different atmosphere from that of the coffee house. There is a dynamic that takes hold when our beliefs are called on the mat in a public forum: beliefs may be altered there, but they can never change in a vacuum. The coffee house provided the ideal forum for the challenging of beliefs. As mentioned, it was perhaps the most heterogeneous forum in all of England at the time. Most of all, it was an atmosphere infused with physicality and human interaction – a melting pot of somatic feedback. Debate was civil but passionate. That dynamic is totally missing from its online equivalent. The rules and realities of the 18th-century coffee house forced thoughtfulness and diverse perspectives upon the discourse. Facebook lets you do an end run around all of that as you hit the share button.

The Calcification of a Columnist


First: the Caveat. I’m old and grumpy. That is self-evident. There is no need to remind me.

But even with this truth established, the fact is that I’ve noticed a trend. Increasingly, when I come to write this column, I get depressed. The more I look for a topic to write about, the more my mood spirals downward.

I’ve been writing for Mediapost for over 12 years now. Together, between the Search Insider and Online Spin, that’s close to 600 columns. Many – if not most – of those have been focused on the intersection between technology and human behavior. I’m fascinated by what happens when evolved instincts meet technological disruption.

When I started this gig I was mostly optimistic. I was amazed by the possibilities and – somewhat naively it turns out – believed it would make us better. Unlimited access to information, the ability to connect with anyone – anywhere, new ways to reach beyond the limits of our own DNA; how could this not make humans amazing?

Why, then, do we seem to be going backwards? What I didn’t realize at the time is that technology is like a magnifying glass. Yes, it can make the good of human nature better, but it can also make the bad worse. Not only that, but technology also has a nasty habit of throwing in unintended consequences; little gotchas we never saw coming that have massive moral implications. Disruption can be a good thing, but it can also rip apart in a trice what took centuries of careful and thoughtful building to put in place. Black Swans have little regard for ethics or morality.

I have always said that technology doesn’t change behaviors. It enables behaviors. When it comes to the things that matter, our innate instincts and beliefs, we are not perceptibly different than our distant ancestors were. We are driven by the same urges. Increasingly, as I look at how we use the outcomes of science and innovation to pursue these objectives, I realize that while it can enable love, courage and compassion, technology can also engender more hate, racism and misogyny. It makes us better while it also makes us worse. We are becoming caricatures of ourselves.


Everett Rogers, 1962

Everett Rogers plotted the diffusion of technology through the masses on a bell curve and divided us up into innovators, early adopters, early majority, late majority and laggards. The categorization was defined by our acceptance of innovation. Inevitably, then, there would be a correlation between that acceptance and our sense of optimism about the possibilities of technology. Early adopters would naturally see how technology would enable us to be better. But, as diffusion rolls through the curve, we would eventually hit those for whom technology is just there – another entitlement, a factor of our environment, oxygen. There is no special magic or promise here. Technology simply is.
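As a rough illustration of how that categorization works, here is a minimal sketch using the conventional textbook cut-offs (innovators 2.5%, early adopters 13.5%, early majority 34%, late majority 34%, laggards 16% – Rogers’ own estimates, not figures from this column):

```python
# Rogers' diffusion-of-innovations segments as fixed slices of the
# adoption curve. Percentages are the standard textbook cut-offs.

SEGMENTS = [
    ("innovators", 0.025),
    ("early adopters", 0.135),
    ("early majority", 0.34),
    ("late majority", 0.34),
    ("laggards", 0.16),
]

def classify(adoption_percentile: float) -> str:
    """Map a cumulative adoption percentile (0..1) to a Rogers segment."""
    cumulative = 0.0
    for name, share in SEGMENTS:
        cumulative += share
        if adoption_percentile <= cumulative:
            return name
    return "laggards"  # anything past 84% of adopters

print(classify(0.01))   # innovators
print(classify(0.50))   # early majority
print(classify(0.99))   # laggards
```

By this scheme, my Dad sits comfortably in that last bucket – which, as the paragraph above suggests, says nothing about how much the technology means to him.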

So, to recap, I’m old and grumpy. As I started to write yet another column, I was submerged in a wave of weariness. I have to admit I have been emotionally beaten up by the last few years. I’m tired of writing about how technology is making us stupider, lazier and less tolerant when it should be making us great.

But another thing usually comes with age: perspective. This isn’t the first time that humans and disruptive technology have crossed paths. That’s been the story of our existence. Perhaps we should zoom out a bit from our current situation. Let’s set aside for a moment our navel gazing about fake news, click bait, viral hatred, connected xenophobia and erosion of public trusts. Let’s look at the bigger picture.

History isn’t sketched in straight lines. History is plotted on a curve. Correction. History is plotted in a series of waves. We are constantly correcting course. Disruption tends to swing a pendulum one way until a gathering of opposing force swings it the other way. It takes us a while to absorb disruption, but we do – eventually.

I suspect if I were writing this in 1785 I’d be disheartened by the industrial blight that was enveloping the world. Then, like now, technology was plotting a new course for us. But in this case, we have the advantage of hindsight to put things in perspective. Consider this one fact: between 1200 and 1600, the life span of a British noble didn’t go up by even a single year. But between 1800 and today, life expectancy for white males in the West doubled, from thirty-eight years to seventy-six. Technology made that possible.

Technology, when viewed on a longer timeline, has also made us better. If you doubt that, read psychologist and author Steven Pinker’s “The Better Angels of Our Nature.” His exhaustively researched and reasoned book leads you to the inescapable conclusion that we are better now than we ever have been. We are less violent, less cruel and more peaceful than at any time in history. Technology also made that possible.

It’s okay to be frustrated by the squandering of the promise of technology. But it’s not okay to just shrug and move on. You are the opposing force that can cause the pendulum to change direction. Because, in the end, it’s not technology that makes us better. It’s how we choose to use that technology.


You’ve Got a Friend in Me – Our Changing Relationship with A.I.


Since Siri first stepped into our lives in 2011, we’ve been introduced to more and more digital assistants. We’ve met Amazon’s Alexa, Microsoft’s Cortana and Google’s Google Now. We know them, but do we love them?

Apparently, it’s important that we bond with said digital assistants and snappy comebacks appear to be the surest path to our hearts. So, if you ask Siri if she has a boyfriend, she might respond with, “Why? So we can get ice cream together, and listen to music, and travel across galaxies, only to have it end in slammed doors, heartbreak and loneliness? Sure, where do I sign up?” It seems to know a smart-assed digital assistant is to love her – but just be prepared for that love to be unrequited.

Not to be outdone, Google is also brushing up on its witty repartee for its new digital assistant – thanks to some recruits from The Onion and Pixar. A recent Mediapost article said that Google had just assembled a team of writers from those two sources – tapping The Onion for caustic sarcasm and Pixar for a gentler, more human touch.

But can we really be friends with a machine, even if it is funny?

Microsoft thinks so. They’ve unveiled a new chatbot in China called Xiaoice (pronounced Shao-ice). Xiaoice takes on the persona of a 17-year-old girl who responds to questions like “How would you like others to comment on you when you die one day?” with the plaintive “The world would not be much different without me.” Perhaps this isn’t as clever as Siri’s comebacks, but there’s an important difference: Siri’s responses were specifically scripted to answer anticipated questions, while Xiaoice actually talks with you, using true artificial intelligence and linguistic processing.

In a public test on WeChat, Xiaoice received 1.5 million chat group invitations in just 72 hours. As of earlier this year, she had had more than 10 billion conversations. In a blog post, Xiaoice’s “father”, Yongdong Wang, head of the Microsoft Application and Services Group East Asia, said, “Many see Xiaoice as a partner and friend, and are willing to confide in her just as they do with their human friends. Xiaoice is teaching us what makes a relationship feel human, and hinting at a new goal for artificial intelligence: not just analyzing databases and driving cars, but making people happier.”

When we think of digital assistants, we naturally think of the advantages that machines have over humans: unlimited memory, access to the entire web, vastly superior number crunching skills and much faster processing speeds. This has led to “cognitive offloading” – humans transferring certain mental processing tasks to machines. We now trust Google more than our own memory for retrieving information – just as we trust calculators more than our own limited mathematical abilities. But there should be some things that humans are just better at. Being human, for instance. We should be more empathetic – better able to connect with other people. A machine shouldn’t “get us” better than our spouse or best friend.

For now, that’s probably still true. But what if you don’t have a spouse, or even a best friend? Is having a virtual friend better than nothing at all? Recent studies have shown that robotic pets seem to ease loneliness among isolated seniors. More research is needed, but it’s not really surprising to learn that a warm, affectionate robot is better than nothing at all. What was surprising was that in one study, seniors preferred a robotic dog to the real thing.

The question remains, however: Can we truly have a relationship with a machine? Can we feel friendship – or even love – when we know that the machine can’t do the same? This goes beyond the high-tech flirtation of discovering Siri’s or Google’s “easter egg” responses to something more fundamental. It’s touching on what appears to be happening in China, where millions are making a chatbot their personal confidant. I suspect there are more than a few lonely Chinese who would consider Xiaoice their best friend.

And – on many levels – that scares the hell out of me.


Prospect Theory, Back Burners and Relationship Risk


What do relationship infidelity and consumer behavior have in common? Both are changing, thanks to technology – or, more specifically, the intersection between technology and our brains. And regular readers know that stuff is right in my wheelhouse.


Dr. Michelle Drouin

So I was fascinated by a recent presentation given by Dr. Michelle Drouin from Purdue University. She talked about how connected technologies are impacting the way we think about relationship investment.

The idea of “investing” in a relationship probably paints it in a less romantic light than we typically like, but it’s an accurate description. We calculate odds and evaluate risk. It’s what we do. Now, in the case of love, an admittedly heuristic process becomes even less rational. Our subliminal risk-appraisal system is subjugated by a volatile cocktail of hormones and neurotransmitters. But – at the end of the day – we calculate odds.

If you take all this into account, Dr. Drouin’s research into “back burners” becomes fascinating, if not all that surprising. In her paper, back burners are defined as “a desired potential or continuing romantic/sexual partner with whom one communicates, but to whom one is not exclusively committed.” “Back burners” are our fallback bets when it comes to relationships or sexual liaisons. And they’re not exclusive to single people. People in committed relationships also keep a stable of “back burners.” Women keep an average of 4 potential “relationship” candidates from their entire list of contacts and 8 potential “liaison” candidates. Men, predictably, keep more options open. Male participants in the study reported an average of over 8 “relationship” options and 26 liaison “back burners.” Drouin’s hypothesis is that this number has recently jumped thanks to technology, especially the connectivity offered through social media. We’re keeping more “back burners” because we can.

What does this have to do with advertising? The point I’m making is that this behavior is not unique. Humans treat pretty much everything like an open marketplace. We are constantly balancing risk and reward amongst all the options that are open to us, subconsciously calculating the odds. It’s called Prospect Theory. And, thanks to technology, that market is much larger than it’s ever been before. In this new world, our brain has become a Vegas odds maker on steroids.
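Prospect Theory’s core claim can be made concrete with Kahneman and Tversky’s value function. The sketch below uses their published parameter estimates (alpha = beta = 0.88, lambda = 2.25); the numbers are illustrative and aren’t drawn from Drouin’s research:

```python
# Kahneman-Tversky prospect-theory value function.
# Parameters are their published estimates: curvature of 0.88 captures
# diminishing sensitivity; lambda = 2.25 captures loss aversion
# (losses loom roughly 2.25x larger than equivalent gains).

ALPHA = 0.88   # curvature for gains
BETA = 0.88    # curvature for losses
LAMBDA = 2.25  # loss-aversion multiplier

def value(outcome: float) -> float:
    """Subjective value of a gain or loss relative to the reference point."""
    if outcome >= 0:
        return outcome ** ALPHA
    return -LAMBDA * ((-outcome) ** BETA)

# A $100 loss hurts considerably more than a $100 gain feels good:
print(value(100))    # roughly 57.5
print(value(-100))   # roughly -129.5
```

That asymmetry is one plausible reading of why we hedge: keeping a “back burner” around cushions the outsized pain of losing the relationship we already have.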

In Drouin’s research, it appears that new technologies like Tinder, WhatsApp and Facebook have had a huge impact on how we view relationships. Our fidelity balance has been tipped to the negative. Because we have more alternatives – and it’s easier to stay connected with those alternatives and keep them on the “back burner” – the odds favor keeping our options open. Monogamy may not be our best bet anymore. Facebook is cited in one-third of all divorce cases in the U.K. And in Italy, evidence from the social messaging app WhatsApp shows up in nearly half of divorce proceedings.

So, it appears that humans are loyal – until a better offer with a degree of risk we can live with comes along.

This brings us back to our behaviors in the consumer world. It’s the same mental process, applied in a different environment. In this environment, relationships are defined as brand loyalty. And, as Emanuel Rosen and Itamar Simonson show in their book Absolute Value, we are increasingly keeping our options open in more and more consumer decisions. When it comes to buying stuff – even if we have brand loyalty – we are increasingly aware of the “back burners” available to us.


Why Our Brains are Blocking Ads


On Mediapost alone in the last three months, there have been 172 articles written that have included the words “ad blockers” or “ad blocking.” That’s not really surprising, given that Mediapost covers the advertising biz and ad blocking is killing that particular biz, to the tune of an estimated $41 billion in lost revenue in 2016. eMarketer estimates that 70 million Americans – 1 out of every 4 people online – use ad blockers.

Paul Verna, an eMarketer senior analyst, said, “Ad blocking is a detriment to the entire advertising ecosystem, affecting mostly publishers, but also marketers, agencies and others whose businesses depend on ad revenue.” The UK’s Culture Secretary, John Whittingdale, went even further, calling ad blocking a “modern-day protection racket.”

Here’s the problem with all this finger pointing. If you’re looking for a culprit to blame, don’t look at the technology or the companies deploying that technology. New technologies don’t cause us to change our behaviors – they enable behaviors that weren’t an option before. To get to the bottom of the growth of ad blocking, we have to go to the common denominator – the people those ads are aimed at. More specifically, we have to look at what’s happening in the brains of those people.

In the past, the majority of our interaction with advertising was done while our brain was idling, with no specific task in mind. I refer to this as bottom up environmental scanning. Essentially, we’re looking for something to capture our attention: a TV show, a book, a magazine article, a newspaper column. We were open to being engaged by stimuli from our environment (in other words, being activated from the “bottom up”).

In this mode, the brain is in a very accepting state. We match signals from our environment with concepts and beliefs we hold in our mind. We’re relatively open to input and if the mental association is a positive or intriguing one – we’re willing to spend some time to engage.

We also have to consider the effect of priming in this state. Priming sets a subconscious framework for the brain that then affects any subsequent mental processing. The traditional prime in place when we were exposed to advertising was a fairly benign one: we were looking to be entertained or informed; the advertising content was often delivered wrapped in a content package we had an affinity for (our favorite show, a preferred newspaper, etc.); and advertising came in discrete chunks our brains had been trained to identify and process accordingly.

All this means that in traditional exposures to ads, our brain was probably in the most accepting state possible. We were looking for something interesting, we were primed to be in a positive frame of mind and our brains could easily handle the contextual switches required to consider an ad and its message.

We also have to remember that we had a relatively static ad consumption environment that usually matched our expectations of how ads would be delivered. We expected commercial breaks in TV shows. We didn’t expect ads in the middle of a movie or book, two formats that required extended focusing of attention and didn’t lend themselves to mental contextual task switches. Each task switch brings with it a refocusing of attention and a brief burst of heightened awareness as our brains are forced to reassess their environment. These are fine in some environments – not in others.

Now, let’s look at the difference in cognitive contexts that accompany the delivery of most digital ads. First of all, when we’re online on our desktop or engaged with a mobile device, it’s generally in what I’ll call a “top down foraging” mode. We’re looking for something specific and we have intent in mind. This means there’s already a task lodged in our working memory (hence “top down”) and our attentional spotlight is on and focused on that task. This creates a very different environment for ad consumption.

When we’re in foraging mode, we suddenly are driven by an instinct that is as old as the human race (actually, much older than that): Optimal Foraging Theory. In this mode, we are constantly filtering the stimuli of our environment to see what is relevant to our intent. It’s this filtering that causes attentional blindness to non-relevant factors – whether they be advertising banners or people dressed up like gorillas. This filtering happens on a subconscious basis and the brain uses a primal engine to drive it – the promise of reward or the frustration of failure. When it comes to foraging – for food or for information – frustration is a feature, not a bug.

Our brains have a two-loop learning process. It starts with a prediction – what psychologists and economists call “expected utility.” We mentally place bets on possible outcomes and go with the one that promises the best reward. If we’re right, the reward system of the brain gives us a shot of dopamine. Things are good. But if we bet wrong, a different part of the brain kicks in: the right anterior insula, the adjacent right ventral prefrontal cortex and the anterior cingulate cortex. Those are the centers of the brain that regulate pain. Nature is not subtle about these things – especially when the survival of the species depends on it. If we find what we’re looking for, we get a natural high. If we don’t, it actually causes us pain – but not in a physical way. We know it as frustration. Its purpose is to encourage us not to make the same mistake twice.
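That predict-then-update loop is often modeled computationally as a reward-prediction-error rule, in the spirit of Rescorla-Wagner or temporal-difference learning. Here is a minimal sketch; the learning rate and reward values are invented for illustration:

```python
# Minimal reward-prediction-error loop: predict, observe, update.
# Dopamine response is commonly modeled as the prediction error itself:
# positive error = pleasant surprise, negative error = frustration.
# Learning rate and rewards are made-up illustrative values.

def update(expected: float, actual: float,
           learning_rate: float = 0.3) -> tuple[float, float]:
    """Return (prediction_error, new_expectation)."""
    error = actual - expected
    return error, expected + learning_rate * error

expectation = 0.5                  # how rewarding we predict the next result to be
for reward in [1.0, 1.0, 0.0]:     # two useful results, then an irrelevant ad
    error, expectation = update(expectation, reward)
    print(f"prediction error: {error:+.2f}, new expectation: {expectation:.2f}")
```

Note how a single miss after a run of hits produces the largest negative error in the sequence: the better we’ve been trained to expect relevance, the more an irrelevant interruption stings.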

The reason we’re blocking ads is that in the context those ads are being delivered, irrelevant ads are – quite literally – painful. Even relevant ads have a very high threshold to get over. Ad blocking has little to do with technology or “protection rackets” or predatory business practices. It has to do with the hardwiring of our brains. So if the media or the ad industry want to blame something or someone, let’s start there.

The Bermuda Triangle of Advertising


In the past few weeks, via the comments I’ve received on my two (1,2) columns looking at the possible future of media selection and targeting, it’s become apparent to me that we’re at a crisis point when it comes to advertising. I’ve been fortunate enough to have some of the brightest minds and sharpest commentators in the industry contributing their thoughts on the topic. In the middle of all these comments lies a massive gap. This gap can be triangulated by looking at three comments in particular:

Esther Dyson: “Ultimately, what the advertisers want is sales…  attention, engagement…all these are merely indicators for attribution and waypoints on the path to sales.”

Doc Searls: “Please do what you do best (and wins the most awards): make ads that clearly sponsor the content they accompany (we can actually appreciate that), and are sufficiently creative to induce positive regard in our hearts and minds.”

Ken Fadner: “I don’t want to live in a world like this one” (speaking of the hyper targeted advertising scenario I described in my last column).

These three comments are all absolutely right (with the possible exception of Searls’, which I’ll come back to in a minute) and they draw a path around the gaping hole that is the future of advertising.

So let’s strip this back to the basics to try to find solid ground from which to move forward again.

Advertising depends on a triangular value exchange: We want entertainment and information – which is delivered via various media. These media need funding – which comes from advertising. Advertising wants exposure to the media audience. So, if we boil that down – we put up with advertising in return for access to entertainment and information. This is the balance that is deemed “OK” by Doc Searls and other commenters.

The problem is that this is no longer the world we live in – if we ever did. The value exchange requires all three sides to agree that the value is sufficient for us to keep participating. The relatively benign and balanced model of advertising laid out by Searls just doesn’t exist anymore.

The problem is the value exchange triangle is breaking down on two sides – for advertisers and the audience.

As I explained in an earlier Online Spin, value exchanges depend on scarcity, and for the audience, there is no longer a scarcity of information and entertainment. Also, there are now new models for information and entertainment delivery that disrupt our assessment of this value exchange. The cognitive context that made us accepting of commercials has been broken. Where once we sat passively and consumed advertising, we now have subscription contexts that are entirely commercial-free. That makes the appearance of advertising all the more frustrating. Our brain has been trained to no longer be accepting of ads. The other issue is that ads only appeared in contexts where we were passively engaged. Now, ads appear when we’re actively engaged. That’s an entirely different mental model with different expectations of acceptability.

This traditional value exchange is also breaking down for advertisers. The inefficiencies of the previous model have been exposed and more accountable and effective models have emerged. Dyson’s point was probably the most constant bearing point we can navigate to – companies want sales. They also want more effective advertising. And much as we may hate the clutter and crap that litters the current digital landscape, when it works well it does promise to deliver a higher degree of efficiency.

So, we have the previous three-sided value exchange collapsing on two of the sides, bringing the third side – media – down with it.

Look, we can bitch about digital all we want. I share Searls’ frustration with digital in general and Fadner’s misgivings about creepy and ineffective execution of digital targeting in particular. But this horse has already left the barn. Digital is more than just the flavor of the month. It’s the thin edge of a massive wedge of change in content distribution and consumption. For reasons far too numerous to name, we’ll never return to the benign world of clearly sponsored content and creative ads. First of all, that benign world never worked that well. Secondly, two sides of the value-exchange triangle have gotten a taste of something better – virtually unlimited content delivered without advertising strings attached, and a much more effective way to deliver advertising.

Is digital working very well now? Absolutely not. Fadner and Searls are right about that. It’s creepy, poorly targeted, intrusive and annoying. And it’s all these things for the very same reason that Esther Dyson identified – companies want sales and they’ll try anything that promises to deliver them. But we’re at the very beginning of a huge disruptive wave. Stuff isn’t supposed to work very well at this point. That comes with maturity and an inevitable rebalancing. Searls may rail against digital, just like people railed against television, the telephone and horseless carriages. But it’s just too early to tell what a more mature model will look like. Corporate greed will dictate the trying of everything. We will fight back by blocking the hijacking of our attention. A sustainable balance will emerge somewhere in between. But we can’t see it yet from our vantage point.