Rethinking Media

I was going to write about the Facebook/Google duopoly, but I got sidetracked by this question: “If Google and Facebook are a duopoly, what is the market they are controlling?” The market, in this case, is online advertising, of which they carve out a whopping 61% of all revenue. And yet, we have Mark Zuckerberg testifying this spring in front of Congress that Facebook is not a media company…

“I consider us to be a technology company because the primary thing that we do is have engineers who write code and build products and services for other people.”

That may be an interesting position to take, but his adoption of a media-based revenue model doesn’t pass the smell test. Facebook makes its revenue from advertising, and you can only sell advertising if you are a medium. “Media” literally denotes an intervening agency that provides a means of communication. The trade-off for providing that means is that you get to monetize it by allowing advertisers to piggyback on that communication flow. There is nothing in the definition of “media” about content creation.

Google has also used this defense. The common thread seems to be that they are exempt from the legal checks and balances normally associated with media because they don’t produce content. But they do accept content, they do have an audience and they do profit from connecting these two through advertising. It is disingenuous to try to split legal hairs in order to avoid the responsibility that comes from their position as a mediator.

But this all brings up the question:  what is “media”? We use the term a lot. It’s in the masthead of this website. It’s on the title slug of this daily column. We have extended our working definition of media, which was formed in an entirely different world, as a guide to what it might be in the future. It’s not working. We should stop.

First of all, definitions depend on stability, and the worlds of media and advertising are definitely not stable. We are in the middle of a massive upheaval. Secondly, definitions are mental labels. Labels are shortcuts we use so we don’t have to think about what something really is. And I’m arguing that we should be thinking long and hard about what media is now and what it might become in the future.

I can accept that technology companies want to disintermediate, democratize and eliminate transactional friction. That’s what technology companies do. They embrace elegance – in the scientific sense – as the simplest possible solution to something. What Facebook and Google have done is simplify the concept of media back to its original definition: the plural of medium, which is something that sits in the middle. In fact – by this definition – Google and Facebook are truer media than CNN, the New York Times or Breitbart. They sit in between content creators and content consumers. They have disintermediated the distribution of content. They are trying to reap all the benefits of a stripped-down and more profitable working model of media while downplaying the responsibility that comes with the position they now hold. In Facebook’s case, this is particularly worrisome, because the company is also aggregating and distributing that content in a way that leads to false assumptions and dangerous network effects.

Media as we used to know it gradually evolved a process of checks and balances – editorial oversight and journalistic integrity – that sat between the content it created and the audience that would consume it. Facebook and Google consider those things transactional friction, part of an inelegant system. These “technology companies” did their best to eliminate those human-dependent checks and balances while retaining the revenue models that used to subsidize them.

We are still going to need media in a technological future. Whether they be platforms or publishers, we are going to depend on and trust certain destinations for our information. We will become their audience and in exchange they will have the opportunity to monetize this audience. All this should not come cheaply. If they are to be our chosen mediators, they have to live up to their end of the bargain.

Why The Paradox of Choice Doesn’t Apply to Netflix

A recent article in MediaPost reported that Millennials – and Baby Boomers, for that matter – prefer broad-choice platforms like Netflix and YouTube to channels specifically targeted to their demo. A recent survey found that almost 40% of respondents aged 18 to 24 used Netflix most often to view video content.

Author Wayne Friedman mused on the possibility that targeted channels might be a thing of the past: “This isn’t the mid-1990s. Perhaps audience segmentation into different networks — or separately branded, over-the-top digital video platforms  — is an old thing.”

It is. It’s aimed at an old world that existed before search filters – a world where Barry Schwartz’s Paradox of Choice was a thing. The paradox loses its grip in a world where we can quickly filter our choices.

Humans in almost every circumstance prefer the promise of abundance to scarcity. It’s how we’re hardwired. The variable here is our level of confidence in our ability to sort through the options available to us. If we feel confident that we can heuristically limit our choices to the most relevant ones, we will always forage in a richer environment.

In his book, Schwartz used the famous jam experiment of Sheena Iyengar to show how choice can paralyze us. Iyengar’s research team set up a booth with samples of jam in a gourmet food market. They alternated between a display of 6 jams and one of 24 options. They found that in terms of actually selling jams, the smaller display outperformed the larger one by a factor of 10 to 1. The study “raised the hypothesis that the presence of choice might be appealing as a theory,” Dr. Iyengar later said, “but in reality, people might find more and more choice to actually be debilitating.”

Yes, and no. What isn’t commonly cited is that in the study, 60% of shoppers were drawn to the larger display, while only 40% were hooked by the smaller one. Yes, fewer bought, but that probably came down to a question of being able to filter, not the attraction of the display itself. Also, other researchers (Scheibehenne, Greifeneder and Todd, 2010) have run into problems trying to verify the findings of the original study. They found that “on the basis of the data, no sufficient conditions could be identified that would lead to a reliable occurrence of choice overload.”

We all have a subconscious “foraging algorithm” we use to sort through the various options in our environment. One of the biggest factors in this algorithm is the “cost of searching”: how much effort do we need to expend to find the thing we’re looking for? In today’s world, that breaks down into two variables: “finding” and “filtering.” A platform that’s rich in choice – like Netflix – virtually eliminates the cost of “finding.” We are confident that a platform offering a massive number of choices will have something we find interesting. So it comes down to “filtering.” If we feel confident enough in the filtering tools available to us, we will go with the richest environment on offer. The higher our degree of confidence in our ability to “filter,” the less we will want our options limited for us.
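To make that trade-off concrete, here’s a toy model – my own illustration, not something drawn from Schwartz or Iyengar – of how “finding” and “filtering” might combine. The hit rate and the way filtering confidence scales with catalog size are invented assumptions:

    import math

    def chance_of_happy_choice(num_options, filter_confidence, hit_rate=0.2):
        # "Finding": the probability that at least one option appeals to us.
        # A richer catalog all but guarantees this.
        p_find = 1 - (1 - hit_rate) ** num_options
        # "Filtering": treat each order of magnitude of catalog size as one
        # more filtering step, each succeeding with probability
        # `filter_confidence`.
        p_filter = filter_confidence ** math.log10(num_options)
        return p_find * p_filter

    for confidence in (0.5, 0.95):
        jam_table = chance_of_happy_choice(6, confidence)
        big_catalog = chance_of_happy_choice(5000, confidence)
        print(f"filter confidence {confidence}: "
              f"6 options -> {jam_table:.2f}, 5,000 options -> {big_catalog:.2f}")

With weak filters (0.5), the six-option display wins – the Paradox of Choice. With strong filters (0.95), the 5,000-title catalog wins – Netflix.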

So, when does it make sense to limit the options available to an audience? Scheibehenne et al. identified some conditions under which the Paradox of Choice is more likely to appear:

Unstructured Choices – The harder it is to categorize the choices available, the harder it is to filter those options.

Choices that are Hard to Compare to Each Other – If you’re comparing apples and oranges, either figuratively or literally, the cognitive load required to compare the options increases the difficulty of choosing.

The Complexity of Choices – The more information we have to juggle when making a choice, the greater the likelihood that our brains become overtaxed.

Time Pressure when Making a Choice – If you hear the theme song of Jeopardy when you’re trying to make a choice, you’re more likely to become frustrated when sorting through a plethora of options.

If you are in the business of presenting options to customers, remember that the Paradox of Choice is not a hard and fast rule. In fact, the opposite is probably true – the greater the perception of choice, the more attractive it will be to them. The secret is in providing your customers the ability to filter quickly and efficiently.

The Pros and Cons of Slacktivism

Lately, I’ve grown to hate my Facebook feed. But I’m also morbidly fascinated by it. It fuels the fires of my discontent with a steady stream of posts about bone-headedness and sheer WTF behavior.

As it turns out, I’m not alone. Many of us are morally outraged by our social media feeds. But does all that righteous indignation lead to anything?

Last week, MediaPost reran a column talking about how good people can turn bad online by following the path of moral outrage to mob-based violence. Today I ask, is there a silver lining to this behavior? Can the digital tipping point become a force for good, pushing us to take action to right wrongs?

The Ever-Touchier Triggers of Moral Outrage

As I’ve written before, normal things don’t go viral. The more outrageous and morally reprehensible something is, the greater likelihood there is that it will be shared on social media. So the triggering forces of moral outrage are becoming more common and more exaggerated. A study found that in our typical lives, only about 5% of the things we experience are immoral in nature.

But our social media feeds are algorithmically loaded to ensure we are constantly ticked off. This isn’t normal. Nor is it healthy.

The Dropping Cost of Being Outraged

So what do we do when outraged? As it turns out, not much — at least, not when we’re on Facebook.

Yale neuroscientist Molly Crockett studies the emerging world of online morality. And she has found that the personal costs associated with expressing moral outrage are dropping as we move our protests online:

“Offline, people can harm wrongdoers’ reputations through gossip, or directly confront them with verbal sanctions or physical aggression. The latter two methods require more effort and also carry potential physical risks for the punisher. In contrast, people can express outrage online with just a few keystrokes, from the comfort of their bedrooms…”

What Crockett is describing is called slacktivism.

You May Be a Slacktivist if…

A slacktivist, according to Urbandictionary.com, is “one who vigorously posts political propaganda and petitions in an effort to affect change in the world without leaving the comfort of the computer screen.”

If your Facebook feed is at all like mine, it’s probably become choked with numerous examples of slacktivism. It seems like the world has become a more moral — albeit heavily biased — place. This should be a good thing, shouldn’t it?

Warning: Outrage Can be Addictive

The problem is that as morality moves online, it loses a lot of the social clout it has historically had to modify behaviors. Crockett explains:

“When outrage expression moves online it becomes more readily available, requires less effort, and is reinforced on a schedule that maximizes the likelihood of future outrage expression in ways that might divorce the feeling of outrage from its behavioral expression…”

In other words, outrage can become addictive. It’s easier to become outraged when it has no consequences for us, when it’s divorced from the normal societal checks and balances that govern our behavior, and when we get a nice little ego boost every time others “like” or “share” our indignant rants. The last point is particularly true given the “echo chamber” characteristics of our social-media bubbles. These are all the prerequisites required to foster habitual behavior.

Outrage Locked Inside its own Echo Chamber

Another thing we have to realize about showing our outrage online is that it’s largely a pointless exercise. We are simply preaching to the choir. As Crockett points out:

“Ideological segregation online prevents the targets of outrage from receiving messages that could induce them (and like-minded others) to change their behavior. For politicized issues, moral disapproval ricochets within echo chambers but only occasionally escapes.”

If we are hoping to change anyone’s behavior by publicly shaming them, we have to realize that Facebook’s algorithms make this highly unlikely.

Still, the question remains: Does all this online indignation serve a useful purpose? Does it push us to action?

The answer seems to depend on two factors, each imposing its own threshold on our likelihood to act. The first is whether we’re truly outraged or not. Because showing outrage online is so easy, with few consequences and the potential social reward of a post going viral, it has all the earmarks of a habit-forming behavior. Are we posting because we’re truly mad, or just bored?

“Just as a habitual snacker eats without feeling hungry, a habitual online shamer might express outrage without actually feeling outraged,” writes Crockett.

Moving from online outrage to physical action – whether it’s changing our own behavior or acting to influence a change in someone else – requires a much bigger personal investment on almost every level. This brings us to the second threshold factor: our own personal experiences and situation. Millions of women upped the ante by actively supporting #MeToo because it was intensely personal for them. It’s one example of an online movement that became one of the most potent political forces in recent memory.

One thing does appear to be true. When it comes to social protest, there is definitely more noise out there. We just need a reliable way to convert that to action.

Why Do Good People Become Bad Online?

Here are some questions I have:

  • When do crowds turn ugly?
  • Why do people become trolls online?
  • When do opinions suddenly become moralizing and what is the difference between the two?

Whether we like it or not, online connection engenders some decidedly bad behavior. It’s one of those unintended consequences that I like to talk about – a behavioral side effect that’s catalyzed by technology.  And, if this is the case, we should know a little more about the psychology behind this behavior.

Modified Mob Behavior

So, when does a group become a mob? And when does a mob turn ugly? There are some aspects of herd mentality that seem to be particularly conducive to online connections. A group turns into a mob when their behaviors become synced to a common purpose. A recent study from the University of Southern California found two predictive signals in social media behavior that indicate when a group protest may become a violent mob.

Tipping Over the Threshold from an Opinion to a Moral

One of the things they found is that when we go from talking about our opinions to preaching morality, things can take a nasty turn. Let’s imagine a spectrum running from loosely held opinions on the left end – things you’re not that emotionally invested in – through beliefs, and on to morals at the right end. This progression also correlates with different ways the brain processes the respective thoughts. At the least intense left end of the spectrum – opinions – we can process them with relatively detached rationality. But as we move to the right, different parts of the brain start kicking in and begin to raise the emotional stakes. When we believe we’re talking about morals, we suddenly have strongly held convictions about what is right and what is wrong. Moral is defined as “concerned with the principles of right and wrong behavior and the goodness or badness of human character.”

This triggers our ancient and universal feelings about fairness, harm, betrayal, subversion and degradation – the planks of moral foundations theory. The researchers in the USC study found that people are more likely to endorse violence when they moralize an issue. When there are clearly held beliefs about right and wrong, violence seems acceptable.

Violence Needs Company

This moralizing signal is not necessarily tied to being online. But the second predictive signal is. The researchers also found that if people believe others share their views, they are more likely to tip over the threshold from peaceful protest to violence. This is Mark Granovetter’s crowd threshold effect, which I’ve talked about before. In social media, this effect is amplified by content filtering and the structure of your network. Like-minded people naturally link to each other, and their posts make for remarkably efficient indicators of their beliefs. It’s very easy in a social network to feel that everybody you know feels the same way you do. The degree of violent language can escalate quickly through online posts until the entire group is pushed over the threshold into behavior that would be unthinkable for a disconnected individual.
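Granovetter’s model is simple enough to sketch in a few lines of code. Here is a toy simulation of my own – the crowd size, thresholds and “amplification” factor are invented for illustration, not taken from the USC study – showing how a feed that inflates perceived participation can tip a crowd that would otherwise stay quiet:

    def cascade(thresholds, amplification=1.0):
        # Each member acts once the share of the crowd they *perceive* to be
        # acting meets their personal threshold. `amplification` models an
        # echo chamber that makes participation look larger than it is.
        n, acting = len(thresholds), 0
        while True:
            perceived = min(1.0, amplification * acting / n)
            now_acting = sum(1 for t in thresholds if t <= perceived)
            if now_acting == acting:
                return acting / n  # settled: fraction of the crowd acting
            acting = now_acting

    # 80 instigators who need no encouragement, plus 920 people whose
    # thresholds are spread evenly between 20% and 100% of the crowd.
    crowd = [0.0] * 80 + [0.2 + 0.8 * i / 919 for i in range(920)]

    print(cascade(crowd))                     # unfiltered view: stalls at 0.08
    print(cascade(crowd, amplification=3.0))  # echo chamber: 1.0, full cascade

In the first run, nobody else’s threshold is ever met and the protest stalls with the instigators. Triple the perceived participation and the very same crowd cascades all the way.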

Trolls, Trolls Everywhere

Another study, this time from Stanford, shows that any of us can become a troll. We would like to think that trolls are just a small group of particularly horrible people who happen to be everywhere. But this research indicates that trollism is more situational than previously thought. In other words, if we’re in a bad mood, we’re more likely to become a troll.

But it’s not just our mood. Here again, Granovetter’s threshold model plays a part. Negative comments beget more negative comments, starting a downward spiral of venom. The researchers ran a behavioral test in which participants did either an easy or a difficult task and then read an online article that had either three neutral comments or three negative, troll-like comments. The results were eye-opening. In the group that did the easy task and read the article with the neutral comments, about 35% posted a negative comment. Knowing that one in three of us seems to have a low threshold for becoming a troll is not exactly encouraging, but it gets worse. If participants either did the difficult task or read the negative comments, the likelihood of posting a troll-like comment jumped to 50%. And if participants got both the difficult task and the negative comments, the number climbed to 68%!

In the three-part study, another factor that could lead to trolling was the time the posts were made. Late Sunday and Monday nights are the worst times of the week for negative posts, and Twitter bullying hit its peak between 5 pm and 8 pm on Sunday. While we’re on the subject, Donald Trump tends to tweet early in the morning, and his most inflammatory tweets come on Saturdays.

But when it comes to trolling, there’s something else at play as well. Yet another study, this time from Mt. Helen University in Australia, found that our own brand of empathy can also predict whether we’re going to become a troll. There are two kinds of empathy: cognitive and affective. Cognitive empathy means you can understand other people’s emotions – you know what will make them happy or mad. Affective empathy means you can internalize and experience the emotions of another – if they’re happy, you’re happy; if they’re mad, you’re mad. Not surprisingly, trolls tend to have high cognitive empathy but low affective empathy. Obviously, there were plenty of such people before the Internet, but they’ve now gained the perfect forum for their twisted form of empathy. They can incite negativity relatively free from social consequence and reprisal. Even if the comments made are not anonymous, the poster can hide behind a degree of detachment that would be impossible in a physical environment.

So, why should we care? Again, it comes back to this idea of the unintended social consequences of technology. Increasingly, our connections are digital in nature. And for reasons already stated, I worry that these types of connections may bring out the worst in us.

Is Live the New Live?

HQ Trivia – the popular mobile game app – seems to be going backwards. It’s an anachronism, going against all the things that technology promises. It tethers us to a schedule. It’s essentially a live game show broadcast (when everything works as it should, which is far from a sure bet) on a tiny screen. Yet it gets about a million players each and every time it plays, which is usually only twice a day.

My question is: Why the hell is it so popular?

Maybe it’s the Trivia Itself…

(Trivial Interlude: The word trivia comes from the Latin for a place where three roads come together. Originally, in Latin, it was used to refer to the three foundations of basic education – grammar, logic and rhetoric. The modern usage came from a book by Logan Pearsall Smith in 1902 – “Trivialities, bits of information of little consequence.” The singular of trivia is trivium.)

As a spermologist (that’s a person who loves trivia – seriously – apparently the “sperm” has something to do with “seeds of knowledge”), I love a trivia contest. It’s one thing I’m pretty good at – knowing a little about a lot of things that have absolutely no importance. And if you too fancy yourself a spermologist (which, by the way, is how you should introduce yourself at social gatherings), you know that we always want to prove we’re the smartest people in the room. In HQ Trivia’s case, that room usually holds about a million people. That’s the current number of participants in the average broadcast. So the odds of being the smartest person in the room are – well – about one in a million. And a spermologist just can’t resist those odds.

But I don’t think HQ’s popularity is based on some alpha-spermology complex. A simple list of rankings would take care of that. No, there must be more to it. Let’s dig deeper.

Maybe it’s the Simoleons…

(Trivial Interlude: Simoleons is sometimes used as slang for American dollars, as Jimmy Stewart did in “It’s a Wonderful Life.” The word could be a portmanteau of “simon” and “Napoleon” – which was a 20 franc coin issued in France. The term seems to have originated in New Orleans, where French currency was in common use at the turn of the last century.)

HQ Trivia does offer up cash for smarts. Each contest has a prize, which is usually $5000. But even if you make it through all 12 questions and win, by the time the prize is divvied up amongst the survivors, you’ll probably walk away with barely enough money to buy a beer. Maybe two. So I don’t think it’s the prize money that accounts for the popularity of HQ Trivia.

Maybe It’s Because It’s Live…

(Trivial Interlude: As a Canadian, I hold trivia near and dear to my heart. America’s favorite trivia quiz master, Alex Trebek, is Canadian, born in Sudbury, Ontario. Alex is actually his middle name. George is his first name. He is 77 years old. And Trivial Pursuit, the game that made trivia a household name in the ’80s, was invented by two Canadians, Chris Haney and Scott Abbott. They created it after they sat down to play Scrabble and found their game was missing some tiles, so they decided to invent their own game. In 1984, more than 20 million copies of the game were sold.)

There is just something about reality in real time. Somehow, subconsciously, it makes us feel connected to something that is bigger than ourselves. And we like that. In fact, one of the other etymological roots of the word “trivia” itself is a “public place.”

The Hotchkiss Movie Choir Effect

If you want to choke up a Hotchkiss (or at least the ones I’m personally familiar with), just show us a movie where people spontaneously start singing together. I don’t care if it’s Pitch Perfect Twelve and a Half – we’ll still mist up. I never understood why, but I think it has to do with the same underlying appeal of connection. Dan Levitin, author of “This Is Your Brain on Music,” explained what happens in our brain when we sing as part of a group in a recent interview on NPR:

“We’ve got to pay attention to what someone else is doing, coordinate our actions with theirs, and it really does pull us out of ourselves. And all of that activates a part of the frontal cortex that’s responsible for how you see yourself in the world, and whether you see yourself as part of a group or alone. And this is a powerful effect.”

The same thing goes for flash mobs. I’m thinking there has to be some type of psychological common denominator that HQ Trivia has somehow tapped into. It’s like a trivia-based flash mob. Even when things go wrong, which they do quite frequently, we feel that we’re going through it together. Host Scott Rogowsky embraces the glitchiness of the platform and commiserates with us. Misery – even when it’s trivial – loves company.

Whatever the reason for its popularity, HQ Trivia seems to be moving forward by taking us back to a time when we all managed to play nicely together.

Advertising Meets its Slippery Slope

We’ve now reached the crux of the matter when it comes to the ad biz.

For a couple of centuries now, we’ve been refining the process of advertising. The goal has always been to get people to buy stuff. But right now, there is a perfect storm of forces converging that requires some deep navel-gazing on the part of us insiders.

It used to be that to get people to buy, all we had to do was inform. Pent-up consumer demand created by expanding markets and new product introductions would take care of the rest. We just had to connect the better mousetraps with the world, which would then duly beat a path to the respective door. Advertising equaled awareness.

But sometime in the waning days of the consumer orgy that followed World War Two, we changed our mandate. Not content with simply informing, we decided to become influencers. We slipped under the surface of the brain, moving from providing information for rational consideration to priming subconscious needs. We started messing with the wiring of our market’s emotional motivations.  We became persuaders.

Persuasion is like a mental iceberg – 90% of the bulk lies below the surface. Rationalization is typically the hastily added layer of ad hoc logic that comes after the decision is already made. This is true to varying degrees for almost any consumer category you can think of, including – unfortunately – our political choices.

This is why, a few columns ago, I said Facebook’s current model is unsustainable. It is based on advertising, and I think advertising itself may have become unsustainable. The truth is, advertisers have gotten so good at persuading us to do things that we are beginning to revolt. It’s getting just too creepy.

To understand how we got here, let’s break down persuasion. It requires the persuader to shift the beliefs of the persuadee. The bigger the shift required, the tougher the job of persuasion.  We tend to build irrational (aka emotional) bulwarks around our beliefs to preserve them. For this reason, it’s tremendously beneficial to the persuader to understand the belief structure of their target. If they can do this, they can focus on those whose belief structure is most conducive to the shift required.
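As a toy illustration of that last point – the numbers are entirely invented, and this is not anyone’s actual targeting system – here’s how a persuader with belief data might rank prospects by the size of the shift required:

    # Beliefs scored on a -1..1 scale toward some position; the campaign
    # wants to move people to +0.5. (All data here is hypothetical.)
    audience = {
        "alice": 0.4,   # already close: cheap to persuade
        "bob": -0.1,    # persuadable with some effort
        "carol": -0.9,  # heavily defended beliefs: not worth the budget
    }

    target_position = 0.5

    # The smaller the shift required, the easier the persuasion job,
    # so the ad budget goes to the smallest gaps first.
    ranked = sorted(audience, key=lambda person: abs(target_position - audience[person]))
    for person in ranked:
        gap = abs(target_position - audience[person])
        print(f"{person}: belief shift required = {gap:.1f}")

The creepy part, as the rest of this column describes, is that those scores no longer have to be guessed at: our digital mirrors supply them.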

When it comes to advertisers, the needle on our creative powers of persuasion hasn’t really moved that much in the last half century. There were very persuasive ads created in the 1960’s and there are still great ads being created. The disruption that has moved our industry to the brink of the slippery slope has all happened on the targeting end.

The world we used to live in was a bunch of walled and mostly unconnected physical gardens. Within each, we would have relevant beliefs but they would remain essentially private. You could probably predict with reasonable accuracy the religious beliefs of the members of a local church. But that wouldn’t help you if you were wondering whether the congregation leaned towards Ford or Chevy.  Our beliefs lived inside us, typically unspoken and unmonitored.

That all changed when we created digital mirrors of ourselves through Facebook, Twitter, Google and all the other usual suspects. John Battelle, author of The Search,  once called Google the Database of Intentions. It is certainly that. But our intent also provides an insight into our beliefs. And when it comes to Facebook, we literally map out our entire previously private belief structure for the world to see. That is why Big Data is so potentially invasive. We are opening ourselves up to subconscious manipulation of our beliefs by anyone with the right budget. We are kidding ourselves if we believe ourselves immune to the potential abuse that comes with that. Like I said, 90% of our beliefs are submerged in our subconscious.

We are just beginning to realize how effective the new tools of persuasion are. And as we do so, we are beginning to feel that this is all very unfair. No one likes being manipulated, even if they have willingly laid the groundwork for that manipulation. Our sense of retroactive justice kicks in. We post-rationalize and point fingers. We blame Facebook, or the government, or some hackers in Russia. But these are all just participants in a new ecosystem that we have helped build. The problem is not the players. The problem is the system.

It’s taken a long time, but advertising might just have gotten to the point where it works too well.

Who Should (or Could) Protect Our Data?

Last week, when I talked about the current furor around the Cambridge Analytica scandal, I said that part of the blame – or at least, the responsibility – for the protection of our own data belonged to us. Reader Chuck Lantz responded with:

“In short, just because a company such as FaceBook can do something doesn’t mean they should.  We trusted FaceBook and they took advantage of that trust. Not being more careful with our own personal info, while not very wise, is not a crime. And attempting to dole out blame to both victim and perpetrator ain’t exactly wise, either.”

Whether it’s wise or not, when it comes to our own data, there are only three places we can reasonably look to protect it:

A) The Government

One only has to look at the supposed “grilling” of Zuckerberg by Congress to realize how forlorn a hope this is. In a follow-up post, Wharton ran a list of the questions Congress should have asked, compiled from its own faculty. My personal favorite comes from Eric Clemons, professor of Operations, Information and Decisions:

“You benefited financially from Cambridge Analytica’s clients’ targeting of fake news and inflammatory posts. Why did you wait years to report what Cambridge Analytica was doing?”

Technology has left the regulatory ability to control it in the dust. The EU is probably the most aggressive legislative jurisdiction in the world when it comes to protecting data privacy. The General Data Protection Regulation goes into effect on May 25 of this year and incorporates sweeping new protections for EU citizens. But it will inevitably come up short in three key areas:

  • Even though it immediately applies to all countries processing the data of EU citizens, international compliance will be difficult to enforce consistently, especially if that processing extends beyond “friendly” countries.
  • Technological “loopholes” – vulnerable gray areas in the legislation – will quickly be found and will lead to the misuse of data. Technology will always move faster than legislation. As an example, the GDPR and blockchain technologies are seemingly on a collision course.
  • Most importantly, the GDPR regulation is aimed at data “worst case scenarios.” But there are many apparently benign applications that can border on misuse of personal data. In trying to police even the worst-case instances, the GDPR requires restrictions that will directly impact users in the area of convenience and functionality. There are key areas such as data portability that aren’t fully addressed in the new legislation. At the end of the day, even though it’s protecting them, users will find the GDPR a pain in the ass.

Even with these fundamental flaws, the GDPR probably represents the world’s best attempt at data regulation. The US, as we’ve seen in the past week, comes up well short of this. And even if the people involved weren’t doddering, technologically inept old farts, the mechanisms required for passing relevant and timely legislation simply aren’t there. It would be like trying to catch a jet with a lasso. Should this be the job of government? Sure, I can buy that. Can government handle the job? Not based on the evidence we currently have available to us.

B) The companies that aggregate and manipulate our data.

Philosophically, I completely agree with Chuck. Like I said last week – the point of view I took left me ill at ease. We need these companies to be better than they are. We certainly need them to be better than Facebook was. But Facebook has absolutely no incentive to be better. And my fellow Media Insider, Kaila Colbin, nailed this in her column last week:

“Facebook doesn’t benefit if you feel better about yourself, or if you’re a more informed, thoughtful person. It benefits if you spend more time on its site, and buy more stuff. Giving the users control over who sees their posts offers the illusion of individual agency while protecting the prime directive.”

There are no inherent, proximate reasons for companies to be moral. They are built to be profitable (which, by the way, is why governments should never be run like a company). Facebook’s revenue model is directly opposed to personal protection of data. And that is why Facebook will try to weather this storm by implementing more self-directed security controls to put a good face on things. We will ignore those controls, because it’s a pain in the ass to do otherwise. And this scenario will continue to play out again and again.

C) Ourselves.

It sucks that we have to take this into our own hands. But I don’t see an option. Unless you see something in the first two alternatives that I don’t see, I don’t think we have any choice but to take responsibility. Do you want to put your security in the hands of the government, or Facebook? The first doesn’t have the horsepower to do the job and the second is heading in the wrong direction.

So if the responsibility ends up being ours, what can we expect?

A few weeks ago, another fellow Insider, Dave Morgan, predicted that the moats around the walled gardens of data collectors like Facebook will get deeper. But the walled-garden approach is not sustainable in the long run. All the market forces are going against it. As markets mature, they move from silos to open markets. The marketplace of data will head in the same direction. Protectionist measures may be implemented in the short term, but they will not be successful.

This doesn’t negate the fact that the protection of personal information has suddenly become a massive pain point, which makes it a huge market opportunity. And like almost all truly meaningful disruptions in the marketplace, I believe the ability to lock down our own data will come from entrepreneurialism. We need a solution that guarantees universal data portability while maintaining control, without putting an unrealistic maintenance burden on us. Rather than having the various walled gardens warehouse our data, we should retain ownership, and it should be offered to platforms like Facebook only on a case-by-case, “need to know” transactional basis. Will that be disruptive to the current social ecosystem? Absolutely. And that’s a good thing.
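To make the “need to know” idea a little less abstract, here is a hypothetical sketch of what such a transactional grant might look like. Everything in it – the names, the fields, the one-hour expiry – is invented for illustration; no platform offers anything like this today:

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class DataGrant:
        platform: str       # who may read the data (e.g., "facebook.com")
        fields: tuple       # exactly which attributes are shared, nothing more
        purpose: str        # the single transaction this grant covers
        expires: datetime   # grants lapse; nothing is warehoused permanently

        def allows(self, platform: str, field: str) -> bool:
            # Deny by default: the platform, the field and the clock all
            # have to line up before any data is released.
            return (platform == self.platform
                    and field in self.fields
                    and datetime.now(timezone.utc) < self.expires)

    # The user lends Facebook their display name and city for one hour,
    # for one stated purpose. Anything else is simply not available.
    grant = DataGrant(
        platform="facebook.com",
        fields=("display_name", "city"),
        purpose="show my profile card to confirmed friends",
        expires=datetime.now(timezone.utc) + timedelta(hours=1),
    )

    print(grant.allows("facebook.com", "city"))             # True
    print(grant.allows("facebook.com", "political_views"))  # False
    print(grant.allows("adnetwork.example", "city"))        # False

The point of the sketch is the default: the burden of proof sits with the requester, and the data reverts to its owner the moment the transaction ends.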

The targeting of advertising is not a viable business model for the intertwined worlds of social connection and personal functionality. There is just too much at stake here. The only way it can work is for the organization doing the targeting to retain ownership of the data used for the targeting. And we should not trust them to do so in an ethical manner. Their profitability depends on them going beyond what is – or should be – acceptable to us.