Why Disruption is Becoming More Likely in the Data Marketplace

Another week, another breach. This time, 500 million records were hacked from Marriott, making it the second-largest data breach in history, behind Yahoo’s breach of 3 billion user accounts.

For now. There will probably be a bigger breach. There will definitely be a more painful breach. And by painful, I mean painful to you and me. It’s in that pain – specifically, the degree of that pain – that the future of how we handle our personal data lies.

Markets innovate along paths of least resistance. Market development is a constantly evolving tension between innovation and resistance. If there is little resistance, markets innovate in predictable ways from their current state. If that innovation provokes pushback from the market, we encounter resistance. When markets meet significant resistance, disruption occurs, opening the door for innovation in new directions that get around it. When we talk about data, we are talking about a market whose value is still in the process of defining itself. And it’s in that definition of value that we’ll find the potential market resistance for data.

Individual data is a raw resource. It doesn’t have value until it becomes “big.” Personal data needs to be aggregated and structured to become valuable. This creates a dilemma for us: unless we provide the raw material, there is no “big” data at all. And once aggregated, our data becomes valuable to others, but not necessarily to ourselves.

Up to now, the value we have exchanged our privacy for has been convenience. It’s easier for us to store our credit card data with Amazon so we can enable one-click ordering. And we feel this exchange has been a bargain. But it remains an asymmetrical exchange. Our data has no positive value to us, only negative. We can be hurt by our data, but other than the aforementioned exchange for convenience, it doesn’t really help us. That is why we’ve been willing to give it away for so little. But once it’s aggregated and becomes “big,” it has tremendous value to the people we give it to. It also has value to those who wish to steal it from the people we have entrusted it to. The irony here is that whether that data is in the “right” hands or the “wrong” ones, it can still mean pain for us. The differentiator is the degree of that pain.

Let’s examine the potential harm that could come from sharing our data. How painful could this get? Literally every warning we write about here at MediaPost has data at its core. Just yesterday, fellow Insider Steven Rosenbaum wrote about how the definition of warfare has changed. The fight isn’t for land. War is now waged for our minds. And data is used to target those minds.

Essentially, sharing our data makes us vulnerable to being targeted. And the outcome of that targeting can range from simply annoying to life-alteringly dangerous. Even the fact that we refer to it as targeting should raise a red flag. There’s a reason we use a term typically associated with a negative outcome for those on the receiving end. You’re very seldom targeted for things that are beneficial to you. And that’s true no matter who is doing the targeting. At its most benign, targeting is used to push some type of messaging – typically advertising – to you. But you could also be targeted by Russian hackers in an attempt to subvert democracy. Most acutely, you could be targeted for financial fraud. Or blackmail. Targeting is almost never a good thing. The degree of harm can vary, but the cause doesn’t. Our data – the data we share willingly – makes targeting possible.

We are in an interesting time for data. We have largely shrugged off the pain of the various breaches that have made it to the news. We still hand over our personal data with little to no thought of the consequences. And because we still participate by providing the raw material, we have enabled the development of an entire data marketplace. We do so because there is no alternative without making sacrifices we are not willing to make. But as the degree of personal pain continues to get dialed up, all the prerequisites of market disruption are being put in place. Breaches will continue. The odds of us being personally affected will continue to climb. And innovators will find solutions to this problem that will be increasingly easy to adopt.

For many, many reasons, I don’t think the current trajectory of the data marketplace is sustainable. I’m betting on disruption.

We Have to Dig Deeper for the True Disruption Threatening Advertising

Ken Auletta had me at “disruption.” I’ve just finished reading his new book, “Frenemies: The Epic Disruption of the Ad Business (and Everything Else).” Regular readers will know that this title would be like catnip to me. Despite the hyperbole he employs, Auletta was speaking my language. As a bonus, Mr. Auletta does appear to be at least an occasional reader, as he did quote me twice in the book.

The majority of the disruption, according to Auletta, is happening within the ad biz itself. The “Frenemies” described in the book are the digital disruptors: Facebook, Google and – increasingly – Amazon. And their position of strength is the reams of data they collect. The disrupted are primarily the holding companies like WPP and Publicis.

Auletta’s narrative frame for his book is focused on the fortunes of Michael Kassan – who, through his company MediaLink, has managed to position himself as the über-connector between the traditional holding companies and the new digital disruptors. Auletta says Kassan is “advertising’s Dolly Levi.” For those of you on the south side of 70, he’s referring to the lead character of the 1964 musical Hello, Dolly!, a New York City matchmaker. Auletta skips back and forth between the disrupted – represented by Sir Martin Sorrell and Irwin Gotlieb of WPP, Maurice Levy and Rishad Tobaccowala of Publicis and many of the other usual suspects – and the disruptors – in this case represented mainly by Carolyn Everson, a Facebook VP.

The other narrative device Auletta employs is the battle of Mad Men vs. Math Men, which is a little too cutesy for my taste, not to mention hackneyed (the agency I used to work for trundled out this same meme at least six years ago). While the battle between the Big Idea and Big Data is an easily found target, I think what we’re missing here is the Big Picture.

Auletta’s take is too shortsighted. The digital disruption he documents is indeed happening, but the bigger disruption is not between the holding companies and the new digital, data-rich platforms; it’s between the market and the marketer. The entire advertising industry is based on an exchange that is no longer valid. Other than one chapter that deals with ad blocking and another about privacy concerns, Auletta spends little time exploring the consumer’s outright rejection of advertising.

If we’re talking about disruption in the ad biz, we have to borrow the infamously head-scratching quote from Donald Rumsfeld:

“There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.”

Auletta’s book – understandably – deals with the first two categories. After all, it’s pretty hard to write a 358-page book on what you don’t know you don’t know. But as Rumsfeld said, it’s that category where you find your “gotchas.”

If we are really going to look at “epic disruptions,” we have to look at the foundations, not the increasingly shaky edifices built on those foundations. And the foundation of advertising is the exchange of a consumer’s attention in return for something of value to them. In the past, that has been either entertainment or information. And we – the consumers – placed value on those things because there was no other place to get them. Scarcity confers value. But that is no longer true. Our expectations have changed when it comes to sourcing both information and entertainment.

I’ve stated this before, and it’s this quote that Auletta used – twice. While I’m grateful for that, I believe this disruption in the value exchange is not just an interesting aside. It’s the key to the whole thing. I don’t care if your advertising is driven by the smartest AI super-algorithm powered by munching on my personal data. If I didn’t ask to see your ad, I don’t want to see it. Period. I have many choices, and watching intrusive ads is way down my list.

If I’m right, I’m not sure what that means for the future. But this rejection of advertising by the market is the place where those things we don’t know we don’t know live. Yes, advertising holding companies are doomed. But I also think that Facebook’s future as an ad-financed platform is similarly doomed. And that’s certainly not something that made it into Ken Auletta’s book.

The Rank and File: Life and Work in a Quantified World

No one likes to be a number – unless, of course, that number is one. Then it’s okay.

Rankings started to be crucial to me back in 1996, when I jumped into the world of search engine optimization. Suddenly, the ten blue links on a search results page took on critical importance. The most important, naturally, was the first result. It turned on the tap for a flow of business many local organizations could only dream of. My company once got a California Mustang parts retailer the number one ranking for “Ford Mustang Parts.” The official site of Ford – Ford.com – was number two. The California business did very well for a few years. We probably made them rich. Then Google came along and the party was over. We soon found that as quickly as that tap could be turned on, it could also be turned off. We and our clients rode the stormy waters of multiple Google updates. We called it the Google dance.

Now that I’m in my second life (third? fourth?) as a tourism operator, I’m playing that ranking game again. This time it’s with TripAdvisor. You would not believe how important a top ranking in your category is here. Again, your flow of business can be totally at the mercy of how well you rank.

The problem with TripAdvisor is not so much the algorithm in the background or the criteria used for ranking. The problem is the delta between riches and rags. If you drop below the proverbial fold on TripAdvisor, your tourism business can shrivel up and die. One bad review could be the difference. I feel like I’m dancing the Google dance all over again.

But at least TripAdvisor is what I would call a proximate ranking site. The source of the rankings is closely connected to the core nature of the industry. Tourism is all about experiences. And TripAdvisor is a platform for experience reviews. There is some wiggle room there for gaming the system, but the unintended consequences are kept to a minimum. If you’re in the business of providing good experiences, you should do well in TripAdvisor. And if you pay attention to the feedback on TripAdvisor, your business should improve. This is a circle that is mostly virtuous.

Such is not always the case. Take teaching, for example. RateMyProfessors.com is a ranking site for teachers and professors, based on feedback from students. If you read through the reviews, it soon becomes obvious that funny, relatable, good-looking profs fare better than those who are less socially gifted. It has become a popularity contest for academics. Certainly, some of those things may factor into the effectiveness of a good educator, but there is a universe of other criteria that are given short shrift on the site. Teaching is a subtle and complex profession. Is a popular prof necessarily a good prof? If too much reliance is placed on ratings like those found on RateMyProfessors.com, will the need to be popular push some of those other, less rankable attributes to the background?

But let’s step back even a bit further. Along with the need to quantify everything there is also a demand for transparency. Let’s step into another classroom, this time in your local elementary school. The current push is to document what’s happening in the classroom and share it on a special portal that parents have access to. In theory, this sounds great. Increasing collaboration and streamlining the communication triangle between teachers, students and parents should be a major step forward. But it’s here where unintended consequences can run the education process off the rails. Helicopter parents are the most frequent visitors to the portal. They also dominate these new communication channels that are now open to their children’s teachers. And – if you know a helicopter parent – you know these are people who have no problem picking up the phone and calling the school administrator or even the local school board to complain about a teacher. Suddenly, teachers feel they’re constantly under the microscope. They alter their teaching style and course content to appeal to the types of parents that are constantly monitoring them.

Even worse, the teacher finds themselves constantly interrupting their own lesson in order to document what’s going on to keep these parents satisfied. What appears to be happening in the classroom through social media becomes more important than what’s actually happening in the classroom. If you’ve ever tried to actively present to a group and also document what’s happening at the same time, you know how impossible this can be. Pity then the poor teacher of your children.

There is a quote often (incorrectly) attributed to management guru Peter Drucker: “If you can’t measure it, you can’t manage it.” The reality is a lot more nuanced. As we’re finding out, what you’re actually measuring matters a lot. It may be leading you in completely the wrong direction.

Our Trust Issues with Advertising-Based Revenue Models

Facebook’s in the soup again. They’re getting their hands slapped for tracking our location. And I have to ask: why is anyone surprised they’re tracking our location? I’ve said this before, but I’ll say it again. What is good for us is not good for Facebook’s revenue model. And vice versa. Social platforms should never be driven by advertising. Period. Advertising requires targeting. And when you combine prospect targeting with the digital residue of our online activities, bad things are bound to happen. It’s inevitable, and it’s going to get worse. Facebook’s future earnings absolutely dictate that they have to get us to spend more time on their platform, and they have to be more invasive about tracking what we do with that time. Their walled data garden and their reluctance to give us a peek at what’s happening inside should be massive red flags.

Our social activities are already starting to fragment across multiple platforms – and multiple accounts within each of those platforms. We are socially complex people, and it’s naïve to think that all that complexity could be contained within any one ecosystem – even one as sprawling as Facebook’s. In our real lives – you know, the life you lead when you’re not staring at your phone – our social activities are as varied as our moods, our activities, our environment and the people we are currently sharing that environment with. Being social is not a single aspect of our lives. It is the connecting tissue of all that we are. It binds all the things we do into a tapestry of experience. It reflects who we are, and our identities are shaped by it. Even when we’re alone, as I am while writing this column, we are being social. I am communicating with each of you, and the things I am communicating are shaped by my own social experiences.

My point here is that being social is not something we turn on and off. We don’t go somewhere to be social. We are social. To reduce social complexity and try to contain it within an online ecosystem is a fool’s errand. Trying to support it with advertising just makes it worse. A revenue model based on advertising is self-limiting. It has always been a path of least resistance, which is why it’s so commonly used. It places no financial hurdles on the path to adoption. We have never had to pay money to use Facebook, or Instagram, or Snapchat. But we do pay with our privacy. And eventually, after the inevitable security breaches, we also lose our trust. That lack of trust limits the effectiveness of any social medium.

Of course, it’s not just social media that suffers from the trust issues that come with advertising-based revenue. This advertising-driven path has worked up to now because trust was never really an issue. We took comfort in our perceived anonymity in the eyes of the marketer. We were part of a faceless, nameless mass market that traded attention for access to information and entertainment. Advertising works well with mass. As I mentioned, there are no obstacles to adoption. It was the easiest way to assemble the biggest possible audience. But we now market one-to-one. And as the ones on the receiving end, we are increasingly seeking functionality. That is a fundamentally different precept. When we seek to do things, rather than passively consume content, we can no longer remain anonymous. We make choices, we go places, we buy stuff, we do things. In doing this, we leave indelible footprints which are easy to track and aggregate.

Our online and offline lives have now melded to the point where we need – and expect – something more than a collection of platforms offering fragmented functionality. What we need is a highly personalized OS, a foundational operating system that is intimately designed just for us and connects the dots of functionality. This is already happening in bits and pieces through the data we surrender when we participate in the online world. But that data lives in thousands of different walled gardens, including the social platforms we use. Then that data is used to target advertising to us. And we hate advertising. It’s a fundamentally flawed contract that we will – given a viable alternative – opt out of. We don’t trust the social platforms we use, and we’re right not to. If we had any idea of the depth or degree of personal information they have about us, we would be aghast. I have said before that we are willing to trade privacy for functionality, and I still believe this. But once our trust has been broken, we are less willing to surrender that private data, which is essential to the continued profitability of an ad-supported platform.

We need to own our own data. This isn’t so much to protect our privacy as it is to build a new trust contract that will allow that data to be used more effectively for our own purposes, and not those of a corporation whose only motive is to increase its own profit. We need to remove the limits imposed by a flawed functionality offering based on serving us ads we don’t want. If we’re looking for the true disruptor in advertising, that’s it in a nutshell.

Rethinking Media

I was going to write about the Facebook/Google duopoly, but I got sidetracked by this question: “If Google and Facebook are a duopoly, what is the market they are controlling?” The market, in this case, is online marketing, of which they carve out a whopping 61% of all revenue. That’s advertising revenue. And yet we have Mark Zuckerberg testifying this spring in front of Congress that Facebook is not a media company…

“I consider us to be a technology company because the primary thing that we do is have engineers who write code and build products and services for other people.”

That may be an interesting position to take, but his adoption of a media-based revenue model doesn’t pass the smell test. Facebook makes its revenue from advertising, and you can only sell advertising if you are a medium. “Media” literally means an intervening agency that provides a means of communication. The trade-off for providing that means is that you get to monetize it by allowing advertisers to piggyback on that communication flow. There is nothing in the definition of “media” about content creation.

Google has also used this defense. The common thread seems to be that they are exempt from the legal checks and balances normally associated with media because they don’t produce content. But they do accept content, they do have an audience and they do profit from connecting these two through advertising. It is disingenuous to try to split legal hairs in order to avoid the responsibility that comes from their position as a mediator.

But this all brings up the question:  what is “media”? We use the term a lot. It’s in the masthead of this website. It’s on the title slug of this daily column. We have extended our working definition of media, which was formed in an entirely different world, as a guide to what it might be in the future. It’s not working. We should stop.

First of all, definitions depend on stability, and the worlds of media and advertising are definitely not stable. We are in the middle of a massive upheaval. Secondly, definitions are mental labels. Labels are shortcuts we use so we don’t have to think about what something really is. And I’m arguing that we should be thinking long and hard about what media is now and what it might become in the future.

I can accept that technology companies want to disintermediate, democratize and eliminate transactional friction. That’s what technology companies do. They embrace elegance –  in the scientific sense – as the simplest possible solution to something. What Facebook and Google have done is simplified the concept of media back to its original definition: the plural of medium, which is something that sits in the middle. In fact – by this definition – Google and Facebook are truer media than CNN, the New York Times or Breitbart. They sit in between content creators and content consumers. They have disintermediated the distribution of content. They are trying to reap all the benefits of a stripped down and more profitable working model of media while trying to downplay the responsibility that comes with the position they now hold. In Facebook’s case, this is particularly worrisome because they are also aggregating and distributing that content in a way that leads to false assumptions and dangerous network effects.

Media as we used to know it gradually evolved a check and balance process of editorial oversight and journalistic integrity that sat between the content they created and the audience that would consume it. Facebook and Google consider those things transactional friction. They were part of an inelegant system. These “technology companies” did their best to eliminate those human dependent checks and balances while retaining the revenue models that used to subsidize them.

We are still going to need media in a technological future. Whether they be platforms or publishers, we are going to depend on and trust certain destinations for our information. We will become their audience and in exchange they will have the opportunity to monetize this audience. All this should not come cheaply. If they are to be our chosen mediators, they have to live up to their end of the bargain.

Advertising Meets its Slippery Slope

We’ve now reached the crux of the matter when it comes to the ad biz.

For a couple of centuries now, we’ve been refining the process of advertising. The goal has always been to get people to buy stuff. But right now there is a perfect storm of forces converging that requires some deep navel-gazing on the part of us insiders.

It used to be that to get people to buy, all we had to do was inform. Pent-up consumer demand created by expanding markets and new product introductions would take care of the rest. We just had to connect the better mousetraps with the world, which would then duly beat a path to the respective door. Advertising equaled awareness.

But sometime in the waning days of the consumer orgy that followed World War Two, we changed our mandate. Not content with simply informing, we decided to become influencers. We slipped under the surface of the brain, moving from providing information for rational consideration to priming subconscious needs. We started messing with the wiring of our market’s emotional motivations.  We became persuaders.

Persuasion is like a mental iceberg – 90% of the bulk lies below the surface. Rationalization is typically the hastily added layer of ad hoc logic that happens after the decision is already made. This is true to varying degrees for almost any consumer category you can think of, including – unfortunately – our political choices.

This is why, a few columns ago, I said Facebook’s current model is unsustainable. It is based on advertising, and I think advertising may have become unsustainable. The truth is, advertisers have gotten so good at persuading us to do things that we are beginning to revolt. It’s getting just too creepy.

To understand how we got here, let’s break down persuasion. It requires the persuader to shift the beliefs of the persuadee. The bigger the shift required, the tougher the job of persuasion.  We tend to build irrational (aka emotional) bulwarks around our beliefs to preserve them. For this reason, it’s tremendously beneficial to the persuader to understand the belief structure of their target. If they can do this, they can focus on those whose belief structure is most conducive to the shift required.

When it comes to advertisers, the needle on our creative powers of persuasion hasn’t really moved that much in the last half century. There were very persuasive ads created in the 1960s, and there are still great ads being created. The disruption that has moved our industry to the brink of the slippery slope has all happened on the targeting end.

The world we used to live in was a bunch of walled and mostly unconnected physical gardens. Within each, we would have relevant beliefs but they would remain essentially private. You could probably predict with reasonable accuracy the religious beliefs of the members of a local church. But that wouldn’t help you if you were wondering whether the congregation leaned towards Ford or Chevy.  Our beliefs lived inside us, typically unspoken and unmonitored.

That all changed when we created digital mirrors of ourselves through Facebook, Twitter, Google and all the other usual suspects. John Battelle, author of The Search,  once called Google the Database of Intentions. It is certainly that. But our intent also provides an insight into our beliefs. And when it comes to Facebook, we literally map out our entire previously private belief structure for the world to see. That is why Big Data is so potentially invasive. We are opening ourselves up to subconscious manipulation of our beliefs by anyone with the right budget. We are kidding ourselves if we believe ourselves immune to the potential abuse that comes with that. Like I said, 90% of our beliefs are submerged in our subconscious.

We are just beginning to realize how effective the new tools of persuasion are. And as we do so, we are beginning to feel that this is all very unfair. No one likes being manipulated, even if they have willingly laid the groundwork for that manipulation. Our sense of retroactive justice kicks in. We post-rationalize and point fingers. We blame Facebook, or the government, or some hackers in Russia. But these are all just participants in a new eco-system that we have helped build. The problem is not the players. The problem is the system.

It’s taken a long time, but advertising might just have gotten to the point where it works too well.

Who Should (or Could) Protect Our Data?

Last week, when I talked about the current furor around the Cambridge Analytica scandal, I said that part of the blame – or at least, the responsibility – for the protection of our own data belonged to us. Reader Chuck Lantz responded with:

“In short, just because a company such as FaceBook can do something doesn’t mean they should.  We trusted FaceBook and they took advantage of that trust. Not being more careful with our own personal info, while not very wise, is not a crime. And attempting to dole out blame to both victim and perpetrator ain’t exactly wise, either.”

Whether it’s wise or not, when it comes to our own data, there are only three places we can reasonably look to protect it:

A) The Government

One only has to look at the supposed “grilling” of Zuckerberg by Congress to realize how forlorn a hope this is. In a follow-up post, Wharton ran a list of the questions that Congress should have asked, compiled from its own faculty. My personal favorite comes from Eric Clemons, professor of Operations, Information and Decisions:

“You benefited financially from Cambridge Analytica’s clients’ targeting of fake news and inflammatory posts. Why did you wait years to report what Cambridge Analytica was doing?”

Technology has left the regulatory ability to control it in the dust. The EU is probably the most aggressive legislative jurisdiction in the world when it comes to protecting data privacy. The General Data Protection Regulation goes into effect on May 25 of this year and incorporates sweeping new protections for EU citizens. But it will inevitably come up short in three key areas:

  • Even though it immediately applies to all countries processing the data of EU citizens, international compliance will be difficult to enforce consistently, especially if that processing extends beyond “friendly” countries.
  • Technological “loopholes” will quickly find vulnerable gray areas in the legislation that will lead to the misuse of data. Technology will always move faster than legislation. As an example, the GDPR and blockchain technologies are seemingly on a collision course.
  • Most importantly, the GDPR is aimed at data “worst-case scenarios.” But there are many apparently benign applications that can border on misuse of personal data. In trying to police even the worst-case instances, the GDPR requires restrictions that will directly impact users in the areas of convenience and functionality. There are key areas, such as data portability, that aren’t fully addressed in the new legislation. At the end of the day, even though it’s protecting them, users will find the GDPR a pain in the ass.

Even with these fundamental flaws, the GDPR probably represents the world’s best attempt at data regulation. The US, as we’ve seen in the past week, comes up well short of this. And even if the people involved weren’t doddering, technologically inept old farts, the mechanisms required for the passing of relevant and timely legislation simply aren’t there. It would be like trying to catch a jet with a lasso. Should this be the job of government? Sure, I can buy that. Can government handle the job? Not based on the evidence we currently have available to us.

B) The companies that aggregate and manipulate our data.

Philosophically, I completely agree with Chuck. Like I said last week – the point of view I took left me ill at ease. We need these companies to be better than they are. We certainly need them to be better than Facebook was. But Facebook has absolutely no incentive to be better. And my fellow Media Insider, Kaila Colbin, nailed this in her column last week:

“Facebook doesn’t benefit if you feel better about yourself, or if you’re a more informed, thoughtful person. It benefits if you spend more time on its site, and buy more stuff. Giving the users control over who sees their posts offers the illusion of individual agency while protecting the prime directive.”

There are no inherent, proximate reasons for companies to be moral. They are built to be profitable (which, by the way, is why governments should never be run like a company). Facebook’s revenue model is directly opposed to personal protection of data. And that is why Facebook will try to weather this storm by implementing more self-directed security controls to put a good face on things. We will ignore those controls, because it’s a pain in the ass to do otherwise. And this scenario will continue to play out again and again.

C) Ourselves.

It sucks that we have to take this into our own hands. But I don’t see an option. Unless you see something in the first two alternatives that I don’t see, I don’t think we have any choice but to take responsibility. Do you want to put your security in the hands of the government, or Facebook? The first doesn’t have the horsepower to do the job and the second is heading in the wrong direction.

So if the responsibility ends up being ours, what can we expect?

A few weeks ago, another fellow Insider, Dave Morgan, predicted that the moats around the walled gardens of data collectors like Facebook will get deeper. But the walled-garden approach is not sustainable in the long run. All the market forces are going against it. As markets mature, they move from silos to open markets. The marketplace of data will head in the same direction. Protectionist measures may be implemented in the short term, but they will not be successful.

This doesn’t negate the fact that the protection of personal information has suddenly become a massive pain point, which makes it a huge market opportunity. And like almost all truly meaningful disruptions in the marketplace, I believe the ability to lock down our own data will come from entrepreneurialism. We need a solution that guarantees universal data portability while at the same time maintaining control, without putting an unrealistic maintenance burden on us. Rather than having the various walled gardens warehouse our data, we should retain ownership, and it should only be offered to platforms like Facebook on a case-by-case, “need to know” transactional basis. Will it be disruptive to the current social eco-system? Absolutely. And that’s a good thing.

The targeting of advertising is not a viable business model for the intertwined worlds of social connection and personal functionality. There is just too much at stake here. The only way it can work is for the organization doing the targeting to retain ownership of the data used for the targeting. And we should not trust them to do so in an ethical manner. Their profitability depends on them going beyond what is – or should be – acceptable to us.