Our Trust Issues with Advertising-Based Revenue Models

Facebook’s in the soup again. They’re getting their hands slapped for tracking our location. And I have to ask: why is anyone surprised they’re tracking our location? I’ve said this before, but I’ll say it again. What is good for us is not good for Facebook’s revenue model. And vice versa. Social platforms should never be driven by advertising. Period. Advertising requires targeting. And when you combine prospect targeting with the digital residue of our online activities, bad things are bound to happen. It’s inevitable, and it’s going to get worse. Facebook’s future earnings absolutely dictate that they have to try to get us to spend more time on their platform, and they have to be more invasive about tracking what we do with that time. Their walled data garden and their reluctance to give us a peek at what’s happening inside should be massive red flags.

Our social activities are already starting to fragment across multiple platforms – and multiple accounts within each of those platforms. We are socially complex people, and it’s naïve to think that all that complexity could be contained within any one ecosystem – even one as sprawling as Facebook’s. In our real lives – you know, the life you lead when you’re not staring at your phone – our social activities are as varied as our moods, our activities, our environment and the people we are currently sharing that environment with. Being social is not a single aspect of our lives. It is the connective tissue of all that we are. It binds all the things we do into a tapestry of experience. It reflects who we are, and our identities are shaped by it. Even when we’re alone, as I am while writing this column, we are being social. I am communicating with each of you, and the things I am communicating are shaped by my own social experiences.

My point here is that being social is not something we turn on and off. We don’t go somewhere to be social. We are social. To reduce social complexity and try to contain it within an online ecosystem is a fool’s errand. Trying to support it with advertising just makes it worse. A revenue model based on advertising is self-limiting. It has always been a path of least resistance, which is why it’s so commonly used. It places no financial hurdles on the path to adoption. We have never had to pay money to use Facebook, or Instagram, or Snapchat. But we do pay with our privacy. And eventually, after the inevitable security breaches, we also lose our trust. That lack of trust limits the effectiveness of any social medium.

Of course, it’s not just social media that suffers from the trust issues that come with advertising-based revenue. This advertising-driven path has worked up to now because trust was never really an issue. We took comfort in our perceived anonymity in the eyes of the marketer. We were part of a faceless, nameless mass market that traded attention for access to information and entertainment. Advertising works well at mass scale. As I mentioned, there are no obstacles to adoption. It was the easiest way to assemble the biggest possible audience. But we now market one to one. And as the ones on the receiving end, we are increasingly seeking functionality. That is a fundamentally different precept. When we seek to do things, rather than passively consume content, we can no longer remain anonymous. We make choices, we go places, we buy stuff, we do things. In doing this, we leave indelible footprints that are easy to track and aggregate.

Our online and offline lives have now melded to the point where we need – and expect – something more than a collection of platforms offering fragmented functionality. What we need is a highly personalized OS, a foundational operating system that is intimately designed just for us and connects the dots of functionality. This is already happening in bits and pieces through the data we surrender when we participate in the online world. But that data lives in thousands of different walled gardens, including the social platforms we use. Then that data is used to target advertising to us. And we hate advertising. It’s a fundamentally flawed contract that we will – given a viable alternative – opt out of. We don’t trust the social platforms we use, and we’re right not to. If we had any idea of the depth and degree of personal information they have about us, we would be aghast. I have said before that we are willing to trade privacy for functionality, and I still believe this. But once our trust has been broken, we are less willing to surrender that private data, which is essential to the continued profitability of an ad-supported platform.

We need to own our own data. This isn’t so much to protect our privacy as it is to build a new trust contract that will allow that data to be used more effectively for our own purposes, and not those of a corporation whose only motive is to increase its own profit. We need to remove the limits imposed by a flawed functionality offering based on serving us ads we don’t want. If we’re looking for the true disruptor in advertising, that’s it in a nutshell.


Rethinking Media

I was going to write about the Facebook/Google duopoly, but I got sidetracked by this question, “If Google and Facebook are a duopoly, what is the market they are controlling?” The market, in this case, is online marketing, of which they carve out a whopping 61% of all revenue. That’s advertising revenue. And yet, we have Mark Zuckerberg testifying this spring in front of Congress that he is not a media company…

“I consider us to be a technology company because the primary thing that we do is have engineers who write code and build product and services for other people.”

That may be an interesting position to take, but his adoption of a media-based revenue model doesn’t pass the smell test. Facebook makes its revenue from advertising, and you can only sell advertising if you are a medium. “Media” literally means an intervening agency that provides a means of communication. The trade-off for providing that means is that you get to monetize it by allowing advertisers to piggyback on that communication flow. There is nothing in the definition of “media” about content creation.

Google has also used this defense. The common thread seems to be that they are exempt from the legal checks and balances normally associated with media because they don’t produce content. But they do accept content, they do have an audience and they do profit from connecting these two through advertising. It is disingenuous to try to split legal hairs in order to avoid the responsibility that comes from their position as a mediator.

But this all brings up the question:  what is “media”? We use the term a lot. It’s in the masthead of this website. It’s on the title slug of this daily column. We have extended our working definition of media, which was formed in an entirely different world, as a guide to what it might be in the future. It’s not working. We should stop.

First of all, definitions depend on stability, and the worlds of media and advertising are definitely not stable. We are in the middle of a massive upheaval. Secondly, definitions are mental labels. Labels are shortcuts we use so we don’t have to think about what something really is. And I’m arguing that we should be thinking long and hard about what media is now and what it might become in the future.

I can accept that technology companies want to disintermediate, democratize and eliminate transactional friction. That’s what technology companies do. They embrace elegance – in the scientific sense – as the simplest possible solution to something. What Facebook and Google have done is simplify the concept of media back to its original definition: the plural of medium, which is something that sits in the middle. In fact – by this definition – Google and Facebook are truer media than CNN, the New York Times or Breitbart. They sit in between content creators and content consumers. They have disintermediated the distribution of content. They are trying to reap all the benefits of a stripped-down and more profitable working model of media while trying to downplay the responsibility that comes with the position they now hold. In Facebook’s case, this is particularly worrisome, because they are also aggregating and distributing that content in a way that leads to false assumptions and dangerous network effects.

Media as we used to know it gradually evolved a check-and-balance process of editorial oversight and journalistic integrity that sat between the content they created and the audience that would consume it. Facebook and Google consider those things transactional friction. They were part of an inelegant system. These “technology companies” did their best to eliminate those human-dependent checks and balances while retaining the revenue models that used to subsidize them.

We are still going to need media in a technological future. Whether they be platforms or publishers, we are going to depend on and trust certain destinations for our information. We will become their audience and in exchange they will have the opportunity to monetize this audience. All this should not come cheaply. If they are to be our chosen mediators, they have to live up to their end of the bargain.


Advertising Meets its Slippery Slope

We’ve now reached the crux of the matter when it comes to the ad biz.

For a couple of centuries now, we’ve been refining the process of advertising. The goal has always been to get people to buy stuff. But right now there is a perfect storm of forces converging that requires some deep navel-gazing on the part of us insiders.

It used to be that to get people to buy, all we had to do was inform. Pent-up consumer demand created by expanding markets and new product introductions would take care of the rest. We just had to connect the better mousetraps with the world, which would then duly beat a path to the respective doors. Advertising equaled awareness.

But sometime in the waning days of the consumer orgy that followed World War Two, we changed our mandate. Not content with simply informing, we decided to become influencers. We slipped under the surface of the brain, moving from providing information for rational consideration to priming subconscious needs. We started messing with the wiring of our market’s emotional motivations.  We became persuaders.

Persuasion is like a mental iceberg – 90% of the bulk lies below the surface. Rationalization is typically a hastily added layer of ad hoc logic that comes after the decision is already made. This is true to varying degrees for almost any consumer category you can think of, including – unfortunately – our political choices.

This is why, a few columns ago, I said Facebook’s current model is unsustainable. It is based on advertising, and I think advertising may have become unsustainable. The truth is, advertisers have gotten so good at persuading us to do things that we are beginning to revolt. It’s getting just too creepy.

To understand how we got here, let’s break down persuasion. It requires the persuader to shift the beliefs of the persuadee. The bigger the shift required, the tougher the job of persuasion.  We tend to build irrational (aka emotional) bulwarks around our beliefs to preserve them. For this reason, it’s tremendously beneficial to the persuader to understand the belief structure of their target. If they can do this, they can focus on those whose belief structure is most conducive to the shift required.
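
To make that targeting logic concrete, here’s a minimal sketch in Python – with entirely invented names and numbers – of the calculation just described: score each prospect by the size of the belief shift required, then spend the persuasion budget on the smallest gaps first.

```python
# A hypothetical sketch of belief-based targeting. Beliefs are scored
# on a -1.0 (strongly against) to +1.0 (strongly for) scale; all
# names and values below are invented for illustration.

prospects = {
    "alice": 0.6,   # already leans toward the persuader's position
    "bob": -0.8,    # strongly opposed: an expensive target
    "carol": 0.1,   # nearly neutral: a cheap target
}

TARGET_BELIEF = 0.9  # where the persuader wants everyone to end up

def shift_required(current: float) -> float:
    """The bigger the gap, the tougher the job of persuasion."""
    return abs(TARGET_BELIEF - current)

# Focus on those whose belief structure is most conducive to the
# shift required -- i.e., rank prospects by smallest gap first.
ranked = sorted(prospects, key=lambda name: shift_required(prospects[name]))
print(ranked)  # ['alice', 'carol', 'bob']
```

The hard part was never this arithmetic. It was filling in the prospects table – which is exactly what our digital residue now does.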

When it comes to advertisers, the needle on our creative powers of persuasion hasn’t really moved that much in the last half century. There were very persuasive ads created in the 1960s, and there are still great ads being created. The disruption that has moved our industry to the brink of the slippery slope has all happened on the targeting end.

The world we used to live in was a bunch of walled and mostly unconnected physical gardens. Within each, we would have relevant beliefs but they would remain essentially private. You could probably predict with reasonable accuracy the religious beliefs of the members of a local church. But that wouldn’t help you if you were wondering whether the congregation leaned towards Ford or Chevy.  Our beliefs lived inside us, typically unspoken and unmonitored.

That all changed when we created digital mirrors of ourselves through Facebook, Twitter, Google and all the other usual suspects. John Battelle, author of The Search,  once called Google the Database of Intentions. It is certainly that. But our intent also provides an insight into our beliefs. And when it comes to Facebook, we literally map out our entire previously private belief structure for the world to see. That is why Big Data is so potentially invasive. We are opening ourselves up to subconscious manipulation of our beliefs by anyone with the right budget. We are kidding ourselves if we believe ourselves immune to the potential abuse that comes with that. Like I said, 90% of our beliefs are submerged in our subconscious.

We are just beginning to realize how effective the new tools of persuasion are. And as we do so, we are beginning to feel that this is all very unfair. No one likes being manipulated, even if they have willingly laid the groundwork for that manipulation. Our sense of retroactive justice kicks in. We post-rationalize and point fingers. We blame Facebook, or the government, or some hackers in Russia. But these are all just participants in a new ecosystem that we have helped build. The problem is not the players. The problem is the system.

It’s taken a long time, but advertising might just have gotten to the point where it works too well.


Who Should (or Could) Protect Our Data?

Last week, when I talked about the current furor around the Cambridge Analytica scandal, I said that part of the blame – or at least, the responsibility – for the protection of our own data belonged to us. Reader Chuck Lantz responded with:

“In short, just because a company such as FaceBook can do something doesn’t mean they should.  We trusted FaceBook and they took advantage of that trust. Not being more careful with our own personal info, while not very wise, is not a crime. And attempting to dole out blame to both victim and perpetrator ain’t exactly wise, either.”

Whether it’s wise or not, when it comes to our own data, there are only three places we can reasonably look to protect it:

A) The Government

One only has to look at the supposed “grilling” of Zuckerberg by Congress to realize how forlorn a hope this is. In a follow-up post, Wharton ran a list of the questions Congress should have asked, compiled from its own faculty. My personal favorite comes from Eric Clemons, professor of Operations, Information and Decisions:

“You benefited financially from Cambridge Analytica’s clients’ targeting of fake news and inflammatory posts. Why did you wait years to report what Cambridge Analytica was doing?”

Technology has left the regulatory ability to control it in the dust. The EU is probably the most aggressive legislative jurisdiction in the world when it comes to protecting data privacy. The General Data Protection Regulation goes into effect on May 25 of this year and incorporates sweeping new protections for EU citizens. But it will inevitably come up short in three key areas:

  • Even though it immediately applies to all countries processing the data of EU citizens, international compliance will be difficult to enforce consistently, especially if that processing extends beyond “friendly” countries.
  • Technological “loopholes” will quickly open up in the legislation’s vulnerable gray areas, leading to the misuse of data. Technology will always move faster than legislation. As an example, the GDPR and blockchain technologies are seemingly on a collision course.
  • Most importantly, the GDPR is aimed at data “worst-case scenarios.” But there are many apparently benign applications that can border on misuse of personal data. In trying to police even the worst-case instances, the GDPR imposes restrictions that will directly impact users in the areas of convenience and functionality. Key areas such as data portability aren’t fully addressed in the new legislation. At the end of the day, even though it’s protecting them, users will find the GDPR a pain in the ass.

Even with these fundamental flaws, the GDPR probably represents the world’s best attempt at data regulation. The US, as we’ve seen in the past week, comes up well short of this. And even if the people involved weren’t doddering, technologically inept old farts, the mechanisms required for passing relevant and timely legislation simply aren’t there. It would be like trying to catch a jet with a lasso. Should this be the job of government? Sure, I can buy that. Can government handle the job? Not based on the evidence we currently have available to us.

B) The companies that aggregate and manipulate our data.

Philosophically, I completely agree with Chuck. Like I said last week – the point of view I took left me ill at ease. We need these companies to be better than they are. We certainly need them to be better than Facebook was. But Facebook has absolutely no incentive to be better. And my fellow Media Insider, Kaila Colbin, nailed this in her column last week:

“Facebook doesn’t benefit if you feel better about yourself, or if you’re a more informed, thoughtful person. It benefits if you spend more time on its site, and buy more stuff. Giving the users control over who sees their posts offers the illusion of individual agency while protecting the prime directive.”

There are no inherent, proximate reasons for companies to be moral. They are built to be profitable (which, by the way, is why governments should never be run like a company). Facebook’s revenue model is directly opposed to personal protection of data. And that is why Facebook will try to weather this storm by implementing more self-directed security controls to put a good face on things. We will ignore those controls, because it’s a pain in the ass to do otherwise. And this scenario will continue to play out again and again.

C) Ourselves.

It sucks that we have to take this into our own hands. But I don’t see an option. Unless you see something in the first two alternatives that I don’t see, I don’t think we have any choice but to take responsibility. Do you want to put your security in the hands of the government, or Facebook? The first doesn’t have the horsepower to do the job and the second is heading in the wrong direction.

So if the responsibility ends up being ours, what can we expect?

A few weeks ago, another fellow Insider, Dave Morgan, predicted that the moats around the walled gardens of data collectors like Facebook will get deeper. But the walled-garden approach is not sustainable in the long run. All the market forces are going against it. As markets mature, they move from silos to open markets. The marketplace of data will head in the same direction. Protectionist measures may be implemented in the short term, but they will not succeed.

This doesn’t negate the fact that the protection of personal information has suddenly become a massive pain point, which makes it a huge market opportunity. And like almost all truly meaningful disruptions in the marketplace, I believe the ability to lock down our own data will come from entrepreneurialism. We need a solution that guarantees universal data portability while maintaining control, without putting an unrealistic maintenance burden on us. Rather than having the various walled gardens warehouse our data, we should retain ownership, and it should only be offered to platforms like Facebook on a case-by-case, “need to know” transactional basis. Will it be disruptive to the current social ecosystem? Absolutely. And that’s a good thing.
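
As a thought experiment only – none of this reflects any real platform’s API – here’s a minimal Python sketch of what that case-by-case, “need to know” model might look like: the data stays in a vault we own, and a platform gets a scoped, revocable grant instead of a warehouse copy.

```python
# Hypothetical sketch of user-owned data with case-by-case grants.
# Nothing here reflects a real API; it only illustrates the model
# described above: platforms query our data, they don't warehouse it.

class DataVault:
    def __init__(self, data: dict):
        self._data = data     # we retain ownership of the raw data
        self._grants = {}     # platform -> set of permitted fields

    def grant(self, platform: str, fields: set):
        """Allow a platform to read specific fields, nothing more."""
        self._grants.setdefault(platform, set()).update(fields)

    def revoke(self, platform: str):
        """Trust broken? The grant disappears; no copy was ever kept."""
        self._grants.pop(platform, None)

    def request(self, platform: str, field: str):
        """A transactional, need-to-know read of a single field."""
        if field in self._grants.get(platform, set()):
            return self._data[field]
        raise PermissionError(f"{platform} has no grant for '{field}'")

vault = DataVault({"city": "Lisbon", "age": 47, "routes": ["morning ride"]})
vault.grant("social.example", {"city"})
print(vault.request("social.example", "city"))  # allowed: Lisbon
vault.revoke("social.example")
# vault.request("social.example", "city") would now raise PermissionError
```

The design choice that matters is the revoke: because the platform never held a copy, withdrawing our trust would actually mean something.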

The targeting of advertising is not a viable business model for the intertwined worlds of social connection and personal functionality. There is just too much at stake here. The only way it can work is for the organization doing the targeting to retain ownership of the data used for the targeting. And we should not trust them to do so in an ethical manner. Their profitability depends on them going beyond what is – or should be – acceptable to us.

The Pillorying of Zuckerberg

Author’s Note: When I started this column I thought I agreed with the views stated. And I still do, mostly. But by the time I finished it, there was doubt niggling at me. It’s hard when you’re an opinion columnist who’s not sure you agree with your own opinion. So here’s what I decided to do. I’m running this column as I wrote it. Then, next week, I’m going to write a second column rebutting some of it.

Let’s face it: we love it when smart asses get theirs. For example: Sir Martin Sorrell. Sorry, your lordship, but I always thought you were a pontificating and pretentious dickhead, and I’m kind of rooting for the team digging up dirt on you. Let’s see if you doth protest too much.

Or Jeff Bezos. Okay, granted Trump doesn’t know what the hell he’s talking about regarding Amazon. And we apparently love the company. But just how much sympathy do we really have for the world’s richest man? Couldn’t he stand to be taken down a few pegs?

Don’t get me started on Bill Gates.

But the capo di tutti capi of smart-asses is Mark Zuckerberg. As mad as we are about the gushing security leak that has sprung on his watch, aren’t we all a little bit schadenfreude-ish as we watch the public flailing that is currently playing out? It’s immensely satisfying to point a finger of blame, and it’s doubly so to point it at Mr. Zuckerberg.

Which finger you use I’ll leave to your discretion.

But here’s the thing. As satisfying as it is to make Mark our scapegoat, this problem is systemic. It’s not the domain of one man, or even one company. I’m not absolving Facebook and its founder from blame. I’m just spreading it around so it’s a little more representatively distributed. And as much as we may hate to admit it, some of that blame ends up on our plate. We enabled the system that made this happen. We made personal data the new currency of exchange. And now we’re pissed off because exchanges were made without our knowledge. It all comes down to this basic question: who owns our data?

This is the fundamental question that has to be resolved. Up to now, we’ve been more than happy to surrender our data in return for the online functionality we need to pursue trivial goals. We rush to play Candy Crush and damn the consequences. We have mindlessly put our data in the hands of Facebook without any clear boundaries around what was and wasn’t acceptable for us.

If we look at data as a new market currency, our relationship with Facebook is really no different from our relationship with a bank: we deposit our money and allow the bank to use it for its own purposes in return for paying us interest. This is how markets work. They are complicated and interlinked and the furthest thing possible from being proportionately equitable.

Personal data is a big industry. And like any industry, there is a value chain emerging. We are at the bottom of that chain. We supply the raw data. It is no coincidence that terms like “mining,” “scraping” and “stripping” are used when we talk about harvesting data. The digital trails of our behaviors and private thoughts are a raw resource that has become incredibly valuable. And Facebook just happens to be strategically placed in the market to reap the greatest rewards. They add value by aggregating and structuring the data. Advertisers then buy prepackaged blocks of this data to target their messaging. The targeting Facebook can provide – thanks to the access they have to our data – is superior to anything that was available before. This is a simple supply-and-demand equation. Facebook was connecting the supply – our willingness to surrender our personal data – with the demand – advertisers insisting on more intrusive and personal targeting criteria. A market opportunity emerged, and Facebook jumped on it. The phrase “don’t hate the player, hate the game” comes to mind.

When new and untested markets emerge, all goes well until it doesn’t. Then all hell breaks loose. Just like it did with Cambridge Analytica. When that happens, our sense of fairness kicks in. We feel duped. We rush to point fingers. We become judgmental, but everything is done in hindsight. This is all reaction. We have to be reactive, because emerging markets are unpredictable. You can’t predict something like Cambridge Analytica. If it wasn’t them – if it wasn’t this – it would have been something else that would have been equally unpredictable. The emerging market of data exchange virtually guaranteed that hell would eventually break loose. As a recent post on Gizmodo points out,

“the kind of data acquisition at the heart of the Cambridge Analytica scandal is more or less standard practice for every other technology company, including places like Google and even Apple. Facebook simply had the misfortune of getting caught after playing fast and loose with who has control over their data.”

To truly move forward from this, we all have to ask ourselves some hard questions. This is not restricted to Mark Zuckerberg and Facebook. It’s symptomatic of a much bigger issue. And we, the ground level source of this data, will be doing ourselves a disservice in the long run by trying to isolate the blame to any one individual or company. In a very real sense, this is our problem. We are part of a market dynamic that is untested and – as we’ve seen – powerful enough to subvert democracy. Some very big changes are required in the way we treat our own data. We owe it to ourselves to be part of that process.

The Rain in Spain

Olá! Greetings from the soggy Iberian Peninsula. I’ve been in Spain and Portugal for the last three weeks, which has included – count them – 21 days of rain and gale force winds. Weather aside, it’s been amazing. I have spent very little of that time thinking about online media. But, for what they’re worth, here are some random observations from the last three weeks:

The Importance of Familiarity

While here, I’ve been reading Derek Thompson’s book Hit Makers. One of the critical components of a hit is a foundation of familiarity. Once this is in place, a hit provides just enough novelty to tantalize us. It’s why Hollywood studios seem stuck on the superhero-sequel cycle.

This was driven home to me as I travelled. I’m a do-it-yourself traveller. I avoid packaged vacations whenever and wherever possible. But there is a price to be paid for this. Every time we buy groceries, take a drive, catch a train, fill up with gas or drive through a tollbooth (especially in Portugal), there is a never-ending series of puzzles to be solved. The fact that I know no Portuguese and very little Spanish makes this even more challenging. I’m always up for a good challenge, but I have to tell you, at the end of three weeks, I’m mentally exhausted. I’ve had more than enough novelty, and I’m craving some more familiarity.

This has made me rethink the entire concept of familiarity. Our grooves make us comfortable. They’re the foundations that make us secure enough to explore. It’s no coincidence that the words “family” and “familiar” come from the same etymological root.

The Opposite of Agile Development

While in Seville, we visited the cathedral there. The main altarpiece, which is the largest and one of the finest in the world, was the life’s work of one man, Pierre Dancart. He worked on it for 44 years of his life and never saw the finished product. In total, it took over 80 years to complete.

Think about that for a moment. This man worked on this one piece of art for his entire life. There was no morning when he woke up and wondered, “Hmm, what am I going to do today?” This was it, from the time he was barely more than a teenager until he was an old man. And he still never got to see the completed work. That span of time is amazing to me. If the altarpiece were finished today, work on it would have started in 1936.

The Ubiquitous Screen

I love my smartphone. It has saved my ass more than once on this trip. But I was saddened to see that our preoccupation with being connected has spread into every nook and cranny of European culture. Last night, we went for dinner at a lovely little tapas bar in Lisbon. It was achingly romantic. There was a young German couple next to us who may or may not have been in love. It was difficult to tell, because they spent most of the evening staring at their phones rather than at each other.

I have realized that the word “screen” has many meanings, one of which is a “barrier meant to hide things or divide us.”

El Gordo

Finally, after giving my name in a few places and getting mysterious grins in return, I have realized that “gordo” means “fat” in Spanish and Portuguese.

Make of that what you will.

Drawing a Line in the Sand for Net Privacy

Ever heard of Strava? The likelihood that you would say yes jumped astronomically on January 27, 2018. That was the day of the Strava security breach. Before that, you had probably never heard of it, unless you happened to be a cyclist or runner.

I’ve talked about Strava before. Then, I was talking about social modality and trying to keep our various selves straight on various social networks. Today, I’m talking about privacy.

Through GPS-enabled devices, like a fitness tracker or smartphone, Strava enables you to track your workouts, including the routes you take. Once a year, they aggregate all these activities and publish them as a global heatmap. Over 1 billion workouts are mapped, in every corner of the earth. If you zoom in enough, you’ll see my favorite cycling routes in the city I live in. The same is true for everyone who uses the app. Unless – of course – you’ve opted out of the public display of your workouts.
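
To see why the heatmap gives so much away, it helps to picture how one is built. Here’s a minimal sketch of the kind of aggregation involved – the coordinates are invented, and a real heatmap works at far finer resolution: GPS points from every workout are snapped to grid cells, and a cell’s brightness is simply its hit count.

```python
# Hypothetical sketch of heatmap aggregation: bin GPS points from
# many workouts into a grid and count hits per cell. The coordinates
# below are invented for illustration.
from collections import Counter

workouts = [
    # the same jog, logged three mornings in a row
    [(34.5021, 69.1652), (34.5023, 69.1655), (34.5025, 69.1658)],
    [(34.5021, 69.1653), (34.5023, 69.1655), (34.5025, 69.1657)],
    [(34.5022, 69.1652), (34.5024, 69.1656), (34.5025, 69.1658)],
]

CELL = 0.0005  # grid resolution, in degrees

def cell(lat: float, lon: float) -> tuple:
    """Snap a GPS point to its grid cell."""
    return (round(lat / CELL), round(lon / CELL))

heat = Counter(cell(lat, lon) for track in workouts for lat, lon in track)

# Cells crossed every day dominate the map, so a habitual route -- or
# a base perimeter -- stands out even with every runner anonymized.
for grid_cell, count in heat.most_common(3):
    print(grid_cell, count)
```

Anonymizing the individual runners doesn’t help, because the counts survive the aggregation. That’s the whole point of a heatmap.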

And therein lies the problem. Actually – two problems.

Problem number one. There is really no reason I shouldn’t share my workouts. The worst you could find out is that I’m a creature of habit when working out. But if I’m a marine stationed at a secret military base in Afghanistan and I start my morning by jogging around the perimeter of the base – well – now we have a problem. I’ve just inadvertently highlighted my base on the map for the world to see. And that’s exactly what happened. When the heatmap went live, a university student in Australia happened to notice that there were a number of hotspots in the middle of nowhere in Afghanistan and Syria.

On to problem number two. In terms of the numbers affected, the Strava breach is a drop in the bucket compared to Yahoo – or Equifax – or Target – or any of the other breaches that have made the news. But this breach was different in a very important way. The victims here weren’t individual consumers. This time, national security was threatened. And that moved it beyond the “consumer beware” defense that typically gets invoked.

This charts new territory for privacy. The difference in perspective in this breach has heightened sensitivities and moved the conversation in a new direction. Typically, the response when there is a breach is:

  1. You should have known better;
  2. You should have taken steps to protect your information; or,
  3. Hmmm, it sucks to be you.

Somehow, this response has held up in previous breaches, despite the fact that we all know it’s almost impossible to navigate the minefield of settings and preferences that lies between you and foolproof privacy. As long as the victims were individuals, it was easy to shift blame. This time, however, the victim was the collective “we,” and the topic was the hot button of all hot buttons – national security.

Now, one could and should argue that all of these might apply to the unfortunate soldier who decided to take his Fitbit on his run, but I don’t think it will end there. I think the current “opt out” approach to net privacy might have to be reconsidered. The fact is, all these platforms would prefer to gather as much of your data as possible and have the right to use it as they see fit. It opens up a number of monetization opportunities for them. Typically, the quid pro quo offered back to you – the user – is more functionality and the ability to share with your own social circle. The current ecosystem’s default starting point is to enable as much sharing and functionality as possible. Humans being human, we will usually go with the easiest option – the default – and only worry about it if something goes wrong.

But as users, we do have the right to push back. We have to realize that opening the full data pipe gives the platforms much more value than we ever receive in return. We’re selling off our own personal data for the modern-day equivalent of beads and trinkets. And the traditional corporate response – “you can always opt out if you want” – simply takes advantage of our own human limitations. The current fallback is that platforms are introducing more transparency into their approaches to privacy, making them easier to understand. While this is a step in the right direction, a more ethical approach would be “opt in,” where the default is the maximum protection of our privacy and we have to make a conscious effort to lower that wall.
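
The difference between the two postures fits in a few lines. Here’s a minimal sketch, with hypothetical setting names, of why defaults decide everything: most of us never touch the switches, so whatever position they start in is where they stay.

```python
# Hypothetical sketch of opt-out vs. opt-in defaults. The setting
# names are invented; the point is only where the burden falls.

SETTINGS = ["share_location", "public_activity", "ad_personalization"]

# Opt-out (the current norm): maximum sharing until the user acts.
opt_out_defaults = {name: True for name in SETTINGS}

# Opt-in (the more ethical default): maximum privacy until the user
# makes a conscious effort to lower the wall.
opt_in_defaults = {name: False for name in SETTINGS}

def effective(defaults: dict, user_choices: dict) -> dict:
    """Most users never touch the settings, so defaults win."""
    return {**defaults, **user_choices}

# A typical user changes nothing:
print(effective(opt_out_defaults, {}))  # everything shared
print(effective(opt_in_defaults, {}))   # nothing shared
```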

We’ll see. Opting in puts ethics and profitability on a collision course. For that reason, I can’t ever see the platforms going in that direction unless we insist.