The Pillorying of Zuckerberg

Author’s Note: When I started this column I thought I agreed with the views stated. And I still do, mostly. But by the time I finished it, there was doubt niggling at me. It’s hard when you’re an opinion columnist who’s not sure you agree with your own opinion. So here’s what I decided to do. I’m running this column as I wrote it. Then, next week, I’m going to write a second column rebutting some of it.

Let’s face it. We love it when smart-asses get theirs. For example: Sir Martin Sorrell. Sorry, your lordship, but I always thought you were a pontificating and pretentious dickhead, and I’m kind of rooting for the team digging up dirt on you. Let’s see if you doth protest too much.

Or Jeff Bezos. Okay, granted Trump doesn’t know what the hell he’s talking about regarding Amazon. And we apparently love the company. But just how much sympathy do we really have for the world’s richest man? Couldn’t he stand to be taken down a few pegs?

Don’t get me started on Bill Gates.

But the capo di tutti capi of smart-asses is Mark Zuckerberg. As mad as we are about the gushing security leak that has sprung on his watch, aren’t we all a little bit schadenfreude-ish as we watch the public flailing that is currently playing out? It’s immensely satisfying to point a finger of blame, and it’s doubly so to point it at Mr. Zuckerberg.

Which finger you use I’ll leave to your discretion.

But here’s the thing. As satisfying as it is to make Mark our scapegoat, this problem is systemic. It’s not the domain of one man, or even one company. I’m not absolving Facebook and its founder from blame. I’m just spreading it around so it’s distributed a little more representatively. And as much as we may hate to admit it, some of that blame ends up on our plate. We enabled the system that made this happen. We made personal data the new currency of exchange. And now we’re pissed off because there were exchanges made without our knowledge. It all comes down to this basic question: Who owns our data?

This is the fundamental question that has to be resolved. Up to now, we’ve been more than happy to surrender our data in return for the online functionality we need to pursue trivial goals. We rush to play Candy Crush and damn the consequences. We have mindlessly put our data in the hands of Facebook without any clear boundaries around what was and wasn’t acceptable to us.

If we look at data as a new market currency, our relationship with Facebook is really no different from our relationship with a bank: we deposit our money in an account and allow the bank to use it for its own purposes in return for paying us interest. This is how markets work. They are complicated and interlinked and the furthest thing possible from being proportionately equitable.

Personal data is a big industry. And like any industry, there is a value chain emerging. We are at the bottom of that chain. We supply the raw data. It is no coincidence that terms like “mining,” “scraping” and “stripping” are used when we talk about harvesting data. The digital trails of our behaviors and private thoughts are a raw resource that has become incredibly valuable. And Facebook just happens to be strategically placed in the market to reap the greatest rewards. It adds value by aggregating and structuring the data. Advertisers then buy prepackaged blocks of this data to target their messaging. The targeting Facebook can provide – thanks to its access to our data – is superior to anything that was available before. This is a simple supply-and-demand equation. Facebook was connecting the supply – our willingness to surrender our personal data – with the demand – advertisers insisting on more intrusive and personal targeting criteria. It was a market opportunity that emerged, and Facebook jumped on it. The phrase “don’t hate the player, hate the game” comes to mind.
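
To make that value chain concrete, here’s a minimal sketch, in Python, of how raw behavioral events might get aggregated and packaged into sellable audience segments. Everything in it – the event fields, the topics, the two-interaction cutoff – is invented for illustration; it shows the shape of the pipeline, not Facebook’s actual one.

    from collections import defaultdict

    # Hypothetical raw events: the "raw resource" we supply at the bottom of the chain.
    raw_events = [
        {"user": "u1", "action": "liked",  "topic": "fishing"},
        {"user": "u1", "action": "shared", "topic": "fishing"},
        {"user": "u2", "action": "liked",  "topic": "fishing"},
        {"user": "u2", "action": "liked",  "topic": "politics"},
    ]

    # Step 1: aggregate raw events into per-user interest profiles.
    profiles = defaultdict(lambda: defaultdict(int))
    for event in raw_events:
        profiles[event["user"]][event["topic"]] += 1

    # Step 2: structure the profiles into audience segments an advertiser can buy.
    # The two-interaction threshold is arbitrary and purely illustrative.
    segments = defaultdict(list)
    for user, interests in profiles.items():
        for topic, count in interests.items():
            if count >= 2:
                segments[f"{topic}_enthusiasts"].append(user)

    print(dict(segments))  # {'fishing_enthusiasts': ['u1']}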

When new and untested markets emerge, all goes well until it doesn’t. Then all hell breaks loose. Just like it did with Cambridge Analytica. When that happens, our sense of fairness kicks in. We feel duped. We rush to point fingers. We become judgmental, but everything is done in hindsight. This is all reaction. We have to be reactive, because emerging markets are unpredictable. You can’t predict something like Cambridge Analytica. If it wasn’t them – if it wasn’t this – it would have been something else, equally unpredictable. The emerging market of data exchange virtually guaranteed that hell would eventually break loose. As a recent post on Gizmodo points out,

“the kind of data acquisition at the heart of the Cambridge Analytica scandal is more or less standard practice for every other technology company, including places like Google and even Apple. Facebook simply had the misfortune of getting caught after playing fast and loose with who has control over their data.”

To truly move forward from this, we all have to ask ourselves some hard questions. This is not restricted to Mark Zuckerberg and Facebook. It’s symptomatic of a much bigger issue. And we, the ground-level source of this data, will be doing ourselves a disservice in the long run by trying to isolate the blame to any one individual or company. In a very real sense, this is our problem. We are part of a market dynamic that is untested and – as we’ve seen – powerful enough to subvert democracy. Some very big changes are required in the way we treat our own data. We owe it to ourselves to be part of that process.

Why Do Cities Work?

It always amazes me how cities just seem to work. Take New York, for example. How the hell does everything a city of nine million people needs to keep existing actually happen? Cities are perhaps the best example I know of how complex adaptive systems can work in the real world. They may be the answer to our future as the world becomes a more complex and connected place.

It’s not due to any centralized sense of communal collaboration. If anything, cities make us more individualistic. Small towns are much more collaborative. I feel more anonymous and autonomous in a big city than I ever do in a small town. It’s something else, more akin to Adam Smith’s Invisible Hand – but different. Millions of individual agents can all do their own thing based on their own requirements, but it works out okay for all involved.

Actually, according to Harvard economist Ed Glaeser, cities are more than just okay. He calls them mankind’s greatest invention. “So much of what humankind has achieved over the past three millennia has come out of the remarkable collaborative creations that come out of cities. We are a social species. We come out of the womb with the ability to sop up information from people around us. It’s almost our defining characteristic as creatures. And cities play to that strength. Cities enable us to learn from other people.”

Somehow, cities manage to harness the collective potential of their populations without dipping into chaos. This is all the more amazing when you consider that cities aren’t natural for humans – at least not in evolutionary terms. If you considered just that, we should all live in clusters of about 150 people – otherwise known as Dunbar’s number. That’s the brain’s cognitive limit for keeping track of our own immediate social networks. If we’re looking for a magic number in terms of maximizing human cooperation and collaboration, that would be it. But somehow cities allow us to far surpass that number and still deliver exponential returns.

Most of our natural defense mechanisms are based on familiarity. Trust, in its most basic sense, is Pavlovian. We trust strangers who happen to resemble people we know and trust. We are wary of strangers who remind us of people who have taken advantage of us. We are primed to trust or distrust in a few milliseconds, far under the time threshold of rational thought. Humans evolved to live in communities where we keep seeing the same faces over and over – yet cities are the antithesis of this.

Cities work because it’s in everyone’s best interest to make cities work. In a city, people may not trust each other, but they do trust the system. And it’s that system – or rather – thousands of complementary systems, that makes cities work. We contribute to these systems because we have a stake in them. The majority of us avoid the Tragedy of the Commons because we understand that if we screw the system, the system becomes unsustainable and we all lose. There is an “invisible network of trust” that makes cities work.

The psychology of this trust is interesting. As I mentioned before, in evolutionary terms, the mechanisms that trigger trust are fairly rudimentary: Familiarity = Trust. But system trust is a different beast. It relies on social norms and morals – on our inherent need to conform to the will of the herd. In this case, there is at least one degree of separation between trust and the instincts that govern our behaviors. Think of it as a type of “meta-trust.” We are morally obligated to contribute to the system as long as we believe the system will increase our own personal well-being.

This moral obligation requires feedback. There needs to be some type of loop that shows us that our moral behaviors are paying off. As long as that loop is working, it creates a virtuous cycle. Moral behaviors need to lead to easily recognized rewards, both individually and collectively. As long as we have this loop, we will continue to be governed by the social norms that maintain the systems of a city.

When we look to cities to provide us clues on how to maintain stability in a more connected world, we need to understand this concept of feedback. Cities provide feedback through physical proximity. When cities start to break down, the results become obvious to all who live there. But when it’s digital bonds rather than physical ones that link our networks, feedback becomes trickier. We need to ponder other ways of connecting cause, effect and consequences. As we move from physical communities to ideological ones, we have to overcome the numbing effects of distance.


Tempest in a Tweet-Pot

On February 16, a Facebook VP of Ads named Rob Goldman had a bad day. That was the day the office of Special Counsel Robert Mueller released an indictment of 13 Russian operatives who interfered in the U.S. election. Goldman felt he had to comment via a series of tweets that appeared to question how seriously the Mueller investigation had considered the ads placed by Russians on Facebook. Nothing much happened for the rest of the day. But on February 17, after the US Tweeter-in-Chief – Donald Trump – picked up the thread, Facebook realized the tweets had turned into a “shit sandwich,” and to limit the damage, Goldman had to officially apologize.

It’s just one more example of a personal tweet blowing up into a major news event. This is happening with increasingly irritating frequency. So today, I thought I’d explore why.

Personal Brand vs Corporate Brand

First, why did Rob Goldman feel he had to go public with his views anyway? He did it because he could. We all have varying degrees of loyalty to our employer, and I’m sure the same is true for Mr. Goldman. Otherwise he wouldn’t have eaten crow a few days later with his public mea culpa. But our true loyalty goes not to the brand we work for, but to the brand we are. Goldman – like me, like you, like all of us – is building his personal brand. Anyone who says they’re not – yet posts anything online – is in denial. Goldman’s brand, according to his Twitter account, is “Student, seeker, raconteur, burner. ENFP.” That is followed by the disclaimer “Views are mine.” And you know what? This whole debacle has been great for Goldman’s brand, at least in terms of audience size. Before February 16, he had about 1,500 followers. When I checked, that had swelled to almost 12,000. Brand Goldman is on a roll!

The idea of a personal brand is new – just a few decades old. It really became amplified through the use of social media. Suddenly, you could have an audience – and not just any audience, but an audience numbering in the millions.

Before that, the only people who could have been said to have personal brands were artists, authors and musicians. They made their living by sharing who they were with us.

For the rest of us, our brands were trapped in our own contexts. Only the people who knew us were exposed to our brands. But the amplification of social media suddenly exposed our brands to a much broader audience. And when things went viral, as they did on February 17, millions suddenly became aware of Rob Goldman and his tweet without knowing anything more than that he was a VP of Ads for Facebook.

It was that connection that created the second issue for Goldman. When we speak for our own personal brands, we can say “views are mine,” but the problem always comes when things blow up, as they did for Rob Goldman. None of his tweets had been vetted by anyone at Facebook, yet he had suddenly become a spokesperson for the corporation. And for those eager to accept his tweets as fact, they suddenly became the “truth.”

Twitter: “Truth” Without Context

Increasingly, we’re not really that interested in the truth. What we are interested in is our beliefs and our own personal truth. This is the era of “post-truth” – the Oxford Dictionaries word of the year for 2016 – defined as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”

Truth was once a commonly understood base that could be supported by facts. Now, truth is in the eye of the beholder. Common understandings are increasingly difficult to come by as the world continues to fragment and become more complex. How can we possibly come to a common understanding of what is “true” when any issue worth discussing is complex? This is certainly true of the Mueller investigation. To try to distill its scope to 900 words – about the length of this column – would be virtually impossible. To reduce it to 280 characters – the limit of a tweet, and one-twentieth the length of this column – well, there we should not tread. But, of course, we do.

This problem is exacerbated by the medium itself. Twitter is a channel that encourages “quippiness.” When we’re tweeting, we all want to be Oscar Wilde. Writing this column usually takes me three to four hours, including time to do some research, create a rough outline and then do the actual writing. That’s not an especially long time, but the process does allow some room for mental reflection and self-editing. The average tweet takes less than a minute to write – probably less to think about – and then it’s out there, a matter of record, irretrievable. You should find it more than a little terrifying that this is the chosen medium of the President of the United States, and one that is increasingly forming our worldview.

Twitter is also not a medium that provides much support for irony, sarcasm or satire. In the post-truth era, we usually accept tweets as facts, especially when they come from someone in a somewhat official position, as in the case of Rob Goldman. But at best, they’re abbreviated opinions.

In the light of all this, one has to appreciate Mr. Goldman’s Twitter handle: @robjective.

The Decentralization of Trust

Forget Bitcoin. It’s a symptom. Forget even Blockchain. It’s big – but it’s technology. That makes it a tool. Which means it’s used at our will. And that will is the real story. Our will is always the real story – why do we build the tools we do? What is revolutionary is that we’ve finally found a way to decentralize trust. That runs against the very nature of how we’ve defined trust for centuries.

And that’s the big deal.

Trust began by being very intimate – ruled by our instincts in a face-to-face context. But for the last thousand years, our history has been all about concentration and the mass of everything – including whom we trust. We have consolidated our defense, our government, our commerce and our culture. In doing so, we have also consolidated our trust in a few all-powerful institutions.

But the past 20 years have been all about decentralization and tearing down power structures, as we invent new technologies that let us do that. In that vein, Blockchain is a doozy. It will change everything. But it’s only a big deal because we’re exerting our will to make it a big deal. And the “why” behind that is what I’m focusing on.

Rightly or wrongly, we have now decided we’d rather trust distribution than centralization. There is much evidence to support that view. Concentration of power also means concentration of risk. The opportunity for corruption skyrockets. Big things tend to rot from the inside out. This is not a new discovery on our part. We’ve known for well over a century that “absolute power corrupts absolutely.”

As the world consolidated, it also became more corrupt. But it was always a trade-off we felt we had to make. Again, the collective will of the people is the story thread to follow here. Consolidation brought many benefits. We wouldn’t be where we are today if it weren’t for hierarchies, in one form or another. So we willingly subjugated ourselves to someone – somewhere – hoping to maintain a delicate balance where the risk of corruption was outweighed by personal gain. I remember asking the Atlantic’s noted correspondent James Fallows a question when I met him once in China: how could the average Chinese citizen tolerate the paradoxical mix of rampant economic entrepreneurialism and crushing ideological totalitarianism? His answer was, “As long as their lives are better today than they were yesterday, and promise to be even better tomorrow, they’ll tolerate it.”

That pretty much summarizes our attitude toward control. We tolerated it because if we wanted our lives to continue to improve, we really didn’t have a choice. But perhaps we do now. And that possibility has pushed our collective will away from consolidated power hubs and toward decentralized networks. Blockchain gives us another way to do that. It promises a way to work around Big Money, Big Banks, Big Government and Big Business. We are eager to do so. Why? Because up to now we have had to place our trust in these centralized institutions, and that trust has been consistently abused. But perhaps Blockchain technology has found a foolproof way to distribute trust. It appears to offer a way to make everything better without the historic trade-off of subjugating ourselves to anyone.
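
For anyone curious about the mechanics, here’s a minimal sketch, in Python, of the core idea: a chain of hashes that lets anyone verify the shared record for themselves. It’s a toy, not any real blockchain protocol – there’s no network, consensus or mining here – but it shows how tampering becomes self-evident without a central authority vouching for anything.

    import hashlib
    import json

    def block_hash(block: dict) -> str:
        # Hash a block's full contents, including the previous block's hash.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def make_block(data: str, prev_hash: str) -> dict:
        return {"data": data, "prev_hash": prev_hash}

    # Each block commits to the one before it, chaining history together.
    genesis = make_block("Alice pays Bob 5", "0" * 64)
    second = make_block("Bob pays Carol 2", block_hash(genesis))
    chain = [genesis, second]

    def verify(chain: list) -> bool:
        # Anyone can run this check; no trusted middleman required.
        return all(curr["prev_hash"] == block_hash(prev)
                   for prev, curr in zip(chain, chain[1:]))

    print(verify(chain))                    # True
    genesis["data"] = "Alice pays Bob 500"  # try to rewrite history...
    print(verify(chain))                    # False: the tampering is self-evident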

However, when we move our trust to a network, we also make that trust subject to unanticipated network effects. That may be the new trade-off we have to make. Increasingly, our technology is dependent on networks, which – by their nature – are complex adaptive systems. That’s why I keep preaching the same message: we have to understand complexity. We must accept that complexity has interaction effects we could never successfully predict.

It’s an interesting swap to consider – control for complexity. Control has always offered us the faint comfort of an illusion of predictability. We hoped that someone who knew more than we did was manning the controls. This is new territory for us. Will it be better? Who can say? But we seem to be building an irreversible head of steam in that direction.

What Price Privacy?

As promised, I’m picking up the thread from last week’s column on why we seem okay with trading privacy for convenience. The simple – and most plausible – answer is that we’re really not being given a choice.

As MediaPost Senior Editor Joe Mandese pointed out in a very on-point comment, what is being created is a transactional marketplace where offers of value are exchanged for information:

“Like any marketplace, you have to have your information represented in it to participate. If you’re not “listed” you cannot receive bids (offers of value) based on who you are.”

Amazon is perhaps the most relevant example of this. Take Alexa and Amazon Web Services (AWS). Alexa promises to “make your life easier and more fun.” But this comes at a price. Because Alexa is voice-activated, it’s always listening. That means the privacy of anything we say in our homes has been ceded to Amazon through its terms of service. The same is true for Google Assistant and Apple’s Siri.

But Amazon is pushing the privacy envelope even further as they test their new in-home delivery service – Amazon Key. In exchange for the convenience of having your parcels delivered inside your home when you’re away, you literally give Amazon the keys to your home. Your front door will have a smart door lock that can be opened via the remote servers of AWS. Opt in to this and suddenly you’ve given Amazon the right to not only listen to everything you say in your home but also to enter your home whenever they wish.

How do you feel about that?

This becomes the key question: How do we feel about the convenience/privacy exchange? It turns out that our response depends in large part on how that question is framed. In a 2015 study conducted by the Annenberg School for Communication at the University of Pennsylvania, researchers gathered responses from participants probing their sensitivity around trading privacy for convenience. Here is a sampling of the results:

  • 55% of respondents disagreed with the statement: “It’s OK if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me.”
  • 71% disagreed with: “It’s fair for an online or physical store to monitor what I’m doing online when I’m there, in exchange for letting me use the store’s wireless internet, or Wi-Fi, without charge.”
  • 91% disagreed that: “If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing.”

Here, along the spectrum of privacy pushback, we start to see what the real problem is. We’re willing to exchange private information, as long as we’re aware of all that is happening and feel in control of it. But that, of course, is unrealistic. We can’t control it. And even if we could, we’d soon learn that the overhead required to do so is unmanageable. It’s why Vint Cerf said we’re going to have to learn to live with transparency.

Again, as Mr. Mandese points out, we’re really not being given a choice. Participating in the modern economy requires us to ante up personal information. If we choose to remain totally private, we cut ourselves off from a huge portion of what’s available. And we are already at the point where the vast majority of us really can’t opt out. We all get pissed off when we hear of a security breach à la the recent Equifax debacle. Our privacy sensitivities are heightened for a day or two and we give lip service to outrage. But unless we go full-out Old Order Amish, what are our choices?

We may rationalize the trade-off by saying the private information we’re exchanging for services is not really that sensitive. But that’s where the potential threat of Big Data comes in. Gather enough seemingly innocent data and soon you can start predicting, with startling accuracy, the aspects of our lives that we are sensitive about. We run headlong into the Target pregnant-teen dilemma. And that particular dilemma becomes thornier as the walls break down between data silos and your personal information becomes a commodity on an open market.
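
Here’s a toy sketch, in Python, of how that works. The signals, weights and threshold are entirely hypothetical – loosely inspired by press accounts of Target’s pregnancy-prediction model, not taken from it – but they show how purchases that are innocuous alone become a sensitive prediction together.

    # Hypothetical signal weights: each item is innocent on its own.
    PREGNANCY_SIGNALS = {
        "unscented_lotion":    0.30,
        "vitamin_supplements": 0.25,
        "cotton_balls_bulk":   0.20,
        "large_tote_bag":      0.15,
    }

    def pregnancy_score(purchases: list) -> float:
        # Summing weak signals turns innocuous data into a prediction.
        return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in purchases)

    basket = ["unscented_lotion", "vitamin_supplements", "cotton_balls_bulk"]
    score = pregnancy_score(basket)
    print(f"score = {score:.2f}")  # score = 0.75
    if score > 0.6:                # arbitrary illustrative threshold
        print("flagged for baby-product marketing")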

The potential risk of trading away our privacy escalates here – it’s the frog-in-boiling-water syndrome. It starts innocently but can soon develop into a scenario that will keep almost anyone up at night with the paranoiac cold sweats. Let’s say the data is used for targeting – singling us out of the crowd for the purpose of selling stuff to us. Or – in the case of governments – seeing if we have a proclivity for terrorism. Perhaps that isn’t so scary if Big Brother is benevolent and looking out for our best interests. But what if Big Brother becomes a bully?

There is another important aspect to consider here, one that may have dire unintended consequences. When our personal data is used to make our world more convenient for us, that requires a “filtering” of that world by some type of algorithm to remove anything the algorithm determines to be irrelevant or uninteresting to us. Essentially, the entire physical world is “targeted” to us. And this can go horribly wrong, as we saw in the last presidential election. Increasingly, we live in a filtered “bubble” determined by things beyond our control. Our views get trapped in an echo chamber and our perspective narrows, as the sketch below illustrates.
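
To see how fast that narrowing happens, here’s a toy feedback loop in Python. The topics and click counts are invented; the point is simply that a feed ranked purely on past engagement keeps amplifying whatever we already clicked.

    # Invented engagement history for one user.
    clicks = {"politics": 5, "sports": 1, "science": 1, "cooking": 0, "travel": 0}

    def build_feed(clicks: dict, size: int = 3) -> list:
        # The "algorithm": show only the topics we've engaged with most.
        return sorted(clicks, key=clicks.get, reverse=True)[:size]

    for day in range(3):
        feed = build_feed(clicks)
        print(f"day {day}: {feed}")
        clicks[feed[0]] += 1  # we click what we're shown first; the bubble tightens

    print(clicks)  # the dominant topic has pulled even further ahead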

But perhaps the biggest red flag is the fact that in signing away our privacy by clicking “accept,” we often also sign away any potential protection for when things do go wrong. In another study, called “The Biggest Lie on the Internet,” researchers found that when students were presented with a fictitious terms of service and privacy policy, 74% skipped reading it. And those who took the time to read didn’t take very much time – just 73 seconds on average. What almost no one caught were “gotcha” clauses about data sharing with the NSA and giving up your first-born child. While those were fictitious, real terms of service and privacy notifications often include clauses granting total control over the information gathered about you and waiving your right to sue if anything goes wrong. Even if you could sue, there might not be anyone left to sue. One analyst calculated that even if all the people who had their financial information stolen from Equifax won a settlement, it would amount to about $81 each.


Why We’re Trading Privacy for Convenience

In today’s world, increasingly quantified and tracked by the Internet of Things, we are talking a lot about privacy. When we stop to think about it, we are vociferously for privacy. But then we immediately turn around and click another “accept” box on a terms and conditions form that barters our personal privacy away, in increasingly large chunks. What we say and what we do are two very different things.

What is the deal with humans and privacy anyway? Why do we say it’s important to us, and why do we keep giving it away? Are we looking at the inevitable death of our concept of privacy?

Are We Hardwired for Privacy?

It does seem that – all things being equal – we favor privacy. But why?

There is an evolutionary argument for having some “me time.” Privacy has an evolutionary advantage both when you’re most vulnerable to physical danger (on the toilet) and when you’re exposed to mating rivalry (having sex). If you can keep these things private, you’ll both live longer and have more offspring. So it’s no surprise that humans are hardwired to desire a certain amount of privacy.

But our modern understanding of privacy actually conflates a number of concepts. There is protective privacy, there is the need for solitude, and finally there is our moral and ethical privacy. Each of these has different behavioral origins, but when we talk about our “right to privacy” we don’t distinguish between them. This can muddy the waters when we dig deep into our relationship with privacy.

Blame England…

Let’s start with the last of these – our moral privacy. This is actually a pretty modern concept. Until 150 years ago, we as a species did pretty much everything communally. Our modern concept of privacy had its roots in the Industrial Revolution and Victorian England. There, the widespread availability of the patent lock and the introduction of the “private” room quickly led to a class-stratified quest for privacy. This was coupled with the moral rectitude of the time. Kate Kershner from howstuffworks.com explains:

“In the Victorian era, the “personal” became taboo; the gilded presentation of yourself and family was critical to social standing. Women were responsible for outward piety and purity, men had to exert control over inner desires and urges, and everyone was responsible for keeping up appearances.”

In Victorian England, privacy became a proxy for social status. Only the highest levels of the social elite could afford privacy. True, there was some degree of personal protection here that probably had evolutionary behavioral underpinnings, but it was all tied up in the broader evolutionary concept of social status. The higher your class, the more you could hide away the all-too-human aspects of your private life and thoughts. In this sense, privacy was not a right but a status token, one that could be traded for another token of equal or higher value. I suspect this is why we may say one thing but do another when it comes to our own privacy. There are other ways we determine status now.

Privacy vs Convenience

In a previous column, I wrote about how being busy is the new status symbol. We are defining social status differently, and I think how we view privacy might be caught between how we used to signal status and how we do it today. In 2013, Google’s Vint Cerf said that privacy may be a historical anomaly. Social libertarians and legislators were quick to condemn Cerf’s comment, but it’s hard to argue with his logic. In Cerf’s words, transparency “is something we’re gonna have to live through.”

Privacy might still be a hot-button topic for legislators, but it’s probably dying – not because of some nefarious plot against us, but because we’re quickly trading it away. Busy is the new rich, and convenience (or our illusion of convenience) allows us to do more things. Privacy may just be a tally token in our quest for social status, and increasingly we may be willing to trade it for more relevant tokens. As Greg Ferenstein, author of the Ferenstein Wire, said in an exhaustive (and visually bountiful) post on the birth and death of privacy,

“Humans invariably choose money, prestige or convenience when it has conflicted with a desire for solitude.”

If we take this view, then it’s not so much how we lose our privacy that becomes important, but who we’re losing it to. We seem all too willing to give up our personal data as long as two prerequisites are met: 1) we get something in return, and 2) we have a little bit of trust that the holder of our data won’t use it for evil purposes.

I know those two points raise the hackles of many amongst you, but that’s where I’ll have to leave it for now. I welcome you to have the next-to-last word (because I’ll definitely be revisiting this topic). Is privacy going off the rails and, if so, why?

Will We Ever Let Robots Shop for Us?

Several years ago, my family and I visited Astoria, Oregon. You’ll find it at the mouth of the Columbia River, where it empties into the Pacific. We happened to take a tour of Astoria and our guide pointed out a warehouse. He told us it was filled with canned salmon, waiting to be labeled and shipped. I asked what brand they were. His answer was “All of them. They all come from the same warehouse. The only thing different is the label.”

Ahh… the power of branding…

Labels can make a huge difference. If you need proof, look no further than the experimental introduction of generic brands in grocery stores. Well, they were generic to begin with, anyway. But over time, the generic “yellow label” was replaced with a plethora of store brands. The quality of what’s inside the box hasn’t changed much, but the packaging has. We do love our brands.

But there’s often no rational reason to do so. Take the aforementioned canned salmon, for example. Same fish, no matter what label you stick on it. Brands are a trick our brain plays on us. We may swear our favorite brand tastes better than its competitors, but it’s usually just our brain short-circuiting our senses and our sensibility. Neuroscientist Read Montague found this out when he redid the classic Pepsi taste test using an fMRI scanner. The result? When Coke drinkers didn’t know what they were drinking, the majority preferred Pepsi. But the minute the brand was revealed, they again swore allegiance to Coke. The taste hadn’t changed, but their brains had. As soon as the brain was aware of the brand, parts of it suddenly started lighting up like a pinball machine.

In previous research we did, we found that the brain instantly responded to favored brands the same way it did to a picture of a friend or a smiling face. Our brains have an instantaneous and subconscious response to brands. And because of that, our brains shouldn’t be trusted with buying decisions. We’d be better off letting a robot do it for us.

And I’m not saying that facetiously.

A recent post on Bloomberg.com looked forward 20 years and predicted how automation would gradually take over every step of the consumer product supply chain, from manufacturing to shipping to delivery to our door. The post predicts that the factory floor, the warehouse, ocean liners, trucks and delivery drones will all be powered by artificial intelligence and robotic labor. The first set of human hands to touch a product might be those of the buyer. But maybe we’re automating the wrong side of the consumer transaction. The thing human hands shouldn’t be touching is the buy button. We suck at it.

We have taken some steps in the right direction. Itamar Simonson and Emanuel Rosen predicted the death of branding in their book Absolute Value:

“In the past the marketing function “protected” the organization in some cases. When things like positioning, branding, or persuasion worked effectively, a mediocre company with a good marketing arm (and deep pockets for advertising) could get by. Now, as consumers are becoming less influenced by quality proxies, and as more consumers base their decisions on their likely experience with a product, this is changing.”

But our brand love dies hard. If our brain can literally rewire the evidence from our own senses – how can we possibly make rational buying decisions? True, as Simonson and Rosen point out, we do tend to favor objective information when it’s available, but at the end of the day, our buying decisions still rely on an instrument that has proven itself unreliable in making optimal decisions under the influence of brand messaging.

If we’re prepared to let robots steer ships, drive trucks and run factories, why won’t we let them shop for us? Existing shopping bots stop well short of actually making the purchase. We’ll put our lives in the hands of A.I. in a myriad of ways, but we won’t hand our credit card over. Why is that?

It seems ironic to me. If there were any area where machines could beat humans, it would be in making purchases. They’re much better at filtering based on objective criteria, they can stay on top of all prices everywhere, and they can instantly aggregate data from all similar types of purchases. Most importantly, machines can’t be tricked by branding or marketing. They can complete the absolute value loop Simonson and Rosen talk about in their book. A rough sketch of what such an agent might look like follows.
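
Here’s a minimal sketch, in Python, with invented product data. It filters on objective criteria only (price, stock, aggregated ratings) and deliberately ignores the brand field entirely – which is exactly the part our own brains can’t seem to manage.

    from dataclasses import dataclass

    @dataclass
    class Offer:
        brand: str      # present in the data, deliberately ignored in the decision
        price: float
        rating: float   # aggregated user experience, 0-5
        in_stock: bool

    def choose(offers: list, max_price: float, min_rating: float):
        # Filter on objective criteria only, then rank on rating per dollar.
        candidates = [o for o in offers
                      if o.in_stock and o.price <= max_price and o.rating >= min_rating]
        return max(candidates, key=lambda o: o.rating / o.price, default=None)

    offers = [
        Offer("BigBrand",   12.99, 4.1, True),
        Offer("StoreLabel",  8.49, 4.3, True),   # same salmon, different label
        Offer("Premium",    15.99, 4.2, False),
    ]

    best = choose(offers, max_price=14.00, min_rating=4.0)
    if best:
        print(f"buying {best.brand} at ${best.price}")  # buying StoreLabel at $8.49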

Of course, there’s just one little problem with all that. It essentially ends the entire marketing and advertising industry.

Ooops.