The Pillorying of Zuckerberg

Author’s Note: When I started this column I thought I agreed with the views stated. And I still do, mostly. But by the time I finished it, there was doubt niggling at me. It’s hard when you’re an opinion columnist who’s not sure you agree with your own opinion. So here’s what I decided to do. I’m running this column as I wrote it. Then, next week, I’m going to write a second column rebutting some of it.

Let’s face it. We love it when smart-asses get theirs. For example: Sir Martin Sorrell. Sorry, your lordship, but I always thought you were a pontificating and pretentious dickhead, and I’m kind of rooting for the team digging up dirt on you. Let’s see if you doth protest too much.

Or Jeff Bezos. Okay, granted Trump doesn’t know what the hell he’s talking about regarding Amazon. And we apparently love the company. But just how much sympathy do we really have for the world’s richest man? Couldn’t he stand to be taken down a few pegs?

Don’t get me started on Bill Gates.

But the capo di tutti capi of smart-asses is Mark Zuckerberg. As mad as we are about the gushing security leak that has sprung on his watch, aren’t we all a little bit schadenfreude-ish as we watch the public flailing that is currently playing out? It’s immensely satisfying to point a finger of blame, and it’s doubly so to point it at Mr. Zuckerberg.

Which finger you use I’ll leave to your discretion.

But here’s the thing. As satisfying as it is to make Mark our scapegoat, this problem is systemic. It’s not the domain of one man, or even one company. I’m not absolving Facebook and its founder from blame. I’m just spreading it around so it’s a little more representatively distributed. And as much as we may hate to admit it, some of that blame ends up on our plate. We enabled the system that made this happen. We made personal data the new currency of exchange. And now we’re pissed off because there were exchanges made without our knowledge. It all comes down to this basic question: Who owns our data?

This is the fundamental question that has to be resolved. Up to now, we’ve been more than happy to surrender our data in return for the online functionality we need to pursue trivial goals. We rush to play Candy Crush and damn the consequences. We have mindlessly put our data in the hands of Facebook without any clear boundaries around what was and wasn’t acceptable for us.

If we look at data as a new market currency, our relationship with Facebook is really no different from our relationship with a bank: we deposit our money in an account and allow the bank to use it for its own purposes in return for paying us interest. This is how markets work. They are complicated and interlinked and the furthest thing possible from being proportionately equitable.

Personal Data is a big industry. And like any industry, there is a value chain emerging. We are on the bottom of that chain. We supply the raw data. It is no coincidence that terms like “mining,” “scraping” and “stripping” are used when we talk about harvesting data. The digital trails of our behaviors and private thoughts are a raw resource that has become incredibly valuable. And Facebook just happens to be strategically placed in the market to reap the greatest rewards. They add value by aggregating and structuring the data. Advertisers then buy prepackaged blocks of this data to target their messaging. The targeting that Facebook can provide – thanks to the access they have to our data – is superior to what was available before. This is a simple supply and demand equation. Facebook was connecting the supply – coming from our willingness to surrender our personal data – with the demand – advertisers insisting on more intrusive and personal targeting criteria. It was a market opportunity that emerged and Facebook jumped on it. The phrase “don’t hate the player, hate the game” comes to mind.

When new and untested markets emerge, all goes well until it doesn’t. Then all hell breaks loose. Just like it did with Cambridge Analytica. When that happens, our sense of fairness kicks in. We feel duped. We rush to point fingers. We become judgmental, but everything is done in hindsight. This is all reaction. We have to be reactive, because emerging markets are unpredictable. You can’t predict something like Cambridge Analytica. If it wasn’t them – if it wasn’t this – it would have been something else that would have been equally unpredictable. The emerging market of data exchange virtually guaranteed that hell would eventually break loose. As a recent post on Gizmodo points out,

“the kind of data acquisition at the heart of the Cambridge Analytica scandal is more or less standard practice for every other technology company, including places like Google and even Apple. Facebook simply had the misfortune of getting caught after playing fast and loose with who has control over their data.”

To truly move forward from this, we all have to ask ourselves some hard questions. This is not restricted to Mark Zuckerberg and Facebook. It’s symptomatic of a much bigger issue. And we, the ground level source of this data, will be doing ourselves a disservice in the long run by trying to isolate the blame to any one individual or company. In a very real sense, this is our problem. We are part of a market dynamic that is untested and – as we’ve seen – powerful enough to subvert democracy. Some very big changes are required in the way we treat our own data. We owe it to ourselves to be part of that process.

What the Hell is “Time Spent” with Advertising Anyway?

Over at MediaPost’s Research Intelligencer, Joe Mandese is running a series of columns digging into a couple of questions:

  • How much time are consumers spending with advertising?
  • How much is that time worth?

The quick answers are 1.84 hours daily and about $3.40 per hour.

Although Joe readily admits that these are “back of the envelope” calculations, regular MediaPost reader and commenter Ed Papazian points out a gaping hole in the logic of these questions: an hour of being exposed to ads does not equal an hour spent with those ads, and it certainly doesn’t mean an hour being aware of the ads.

Ignoring this fundamental glitch is symptomatic of the conceit of the advertising business in general. The industry believes a value exchange is possible in which compensating consumers for watching advertising somehow translates into that advertising being effective. The oversimplification required to rationalize this exchange is staggering. It essentially ignores the fields of cognitive psychology and neuroscience. It assumes that audience attention is a simple door that can be opened if only the price is right.

It just isn’t that simple.

Let’s go back to the concept of time spent with media. Many studies attempt to quantify this. But the simple truth is that media is too big a catchall category for this quantification to be meaningful. We’re not even attempting to compare apples and oranges. We’re comparing an apple, a jigsaw and a meteor. The cognitive variations alone in how we consume media are immense.

And while I’m on a rant, let’s nuke the term “consumption” altogether, shall we? It’s probably the most misleading word ever coined to define our relationship with media. We don’t consume media any more than we consume our physical environment. It is an informational context within which we function. We interact with aspects of it with varying degrees of intention. Trying to measure all these interactions with a single yardstick is the same as trying to measure our physical interactions with water, oxygen, gravity and an apple tree by the same criterion.

Even trying to dig into this question has a major methodological flaw – we almost never think about advertising. It is usually forced on our consciousness. So to use a research tool like a survey – requiring respondents to actively consider their response – to explore our subconscious relationship with advertising is like using a banana to drive a nail. It’s the wrong tool for the job. It’s the same as me asking you how much you would pay per hour to have access to gravity.

This current fervor all comes from a prediction by Publicis Groupe Chief Growth Officer Rishad Tobaccowala that the supply of consumer attention will erode by 20% to 30% in the next five years. Tobaccowala – by putting a number to attention – fed the mistaken belief that attention is something the industry can manage. The attention of your audience isn’t slipping away because advertising and media buying were mismanaged. It’s slipping away because your audience now has choices, and some of those choices don’t include advertising. Let’s just admit the obvious. People don’t want advertising. We only put up with advertising when we have no choice.

“But wait,” the ad industry is quick to protest, “In surveys people say they are willing to have ads in return for free access to media. In fact, almost 80% of respondents in a recent survey said that they prefer the ad-supported model!”

Again, we have a methodological fly in the ointment. We’re asking people to stop and think about something they never stop and think about. You’re not going to get the right answer. A better test is what happens when you go to a news site with your ad-blocker on and get the pop-up. “Hey,” it says, “We notice you’re using an ad-blocker.” If you have the option of turning the ad-blocker off to see the article or just clicking a link that lets you see it anyway, which are you going to choose? That’s what I thought. And you’re probably in the ad business. It pays your mortgage.

Look, I get that the ad business is in crisis. And I also understand why the industry is motivated to find an answer. But the complexity of the issue in front of us is staggering and no one is served well by oversimplifying it down to putting a price tag on our attention. We have to understand that we’re in an industry where – given the choice – people would rather not have anything to do with us. Unless we do that, we’ll just be making the same mistakes over and over again.


The Rain in Spain

Olá! Greetings from the soggy Iberian Peninsula. I’ve been in Spain and Portugal for the last three weeks, which has included – count them – 21 days of rain and gale force winds. Weather aside, it’s been amazing. I have spent very little of that time thinking about online media. But, for what they’re worth, here are some random observations from the last three weeks:

The Importance of Familiarity

While here, I’ve been reading Derek Thompson’s book Hit Makers. One of the critical components of a hit is a foundation of familiarity. Once this is in place, a hit provides just enough novelty to tantalize us. It’s why Hollywood studios seem stuck on the superhero sequel cycle.

This was driven home to me as I travelled. I’m a do-it-yourself traveller. I avoid packaged vacations whenever and wherever possible. But there is a price to be paid for this. Every time we buy groceries, take a drive, catch a train, fill up with gas or drive through a tollbooth (especially in Portugal) there is a never-ending series of puzzles to be solved. The fact that I know no Portuguese and very little Spanish makes this even more challenging. I’m always up for a good challenge, but I have to tell you, at the end of three weeks, I’m mentally exhausted. I’ve had more than enough novelty and I’m craving some more familiarity.

This has made me rethink the entire concept of familiarity. Our grooves make us comfortable. They’re the foundations that make us secure enough to explore. It’s no coincidence that the words “family” and “familiar” come from the same etymological root.

The Opposite of Agile Development

While in Seville, we visited the cathedral there. The main altarpiece, the largest and one of the finest in the world, was the life’s work of one man, Pierre Dancart. He worked on it for 44 years of his life and never saw the finished product. In total, it took over 80 years to complete.

Think about that for a moment. This man worked on this one piece of art for his entire life. There was no morning where he woke up and wondered, “Hmm, what am I going to do today?” This was it, from the time he was barely more than a teenager until he was an old man. And he still never got to see the completed work. That span of time is amazing to me. If the altarpiece were being finished today, work on it would have started in 1936.

The Ubiquitous Screen

I love my smartphone. It has saved my ass more than once on this trip. But I was saddened to see that our preoccupation with being connected has spread into every nook and cranny of European culture. Last night, we went for dinner at a lovely little tapas bar in Lisbon. It was achingly romantic. There was a young German couple next to us who may or may not have been in love. It was difficult to tell, because they spent most of the evening staring at their phones rather than at each other.

I have realized that the word “screen” has many meanings, one of which is a “barrier meant to hide things or divide us.”

El Gordo

Finally, after giving my name in a few places and getting mysterious grins in return, I have realized that “gordo” means “fat” in Spanish and Portuguese.

Make of that what you will.

WTF Tech

Do you need a Kuvée?

Wait. Don’t answer yet. Let me first tell you what a Kuvée is: It’s a $178 wine bottle that connects to Wi-Fi.

Okay… let’s try again. Do you need a Kuvée?

Don’t bother answering. You don’t need a Kuvée. No one needs a Kuvée. The earth has 7.2 billion people on it. Not one of them needs a Kuvée. That’s probably why the company is packing up its high-tech bottles and calling it a day. A Kuvée is an example of WTF Tech. Hold that thought, because we’ll get back to it in a minute.

So, we’ve established that you don’t need a Kuvée. “But that’s not the point,” you might say. “It’s not whether I need a Kuvée. It’s whether I want a Kuvée.” Fair point. In our world of ostentatious consumerism, it’s not really about need – it’s about desire. And Lord knows many of the most pretentious and entitled assholes in the world are wine snobs.

But I have to believe that, buried deep in our lizard brain, there is still a tenuous link between wanting something and needing something. Drench it as we might in the best wine technology can serve, there still might be a spark of practicality glowing in the gathering dark of our souls. But like I said, I know some real dickhead wine drinkers. So, who knows? Maybe Kuvée was just ahead of the curve.

And that brings us back to WTF Tech: applying tech to a problem that doesn’t exist, simply because it’s tech. There is no practical reason why this tech ever needs to exist. Besides the Kuvée, here are some other examples of WTF Tech:

The Kérastase Hair Coach

This is a hairbrush with an Internet connection. Seriously. It has a microphone that “listens” while you brush your hair, as well as an accelerometer, gyroscope and other sensors. It’s supposed to save you from bruising your hair while you’re brushing it. It retails for “under $200.”


The Hushme Mask

This tech actually does solve a problem, but in a really stupid way. The problem is obnoxious jerks who insist on carrying on their phone conversations at the top of their lungs while sitting next to you. That’s a real problem, right? But here’s the stupid part. In order for this thing to work, you have to convince the guilty party to wear this Hannibal Lecter-like mask while they’re on the phone. Go ahead, buy one for $189 and give it a shot next time you run into a really loud tele-jerk. Let me know how it works out for you.

Denso Vacuum Shoes

“These boots are made for sucking…and that’s just what they’ll do.”

Finally, an invention that lets you shoe-ver your carpet. That’s right, the Japanese company Denso is working on a prototype of a shoe that vacuums as you walk, storing the dirt in a tiny box in the shoe’s sole. As a special bonus, they look just like a pair of circa 1975 Elton John Pinball Wizard boots.

When You’re a Hammer…

We live in a “tech for tech’s sake” time. When all the world is a hi-tech hammer, everything begins to look like a low-tech nail. Each of these questionable gadgets had investors who believed in them. Both the Kuvée and the Hushme had successful crowd-funding campaigns. The Hair Coach and the Vacuum Shoes have corporate backing. The dot-com bubble of 2000–2002 has just morphed into a bunch of broader-based but no less ephemeral bubbles.

Let me wrap up with a story. Some years ago, I was speaking at a conference and my panel was the last one of the day. After it wrapped, the moderator, a few of the other panelists and I decided to go out for dinner. One of my co-panelists suggested a restaurant he had done some programming work for. When we got there, he showed us his brainchild. With much pomp and ceremony, our waiter delivered an iPad to the table. Our co-panelist took it and showed us how his company had set up the wine list as an app. Theoretically, you could scroll through descriptions and see what the suggested pairings were. I say theoretically, because none of that happened on this particular night.

Our moderator watched silently as the demonstration struggled through a series of glitches. Finally, he could stay silent no longer. “You know what else works, Dave? A sommelier. When I’m paying this much for a dinner, I want to talk to a f*$@ng human.”

Sometimes, there’s just not an app for that.

Why Do Cities Work?

It always amazes me how cities just seem to work. Take New York, for example. How the hell does everything a city of nine million needs to continue to exist actually happen? Cities are perhaps the best real-world example I know of complex adaptive systems at work. They may be the answer to our future as the world becomes a more complex and connected place.

It’s not due to any centralized sense of communal collaboration. If anything, cities make us more individualistic. Small towns are much more collaborative. I feel more anonymous and autonomous in a big city than I ever do in a small town. It’s something else, more akin to Adam Smith’s Invisible Hand – but different. Millions of individual agents can all do their own thing based on their own requirements, but it works out okay for all involved.

Actually, according to Harvard economist Ed Glaeser, cities are more than just okay. He calls them mankind’s greatest invention. “So much of what humankind has achieved over the past three millennia has come out of the remarkable collaborative creations that come out of cities. We are a social species. We come out of the womb with the ability to sop up information from people around us. It’s almost our defining characteristic as creatures. And cities play to that strength. Cities enable us to learn from other people.”

Somehow, cities manage to harness the collective potential of their population without dipping into chaos. This is all the more amazing when you consider that cities aren’t natural for humans – at least not in evolutionary terms. If you considered just that, we should all live in clusters of 150 people – otherwise known as Dunbar’s number. That’s the brain’s cognitive limit for keeping track of our own immediate social networks. If we’re looking for a magic number in terms of maximizing human cooperation and collaboration, that would be it. But somehow cities allow us to far surpass that number and still deliver exponential returns.

Most of our natural defense mechanisms are based on familiarity. Trust, in its most basic sense, is Pavlovian. We trust strangers who happen to resemble people we know and trust. We are wary of strangers who remind us of people who have taken advantage of us. We are primed to trust or distrust in a few milliseconds, far under the time threshold of rational thought. Humans evolved to live in communities where we keep seeing the same faces over and over – yet cities are the antithesis of this.

Cities work because it’s in everyone’s best interest to make cities work. In a city, people may not trust each other, but they do trust the system. And it’s that system – or rather – thousands of complementary systems, that makes cities work. We contribute to these systems because we have a stake in them. The majority of us avoid the Tragedy of the Commons because we understand that if we screw the system, the system becomes unsustainable and we all lose. There is an “invisible network of trust” that makes cities work.

The psychology of this trust is interesting. As I mentioned before, in evolutionary terms, the mechanisms that trigger trust are fairly rudimentary: Familiarity = Trust. But system trust is a different beast. It relies on social norms and morals – on our inherent need to conform to the will of the herd. In this case, there is at least one degree of separation between trust and the instincts that govern our behaviors. Think of it as a type of “meta-trust.” We are morally obligated to contribute to the system as long as we believe the system will increase our own personal well-being.

This moral obligation requires feedback. There needs to be some type of loop that shows us that our moral behaviors are paying off. As long as that loop is working, it creates a virtuous cycle. Moral behaviors need to lead to easily recognized rewards, both individually and collectively. As long as we have this loop, we will continue to be governed by the social norms that maintain the systems of a city.

When we look to cities to provide us clues on how to maintain stability in a more connected world, we need to understand this concept of feedback. Cities provide feedback through physical proximity. When cities start to break down, the results become obvious to all who live there. But when it’s digital bonds rather than physical ones that link our networks, feedback becomes trickier. We need to ponder other ways of connecting cause, effect and consequences. As we move from physical communities to ideological ones, we have to overcome the numbing effects of distance.


Tempest in a Tweet-Pot

On February 16, a Facebook VP of Ads named Rob Goldman had a bad day. That was the day the office of Special Counsel Robert Mueller released an indictment of 13 Russian operatives who interfered in the U.S. election. Goldman felt he had to comment via a series of tweets that appeared to question the seriousness with which the Mueller investigation had considered the ads placed by Russians on Facebook. Nothing much happened for the rest of the day. But on February 17, after the US Tweeter-in-Chief – Donald Trump – picked up the thread, Facebook realized the tweets had turned into a “shit sandwich” and, to limit the damage, Goldman had to officially apologize.

It’s just one more example of a personal tweet blowing up into a major news event. This is happening with increasingly irritating frequency. So today, I thought I’d explore why.

Personal Brand vs Corporate Brand

First, why did Rob Goldman feel he had to go public with his views anyway? He did because he could. We all have varying degrees of loyalty to our employer, and I’m sure the same is true for Mr. Goldman. Otherwise he wouldn’t have eaten crow a few days later with his public mea culpa. But our true loyalty goes not to the brand we work for, but to the brand we are. Goldman – like me, like you, like all of us – is building his personal brand. Anyone who says they’re not – yet posts anything online – is in denial. Goldman’s brand, according to his Twitter account, is “Student, seeker, raconteur, burner. ENFP.” That is followed by the disclaimer “Views are mine.” And you know what? This whole debacle has been great for Goldman’s brand, at least in terms of audience size. Before February 16, he had about 1,500 followers. When I checked, that had swelled to almost 12,000. Brand Goldman is on a roll!

The idea of a personal brand is new – just a few decades old. It really became amplified through the use of social media. Suddenly, you could have an audience – and not just any audience, but an audience numbering in the millions.

Before that, the only people who could have been said to have personal brands were artists, authors and musicians. They made their living by sharing who they were with us.

For the rest of us, our brands were trapped in our own contexts. Only the people who knew us were exposed to our brands. But the amplification of social media suddenly exposes our brand to a much broader audience. And when things go viral, like they did on February 17, millions suddenly became aware of Rob Goldman and his tweet without knowing anything more than that he was a VP of Ads for Facebook.

It was that connection that created the second issue for Goldman. When we speak for our own personal brands, we can say “views are mine,” but the problem always comes when things blow up, as they did for Rob Goldman. None of his tweets were vetted by anyone at Facebook, yet he had suddenly become a spokesperson for the corporation. And for those eager to accept his tweets as fact, they suddenly became the “truth.”

Twitter: “Truth” Without Context

Increasingly, we’re not really that interested in the truth. What we are interested in is our beliefs and our own personal truth. This is the era of “post-truth” – the Oxford Dictionaries word of the year for 2016 – defined as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”

Truth was a commonly understood base that could be supported by facts. Now, truth is in the eye of the beholder. Common understandings are increasingly difficult to come to as the world continues to fragment and become more complex. How can we possibly come to a common understanding of what is “true” when any issue worth discussing is complex? This is certainly true of the Mueller investigation. To try to distill the scope of it to 900 words – about the length of this column – would be virtually impossible. To reduce it to 280 characters – the limit of a tweet and one-twentieth the length of this column – well, there we should not tread. But, of course, we do.

This problem is exacerbated by the medium itself. Twitter is a channel that encourages “quippiness.” When we’re tweeting, we all want to be Oscar Wilde. Again, writing this column usually takes me three to four hours, including time to do some research, create a rough outline and then do the actual writing. That’s not an especially long time, but the process does allow some time for mental reflection and self-editing. The average tweet takes less than a minute to write – probably less to think about – and then it’s out there, a matter of record, irretrievable. You should find it more than a little terrifying that this is the chosen medium of the President of the United States and one that is increasingly forming our worldview.

Twitter is also not a medium that provides much support for irony, sarcasm or satire. In the post-truth era, we usually accept tweets as facts, especially when they come from someone in a somewhat official position, as in the case of Rob Goldman. But at best, they’re abbreviated opinions.

In the light of all this, one has to appreciate Mr. Goldman’s Twitter handle: @robjective.

Sharing a Little about the Sharing Economy

In the last week, I’ve had first-hand experience with the sharing economy, using both Uber and Airbnb. I – not surprisingly – am a sucker for disruption and will gladly adopt new technologies. I appreciate the rational logic of a well thought out platform that promises to be a game changer. I push my wife’s comfort level to the breaking point, trying mightily to maintain the balance between delightful discovery and that cold stare that means I’ve completely messed up this time. It is in that spirit – and with the admitted bias of being a sample of one – that I share some of my macro-level observations.

Creating an Opening for Innovation

To me, using the term the “sharing economy” doesn’t quite cut it. That only explains one aspect of this disruption – the supply side. What is really happening here is the democratization and fragmentation of a previously verticalized market, where the platform creates a new type of one-to-one market connection. That spreads the market horizontally, which in turn opens a wide door for participation at all levels. And that, inevitably, spurs innovation. When you allow everyone to be creative – rather than just a few within a vertically integrated chain who have it in their job description – the pace of innovation can’t help but accelerate.

Disruptive Platforms and Network Effects

Innovation is a good thing, but there is another side to this. If you allow for rampant innovation and facilitate one-to-one connections at all levels of the market, you are going to have network effects. Markets become more chaotic and less predictable. The rising tide of innovation will eventually raise all boats, but it also means the waters can get a little choppy on the way. Disruptive platforms strip away traditional control systems – corporate oversight, traditional forms of consumer protection and legislative regulation. All faith – on both sides of the market – is placed on the design of the platform to ensure self-correcting regulation. There’s just one problem with that…

Compression of Pendulum Markets

When you depend on self-correction in a dynamic market, you forego stability that typically comes from vertical oversight. Not only do you remove the oversight but you also remove predictability. There are new players entering and exiting the market all the time. And even if the players stabilize, experience has limited value in a marketplace that may not do tomorrow what it did yesterday.

All sharing platforms – Uber and Airbnb included – depend on market feedback to ensure self-correction. In these two cases, there are well-thought-out market control mechanisms, but feedback is – by necessity – a reactive rather than a proactive device. You can anticipate with reasonable confidence in a stable, controlled market, but you can’t in a dynamic, networked market. All you can do is respond. This creates a pendulum effect. Constant connection to the platform means that feedback is fast, but the physics of a pendulum mean that the volatility of the swings is greatest at the beginning and stabilizes over time.

This creates what I would call the Bubbles and Backlash phenomenon. As markets open up, new suppliers jump on the bandwagon. Some are great, some are horrible, some are mediocre. But it will take the platform and its self-correcting mechanisms some time to sort them out. Also, we have to hope the mechanisms are reasonably robust against suppliers who want to game the system. I think both Uber and Airbnb are working their way through this particular pain point right now. I find ratings artificially high for many suppliers with whom I’ve had personal experience. There could be a number of reasons for this, including the psychological bias of reciprocity, but I think most platforms have some tweaking to do before user ratings provide a reasonable frame of expectations.

Inevitable Gaps in the User Experience

Finally, because the travel market is moving from a vertical orientation to a horizontal one, it leaves it up to the user to navigate her way through the various horizontal layers that stack together to create her individual user journey. When you’re in a layer – taking Uber to the airport for example – you’re probably okay. But it’s moving from layer to layer that places a little extra demand on the user. The previous players who inhabited the niches within the vertical ecosystem are understandably reluctant to share their niches with new, disruptive players. Where, for example, do you catch the Uber at the airport?

But All’s Well that Ends Well

In the end, it comes down to a matter of taste. I am an early adopter, so I will always choose disruption over the status quo. For those of a different bent, the vertically integrated path is still open to them. But for all of us, I believe disruption has created a travel marketplace that is more diverse, authentic and rewarding than ever before.

 

Short Sightedness, Sharks and Mental Myopia

2017 was an average year for shark attacks.

And this just in…

By the year 2050 half of the World will be Near Sighted.

What could these two headlines possibly have in common? Well, sit back – I’ll tell you.

First, let’s look at why 2017 was a decidedly non-eventful year – at least when it came to interactions between Selachimorpha (sharks) and Homo (us). Nothing unusual happened. That’s it. There was no sudden spike in Jaws-like incidents. Sharks didn’t suddenly disappear from the world’s oceans. Everything was just – average. Was that the only way 2017 was uneventful? No – there were others. But we didn’t notice, because we were focused on the ways the world seemed to be going to hell in a handbasket. If we look at 2017 like a bell curve, we were focused on the outliers, not the middle.

There’s no shame in that. That’s what we do. The usual doesn’t make the nightly news. It doesn’t even make our Facebook feed. But here’s the thing: we live most of our lives in the middle of the curve, not in the outlier extremes. The things that are most relevant to our lives fall squarely into the usual. But all the communication channels that have been built to deliver information to us are focused on the unusual. And that’s because we insist not on being informed, but on being amused.

In 1985, Neil Postman wrote the book Amusing Ourselves to Death. In it, he charts how the introduction of electronic media – especially television – hastened our decline into a dystopian existence that shared more than a few parallels with Aldous Huxley’s Brave New World. His warning was pointed, to say the least: “There are two ways by which the spirit of a culture may be shrivelled,” Postman says. “In the first—the Orwellian—culture becomes a prison. In the second—the Huxleyan—culture becomes a burlesque.” It’s probably worth reminding ourselves what burlesque means: “a literary or dramatic work that seeks to ridicule by means of grotesque exaggeration or comic imitation.” If the transformation of our culture into burlesque seemed apparent in the ’80s, you’d pretty much have to say it’s a fait accompli 35 years later. Grotesque exaggeration is the new normal – not to mention the new president.

But this steering of our numbed senses towards the extremes has some consequences. As the world becomes more extreme, it requires more extreme events to catch our notice. We are spending more and more of our media consumption time amongst the outliers. And that brings up the second problem.

Extremes – by their nature – tend to be ideologically polarized as well. If we’re going to consider extremes that carry a politically charged message, we stick to the ones that are well synced with our worldview. In cognitive terms, these ideas are “fluent” – they’re easier to process. The more polarized and extreme a message is, the more important it is that it be fluent for us. We are also more likely to filter out non-fluent messages – messages we don’t happen to agree with.

The third problem is that we are becoming short-sighted (see, I told you I’d get there eventually). So not only do we look for extremes, we are increasingly seeking out the trivial. We do so because being informed is increasingly scaring the bejeezus out of us. We don’t look too deep, nor do we look too far into the future – because the future is scary. There is the collapse of our climate, World War III with North Korea, four more years of Trump…this stuff is terrifying. Increasingly, we spend our cognitive resources looking at things that are amusing and immediate. The information we seek has to provide immediate gratification. Yes, we are becoming physically short-sighted because we stare at screens too much, but we’re becoming mentally myopic as well.

If all this is disturbing, don’t worry. Just grab a Soma and enjoy a Feelie.

Drawing a Line in the Sand for Net Privacy

Ever heard of Strava? The likelihood that you would say yes jumped astronomically on January 27, 2018. That was the day of the Strava security breach. Before that, you had probably never heard of it, unless you happened to be a cyclist or runner.

I’ve talked about Strava before. Then, I was talking about social modality and trying to keep our various selves straight on various social networks. Today, I’m talking about privacy.

Through GPS-enabled devices, like a fitness tracker or smartphone, Strava enables you to track your workouts, including the routes you take. Once a year, it aggregates all these activities and publishes them as a global heatmap. Over 1 billion workouts are mapped, in every corner of the earth. If you zoom in enough, you’ll see my favorite cycling routes in the city I live in. The same is true for everyone who uses the app. Unless – of course – you’ve opted out of the public display of your workouts.

And therein lies the problem. Actually – two problems.

Problem number one: there is really no reason I shouldn’t share my workouts. The worst you could find out is that I’m a creature of habit when working out. But if I’m a Marine stationed at a secret military base in Afghanistan and I start my morning by jogging around the perimeter of the base – well – now we have a problem. I’ve just inadvertently highlighted my base on the map for the world to see. And that’s exactly what happened. When the heatmap went live, a university student in Australia happened to notice a number of hotspots in the middle of nowhere in Afghanistan and Syria.

On to problem number two. In terms of numbers affected, the Strava breach is a drop in the bucket compared to Yahoo – or Equifax – or Target – or any of the other breaches that have made the news. But this breach was different in a very important way. The victims here weren’t individual consumers. This time, national security was threatened. And that moved it beyond the “consumer beware” defense that typically gets invoked.

This charts new territory for privacy. The difference in perspective in this breach has heightened sensitivities and moved the conversation in a new direction. Typically, the response when there is a breach is:

  1. You should have known better;
  2. You should have taken steps to protect your information; or,
  3. Hmmm, it sucks to be you.

Somehow, this response has held up in the previous breaches despite the fact that we all know that it’s almost impossible to navigate the minefield of settings and preferences that lies between you and foolproof privacy. As long as the victims were individuals it was easy to shift blame. This time, however, the victim was the collective “we” and the topic was the hot button of all hot buttons – national security.

Now, one could and should argue that all of these might apply to the unfortunate soldier who decided to take his Fitbit on his run, but I don’t think it will end there. I think the current “opt out” approach to net privacy might have to be reconsidered. The fact is, all these platforms would prefer to gather as much of your data as possible and retain the right to use it as they see fit. It opens up a number of monetization opportunities for them. Typically, the quid pro quo offered back to you – the user – is more functionality and the ability to share with your own social circle. The current ecosystem’s default starting point is to enable as much sharing and functionality as possible. Humans being human, we will usually go with the easiest option – the default – and only worry about it if something goes wrong.

But as users, we do have the right to push back. We have to realize that opening the full data pipe gives the platforms much more value than we ever receive in return. We’re selling off our own personal data for the modern-day equivalent of beads and trinkets. And the traditional corporate response – “you can always opt out if you want” – simply takes advantage of our own human limitations. The current fallback is that the platforms are introducing more transparency into their approaches to privacy, making them easier to understand. While this is a step in the right direction, a more ethical approach would be “opt in,” where the default is the maximum protection of our privacy and we have to make a conscious effort to lower that wall.

We’ll see. Opting in puts ethics and profitability on a collision course. For that reason, I can’t ever see the platforms going in that direction unless we insist.

 

 

Sorry, I Don’t Speak Complexity

I was reading about an interesting study from Cornell this week. Dr. Morten Christiansen, co-director of Cornell’s Cognitive Science Program, and his colleagues explored an interesting linguistic paradox: languages that a lot of people speak – like English and Mandarin – have large vocabularies but relatively simple grammar, while languages that are smaller and more localized have fewer words but more complex grammatical rules.

The reason, Christiansen found, has to do with the ease of learning. It doesn’t take much to learn a new word – a couple of exposures and you’ve assimilated it. Because of this, new words become memes that propagate quickly through the population. But the foundations of grammar are much more difficult to understand and learn. It takes repeated exposures and an application of effort to learn them.

Language is a shared cultural component that depends on the structure of a network. Investigating the spread of language gives us an inside view of network dynamics. Take the complexity of a syntactic rule, for example. These are the rules that govern sentence structure, word order and punctuation. In terms of learnability, syntax offers much more complexity than simply understanding the definition of a word. To learn syntax, you need repeated exposures to it. And this is where the structure and scope of a network come in. As Dr. Christiansen explains:

“If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population.”

This research seems to indicate that cultural complexity is first spawned in heavily interlinked and relatively intimate network nodes. For these memes – whether they be language, art, philosophies or ideologies – to bridge to and spread through the greater network, they are often simplified so they’re easier to assimilate.

If this is true, then we have to consider what might happen as our world becomes more interconnected. Will there be a collective “dumbing down” of culture? If current events are any indication, that certainly seems to be the case. The memes with the highest potential to spread are absurdly simple. No effort on the part of the receiver is required to understand them.

But there is a counterpoint to this that does hold out some hope. As Christiansen reminds us, “People can self-organize into smaller communities to counteract that drive toward simplification.” From this emerges an interesting yin and yang of cultural content creation. You have highly connected nodes, independent of geography, that are producing some truly complex content. But because of the high threshold of assimilation required, that complexity becomes trapped in the node. The only things that escape are fragments of that content that can be simplified to the point where they can go viral through the greater network. But to do so, they have to be stripped of their context.

This is exactly what caused the language paradox the team explored. The wider the network – or the larger the population of speakers – the more nodes there are producing new content. In this instance, the words are the fragments that can be assimilated, and the grammar is the context that gets left behind.
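The dynamic Christiansen describes maps neatly onto what network scientists call simple versus complex contagion: a word can spread from a single exposure, while a grammar rule needs reinforcement from several neighbors before it sticks. Here’s a toy simulation of that idea – my own illustration, not from the study, and the ring-lattice networks, seed sizes and thresholds are all assumptions chosen just to make the contrast visible:

```python
def ring_lattice(n, k):
    # Each of n nodes is linked to its k nearest neighbors on each side.
    return {i: {(i + d) % n for d in range(-k, k + 1) if d != 0}
            for i in range(n)}

def spread(adj, seeds, threshold, rounds=300):
    # A node adopts the meme once at least `threshold` of its
    # neighbors have adopted it; repeat until nothing changes.
    adopted = set(seeds)
    for _ in range(rounds):
        new = {v for v in adj if v not in adopted
               and len(adj[v] & adopted) >= threshold}
        if not new:
            break
        adopted |= new
    return len(adopted) / len(adj)  # fraction of the population reached

# Small, tight community: 20 speakers, each tied to 8 others.
small = ring_lattice(20, 4)
# Large, loosely connected population: 200 speakers, each tied to 2 others.
large = ring_lattice(200, 1)
seeds = {0, 1, 2}

# A "word" needs one exposure; a "grammar rule" needs two.
for name, g in [("small/tight", small), ("large/loose", large)]:
    print(name,
          "word:", spread(g, seeds, threshold=1),
          "rule:", spread(g, seeds, threshold=2))
```

Run it and the “word” reaches everyone in both populations, but the “rule” saturates the small, tight community (1.0) while stalling at the original seed fraction (0.015) in the large, loose one – the same asymmetry the Cornell team found between vocabulary and grammar.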

There is another aspect of this to consider. Because of these dynamics unique to a large and highly connected network, the simple and trivial naturally rises to the top. Complexity gets trapped beneath the surface, imprisoned in isolated nodes within the network. But this doesn’t mean complexity goes away – it just fragments and becomes more specific to the node in which it originated. The network loses a common understanding and definition of that complexity. We lose our shared ideological touchstones, which are by necessity more complex.

If we speculate on where this might go in the future, it’s not unreasonable to expect to see an increase in tribalism in matters related to any type of complexity – like religion or politics – and a continuing expansion of simple cultural memes.

The only time we may truly come together as a society is to share a video of a cat playing basketball.