The Unusual Evolution of the Internet

The Internet we have today evolved out of improbability. It shouldn’t have happened the way it did. It evolved as a wide-open network forged by starry-eyed academics and geeks who really believed it might make the world better. It wasn’t supposed to win against walled gardens like CompuServe, Prodigy and AOL — but it did. If you rolled back the clock, knowing what we know now, you could be sure it would never play out the same way again.

To use the same analogy that Eric Raymond did in his now-famous essay on the development of Linux, these were people who believed in bazaars rather than cathedrals. The internet was cobbled together to scratch an intellectual and ethical itch, rather than a financial one.

But today, as this essay in The Atlantic by Jonathan Zittrain warns us, the core of the internet is rotting. Because it was built by everyone and no one, all the superstructure that was assembled on top of that core is teetering. Things work, until they don’t: “The internet was a recipe for mortar, with an invitation for anyone, and everyone, to bring their own bricks.”

The problem is, it’s no one’s job to make sure those bricks stay in place.

Zittrain talks about the holes in humanity’s store of knowledge. But there’s another thing about this evolution that is either maddening or magical, depending on your perspective: It was never built with a business case in mind.

Eventually, commerce pipes were retrofitted into the whole glorious mess, and billions of dollars were made. Google alone has pulled in over a trillion dollars in revenue in less than 20 years by becoming the de facto index to the world’s most haphazard library of digital stuff. Amazon went one better, using the Internet to reinvent humanity’s marketplace and pulling in $2 trillion in revenue along the way.

But despite all this massive monetization, the beneficiaries still at least had to pay lip service to that original intent: the naïve belief that technology could make us better, and that it didn’t just have to be about money.

Even Google, which is on its way to posting $200 billion in revenue, making it the fifth biggest media company in the world (after Netflix, Disney, Comcast, and AT&T), stumbled on its way to making a buck. Perhaps it’s because its founders, Larry Page and Sergey Brin, didn’t trust advertising. In their original academic paper, they said that “advertising-funded search engines will inherently be biased toward the advertisers and away from the needs of consumers.” Of course, they ultimately gave in to the dark side of advertising. But I watched the Google user experience closely from 2003 to 2011, and that dedication to the user was always part of a delicate balancing act that was generally successful.

But that innocence of the original Internet is almost gone, as I noted in a recent post. And there are those who want to make sure that the next thing — whatever it is — is built on a framework that has monetization baked in. It’s why Mark Zuckerberg is feverishly hoping that his company can build the foundations of the Metaverse. It’s why Google is trying to assemble the pipes and struts of the new web. Those things would be completely free of the moral — albeit naïve — constraints that still linger in the original model. In the new one, there would be only one goal: making sure shareholders are happy.

It’s also natural that many of those future monetization models will likely embrace advertising, which is, as I’ve said before, the path of least resistance to profitability.

We should pay attention to this. The very fact that the Internet’s original evolution was as improbable and profit-free as it was puts us in a unique position today. What would it look like if things had turned out differently, and the internet had been profit-driven from day one? I suspect it might have been better-maintained but a lot less magical, at least in its earliest iterations.

Whatever that new thing is, it will form a significant part of our reality. It will be even more foundational and necessary to us than the current internet. We won’t be able to live without it. For that reason, we should worry about the motives that may lie behind whatever “it” will be.

The Relationship between Trust and Tech: It’s Complicated

Today, I wanted to follow up on last week’s post about not trusting tech companies with your privacy. In that post, I said, “To find a corporation’s moral fiber, you always, always, always have to follow the money.”

A friend from back in my industry show days — the always insightful Brett Tabke — reached out to comment, suggesting that the “holier-than-thou” privacy stand adopted by Tim Cook and Apple in the current brouhaha with Facebook is one of convenience.

“I really wonder though if it is a case of do-the-right-thing privacy moral stance, or one of convenience that supports their ecosystem, and attacks a competitor?” he asked.

It’s hard to argue against that. As Brett mentioned, Apple really can’t lose by “taking money out of a side-competitors pocket and using it to lay more foundational corner stones in the walled garden, [which] props up the illusion that the garden is a moral feature, and not a criminal antitrust offence.”

But let’s look beyond Facebook and Apple for a moment. As Brett also mentioned to me, “So who does a privacy action really impact more? Does it hit Facebook or ultimately Google? Facebook is just collateral damage here in the real war with Google. Apple and Google control their own platform ecosystems, but only Google can exert influence over the entire web. As we learned from the unredacted documents in the States vs Google antitrust filings, Google is clearly trying to leverage its assets to exert that control — even when ethically dubious.”

So, if we are talking trust and privacy, where is Google in this debate? Given the nature of Google’s revenue stream, its stand on privacy is not quite as blatantly obvious (or as self-serving) as Facebook’s. Both depend on advertising to pay the bills, but the nature of that advertising is significantly different.

According to its most recent annual report, 57% of the $182-billion annual revenue stream of Alphabet, Google’s parent company, still comes from search ads. And search advertising is relatively immune from crackdowns on privacy.
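As a back-of-envelope check on those figures (using only the numbers cited above, rounded; this is illustrative arithmetic, not a restatement of Alphabet's filings):

```python
# Rough arithmetic on the Alphabet figures cited above (illustrative only).
alphabet_revenue_b = 182   # annual revenue, in $ billions
search_ad_share = 0.57     # portion attributed to search ads

search_ad_revenue_b = alphabet_revenue_b * search_ad_share
other_revenue_b = alphabet_revenue_b - search_ad_revenue_b

print(f"Search ads: ~${search_ad_revenue_b:.0f}B")   # ~$104B
print(f"Everything else: ~${other_revenue_b:.0f}B")  # ~$78B
```

In other words, roughly $104 billion of Alphabet's revenue rides on the one ad format that privacy crackdowns barely touch.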

When you search for something on Google, you have already expressed your intent, which is the clearest possible signal with which you can target advertising. Yes, additional data taken with or without your knowledge can help fine-tune ad delivery — and Google has shown it’s certainly not above using this — but Apple tightening up its data security will not significantly impair Google’s ability to make money through its search revenue channel.

Facebook’s advertising model, on the other hand, targets you well before any expression of intent. For that reason, it has to rely on behavioral data and other targeting to effectively deliver those ads. Personal data is the lifeblood of such targeting. Turn off the tap, and Facebook’s revenue model dries up instantly.

But Google has always had ambitions beyond search revenue. Even today, 43% of its revenue comes from non-search sources. Google has always struggled with the inherently capped nature of search-based ad inventory. There are only so many searches against which you can serve advertising. And, as Brett points out, that leads Google to look at the very infrastructure of the web to find new revenue sources. And that has led to signs of a troubling collusion with Facebook.

Again, we come back to my “follow the money” mantra for rooting out rot in the system. And in this case, the money we’re talking about is the premium that Google skims off the top when it determines which ads are shown to you. That premium depends on Google’s ability to use data to target the most effective ads possible through its own “Open Bidding” system. According to the unredacted documents released in the antitrust suit, that premium can amount to 22% to 42% of the ad spend that goes through that system.
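To make that premium concrete, here is a hypothetical worked example. The $100,000 spend is an invented figure for illustration; only the 22%-to-42% range comes from the documents cited above:

```python
# Hypothetical ad spend routed through Open Bidding, applying the
# 22%-42% premium range cited in the unredacted antitrust documents.
ad_spend = 100_000                        # advertiser's budget in dollars (invented)
low_premium, high_premium = 0.22, 0.42    # premium range from the filings

google_cut_low = ad_spend * low_premium     # $22,000 at the low end
google_cut_high = ad_spend * high_premium   # $42,000 at the high end

# What remains to flow through to publishers and the rest of the chain:
remainder_best = ad_spend - google_cut_low    # $78,000
remainder_worst = ad_spend - google_cut_high  # $58,000
```

Put another way: of every advertising dollar pushed through that system, somewhere between 22 and 42 cents may never reach the publisher at all.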

To sum up, it appears that if you want to know who can be trusted most with your data, it’s the companies that don’t depend on that data to support an advertising revenue model. Right now, that’s Apple. But as Brett also pointed out, don’t mistake this for any warm, fuzzy feeling that Apple is your knight in shining armour: “Apple has shown time and time again they are willing to sacrifice strong desires of customers in order to make money and control the ecosystem. Can anyone look past headphone jacks, Macbook jacks, or the absence of Macbook touch screens without getting the clear indication that these were all robber-baronesque choices of a monopoly in action? If so, then how can we go ‘all in’ on privacy with them just because we agree with the stance?”

The Tech Giant Trust Exercise

If we look at those that rule in the Valley of Silicon — the companies that determine our technological future — it seems, as I previously wrote, that Apple alone is serious about protecting our privacy.

MediaPost editor in chief Joe Mandese shared a post late last month about how Apple’s new privacy features are increasingly taking aim at the various ways in which advertising can be targeted to specific consumers. The latest victim in those sights is geotargeting.

Then Steve Rosenbaum mentioned last week that as Apple and Facebook gird their loins and prepare to do battle over the next virtual dominion — the metaverse — they are taking two very different approaches. Facebook sees this next dimension as an extension of its hacker mentality, a “raw, nasty network of spammers.” Apple is, as always, determined to exert top-down control over who plays in its sandbox, welcoming only those who are willing to play by its rules. In that approach, the company is also signaling that it will take privacy in the metaverse seriously. Apple CEO Tim Cook said he believes “users should have the choice over the data that is being collected about them and how it’s used.”

Apple can take this stand because its revenue model doesn’t depend on advertising. To find a corporation’s moral fiber, you always, always, always have to follow the money. Facebook depends on advertising for revenue. And it has repeatedly shown it doesn’t really give a damn about protecting the privacy of users. Apple, on the other hand, takes every opportunity to unfurl the privacy banner as its battle standard because its revenue stream isn’t really impacted by privacy.

If you’re looking for the rot at the roots of technology, a good place to start is anything that relies on advertising. In my 40 years in marketing, I have come to the inescapable conclusion that it is impossible for business models that rely on advertising as their primary source of revenue to stay on the right side of privacy concerns. There is an inherent conflict that cannot be resolved. In a recent earnings call, Facebook CEO Mark Zuckerberg said it about as clearly as it could be said: “As expected, we did experience revenue headwinds this quarter, including from Apple’s [privacy rule] changes that are not only negatively affecting our business, but millions of small businesses in what is already a difficult time for them in the economy.”

Facebook has proven time and time again that when the need for advertising revenue runs up against a question of ethical treatment of users, it will always be the ethics that give way.

It’s also interesting that Europe is light years ahead of North America in introducing legislation that protects privacy. According to one Internet Privacy Ranking study, four of the five top countries for protecting privacy are in Northern Europe. Australia is the fifth. My country, Canada, shares these characteristics. We rank seventh. The US ranks 18th.

There is an interesting corollary here I’ve touched on before. All these top-ranked countries are social democracies. All have strong public broadcasting systems. All have a very different relationship with advertising than the U.S. does. Those of us who live in these countries are not immune from the dangers of advertising (this is certainly true for Canada), but our media structure is not wholly dependent on it. The U.S., right from the earliest days of electronic media, took a different path — one that relied almost exclusively on advertising to pay the bills.

As we start thinking about things like the metaverse or other forms of reality that are increasingly intertwined with technology, this reliance on advertising-funded platforms is something we must consider long and hard. It won’t be the companies that initiate the change. An advertising-based business model follows the path of least resistance, making it the shortest route to that mythical unicorn success story. The only way this will change will be if we — as users — demand that it changes.

And we should — we must — demand it. Ad-based tech giants that have no regard for our personal privacy are one of the greatest threats we face. The more we rely on them, the more they will ask of us.

The Privacy War Has Begun

It started innocently enough….

My iPhone just upgraded itself to iOS 14.6, and the privacy protection purge began.

In late April, Apple added App Tracking Transparency (ATT) to iOS (actually in 14.5, but for reasons mentioned in this Forbes article, I hadn’t noticed the change until the most recent update). Now, whenever I launch an app that is part of the online ad ecosystem, I’m asked whether I want to share data to enable tracking. I always opt out.

These alerts have been generally benign. They reference benefits like “more relevant ads,” a “customized experience” and “helping to support us.” Some assume you’re opting in, making opting out a much more circuitous and time-consuming process. Most also avoid the words “tracking” and “privacy.” One referred to it in these terms: “Would you allow us to refer to your activity?”

My answer is always no. Why would I want to customize an annoyance and make it more relevant?

All in all, it’s a deceptively innocent wrapper to put on what will prove to be a cataclysmic event in the world of online advertising. No wonder Facebook is fighting it tooth and nail, as I noted in a recent post.

This shot across the bow of online advertising marks an important turning point for privacy. It’s the first time that someone has put users ahead of advertisers. Everything up to now has been lip service from the likes of Facebook, telling us we have complete control over our privacy while knowing that actually protecting that privacy would be so time-consuming and convoluted that the vast majority of us would do nothing, thus keeping its profitability flowing through the pipeline.

The simple fact of the matter is that without its ability to micro-target, online advertising just isn’t that effective. Take away the personal data, and online ads are pretty non-engaging. Also, given our continually improving ability to filter out anything that’s not directly relevant to whatever we’re doing at the time, these ads are very easy to ignore.

Advertisers need that personal data to stand any chance of piercing our non-attentiveness long enough to get a conversion. It’s always been a crapshoot, but Apple’s ATT just stacked the odds very much against the advertiser.

It’s about time. Facebook and online ad platforms have had little to no real pushback against the creeping invasion of our privacy for years now. We have no idea how extensive and invasive this tracking has been. The only inkling we get is when the targeting nails the ad delivery so well that we swear our phone is listening to our conversations. And, in a way, it is. We are constantly under surveillance.

In addition to Facebook’s histrionic bitching about Apple’s ATT, others have started to find workarounds, as reported by 9to5Mac. ATT specifically targets the IDFA (Identifier for Advertisers), which enables cross-app tracking via a unique identifier. Chinese ad networks backed by the state-endorsed Chinese Advertising Association were encouraging the adoption of CAID identifiers as an alternative to IDFA. Apple has gone on record as saying ATT will be globally implemented and enforced. While CAID can’t be policed at the OS level, Apple has said that apps that track users without their consent by any means, including CAID, could be removed from the App Store.

We’ll see. Apple doesn’t have a very consistent track record when it comes to holding the line against Chinese app providers. WeChat, for one, has been granted exceptions to Apple’s developer restrictions that have not been extended to anyone else.

For its part, Google has taken a tentative step toward following Apple’s lead with its new privacy initiative on Android devices, as reported by SlashGear. Google Play has asked developers to share what data they collect and how they use that data. At this point, Google won’t require opt-in prompts as Apple does.

All of this marks a beginning. If it continues, it will throw a Kong-sized monkey wrench into the works of online advertising. The entire ecosystem is built on ad-supported models that depend on collecting and storing user data. Apple has begun nibbling away at that foundation.

The toppling has begun.

Splitting Ethical Hairs in an Online Ecosystem

In looking for a topic for today’s post, I thought it might be interesting to look at the Lincoln Project. My thought was that it would be an interesting case study in how to use social media effectively.

But what I found is that the Lincoln Project is currently imploding due to scandal. And you know what? I wasn’t surprised. Disappointed? Yes. Surprised? No.

While we on the left of the political spectrum may applaud what the Lincoln Project was doing, let’s make no mistake about the tactics used. It was the social media version of Nixon’s Dirty Tricks. The whole purpose was to bait Trump into engaging in a social media brawl. This was political mudslinging, as practiced by veteran warriors. The Lincoln Project was comfortable with getting down and dirty.

Effective? Yes. Ethical? Borderline.

But what it did highlight is the sordid but powerful force of social media influence. And it’s not surprising that those with questionable ethics, as some of the Lincoln Project leaders have proven to be, were attracted to it.

Social media is the single biggest and most effective influencer on human behavior ever invented. And that should scare the hell out of us, because it’s an ecosystem in which sociopaths will thrive.

A definition of Antisocial Personality Disorder (the condition from which sociopaths suffer) states, “People with ASPD may also use ‘mind games’ to control friends, family members, co-workers, and even strangers. They may also be perceived as charismatic or charming.”

All you have to do is substitute “social media” for “mind games,” and you’ll get my point. Social media is sociopathy writ large.

That’s why we — meaning marketers — have to be very careful what we wish for. Since Google cracked down on personally identifiable information, following in the footsteps of Apple, there has been a great hue and cry from the ad-tech community about the unfairness of it all. Some of that hue and cry has issued forth here at MediaPost, like Ted McConnell’s post a few weeks ago, “Data Winter is Coming.”

And it is data that’s at the center of all this. Social media continually pumps personal data into the online ecosystem. And it’s this data that is the essential life force of the ecosystem. Ad tech sucks up that data as a raw resource and uses it for ad delivery across multiple channels. That’s the whole point of the personal identifiers that Apple and Google are cracking down on.

I suppose one could draw an artificial boundary between social media and ad targeting in other channels, but that would be splitting hairs. It’s all part of the same ecosystem. Marketers want the data, no matter where it comes from, and they want it tied to an individual to make targeting their campaigns more effective.

By building and defending an ecosystem that enables sociopathic predators, we are contributing to the problem. McConnell and I are on opposite sides of the debate here. While I don’t disagree with some of his technical points about the efficacy of Google and Apple’s moves to protect privacy, there is a much bigger question here for marketers: Should we protect user privacy, even if it makes our jobs harder?

There has always been a moral ambiguity with marketers that I find troubling. To be honest, it’s why I finally left this industry. I was tired of the “yes, but” justification that ignored all the awful things that were happening for the sake of a handful of examples that showed the industry in a better light.

And let’s just be honest about this for a second: using personally identifiable data to build a more effective machine to influence people is an awful thing. Can it be used for good? Yes. Will it be? Not if the sociopaths have anything to say about it. It’s why the current rogue’s gallery of awful people are all scrambling to carve out as big a piece of the online ecosystem as they can.

Let’s look at nature as an example. In biology, a complex balance has evolved between predators and prey. If predators are too successful, they will eliminate their prey and will subsequently starve. So a self-limiting cycle emerges to keep everything in balance. But if the limits are removed on predators, the balance is lost. The predators are free to gorge themselves.

When it comes to our society, social media has removed the limits on “prey.” Right now, there is a never-ending supply.

It’s like we’re building a hen house, inviting a fox inside and then feigning surprise when the shit hits the fan. What the hell did we expect?

Facebook Vs. Apple Vs. Your Privacy

As I was writing last week’s words about Mark Zuckerberg’s hubris-driven view of world domination, little did I know that the next chapter was literally being written. The very next day, a full-page ad from Facebook ran in The New York Times, The Washington Post and The Wall Street Journal attacking Apple for building privacy protection prompts into iOS 14.

It will come as a surprise to no one that I line up firmly on the side of Apple in this cat fight. I have always said we need to retain control over our personal data, choosing what’s shared and when. I also believe we need to have more control over the nature of the data being shared. iOS 14 is taking some much-needed steps in that direction.

Facebook is taking a stand that sadly underlines everything I wrote just last week — a disingenuous stand for a free-market environment — by unfurling the “Save small business” banner. Zuckerberg loves to stand up for “free” things — be it speech or markets — when it serves his purpose.

And the hidden agenda here is not really hidden at all. It’s not the small business around the corner Mark is worried about. It’s the 800-billion-dollar business that he owns 60% of the voting shares in.

The headline of the ad reads, “We’re standing up to Apple for small businesses everywhere.”

Ummm — yeah, right.

What you’re standing up for, Mark, is your revenue model, which depends on Facebook being free to hoover up as much personal data as possible, across as many platforms as possible.

The only thing that you care about when it comes to small businesses is that they spend as much with Facebook as possible. What you’re trying to defend is not “free” markets or “free” speech. What you’re defending is about the furthest thing imaginable from “free.” It’s $70-billion-plus in revenues and $18.5 billion in profits. What you’re trying to protect is your number-five slot on the Forbes list of the richest people in the world, with your net worth of $100 billion.

Then, on the very next day, Facebook added insult to injury with a second ad, this time defending the “Free Internet,” saying Apple “will change the internet as we know it” by forcing websites and blogs “to start charging you subscription fees.”

Good. The “internet as we know it” is a crap sandwich. “Free” has led us to exactly where we are now, with democracy hanging on by a thread, with true journalism in the last paroxysms of its battle for survival, and with anyone with half a brain feeling like they’re swimming in a sea of stupidity.

Bravo to Apple for pushing us away from the toxicity of “free,” and from the enthralled reverence for “free” things that props up a rapidly disintegrating information marketplace. If we accept a free model for our access to information, we must also accept advertising that will become increasingly intrusive, with even less regard for our personal privacy. We must accept all the things that come with “free”: the things that have proven to be so detrimental to our ability to function as a caring and compassionate democratic society over the past decade.

In doing the research for this column, I ran into an op-ed piece that ran last year in The New York Times. In it, Facebook co-founder Chris Hughes lays out the case for antitrust regulators dismantling Facebook’s dominance in social media.

This is a guy who was one of Zuckerberg’s best friends in college, who shared in the thrill of starting Facebook, and whose name is on the patent for Facebook’s News Feed algorithm. It’s a major move when a guy like that, knowing what he knows, says, “The most problematic aspect of Facebook’s power is Mark’s unilateral control over speech. There is no precedent for his ability to monitor, organize and even censor the conversations of two billion people.”

Hughes admits that the drive to break up Facebook won’t be easy. In the end, it may not even be successful. But it has to be attempted.

Too much power sits in Zuckerberg’s hands. An attempt has to be made to break down the walls behind which our private data is being manipulated. We cannot trust Facebook — or Mark Zuckerberg — to do the right thing with the data. It would be so much easier if we could, but it has been proven again and again and again that our trust is misplaced.

The very fact that those calling the shots at Facebook believe you’ll fall for yet another public appeal, wrapped in altruistic bullshit about protecting “free” that’s as substantial as Saran Wrap, should be taken as an insult. It should make you mad as hell.

And it should put Apple’s stand to protect your privacy in the right perspective: a long overdue attempt to stop the runaway train that is social media.

Looking At The World Through Zuckerberg-Colored Glasses

Mark Zuckerberg has managed to do something almost no one else has been able to do. He has actually been able to find one small patch of common ground between the far right and the far left in American politics. It seems everybody hates Facebook, even if it’s for different reasons.

The right hates the fact that they’re not given free rein to say whatever they want without Facebook tagging their posts as misinformation. The left worries about the erosion of privacy. And antitrust legislators feel Facebook is just too powerful and dominant in the social media market. Mark Zuckerberg has few friends in Washington — on either side of the aisle.

The common denominator here is control. Facebook has too much of it, and no one likes that. The question on the top of my mind is, “What is Facebook intending to do with that control?” Why is dominance an important part of Zuckerberg’s master plan?

Further, just what is that master plan? Almost four years ago, in the early days of 2017, Zuckerberg issued a 6,000-word manifesto. In it, he addressed what he called “the most important question of all.” That question was, “Are we building the world we all want?”

According to the manifesto, the plan for Facebook includes “spreading prosperity and freedom, promoting peace and understanding, lifting people out of poverty, and accelerating science.”

Then, two years later, Zuckerberg issued another lengthy memo about his vision regarding privacy and the future of communication, which “will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure.” He explained that Facebook and Instagram are like a town square, a public place for communication. But WhatsApp and Messenger are like your living room, where you can have private conversations without worrying about who else might be listening.

So, how is all that wonderfulness going, anyway?

Well, first of all, there’s what Mark says, and what Facebook actually does. When he’s not firing off biennial manifestos promising a cotton-candy-colored world, he’s busy assembling all the pieces required to suck up as much data on you as possible, and fighting lawsuits when he gets caught doing something he shouldn’t be.

You have to understand that for Zuckerberg, all these plans are built on a common foundation: Everything happens on a platform that Facebook owns. And those platforms are paid for by advertising. And advertising needs data. And therein lies the problem: What the hell is Facebook doing with all this data?

I’m pretty sure it’s not spreading prosperity and freedom or promoting peace and understanding. Quite the opposite. If you look at Facebook’s fingerprints, which are all over the sociological dumpster fire that has been the past four years, you could call the company the Keyser Söze of shit disturbing.

And it’s only going to get worse. Facebook and other participants in the attention economy are betting heavily on facial recognition technology. This effectively eliminates our last shred of supposed anonymity online. It forever links our digital dust trail with our real-world activities. And it dumps even more information about you into the voracious algorithms of Facebook, Google and other data devourers. Again, what might be the plans for this data: putting in place the pieces of a more utopian world, or meeting next quarter’s revenue projections?

Here’s the thing. I don’t think Zuckerberg is being wilfully dishonest when he writes these manifestos. I think — at the time — he actually believes them. And he probably legitimately thinks that Facebook is the best way to accomplish them. Zuckerberg always believes he’s the smartest one in the room. And he — like Steve Jobs — has a reality distortion field that’s always on. In that distorted reality, he believes Facebook — a company that is entirely dependent on advertising for survival — can be trusted with all our data. If we just trust him, it will all be okay.

The past four years have proven over and over again that that’s not true. It’s not even possible. No matter how good the intentions you go in with, the revenue model that fuels Facebook will subvert those intentions and turn them into something corrosive.

I think David Fincher summed up the problem nicely in his movie “The Social Network.” There, screenwriter Aaron Sorkin hit the nail on the head when he wrote the scene in which Zuckerberg’s lawyer says to him, “You’re not an asshole, Mark. You’re just trying so hard to be.”

Facebook represents a lethal mixture that has all the classic warning signs of an abusive relationship:

  • A corporation that can survive only when its advertisers are happy.
  • Advertisers that are demanding more and more data they can use to target prospects.
  • A bro-culture where Facebook folks think they’re smarter than everyone else and believe they can actually thread the needle between being fabulously successful as an advertising platform and not being complete assholes.
  • And an audience of users who are misplacing their trust by buying into the occasional manifesto, while ignoring the red flags that are popping up every day.

Given all these factors, the question becomes: Will splitting up Facebook be a good or bad thing? It’s a question that will become very pertinent in the year to come. I’d love to hear your thoughts.

Why Technology May Not Save Us

We are a clever species. We’re not as smart as we think we are, but we are pretty damn smart. We are the only species that has managed to forcibly shift the eternal cycles of nature for our own benefit. We have bent the world to our will. And look how that’s turning out for us.

For the last 10,000 years, our cleverness has set us apart from all other species on earth. For the last 1,000 years, the pace of that cleverness has accelerated. In the last 100 years, it has been advancing at breakneck speed. Our tools and ingenuity have dramatically reshaped our lives. Our everyday is full of stuff we couldn’t imagine just a few short decades ago.

That’s a trend that’s hard to ignore. And because of that, we could be excused for thinking the same may be true going forward. When it comes to thinking about technology, we tend to do so from a glass-half-full perspective. It’s worked for us in the past. It will work for us in the future. No problem is too big for our own technological prowess to solve.

But maybe it won’t. Maybe – just maybe – we’re dealing with another type of problem now, one to which technology is not well suited as a solution. Here are three reasons why.

The Unintended Consequences Problem

Technology solutions focus on the proximate rather than the distal – which is a fancy way of saying that technology always deals with the task at hand. Being technology, these solutions usually come from an engineer’s perspective, and engineers don’t do well with nuance. Complicated they can deal with. Complexity is another matter.

I wrote about this before when I wondered why tech companies tend to be confused by ethics. It’s because ethics falls into a category known as wicked problems. Racial injustice is another wicked problem. So is climate change. All of these things are complex and messy. Their dependence on collective human behavior makes them so. Engineers don’t like wicked problems because, by definition, they have no definitive solution. They are also hotbeds of unintended consequences.

In Collapse, anthropologist Jared Diamond’s 2005 exploration of failed societies, past and present, Diamond notes that when we look forward, we tend to cling to technology as a way to dodge impending doom. But he notes, “underlying this expression of faith is the implicit assumption that, from tomorrow onwards, technology will function primarily to solve existing problems and will cease to create new problems.”

And there’s the rub. For every proximate solution it provides, technology has a nasty habit of unleashing scads of unintended new problems. Internal combustion engines, mechanized agriculture and social media come to mind immediately as just three examples. The more complex the context of the problem, the more likely it is that the solution will come with unintended consequences.

The 90 Day Problem

Going hand in hand with the unintended consequence problem is the 90 Day problem. This is a port-over from the corporate world, where management tends to focus on problems that can be solved in 90 days. This comes from a human desire to link cause and effect. It’s why we have to-do lists. We like to get shit done.

Some of the problems we’re dealing with now – like climate change – won’t be solved in 90 days. They won’t be solved in 90 weeks or even 90 months. Being wicked problems, they will probably never be solved completely. If we’re very, very, very lucky and we start acting immediately and with unprecedented effort, we might be seeing some significant progress in 90 years.

This is the inconvenient truth of these problems. The consequences are impacting us today but the payoff for tackling them is – even if we do it correctly – sometime far in the future, possibly beyond the horizon of our own lifetimes. We humans don’t do well with those kinds of timelines.

The Alfred E. Neuman Problem

The final problem with relying on technology is that we think of it as a silver bullet. The alternative is a huge amount of personal sacrifice and effort with no guarantee of success. So, it’s easier just to put our faith in technology and say, “What, Me Worry?” like Mad Magazine mascot Alfred E. Neuman. It’s much easier to shift the onus of surviving our own future onto some nameless, faceless geek somewhere who’s working their way towards their “Eureka” moment.

While that may be convenient and reassuring, it’s not very realistic. I believe the past few years – and certainly the past few months – have shown us that all of us have to make some very significant changes in our lives and be prepared to rethink what we thought our future might be. At the very least, it means voting for leadership committed to fixing problems rather than ignoring them in favor of the status quo.

I hope I’m wrong, but I don’t think technology is going to save our ass this time.

A.I. and Our Current Rugged Landscape

In evolution, there’s something called the adaptive landscape. It’s a complex concept, but in the smallest nutshell possible, it refers to how fit species are for a particular environment. In a relatively static landscape, status quos tend to be maintained. It’s business as usual. 

But a rugged adaptive landscape — one beset by disruption and adversity — drives evolutionary change through speciation, the introduction of new and distinct species.

The concept is not unique to evolution. Adapting to adversity is a feature of all complex, dynamic systems. Our economy has its own version: what economist Joseph Schumpeter called the gale of creative destruction.

The same is true for cultural evolution. When shit gets real, the status quo crumbles like a sandcastle at high tide. When it comes to life today and everything we know about it, we are definitely in a rugged landscape. COVID-19 might be driving us to our new future faster than we ever suspected. The question is, what does that future look like?

Homo Deus

In the follow-up to his best-seller “Sapiens: A Brief History of Humankind,” author Yuval Noah Harari takes a shot at predicting just that. “Homo Deus: A Brief History of Tomorrow” looks at what our future might be. Written well before the pandemic (in 2015), the book deals frankly with the impending irrelevance of humanity.

The issue, according to Harari, is the decoupling of intelligence and consciousness. Once we break the link between the two, the human vessels that have traditionally carried intelligence become superfluous. 

In his book, Harari foresees two possible paths: techno-humanism and Dataism. 

Techno-humanism

In this version of our future, we humans remain essential, but not in our current form. Thanks to technology, we get an upgrade and become “super-human.”

Dataism

Alternatively, why do we need humans at all? Once intelligence becomes decoupled from human consciousness, will it simply decide that our corporeal forms are a charming but antiquated oddity and just start with a clean slate?

Our Current Landscape

Speaking of clean slates, many have been talking about the opportunity COVID-19 has presented to us to start anew. As I was writing this column, I received a press release from MIT promoting a new book, “Building the New Economy,” edited by Alex Pentland. I haven’t read it yet, but based on the first two lines of the release, it certainly seems to be following this type of thinking: “With each major crisis, be it war, pandemic, or major new technology, there has been a need to reinvent the relationships between individuals, businesses, and governments. Today’s pandemic, joined with the tsunami of data, crypto and AI technologies, is such a crisis.”

We are intrigued by the idea of using the technologies we have available to us to build a societal framework less susceptible to inevitable Black Swans. But is this just an invitation to pry open Pandora’s Box and allow the future Yuval Noah Harari is warning us about?

The Debate 

Harari isn’t the only one seeing impending doom for the human race. Elon Musk has been warning us about it for years. As we race to embrace artificial intelligence, Musk sees it as the biggest threat to human existence we have ever faced.

“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me,” warns Musk. “It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.”

There are those who pooh-pooh Musk’s alarmism, calling it much ado about nothing. Noted Harvard cognitive psychologist and author Steven Pinker, whose rose-colored vision of humanity’s future reliably trends up and to the right, dismissed Musk’s warnings with this: “If Elon Musk was really serious about the AI threat, he’d stop building those self-driving cars, which are the first kind of advanced AI that we’re going to see.”

In turn, Musk puts Pinker’s Pollyanna perspective down to human hubris: “This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”

From Today Forward

This brings us back to our current adaptive landscape. It’s rugged. The peaks and valleys of our day-to-day reality are more rugged than they have ever been — at least in our lifetimes.

We need help. And when you’re dealing with a massive threat that involves probability modeling and statistical inference, more advanced artificial intelligence is a natural place to look. 

Would we accept more invasive monitoring of our own bio-status, and the aggregation of that data, if it prevented more deaths? In a heartbeat.

Would we put our trust in algorithms that can instantly crunch vast amounts of data our own brains couldn’t possibly comprehend? We already have.

Would we even adopt connected devices constantly streaming the bits of data that define our existence to some corporate third party or government agency in return for the promise of better odds that we can extend that existence? Sign us up.

We are willingly tossing the keys to our future to the Googles, Apples, Amazons and Facebooks of the world. As much as the present may be frightening, we should consider the steps we’re taking carefully.

If we continue rushing down the path towards Yuval Noah Harari’s Dataism, we should be prepared for what we find there: “This cosmic data-processing system would be like God. It will be everywhere and will control everything, and humans are destined to merge into it.”

The Saddest Part about Sadfishing

There’s a certain kind of post I’ve always felt uncomfortable with when I see it on Facebook. You know the ones I’m talking about — where someone volunteers excruciatingly personal information about their failing relationships, their job dissatisfaction, their struggles with personal demons. These posts make me squirm.

Part of that feeling is that, being of British descent, I deal with emotions the same way the main character’s parents are dealt with in the first 15 minutes of any Disney movie: Dispose of them quickly, so we can get on with the business at hand.

I also suspect this ultra-personal sharing is happening in the wrong forum. So today, I’m trying to put an empirical finger on my gut feelings of unease about this particular topic.

After a little research, I found there’s a name for this kind of sharing: sadfishing. According to Wikipedia, “Sadfishing is the act of making exaggerated claims about one’s emotional problems to generate sympathy. The name is a variation on ‘catfishing.’ Sadfishing is a common reaction for someone going through a hard time, or pretending to be going through a hard time.”

My cynicism towards these posts probably sounds unnecessarily harsh. It goes against our empathetic grain. These are people who are just calling out for help. And one of the biggest issues with mental illness is the social stigma attached to it. Isn’t having the courage to reach out for help through any channel available — even social media — a good thing?

I do believe asking for help is undeniably a good thing. I wish I were better able to do that myself. It’s Facebook I have the problem with. Actually, I have a few problems with it.

It’s Complicated

Problem #1: Even if a post is a genuine request for help, the poster may not get the type of response he or she needs.

Mental illness, personal grief and major bumps on our life’s journey are all complicated problems — and social media is a horrible place to deal with complicated problems. It’s far too shallow to contain the breadth and depth of personal adversity.

Many read a gut-wrenching, soul-scorching post (genuine or not), then leave a heart or a sad face, and move on. Within the paper-thin social protocols of Facebook, this is an acceptable response. And it’s acceptable because we have no skin in the game. That brings us to problem #2.

Empathy is Wired to Work Face-to-Face

Our humanness works best in proximity. It’s the way we’re wired.

Let’s assume someone truly needs help. If you’re physically with them and you care about them, things are going to get real very quickly. It will be a connection that happens at all possible levels and through all senses.

This will require, at a minimum, hand-holding and, more likely, hugs, tears and a staggering personal commitment to help this person. It is not something taken or given lightly. It can be life-changing on both sides.

You can’t do it at arm’s length. And you sure as hell can’t do it through a Facebook reply.

The Post That Cried Wolf

But the biggest issue I have is that social media takes a genuinely admirable instinct, the simple act of helping someone, and turns it into just another example of fake news.

Not every plea for help on Facebook is exaggerated just for the sake of gaining attention, but some of them are.

Again, Facebook tends to take the less admirable parts of our character and amplify them throughout our network. So, if you tend to be narcissistic, you’re more apt to sadfish. If you have someone you know who continually reaches out through Facebook with uncomfortably personal posts of their struggles, it may be a sign of a deeper personality disorder, as noted in this post on The Conversation.

This phenomenon can create a kind of social numbness that could mask genuine requests for help. For the one sadfishing, it becomes another game that relies on generating the maximum number of social responses. Those of us on the other side quickly learn how to play the game. We minimize our personal commitment and shield ourselves against false drama.

The really sad thing about all of this is that social media has managed to turn legitimate cries for help into just more noise we have to filter through.

But What If It’s Real?

Sadfishing aside, for some people Facebook might be all they have in the way of a social lifeline. And in this case, we mustn’t throw the baby out with the bathwater. If someone you know and care about has posted what you suspect is a genuine plea for help, respond as humans should: Reach out in the most personal way possible. Elevate the conversation beyond the bounds of social media by picking up the phone or visiting them in person. Create a person-to-person connection and be there for them.