Who wouldn’t want to be in Venice? Gondolas drift by with Italian gondoliers singing “O Sole Mio.” You sit at a café savoring your espresso as you watch Latin lovers stroll by hand in hand on their way to the Bridge of Sighs. The Piazza San Marco is bathed in a golden glow as the sun sets behind the Basilica di San Marco. The picture? Perfect.
Again, who wouldn’t want to live in Venice?
The answer, according to the latest population stats, is almost everyone. The population of Venice is one third what it was in 1970.
The sharp-eyed among you may have noticed that I changed the sentence slightly in the second version. I replaced “be” with “live.” And that’s the difference. Venice is the literal embodiment of “nice place to visit, but I wouldn’t want to live there.”
A lot of people do visit, well over 5 million a year. But almost nobody lives there. The permanent population of Venice has shrunk to below 60,000.
Venice has become tourified. It’s a false front of a city, one built for those who are going to be there for 48 to 72 hours. In the process, everything needed to make it sustainable for those who want to call it home has been stripped out. It has become addicted to tourist dollars — and that addiction is killing it.
We should learn from Venice’s example. Sometimes, in trying to make a fantasy real, you take away the very things needed to let it survive.
Perfection doesn’t exist in nature. Imperfections are required for robustness. Yet, we are increasingly looking for a picture of perfection we can escape to.
The unintended consequences of this are troubling to think about.
We spent a good part of the last century devising new ways to escape. What was once an activity that lived well apart from our real lives has become increasingly entwined with those lives.
As our collective affluence has grown, we spend more and more time chasing the fantastical. Social media has accelerated this chase. Our feeds are full of posts from those in pursuit of a fantasy.
We have shifted our focus from the place we live to the “nice place to visit.” This distorts our expectations of what reality should be. We expect the tourist-brochure version of Venice without realizing that in constructing exactly that, we set in motion a chain of events resulting in a city that’s unlivable.
The rise of populist politics is the broken-mirror image of this. Many of us have mythologized the America we want — or Britain, or any of the other countries that have gone down the populist path. And myths are, by definition, unsustainable in the real world. They are vastly oversimplified pictures that allow us to create a story we long for. It’s the same as the picture I painted of Venice in the first paragraph: a fantasy that can’t survive reality.
In our tendency to “tourify” everything, there are at least two unintended consequences: one for ourselves and one for our world.
For us, the need to escape continually draws our energies and attentions from what we need to do to save the world we actually live in, toward the mythologization of the world we think we want to live in. We ignore the inconvenient truths of reality as we pursue our imagined perfection.
But it’s the second outcome that’s probably more troubling. Even if we were successful in building the world we think we want, we could well find that we’ve built a bigger version of Venice, a place sinking under the weight of its own fantasy.
Sometimes, you have to be careful what you wish for.
Remember 2010? For me that was a pretty important year. It was the year I sold my digital marketing business. While I would continue to actively work in the industry for another 3 years, for me things were never the same as they were in 2010. And – looking back – I realize that’s pretty well true for most of us. We were more innocent and more hopeful. We still believed that the Internet would be the solution, not the problem.
In 2010, two big trends were jointly reshaping our notions of being connected. Early in the year, former Morgan Stanley analyst Mary Meeker laid them out for us in her “State of the Internet” report. Back then, just three years after the introduction of the iPhone, internet usage from mobile devices hadn’t even reached double digits as a percentage of overall traffic. Meeker knew this was going to change, and quickly. She saw mobile adoption on track to be the steepest tech adoption curve in history. She was right. Today, over 60% of internet usage comes from a mobile device.
The other defining trend was social media. Even then, Facebook had about 600 million users, or just under 10% of the world’s population. When you had a platform that big – connecting that many people – you just knew the consequences would be significant. There were some pretty rosy predictions for the impact of social media.
Of course, it’s the stuff you can’t predict that will bite you. Like I said, we were a little naïve.
One trend that Meeker didn’t predict was the nasty issue of data ownership. We were just starting to become aware of the looming spectre of lost privacy.
The biggest Internet-related story of 2010 was WikiLeaks. In February, Julian Assange’s site started releasing some 260,000 sensitive diplomatic cables sent to it by Chelsea Manning, a US soldier stationed in Iraq. According to the governments of the world, this was an illegal release of classified material, tantamount to an act of espionage. According to public opinion, this was shit finally rolling uphill. We revelled in the revelations. WikiLeaks and Julian Assange were taking it to the man.
That budding sense of optimism continued throughout the year. By December of 2010, the Arab Spring had begun. This was our virtual vindication – the awesome power of social media was a blinding light shining into the darkest nooks and crannies of despotism and tyranny. The digital future was clear and bright. We would triumph thanks to technology. The Internet had helped put Obama in the White House. It had toppled corrupt regimes.
A decade later, we’re shell shocked to discover that the Internet is the source of a whole new kind of corruption.
The rigidly digitized ideals of Zuckerberg, Page, Brin et al seemed to be a call to arms: transparency, the elimination of bureaucracy, a free and open friction-free digital market, the sharing economy, a vast social network that would connect humanity in ways never imagined, connected devices in our pockets – in 2010 all things seemed possible. And we were naïve enough to believe that those things would all be good and moral and in our best interests.
But soon, we were smelling the stench that came from Silicon Valley. Those ideals were subverted into an outright attack on our privacy. Democratic elections were sold to the highest bidder. Ideals evaporated under the pressure of profit margins and expanding power. Those impossibly bright, impossibly young billionaire CEOs of ten years ago are now testifying in front of Congress. The corporate culture of many tech companies reeks like a frat house on Sunday morning.
Is there a lesson to be learned? I hope so. I think it’s this. Technology won’t do the heavy lifting for us. It is a tool that is subject to our own frailty. It amplifies what it is to be human. It won’t eliminate greed or corruption unless we continually steer it in that direction.
And I use the term “we” deliberately. We have to hold tech companies to a higher standard. We have to be more discerning of what we agree to. We have to start demanding better treatment and not be willing to trade our rights away with the click of an accept button.
A lot of what could have been slipped through our fingers in the last 10 years. It shouldn’t have happened. Not on our watch.
The older I get, the more I enjoy talking to people who have accumulated decades of life experience. I consider it the original social media: the sharing of personal oral histories.
People my age often become interested in their family histories. When you talk to these people, they always say the same thing: “I wish I had taken more time to talk to my grandparents when they were still alive.” No one has ever wished they had spent less time with Grandma and Grandpa.
In the hubris of youth, there seems to be a common opinion that there couldn’t be anything of interest in a past that stretches back further than the day before yesterday. When we’re young, we seldom look back. We live in the moment and are obsessed with the future.
This is probably as it should be. Most of our lives lie in front of us. But as we pass the middle mark of our own journey, we start to become more reflective. And as we do so, we realize that we’ve missed the opportunity to hear most of our own personal family histories from the people who lived it. Let’s call it ROMO: The Regret of Missing Out.
Let me give you one example. In our family, with Remembrance Day (the Canadian version of Veterans Day) fast approaching, one of my cousins asked if we knew of any family members who had served in World War I. I vaguely remembered that my great-grandfather may have served, so I did some digging and eventually found all his service records.
I discovered that he enlisted to go overseas when he was almost 45 years old, leaving behind a wife and five children. He served as a private in the trenches in the Battle of the Somme and Vimy Ridge. He was gassed. He had at least four bouts of trench fever, which is transmitted by body lice.
As a result, he developed a debilitating soreness in his limbs and back that made it impossible for him to continue active duty. Two and a half years after he enlisted, this almost 50-year-old man was able to sail home to his wife and family.
I was able to piece this together from the various records and medical reports. But I would have given anything to be able to hear these stories from him.
Unfortunately, I never knew him. My mom was just a few years old when he died, a somewhat premature death that was probably precipitated by his wartime experience.
This was a story that fell through the cracks between the generations. And now it’s too late. It will remain mostly hidden, revealed only by the sparse information we can glean from a handful of digitized records.
It’s not easy to get most older people talking. They’re not used to people caring about their past or their stories. You have to start gently and tease it out of them.
But if you persist and show an eagerness to listen, eventually the barriers come down and the past comes tumbling out, narrated by the person who lived it. Trust me when I say there is nothing more worthwhile that you can do.
We tend to ignore old people because we just have too much going on in our own lives. But it kills me just a little bit inside when I see grandparents and grandchildren in the same room, the young staring at a screen and the old staring off into space because no one is talking to them.
The screen will always be there. But Grandma isn’t getting any younger. She has lived her life. And I guarantee that in the breadth and depth of that life, there are some amazing stories you should take some time to listen to.
Note: I originally wrote this before results were available. Today, we know Trudeau’s Liberals won a minority government, but the Conservatives actually won the popular vote: 34.4% vs 33.06% for the Liberals. It was a very close election.
As I write this, Canadians are going to the polls in our national election. When you read this, the outcome will have been decided. I won’t predict — because this one is going to be too close to call.
For a nation that is often satirized for our tendencies to be nice and polite, this has been a very nasty campaign. So nasty, in fact, that in focusing on scandals and personal attacks, it forgot to mention the issues.
Most of us are going to the polls today without an inkling of who stands for what. We’re basically voting for the candidate we hate the least. In other words, we’re using the same decision strategy we used to pick the last guest at our grade 6 birthday party.
The devolution of democracy has now hit the Great White North, thanks to Facebook and Mark Zuckerberg.
While the amount of viral vitriol I have seen here is still a pale shadow of what I saw from south of the 49th in 2016, it’s still jarring to witness. Canucks have been “Zucked.” We’re so busy slinging mud that we’ve forgotten to care about the things that are essential to our well-being as a nation.
It should come as news to no one that Facebook has been wantonly trampling the tenets of democracy. Elizabeth Warren recently ran a fake ad on Facebook just to show she could. Then Mark Zuckerberg defended Facebook last week when he said: “While I certainly worry about an erosion of truth, I worry about living in a world where you can only post things that tech companies decide to be 100 per cent true.”
Zuckerberg believes the onus lies with the Facebook user to be able to judge what is false and what is not. This is a suspiciously convenient defense of Facebook’s revenue model wrapped up as a defense of freedom of speech. At best, it’s naïve and hypocritical: what we see is determined by Facebook’s algorithm. At worst, it’s misleading and malicious.
Hitting hot buttons tied to emotions is nothing new in politics. Campaign managers have been drawing out and sharpening the long knives for decades now. TV ads added a particularly effective weapon to the political arsenal. In the 1964 presidential campaign, it even went nuclear with Lyndon Johnson’s famous “Daisy” ad.
But this is different. For many reasons.
First of all, there is the question of trust in the channel. We have been raised in a world where media channels have historically taken some responsibility for distinguishing between what they say is factual (i.e., the news) and what is paid persuasion (i.e., the ads).
In his statement, Zuckerberg is essentially telling us that providing some baseline of trust in political advertising is not Facebook’s job and not its problem. We should know better.
But we don’t. It’s remarkably condescending – and convenient – for Zuckerberg to appear to be telling us “You should be smarter than this” when he knows that this messaging has little to do with our intellectual horsepower.
This is messaging that is painstakingly designed to be mentally processed before the rational part of our brain even kicks in.
In a recent survey, three out of four Canadians said they had trouble telling which social media accounts were fake. And 40% of Canadians said they had found links to stories on current affairs that were obviously false. Those were only the links they knew were fake. I assume that many more snuck through their factual filters. By the way, people of my generation are the worst at sniffing out fake news.
We’ve all seen fake news, but only one third of Canadians 55 and over realize they have. We can’t all be stupid.
Because social media runs on open platforms, with very few checks and balances, it’s wide open for abuse. Fake accounts, bots, hacks and other digital detritus litter the online landscape. There has been little effective policing of this. The issue is that cracking down on this directly impacts the bottom line. As Upton Sinclair said: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Even given these two gaping vulnerabilities, the biggest shift when we think of social media as an ad platform is that it is built on the complexity of a network. The things that come with this — things like virality, filter bubbles, threshold effects — have no corresponding rule book to play by. It’s like playing poker with a deck full of wild cards.
Now — let’s talk about targeting.
When you take all of the above and then factor in the data-driven targeting that is now possible, you light the fuse on the bomb nestled beneath our democratic platforms. You can now segment out the most vulnerable, gullible, volatile sectors of the electorate. You can feed them misinformation and prod them to action. You can then sit back and watch as the network effects play themselves out. Fan — meet shit. Shit — meet fan.
It is this that Facebook has wrought, and then Mark Zuckerberg feeds us some holier-than-thou line about freedom of speech.
Mark, I worry about living in a world where false — and malicious — information can be widely disseminated because a tech company makes a profit from it.
I’ve been hesitating to write this column. But increasingly, everything I write and think about seems to come back to the same point – the ideological divide between liberals and conservatives. That divide is tearing the world apart. And technology seems to be accelerating the forces causing the rift, rather than reversing them.
First, a warning: I am a liberal. That probably doesn’t come as a surprise to anyone who has read any of my columns, but I did want to put it out there. And the reason I feel that warning is required is that with this column, I’m diving into dangerous waters – I’m going to be talking about the differences between liberal and conservative brains, particularly those brains that are working in the media space.
Last week, I talked about the evolution of media bias through two – and, it seems increasingly likely, soon to be three – impeachment proceedings. Mainstream media has historically had a left bias. In a longitudinal study of journalism, two professors at Indiana University – Lars Willnat and David Weaver – found that in 2012, just 7% of American journalists identified themselves as Republican, while 28% said they were Democrats. Over 50% said they were Independent, but I suspect this is more a statement on the professed objectivity of journalists than their actual political leanings. I would be willing to bet that those independents sway left far more often than they sway right.
So, it’s entirely fair to say that mainstream media does have a liberal bias. The question is – why? Is it a premeditated conspiracy or just a coincidental correlation? I believe the bias is actually self-selected. Those who choose to go into journalism have brains that work in a particular way – a way that is most often found in those who fall on the liberal end of the spectrum.
I first started putting this hypothesis together when I read the following passage in Robert Sapolsky’s book “Behave: The Biology of Humans at Our Best and Worst.” Sapolsky was talking about a growing number of studies looking at the cognitive differences between liberals and conservatives: “This literature has two broad themes. One is that rightists are relatively uncomfortable intellectually with ambiguity… The other is that leftists, well, think harder, have a greater capacity for what the political scientist Philip Tetlock of the University of Pennsylvania calls ‘integrative complexity’.”
Sapolsky goes on to differentiate these intellectual approaches: “conservatives start gut and stay gut; liberals go from gut to head.”
Going from “gut to head” is a pretty good quality for a journalist. In fact, you could say it’s their job description.
Sapolsky cites a number of studies he bases this conclusion on. In the abstract of one of these studies, the researchers note: “Liberals are more likely to process information systematically, recognize differences in argument quality, and to be persuaded explicitly by scientific evidence, whereas conservatives are more likely to process information heuristically, attend to message-irrelevant cues such as source similarity, and to be persuaded implicitly through evaluative conditioning. Conservatives are also more likely than liberals to rely on stereotypical cues and assume consensus with like-minded others.”
This is about as good a description of the differences between mainstream media and the alt-right media as I’ve seen. The researchers further note that, “Liberals score higher than conservatives on need for cognition and open-mindedness, whereas conservatives score higher than liberals on intuitive thinking and self-deception.”
That explains so much of the current situation we’re finding ourselves in. Liberals tend to be investigative journalists. Conservatives tend to be opinion columnists and pundits. One is using their head. The other is using their gut.
Of course, it’s not just the conservative media that rely on gut instinct. The Commander in Chief uses the same approach. In a 2016 article in the Washington Post, Marc Fisher probed Trump’s disdain for reading: “He said in a series of interviews that he does not need to read extensively because he reaches the right decisions ‘with very little knowledge other than the knowledge I [already] had, plus the words “common sense,” because I have a lot of common sense and I have a lot of business ability.’”
I have nothing against intuition. The same Post article goes on to give examples of other presidents who relied on gut instinct (Fisher notes, however, that even when these are factored in, Trump is still an outlier). But when the stakes are as high as they are now, I prefer intuition combined with some research and objective evaluation.
We believe in the concept of equality and fairness, as we should. For that reason, I hesitate to put yet another wall between conservatives and liberals. But – in seeking answers to complex questions – I think we have to be open and honest about the things that make us different. There is a reason some of us are liberals and some of us are conservatives – our brains work differently. And when those differences extend to our processing of our respective realities and the sources we turn to for information, we should be aware of them. We should take them into account in evaluating our media choices. We should go forward with open minds.
Unfortunately, I suspect I’m preaching to the choir. If you got this far in my column, you’re probably a liberal too.
Note: A shorter version of this ran in MediaPost where it was edited for length. This is the full version as I originally wrote it. GH
In my lifetime, articles of impeachment have been prepared to go to the House of Representatives twice: once for Richard Nixon in 1974 and once for Bill Clinton in 1998. As this week begins, it looks like we’re heading for number three with Donald Trump. I thought it might be interesting to look at these impeachment proceedings in the context of the media landscape. As I started my research, I realized this actually shows the dramatic shifts both in our media and in the culture of our ideologies. It’s worth taking a few minutes to examine them.
First, a little historical housekeeping. Two presidents have been impeached: Andrew Johnson in 1868 and Bill Clinton in 1998. Nixon resigned before the articles got to the House for voting. Because I’m looking at impeachment in the context of media, we won’t spend too much time on Johnson, but we’ll still look at his case for what it tells us about the history of a republic divided.
Impeachment tends to crop up when there is a deep ideological divide in the country. The rifts naturally extend to Washington and its political climate. What is fascinating is to see how these divides have been reflected in the media landscape as it has evolved.
Andrew Johnson, 1868
First of all, to provide a somewhat objective baseline, let’s begin with a quick assessment of each impeachment case using two criteria from David Greenberg, a professor of history at Rutgers, author and contributing editor at Politico Magazine. First, was impeachment and conviction justified? And second, was impeachment and conviction possible? Remember, no presidential impeachment case has ever won the vote in both houses, leading to the removal of a president.
With Andrew Johnson, Greenberg’s answer to those two questions was, “Justified? Probably not.” and “Possible? Most definitely.” Johnson survived the Senate trial by a single vote.
The impeachment of Johnson was a direct result of differing opinions over Reconstruction after the Civil War. A Democrat, Johnson ran headlong into resistance from a Republican-controlled Congress. Although the odds were stacked against him, 10 Republicans broke party ranks and voted against conviction in the Senate, leaving the tally one vote short of the required two-thirds majority.
Richard Nixon, 1974
According to David Greenberg, Nixon’s impeachment was both justified and possible. Tricky Dick was heading for almost certain impeachment when he resigned on August 9, 1974.
The U.S. in 1974 was deeply divided ideologically, but this rift did not extend to the media. The US media landscape was relatively monolithic in the ’70s, dominated by national newspapers and the three big television networks. Media coverage of the Watergate scandal followed the lead of one of those national papers – the Washington Post – and the now mythic reporting of Carl Bernstein and Bob Woodward. With a few exceptions, this media bloc definitely leaned left in its political views.
It’s also important to note the timeline of the Watergate revelations. Impeachment proceedings didn’t even begin until the Senate investigation was over a year old. By that time, there was substantial evidence pointing to both the initial crimes and the subsequent cover-ups. The case was so damning – culminating in the release of the famous “smoking gun” tape – that even Republican support for Nixon quickly evaporated. We also have to remember that left-leaning media outlets had all the time in the world to erode public support for the president. This is not to condemn the journalism. It’s just acknowledging the media realities of the time.
The “Watergate Effect” would make its mark on national journalism for the next two decades. Suddenly, there was a flood of bright, idealistic (and, yes, primarily left-leaning) young people choosing journalism as a career. America’s right became increasingly frustrated with a media complex they saw as dangerously biased to the left. One of the most vocal critics was Nixon’s own executive producer, a twenty-something named Roger Ailes.
Bill Clinton, 1998
This brings us to the Clinton impeachment case, triggered by an extramarital affair with intern Monica Lewinsky. According to David Greenberg’s assessment, this impeachment was neither justified nor possible.
What is interesting about the Clinton case is how it marks the emergence of a right-wing media voice. The impeachment itself was largely a vendetta against the Clintons, driven by Pentagon employee Linda Tripp and prosecutor Kenneth Starr. Tripp secretly recorded her conversations with Lewinsky, in which Lewinsky acknowledged the affair with Clinton. Lewinsky had met Tripp after being reassigned to the Pentagon by White House aides hoping to avoid a scandal. Tripp then took the recordings to Newsweek, hoping they would go public immediately. Given the implications of the story, Newsweek elected to sit on it to allow for further verification. A frustrated Tripp had a book agent walk the tapes over to the Drudge Report, a fledgling right-wing story aggregator with a subscriber email list. It published immediately, and a flustered mainstream media followed suit.
Almost a year after the affair became public, impeachment proceedings began. By this time, something called “Clinton fatigue” had set in. Although the public was initially titillated by the salacious details, as the story dragged on, we were all struck with a collective and distasteful ennui. One got the sense that mainstream media were hoping the whole thing would eventually just go away. Eventually it did, after Clinton was acquitted in the Senate, with all the Democrats and a handful of Republicans voting for acquittal.
What Clinton’s impeachment did do was give a voice to the right-wing media, which found a home in the explosion of cable channels and the very first online news sites. That same Roger Ailes had been handed the helm of Fox News by Rupert Murdoch in 1996. Conservatives were able to outflank the established media machine by laying claim to the emerging media platforms. This was media with a difference. Although the left-wing bias of mainstream media was generally acknowledged by most, it was largely an unspoken truth. Most journalists professed to be resolutely neutral and unbiased. The right-wing media was not so subtle. Its role was to counteract what it felt was a leftist spin machine.
The Clinton impeachment also drove another wedge into the right-left split that has widened ever since. The staffers on Kenneth Starr’s prosecution team included current Supreme Court Justice Brett Kavanaugh and recently resigned Deputy Attorney General Rod Rosenstein. Meanwhile, an ex-investment banker by the name of Steve Bannon was thinking he might give entertainment and media a shot.
Donald Trump, 2019
So, what do we have to look forward to? As we seem to be barreling towards impeachment, how will the story play out in today’s media and political landscape?
Professor David Greenberg is quick to point out that these are uncharted waters. It makes little sense to look for historical precedent, because this impeachment will be unlike anything we have seen before. For what it’s worth, he says the impeachment of Donald Trump is justified, but is highly unlikely to be successful, given that the Senate is controlled by a seemingly uncrackable Republican majority.
But here are the wild cards that we in the media should be watching:
1) The speed at which this is playing out is like nothing we’ve ever seen before. We are only one week into this.
2) We have never had a President – or a White House – like this.
3) We have never had a media landscape like this. There is a very vocal right-wing media machine that has proven to be every bit as effective as the mainstream media.
4) The way we consume – and interact with – media is light years removed from two decades ago. This shift has been so massive that we are still grappling with understanding it.
5) The general public has never been networked the way we are now. We have seen the fallout from network effects in both the 2016 US election and the UK’s Brexit vote. What part will the network play in an impeachment?
When the internet ushered in an explosion of information in the mid-to-late ’90s, there were many — I among them — who believed humans would get smarter. What we didn’t realize then is that the opposite would eventually prove to be true.
The internet lures us into thinking with half a brain. Actually, with less than half a brain – and the half we’re using is the least thoughtful, most savage half. The culprit is the speed of connection and reaction. We are now living in a pinball culture, where the speed of play determines that we have to react by instinct. There is no time left for thoughtfulness.
Daniel Kahneman’s monumental book, “Thinking, Fast and Slow,” lays out the two loops we use for mental processing. There’s the fast loop (Kahneman’s System 1), our instinctive response to situations, and there’s the slow loop (System 2), our thoughtful processing of reality.
Humans need both loops. This is especially true in the complexity of today’s world. The more complex our reality, the more we need the time to absorb and think about it.
If we could only think fast, we’d all believe in capital punishment, extreme retribution and eye-for-an-eye retaliation. We would be disgusted and pissed off almost all the time. We would live in the Hobbesian state of nature (from English philosopher Thomas Hobbes): the “natural condition of mankind” is what would exist if there were no government, no civilization, no laws, and no common power to restrain human nature. The state of nature is a “war of all against all,” in which human beings constantly seek to destroy each other in an incessant pursuit of power. Life in the state of nature is “nasty, brutish and short.”
That is not the world I want to live in. I want a world of compassion, empathy and respect. But the better angels of our nature rely on thoughtfulness. They take time to come to their conclusions.
With its dense interconnectedness, the internet has created a culture of immediate reaction. We react without all the facts. We are disgusted and pissed off all the time. This is the era of “cancel” and “callout” culture. The court of public opinion is now less like an actual court and more like a school of sharks in a feeding frenzy.
We seem to think this is OK because for every post we see that makes us rage inside, we also see posts that make us gush and goo. Every hateful tweet we see is leavened with a link to a video that tugs at our heartstrings. We are quick to point out that, yes, there is the bad — but there is an equal amount of good. Either can go viral. Social media simply holds up a mirror that reflects the best and worst of us.
But that’s not really true. All these posts have one thing in common: They are digested too quickly to allow for thoughtfulness. Good or bad, happy or mad — we simply react and scroll down. FOMO continues to drive us forward to the next piece of emotionally charged clickbait.
There’s a reason why social media is so addictive: All the content is aimed directly at our “Thinking Fast” hot buttons. And evolution has reinforced those hot buttons with generous discharges of neurochemicals that act as emotional catalysts. Our brain online is a junkie jonesing for a fix of dopamine or noradrenaline or serotonin. We get our hit and move on.
Technology is hijacking our need to pause and reflect. Marshall McLuhan was right: the medium is the message and, in this case, the medium is one that is hardwired directly to the inner demons of our humanity.
It took humans over five thousand years to become civilized. Ironically, one of our greatest achievements is dismantling that civilization faster than we think. Literally.
Earlier this year, Democratic Presidential Candidate Elizabeth Warren posted an online missive in which she laid out her plans to break up big tech (notably Amazon, Google and Facebook). In it, she noted:
“Today’s big tech companies have too much power — too much power over our economy, our society, and our democracy. They’ve bulldozed competition, used our private information for profit, and tilted the playing field against everyone else. And in the process, they have hurt small businesses and stifled innovation.”
We, here in the West, are big believers in Adam Smith’s Invisible Hand. We inherently believe that markets will self-regulate and eventually balance themselves. We are loath to involve government in the running of a free market.
In introducing the concept of the Invisible Hand, Smith speculated:
“[The rich] consume little more than the poor, and in spite of their natural selfishness and rapacity…they divide with the poor the produce of all their improvements. They are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society, and afford means to the multiplication of the species.”
In short, a rising tide raises all boats. But there is a dicey little dilemma buried in the midst of the Invisible Hand premise – summed up most succinctly by the fictional Gordon Gekko in the 1987 movie Wall Street: “Greed is good.”
More eloquently, economist and Nobel laureate Milton Friedman explained it like this:
“The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.”
But here’s the thing. Up until very recently, the concept of the Invisible Hand dealt only with physical goods. It was all about maximizing tangible resources and distributing them to the greatest number of people in the most efficient way possible.
The difference now is that we’re not just talking about toasters or running shoes. Physical things are not the stock in trade of Facebook or Google. They deal in information, feelings, emotions, beliefs and desires. We are not talking about hardware any longer; we are talking about the very operating system of our society. The thing that guides the Invisible Hand is no longer consumption, it’s influence. And, in that case, we have to wonder whether we’re willing to trust our future to the conscience of a corporation.
For this reason, I suspect Warren might be right. All the past arguments for keeping government out of business were based on a physical market. When we shift that to a market that peddles influence, those arguments are flipped on their head. Milton Friedman himself said, “It (the corporation) only cares whether they can produce something you want to buy.” Let’s shift that to today’s world and apply it to a corporation like Facebook: “It only cares whether they can produce something that captures your attention.” To expect anything else from a corporation that peddles persuasion is to expect too much.
The problem with Warren’s argument is that she is still using the language of a market that dealt with consumable products. She wants to break up a monopoly that is limiting competition. And she is targeting that message to an audience that generally believes that big government and free markets don’t mix.
The much, much bigger issue here is that even if you believe in the efficacy of the Invisible Hand, as described by all believers from Smith to Friedman, you also have to believe that the single purpose of a corporation that relies on selling persuasion will be to influence even more people more effectively. None of the most fervent evangelists of the Invisible Hand ever argued that corporations have a conscience. They simply stated that the interests of a profit-driven company and an audience intent on consumption were typically aligned.
We’re now playing a different game with significantly different rules.
Relevance is the new gold standard in marketing. In an article in the Harvard Business Review written last year, John Zealley, Robert Wollan and Joshua Bellin — three senior execs at Accenture — outline five stages of marketing (paraphrased courtesy of a post from Phillip Nones):
Mass marketing (up through the 1970s) – The era of mass production, scale and distribution.
Marketing segmentation (1980s) – More sophisticated research enabling marketers to target customers in niche segments.
Customer-level marketing (1990s and 2000s) – Advances in enterprise IT make it possible to target individuals and aim to maximize customer lifetime value.
Loyalty marketing (2010s) – The era of CRM, tailored incentives and advanced customer retention.
Relevance marketing (emerging) – Mass communication to the previously unattainable “Segment of One.”
This last stage – according to marketers past and present – should be the golden era of marketing:
“The perfect advertisement is one of which the reader can say, ‘This is for me, and me alone.’”
— Peter Drucker
“Audiences crave tailored messages that cater to them specifically and they are willing to offer information that enables marketers to do so.”
— Kevin Tash, CEO of Tack Media, a digital marketing agency in Los Angeles.
Umm…no! In fact, hell, no!
I agree that relevance is an important thing. And in an ethical world, the exchange Tash talks about would be a good thing, for both consumers and marketers. But we don’t live in such a world. The world we live in has companies like Facebook and Cambridge Analytica.
Stop Thinking Like a Marketer!
There is a cognitive whiplash that happens when our perspective changes from that of a marketer to that of a consumer. I’ve seen it many times. I’ve even prompted it on occasion. But to watch it in 113 minutes of excruciating detail, you should catch “The Great Hack” on Netflix.
The documentary is a journalistic peeling of the onion that is the Cambridge Analytica scandal. It was kicked off by the whistleblowing of Christopher Wylie, a contract programmer who enjoyed his 15 minutes of fame. But to me, the far more interesting story is that of Brittany Kaiser, the director of business development at SCL Group, the parent company of Cambridge Analytica. The documentary digs into the tortured shift of perspective as she transitions from thinking like a marketer to thinking like a citizen who has just had her private data violated. It makes for compelling viewing.
Kaiser shifted her ideological compass about as far as one possibly could, from her beginnings as an idealistic intern for Barack Obama and a lobbyist for Amnesty International to one of the chief architects of the campaigns supporting Trump’s presidential run, Brexit and other far-right persuasion blitzkriegs. At one point, she justifies her shift to the right by revealing her family’s financial struggles and the fact that you don’t get paid much as an underling for Democrats or as a moral lobbyist. The big bucks are found in the ethically grey areas. Throughout the documentary, she vacillates between the outrage of a private citizen and the rationalization of a marketer. She is a woman torn between two conflicting perspectives.
We marketers have to stop kidding ourselves and justifying misuse of personal data with statements like the one previously quoted from Kevin Tash. As people, we’re okay. I like most of the marketers I know. But as professional marketers, we have a pretty shitty track record. We trample privacy, we pry into places we shouldn’t and we gleefully high-five ourselves when we deliver the goods on a campaign — no matter who that campaign might be for and what its goals might be. We are very different people when we’re on the clock.
We are now faced with what may be the most important questions of our lives: How do we manage our personal data? Who owns it? Who stores it? Who has the right to use it? When we answer those questions, let’s do it as people, and not marketers. Because there is a lot more at stake here than the ROI rates on a marketing campaign.
I was at a family reunion this past week. While there, my family did what families do at reunions: We looked at family photos.
In our case, our photographic history started some 110 years or so ago, with my great-great-grandfather George and his wife Kezia. We have a stunning picture of the couple, with Kezia wearing an ostrich feather hat.
George and Kezia Ching – Redondo Beach
At the time of the photo, George was an ostrich feather dyer in Hollywood, California. Apparently, there was a need for dyed ostrich feathers in turn-of-the-century Hollywood. That need didn’t last for long. The bottom fell out of the ostrich feather market and George and Kezia turned their sights north of the 49th, high-tailing it for Canada.
We’re a lucky family. We have four generations of photographic evidence of my mother’s forebears. They were solidly middle class and could afford the luxury of having a photo taken, even around the turn of the century. There were plenty of preserved family images that fueled many conversations and sparked memories as we gathered the clan.
What was interesting to me is that some 110 years after this memorable portrait was taken, we also took many new photos so we could remember this reunion in the future. With all the technological change that has happened since George and Kezia posed in all their ostrich-feather-accessorized finery, the basic format of a two-dimensional visual representation was still our chosen medium for capturing the moment.
We talk about media a lot here at MediaPost — enough that it’s included in the headline of the post you’re reading. I think it’s worth a quick nod of appreciation to media that have endured for more than a century. Books and photos both fall into this category. Great-Great-Grandfather George might be a bit flustered if he were looking at a book on a Kindle or viewing the photo on an iPhone, but the format of the medium itself would not be that foreign to him. He would be able to figure it out.
What dictates longevity in media? I think we have an inherent love for media that are a good match for both our senses and our capacity to imagine. Books give us the cognitive room to imagine worlds that no CGI effect has yet been able to match. And a photograph is still the most convenient way to render permanent the fleeting images that chase across our visual cortex. This is all the more true when those images are comprised of the faces we love. Like books, photos also give our minds the room to fill in the blanks, remembering the stories that go with the static image.
Compare a photo to something like a video. We could easily have taken videos to capture the moment. All of us had a pretty good video camera in our pockets. But we didn’t. Why not?
Again, we have to look at intended purpose at the moment of future consumption. Videos are linear. They force their own narrative arc upon us. We have to allocate the time required to watch the video to its conclusion. But a photo is randomly accessed. Our senses consume it at their own pace and prerogative, free of the restraints of the medium itself. For things like communal memories at a family reunion, a photo is the right match. There are circumstances where a video would be a better fit. This wasn’t one of them.
Our Family – 2019
There is one thing about photos that will be different moving forward. They are now in the digital domain, which means they can be stored with no restraints on space. It also means that we can take advantage of appended metadata. For the sake of my descendants, I hope this makes the bond between the photo and the stories a little more durable than what we currently deal with. If we were lucky, we had a quick notation on the back of an old photo to clarify the whos, whens and wheres.
A few of my more archivally inclined cousins started talking about the future generations of our family. When they remember us, what media will they be using? Will they be looking at the many selfies and digital shots that were taken in 2019, trying to remember who that person between Cousin Dave and Aunt Lorna was? What platform will be used to store the photos? What will be the equivalent of the family album in 2119? How will they be archiving their own memories?
I suspect that if I were there, I wouldn’t be that surprised at the medium of choice.