The Saddest Part about Sadfishing

There’s a certain kind of post I’ve always felt uncomfortable with when I see it on Facebook. You know the ones I’m talking about — where someone volunteers excruciatingly personal information about their failing relationships, their job dissatisfaction, their struggles with personal demons. These posts make me squirm.

Part of that feeling is that, being of British descent, I deal with emotions the same way the main character’s parents are dealt with in the first 15 minutes of any Disney movie: Dispose of them quickly, so we can get on with the business at hand.

I also suspect this ultra-personal sharing is happening in the wrong forum. So today, I’m trying to put an empirical finger on my gut feelings of unease about this particular topic.

After a little research, I found there’s a name for this kind of sharing: sadfishing. According to Wikipedia, “Sadfishing is the act of making exaggerated claims about one’s emotional problems to generate sympathy. The name is a variation on ‘catfishing.’ Sadfishing is a common reaction for someone going through a hard time, or pretending to be going through a hard time.”

My cynicism towards these posts probably sounds unnecessarily harsh. It goes against our empathetic grain. These are people who are just calling out for help. And one of the biggest issues with mental illness is the social stigma attached to it. Isn’t having the courage to reach out for help through any channel available — even social media — a good thing?

I do believe asking for help is undeniably a good thing. I wish I myself were better able to do that. It’s Facebook I have the problem with. Actually, I have a few problems with it.

It’s Complicated

Problem #1: Even if a post is a genuine request for help, the poster may not get the type of response he or she needs.

Mental illness, personal grief and major bumps on our life’s journey are all complicated problems — and social media is a horrible place to deal with complicated problems. It’s far too shallow to contain the breadth and depth of personal adversity.

Many read a gut-wrenching, soul-scorching post (genuine or not), then leave a heart or a sad face, and move on. Within the paper-thin social protocols of Facebook, this is an acceptable response. And it’s acceptable because we have no skin in the game. That brings us to problem #2.

Empathy is Wired to Work Face-to-Face

Our humanness works best in proximity. It’s the way we’re wired.

Let’s assume someone truly needs help. If you’re physically with them and you care about them, things are going to get real very quickly. It will be a connection that happens at all possible levels and through all senses.

This will require, at a minimum, hand-holding and, more likely, hugs, tears and a staggering personal commitment to help this person. It is not something taken or given lightly. It can be life-changing on both sides.

You can’t do it at arm’s length. And you sure as hell can’t do it through a Facebook reply.

The Post That Cried Wolf

But the biggest issue I have is that social media takes a truly genuine and admirable instinct, the simple act of helping someone, and turns it into just another example of fake news.

Not every plea for help on Facebook is exaggerated just for the sake of gaining attention, but some of them are.

Again, Facebook tends to take the less admirable parts of our character and amplify them throughout our network. So, if you tend to be narcissistic, you’re more apt to sadfish. If someone you know continually reaches out through Facebook with uncomfortably personal posts about their struggles, it may be a sign of a deeper personality disorder, as noted in this post on The Conversation.

This phenomenon can create a kind of social numbness that could mask genuine requests for help. For the one sadfishing, it becomes another game that relies on generating the maximum number of social responses. Those of us on the other side quickly learn how to play the game. We minimize our personal commitment and shield ourselves against false drama.

The really sad thing about all of this is that social media has managed to turn legitimate cries for help into just more noise we have to filter through.

But What If It’s Real?

Sadfishing aside, for some people Facebook might be all they have in the way of a social lifeline. And in this case, we mustn’t throw the baby out with the bathwater. If someone you know and care about has posted what you suspect is a genuine plea for help, respond as humans should: Reach out in the most personal way possible. Elevate the conversation beyond the bounds of social media by picking up the phone or visiting them in person. Create a person-to-person connection and be there for them.

Why Good Tech Companies Keep Being Evil

You’d think we’d have learned by now. But somehow it still comes as a shock to us when tech companies are exposed as having no moral compass.

Slate recently released what it called the “Evil List” of 30 tech companies compiled through a ballot sent out to journalists, scholars, analysts, advocates and others. Slate asked them which companies were doing business in the way that troubled them most. Spoiler alert: Amazon, Facebook and Google topped the list. But they weren’t alone. Rounding out the top 10, the list of culprits included Twitter, Apple, Microsoft and Uber.

Which raises the question: Are tech companies inherently evil — like, say, a Monsanto or Philip Morris — or is there something about tech that positively correlates with “evilness”?

I suspect it’s the second of these. I don’t believe Silicon Valley is full of fundamentally evil geniuses, but doing business as usual at a successful tech firm means there will be a number of elemental aspects of the culture that take a company down the path to being evil.

Cultism, Loyalism and Self-Selection Bias

A successful tech company is a belief-driven meat grinder that sucks in raw, naïve talent on one end and spits out exhausted and disillusioned husks on the other. To survive in between, you’d better get with the program.

The HR dynamics of a tech startup have been called a meritocracy, where intellectual prowess is the only currency.

But that’s not quite right. Yes, you have to be smart, but it’s more important that you’re loyal. Despite their brilliance, heretics are weeded out and summarily turfed, optionless in more ways than one. A rigidly molded group-think mindset takes over the recruitment process, leading to an intellectually homogeneous monolith.

To be fair, high-growth startups need this type of mental cohesion. As blogger Paras Chopra said in a post entitled “Why startups need to be cult-like”: “The reason startups should aim to be like cults is because communication is impossible between people with different values.” You can’t go from zero to 100 without this sharing of values.

But necessary or not, this doesn’t change the fact that your average tech startup is a cult, with all the same ideological underpinnings. And the more cult-like a culture, the less likely it is to take the time for a little ethical navel-gazing.

A Different Definition of Problem Solving

When all you have is a hammer, everything looks like a nail. And for the engineer, the hammer that fixes everything is technology. But, as academic researchers Emanuel Moss and Jacob Metcalf discovered, this brand of technical solutionism can lead to a corporate environment where ethical problems are ignored because they are open-ended, intractable questions. In a previous column I referred to them as “wicked problems.”

As Moss and Metcalf found, “Organizational practices that facilitate technical success are often ported over to ethics challenges. This is manifested in the search for checklists, procedures, and evaluative metrics that could break down messy questions of ethics into digestible engineering work. This optimism is counterweighted by a concern that, even when posed as a technical question, ethics becomes ‘intractable, like it’s too big of a problem to tackle.’”

If you take this to the extreme, you get the Cambridge Analytica example, where programmer Christopher Wylie was so focused on the technical aspects of the platform he was building that he lost sight of the ethical monster he was unleashing.

A Question of Leadership

Of course, every cult needs a charismatic leader, and this is abundantly true for tech-based companies. Hubris is a commodity not in short supply among the C-level execs of tech.

It’s not that they’re assholes (well, ethical assholes anyway). It’s just that they’re, umm, highly focused and instantly dismissive of any viewpoint that’s not the same as their own. It’s the same issue I mentioned before about the pitfalls of expertise — but on steroids.

I suspect that if you did an ethical inventory of Mark Zuckerberg, Jeff Bezos, Larry Page, Sergey Brin, Travis Kalanick, Reid Hoffman and the rest, you’d find that — on the whole — they’re not horrible people. It’s just that they have a very specific definition of ethics as it pertains to their company. Anything that falls outside those narrowly defined boundaries is either dismissed or “handled” so it doesn’t get in the way of the corporate mission.

Speaking of corporate missions, leaders and their acolytes are often unaware — sometimes willfully so — of the nuances of unintended consequences. Most tech companies develop platforms that allow disruptive new market-based ecosystems to evolve on their technological foundations. Disruption always unleashes unintended social consequences. When these inevitably happen, tech companies generally handle them in one of three ways:

  1. Ignore them, and if that fails…
  2. Deny responsibility, and if that fails…
  3. Briefly apologize, do nothing, and then return to Step 1.

There is a weird type of idol worship in tech. The person atop the org chart is more than an executive. They are corporate gods — and those who dare to be disagreeable are quickly weeded out as heretics. This helps explain why Facebook can be pilloried for attacks on personal privacy and questionable design ethics, yet Mark Zuckerberg still snags a 92% CEO approval rating on Glassdoor.com.

These fundamental characteristics help explain why tech companies seem to consistently stumble over to the dark side. But there’s an elephant in the room we haven’t talked about. Almost without exception, tech business models encourage evil behavior. Let’s hold that thought for a future discussion.

A Troubling Prognostication

It’s that time of year again. My inbox is jammed with pitches from PR flacks trying to get some editorial love for their clients. In all my years of writing, I think I have actually taken the bait maybe once or twice. That is an extremely low success rate. So much for targeting.

In early January, many of the pitches offer either reviews of 2019 or predictions for 2020. I was just about to hit the delete button on one such pitch when something jumped out at me: “The number-one marketing trend for 2020 will be CDPs: customer data platforms.”

I wasn’t surprised by that. It makes sense. I know there’s a truckload of personal data being collected from everyone and their dog. Marketers love platforms. Why wouldn’t these two things come together?

But then I thought more about it — and immediately had an anxiety attack. This is not a good thing. In fact, this is a catastrophically terrible thing. It’s right up there with climate change and populist politics as the biggest world threats that keep me up at night.

To close out 2019, fellow Insider Maarten Albarda gave you a great guide on where not to spend your money. In that column, he said this: “Remember when connected TVs, Google Glass and the Amazon Fire Phone were going to provide break-through platforms that would force mass marketing out of the box, and into the promised land of end-to-end, personalized one-on-one marketing?”

Ah, marketing nirvana: the Promised Land! The Holy Grail of personalized marketing. A perfect, friction-free direct connection between the marketer and the consumer.

Maarten went on to say that social media is one of the channels you shouldn’t be throwing money into, saying, “It’s also true that we have yet to see a compelling case where social media played a significant role in the establishment or continued success of a brand or service.”

I’m not sure I agree with this, though I admit I don’t have the empirical data to back up my opinion. But I do have another, darker reason why we should shut off the taps providing the flow of revenue to the usual social suspects. Social media based on an advertising revenue model is a cancerous growth — and we have to shut off its blood flow.

Personalized one-to-one marketing — that Promised Land — cannot exist without a consistent and premeditated attack on our privacy. It comes at a price we should not be prepared to pay.

It depends on us trusting profit-driven corporations that have proven again and again that they shouldn’t be trusted. It is fueled by our darkest and least admirable motives.

The ecosystem that is required to enable one-to-one marketing is a cesspool of abuse and greed. In a pristine world of marketing with players who sport shiny ideals and rock-solid ethics, maybe it would be okay. Maybe. Personally, I wouldn’t take that bet. But in the world we actually live and work in, it’s a sure recipe for disaster.

To see just how subversive data-driven marketing can get, read “Mindf*ck” by Christopher Wylie. If that name sounds vaguely familiar to you, let me jog your memory. Wylie is the whistleblower who first exposed the Cambridge Analytica scandal. An openly gay, liberal, pink-haired Canadian, he seems an unlikely candidate to be the architect of the data-driven “Mindf*ck” machine that drove Trump into office and the Brexit vote over the 50% threshold.

Wylie admits to being blinded by the tantalizing possibilities of what he was working on at Cambridge Analytica: “Every day, I overlooked, ignored, or explained away warning signs. With so much intellectual freedom, and with scholars from the world’s leading universities telling me we were on the cusp of ‘revolutionizing’ social science, I had gotten greedy, ignoring the dark side of what we were doing.”

But Wylie is more than a whistleblower. He’s a surprisingly adept writer who has a firm grasp on not just the technical aspects, but also the psychology behind the weaponization of data. If venture capitalist Roger McNamee’s tell-all exposé of Facebook, “Zucked,” kept you up at night, “Mindf*ck” will give you screaming night terrors.

I usually hold off jumping on the year-end prognostication bandwagon, because I’ve always felt it’s a mug’s game. I would like to think that 2020 will be the year when the world becomes “woke” to the threat of profit-driven data abuse — but based on our collective track record of ignoring inconvenient truths, I’m not holding my breath.

Why Quitting Facebook is Easier Said than Done

Not too long ago, I was listening to an interview with a privacy expert about… you guessed it, Facebook. The gist of the interview was that Facebook can’t be trusted with our personal data, as it has proven time and again.

But when asked if she would quit Facebook completely because of this — as tech columnist Walt Mossberg did — the expert said something interesting: “I can’t really afford to give up Facebook completely. For me, being able to quit Facebook is a position of privilege.”

Wow! There is a lot living in that statement. It means Facebook is fundamental to most of our lives — it’s an essential service. But it also means that we don’t trust it — at all. Which puts Facebook in the same category as banks, cable companies and every level of government.

Facebook — in many minds, anyway — became an essential service because of Metcalfe’s Law, which states that the effect of a network is proportional to the square of the number of connected users of the system. More users = quadratically more value. Facebook has Metcalfe’s Law nailed. It has almost two and a half billion users.

But it’s more than just sheer numbers. It’s the nature of engagement. Thanks to a premeditated addictiveness in Facebook’s design, its users are regular users. Of those 2.5 billion users, 1.6 billion log in daily, and 1.1 billion log in daily from a mobile device. That means roughly 15% of all the people in the world are constantly — addictively — connected to Facebook.
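As a rough sanity check, Metcalfe’s Law and the reach figures above can be expressed in a few lines of Python. The user counts and world population here are the approximate circa-2019 numbers cited in the text, not authoritative data:

```python
# Rough illustration of Metcalfe's Law and Facebook's reach,
# using the approximate figures cited in the text (circa 2019).

total_users = 2.5e9    # total Facebook accounts
daily_mobile = 1.1e9   # daily active users on mobile
world_pop = 7.7e9      # approximate world population at the time

def metcalfe_value(n):
    """Metcalfe's Law: network value grows with the square of its users."""
    return n * n

# Doubling the user base quadruples the network's theoretical value.
growth = metcalfe_value(2 * total_users) / metcalfe_value(total_users)
print(f"Doubling users multiplies network value by {growth:.0f}x")

# The column's "roughly 15%" claim: daily mobile users as a share of humanity.
share = daily_mobile / world_pop
print(f"Share of humanity on Facebook mobile daily: {share:.1%}")
```

The quadratic payoff is the whole point of the law: each new user adds potential connections to everyone already there, which is why sheer scale compounds the way it does.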

And that’s why Facebook appears to be essential. If we need to connect to people, Facebook is the most obvious way to do it. If we have a business, we need Facebook to let our potential customers know what we’re doing. If we belong to a group or organization, we need Facebook to stay in touch with other members. If we are social beasts at all, we need Facebook to keep our social network from fraying away.

We don’t trust Facebook — but we do need it.

Or do we? After all, we Homo sapiens have managed to survive for 99.9925% of our collective existence without Facebook. And there is mounting research that indicates going cold turkey on Facebook is great for your own mental health. But like all things that are good for you, quitting Facebook can be a real pain in the ass.
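For the curious, that 99.9925% figure checks out if you assume our species has been around for roughly 200,000 years and Facebook for about 15 — both round numbers I’m supplying for illustration, not figures from the text:

```python
# Sanity check on the 99.9925% claim, using my own rough assumptions:
# ~200,000 years of Homo sapiens, ~15 years of Facebook (founded 2004).

species_years = 200_000   # assumed age of our species, in years
facebook_years = 15       # assumed age of Facebook at time of writing

# Fraction of human existence that predates Facebook.
fraction_without = 1 - facebook_years / species_years
print(f"Share of human existence without Facebook: {fraction_without:.4%}")
```

Under those assumptions the arithmetic lands exactly on the column’s 99.9925%.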

Last year, New York Times tech writer Brian Chen decided to ditch Facebook. This is a guy who is fully conversant in tech — and even he found that making the break is much easier said than done. Facebook, in its malevolent brilliance, has erected some significant barriers to exit for users who try to make a break for it.

This is especially true if you have fallen into the convenient trap of using Facebook’s social sign-in on sites rather than juggling multiple passwords and user IDs. If you’re up for the challenge, Chen has put together a 6-step guide to making a clean break of it.

But what if you happen to use Facebook for advertising? You’ve essentially sold your soul to Zuckerberg. Reading through Chen’s guide, I’ve decided that it’s just easier to go into the Witness Protection Program. Even there, Facebook will still be tracking me.

By the way, after six months without Facebook, Chen did a follow-up on how his life had changed. The short answer is: not much, but what did change was for the better. His family didn’t collapse. His friends didn’t desert him. He still managed to have a social life. He spent a lot less on spontaneous online purchases. And he read more books.

The biggest outcome was that advertisers “gave up on stalking” him. Without a steady stream of personal data from Facebook, Instagram thought he was a woman.

Whether you’re able to swear off Facebook completely or not, I wonder what the continuing meltdown of trust in Facebook will do for its usage patterns. As in most things digital, young people seem to have intuitively stumbled on the best way to use Facebook. Use it if you must to connect to people when you need to (in their case, grandmothers and great-aunts) — but for heaven’s sake, don’t post anything even faintly personal. Never afford Facebook’s AI the briefest glimpse into your soul. No personal affirmations, no confessionals, no motivational posts and — for the love of all that is democratic — nothing political.

Oh, one more thing. Keep your damned finger off of the like button, unless it’s for your cousin Shermy’s 55th birthday celebration in Zihuatanejo.

Even then, maybe it’s time to pick up the phone and call the ol’ Shermeister. It’s been too long.

Looking Back at a Decade That’s 99.44% Done

Remember 2010? For me that was a pretty important year. It was the year I sold my digital marketing business. While I would continue to actively work in the industry for another three years, for me things were never the same as they were in 2010. And — looking back — I realize that’s pretty well true for most of us. We were more innocent and more hopeful. We still believed that the Internet would be the solution, not the problem.

In 2010, two big trends were jointly reshaping our notions of being connected. Early in the year, former Morgan Stanley analyst Mary Meeker laid them out for us in her “State of the Internet” report. Back then, just three years after the introduction of the iPhone, internet usage from mobile devices hadn’t even reached double digits as a percentage of overall traffic. Meeker knew this was going to change, and quickly. She saw mobile adoption on track to be the steepest tech adoption curve in history. She was right. Today, over 60% of internet usage comes from a mobile device.

The other defining trend was social media. Even then, Facebook had about 600 million users, or just under 10% of the world’s population. When you had a platform that big — connecting that many people — you just knew the consequences would be significant. There were some pretty rosy predictions for the impact of social media.

Of course, it’s the stuff you can’t predict that will bite you. Like I said, we were a little naïve.

One trend that Meeker didn’t predict was the nasty issue of data ownership. We were just starting to become aware of the looming spectre of privacy.

The biggest Internet-related story of 2010 was WikiLeaks. In November, Julian Assange’s site started releasing some 250,000 sensitive diplomatic cables sent to it by Chelsea Manning, a US soldier stationed in Iraq. According to the governments of the world, this was an illegal release of classified material, tantamount to an act of espionage. According to public opinion, this was shit finally rolling uphill. We revelled in the revelations. WikiLeaks and Julian Assange were taking it to the man.

That budding sense of optimism continued throughout the year. By December of 2010, the Arab Spring had begun. This was our virtual vindication — the awesome power of social media was a blinding light shone on the darkest nooks and crannies of despotism and tyranny. The digital future was clear and bright. We would triumph thanks to technology. The Internet had helped put Obama in the White House. It had toppled corrupt regimes.

A decade later, we’re shell-shocked to discover that the Internet is the source of a whole new kind of corruption.

The rigidly digitized ideals of Zuckerberg, Page, Brin et al seemed to be a call to arms: transparency, the elimination of bureaucracy, a free and open friction-free digital market, the sharing economy, a vast social network that would connect humanity in ways never imagined, connected devices in our pockets – in 2010 all things seemed possible. And we were naïve enough to believe that those things would all be good and moral and in our best interests.

But soon, we were smelling the stench that came from Silicon Valley. Those ideals were subverted into an outright attack on our privacy. Democratic elections were sold to the highest bidder. Ideals evaporated under the pressure of profit margins and expanding power. Those impossibly bright, impossibly young billionaire CEOs of ten years ago are now testifying in front of Congress. The corporate culture of many tech companies reeks like a frat house on Sunday morning.

Is there a lesson to be learned? I hope so. I think it’s this: Technology won’t do the heavy lifting for us. It is a tool that is subject to our own frailty. It amplifies what it is to be human. It won’t eliminate greed or corruption unless we continually steer it in that direction.

And I use the term “we” deliberately. We have to hold tech companies to a higher standard. We have to be more discerning of what we agree to. We have to start demanding better treatment and not be willing to trade our rights away with the click of an accept button. 

A lot of what could have been slipped through our fingers in the last 10 years. It shouldn’t have happened. Not on our watch.

Why Elizabeth Warren Wants to Break Up Big Tech

Earlier this year, Democratic presidential candidate Elizabeth Warren posted an online missive in which she laid out her plans to break up big tech (notably Amazon, Google and Facebook). In it, she noted:

“Today’s big tech companies have too much power — too much power over our economy, our society, and our democracy. They’ve bulldozed competition, used our private information for profit, and tilted the playing field against everyone else. And in the process, they have hurt small businesses and stifled innovation.”

We, here in the west, are big believers in Adam Smith’s Invisible Hand. We inherently believe that markets will self-regulate and eventually balance themselves. We are loath to involve government in the running of a free market.

In introducing the concept of the Invisible Hand, Smith speculated:

“[The rich] consume little more than the poor, and in spite of their natural selfishness and rapacity…they divide with the poor the produce of all their improvements. They are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society, and afford means to the multiplication of the species.”

In short, a rising tide raises all boats. But there is a dicey little dilemma buried in the midst of the Invisible Hand Premise – summed up most succinctly by the fictitious Gordon Gekko in the 1987 movie Wall Street: “Greed is Good.”

More eloquently, economist and Nobel laureate Milton Friedman explained it like this:

“The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.” 

But here’s the thing. Up until very recently, the concept of the Invisible Hand dealt only with physical goods. It was all about maximizing tangible resources and distributing them to the greatest number of people in the most efficient way possible.

The difference now is that we’re not just talking about toasters or running shoes. Physical things are not the stock in trade of Facebook or Google. They deal in information, feelings, emotions, beliefs and desires. We are no longer talking about hardware; we are talking about the very operating system of our society. The thing that guides the Invisible Hand is no longer consumption — it’s influence. And, in that case, we have to wonder: are we willing to trust our future to the conscience of a corporation?

For this reason, I suspect Warren might be right. All the past arguments for keeping government out of business were based on a physical market. When we shift that to a market that peddles influence, those arguments are flipped on their head. Milton Friedman himself said, “It (the corporation) only cares whether they can produce something you want to buy.” Let’s shift that to today’s world and apply it to a corporation like Facebook: “It only cares whether they can produce something that captures your attention.” To expect anything else from a corporation that peddles persuasion is to expect too much.

The problem with Warren’s argument is that she is still using the language of a market that dealt with consumable products. She wants to break up a monopoly that is limiting competition. And she is targeting that message to an audience that generally believes that big government and free markets don’t mix.

The much, much bigger issue here is that even if you believe in the efficacy of the Invisible Hand, as described by all believers from Smith to Friedman, you also have to believe that the single purpose of a corporation that relies on selling persuasion will be to influence even more people more effectively. None of the most fervent evangelists of the Invisible Hand ever argued that corporations have a conscience. They simply stated that the interests of a profit-driven company and an audience intent on consumption were typically aligned.

We’re now playing a different game with significantly different rules.

This is Why We Can’t Have Nice Things

Relevance is the new gold standard in marketing. In an article in the Harvard Business Review written last year, John Zealley, Robert Wollan and Joshua Bellin — three senior execs at Accenture — outline five stages of marketing (paraphrased courtesy of a post from Phillip Nones):

  1. Mass marketing (up through the 1970s) – The era of mass production, scale and distribution.
  2. Marketing segmentation (1980s) – More sophisticated research enabling marketers to target customers in niche segments.
  3. Customer-level marketing (1990s and 2000s) – Advances in enterprise IT make it possible to target individuals and aim to maximize customer lifetime value.
  4. Loyalty marketing (2010s) – The era of CRM, tailored incentives and advanced customer retention.
  5. Relevance marketing (emerging) – Mass communication to the previously unattainable “Segment of One.”

This last stage – according to marketers past and present – should be the golden era of marketing:

“The perfect advertisement is one of which the reader can say, ‘This is for me, and me alone.’”

— Peter Drucker

“Audiences crave tailored messages that cater to them specifically and they are willing to offer information that enables marketers to do so.”

— Kevin Tash, CEO of Tack Media, a digital marketing agency in Los Angeles

Umm…no! In fact, hell, no!

I agree that relevance is an important thing. And in an ethical world, the exchange Tash talks about would be a good thing, for both consumers and marketers. But we don’t live in such a world. The world we live in has companies like Facebook and Cambridge Analytica.

Stop Thinking Like a Marketer!

There is a cognitive whiplash that happens when our perspective changes from that of a marketer to that of a consumer. I’ve seen it many times. I’ve even prompted it on occasion. But to watch it in 113 minutes of excruciating detail, you should catch “The Great Hack” on Netflix.

The documentary is a journalistic peeling of the onion that is the Cambridge Analytica scandal. It was kicked off by the whistleblowing of Christopher Wylie, a contract programmer who enjoyed his 15 minutes of fame. But to me, the far more interesting story is that of Brittany Kaiser, the director of business development at SCL Group, the parent company of Cambridge Analytica. The documentary digs into the tortured shift of perspective as she transitions from thinking like a marketer to thinking like a citizen who has just had her private data violated. It makes for compelling viewing.

Kaiser shifted her ideological compass about as far as one possibly could, from her beginnings as an idealistic intern for Barack Obama and a lobbyist for Amnesty International to one of the chief architects of the campaigns supporting Trump’s presidential run, Brexit and other far-right persuasion blitzkriegs. At one point, she justifies her shift to the right by revealing her family’s financial struggles and the fact that you don’t get paid much as an underling for Democrats or as a moral lobbyist. The big bucks are found in the ethically grey areas. Throughout the documentary, she vacillates between the outrage of a private citizen and the rationalization of a marketer. She is a woman torn between two conflicting perspectives.

We marketers have to stop kidding ourselves and justifying misuse of personal data with statements like the one previously quoted from Kevin Tash. As people, we’re okay. I like most of the marketers I know. But as professional marketers, we have a pretty shitty track record. We trample privacy, we pry into places we shouldn’t and we gleefully high-five ourselves when we deliver the goods on a campaign — no matter who that campaign might be for and what its goals might be. We are very different people when we’re on the clock.

We are now faced with what may be the most important questions of our lives: How do we manage our personal data? Who owns it? Who stores it? Who has the right to use it? When we answer those questions, let’s do it as people, and not marketers. Because there is a lot more at stake here than the ROI rates on a marketing campaign.