Together We Lie

Humans are social animals. We’ve become this way because – evolutionarily speaking – we do better as a group than individually. But there’s a caveat here. If you get a group of usually honest people together, they’re more likely to lie. Why is this?

Martin Kocher and his colleagues at LMU Munich set up a study in which participants watched a video of a single roll of a die and then reported the number that came up. Depending on what they reported, there was a payoff. The researchers tested both individuals and small groups, the latter of which could chat anonymously with one another before reporting. The result?

“Our findings are unequivocal: People are less likely to lie if they decide on their own.”

Even individuals who had answered honestly on their own started lying once they were part of a group.

The researchers called this a “dishonesty shift.” They blame it on a shift in the weight placed on the norm of honesty. Norms are the patterns that guide our behaviors and beliefs. But those norms may carry different weight when we act individually than when we’re part of a group.

“Feedback is the decisive factor. Group-based decision-making involves an exchange of views that may alter the relative weight assigned to the relevant norm.”

Let’s look at how this may play out. Individually, we may default to honesty. We do so because we’re unsure of the consequences of not being honest. But when we get in a group, we start talking to others and it’s easier to rationalize not being honest – “Well, if everyone’s going to lie, I might as well too.”

Why is this important? Because marketing is done in groups, by groups, to groups. The dynamics of group-based ethics are important for us to understand. It could help to explain the most egregious breaches of ethics we see becoming more and more commonplace, either in corporations or in governments.

Four of the seminal studies in psychology and sociology shed further light on why groups tend to shift towards dishonesty. Let’s look at them individually.

In 1955, Solomon Asch showed that even if we individually believe something to be incorrect, when enough of the people around us voice a different opinion, we’ll go with the group consensus rather than risk being the odd person out. In his famous study, he surrounded a subject with “plants” who, when shown cards with three black lines of obviously differing lengths, would unanimously insist that two unequal lines were the same. The subjects were then asked their opinion. Seventy-five percent of them went along with the group at least once rather than risk disagreement. As Asch said in his paper – quoting sociologist Gabriel Tarde – “Social man is a somnambulist.” We have about as much independent will as your average sleepwalker.

Now, let’s continue with Stanley Milgram’s Obedience to Authority study, perhaps the most controversial and frightening of the group. When confronted with an authoritative demeanor, a white coat and a clipboard, 63% of the subjects meekly followed directions and delivered what they believed were lethal levels of electric shock to a hapless individual. The results were so disheartening that we’ve been trying to debunk them ever since. But a follow-up study by Stanford psychology professor Philip Zimbardo – in which subjects were arbitrarily assigned roles as guards and inmates in a mock prison scenario – was even more shocking. We’re more likely to become monsters and abandon our personal ethics when we’re in a group than when we act alone. Whether it’s obedience to authority – as Milgram was trying to prove – or social conformity taken to the extreme, we tend to do very bad things when we’re in bad company.

But how do we slip so far, so quickly, from our own personal ethical baseline? Here’s where the last study I’ll cite can shed a little light. Sociologist Mark Granovetter – famous for his Strength of Weak Ties study – also looked at the viral spread of behaviors in groups. I’ve talked about this in a previous column, but here’s the short version: if we have a choice between two options, each with its own social consequences, which one we pick may be driven by social conformity. If we see enough other people around us choosing the more disruptive option (e.g., starting a riot), we may follow suit. Even though we all have different thresholds – and we do – the nature of a crowd is such that those with the lowest thresholds act first, setting off a bandwagon effect that eventually tips the entire group over its threshold.
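To make the threshold mechanism concrete, here is a minimal sketch in Python – with invented thresholds, not data from Granovetter’s paper – of how a crowd tips, and how removing a single low-threshold instigator can stop the cascade cold:

```python
def cascade(thresholds):
    """Granovetter-style threshold cascade (illustrative toy model only).

    Each person joins the disruptive option once the number of people
    already participating meets or exceeds their personal threshold.
    Returns the number of participants once the crowd stabilizes.
    """
    joined = 0
    while True:
        # Everyone whose threshold is satisfied by the current crowd joins.
        new_total = sum(1 for t in thresholds if t <= joined)
        if new_total == joined:
            return joined
        joined = new_total

# A hypothetical crowd of 100 people with thresholds 0, 1, 2, ... 99:
# the zero-threshold instigator tips the next person, who tips the next...
print(cascade(list(range(100))))     # -> 100: the whole crowd riots

# Remove that one instigator and nobody's threshold is ever met.
print(cascade(list(range(1, 100))))  # -> 0: nothing happens
```

Two nearly identical crowds, two wildly different outcomes – which is exactly why group behavior looks so unpredictable from the outside.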

These were all studied in isolation, because that’s how science works: we study variables in isolation. But it’s when factors combine that we get the complexity that typifies the real world – and the real marketplace. And that’s where predictability goes out the window. The group dynamics in play can create behavioral patterns that make no sense to the average person with an average degree of morality. But it’s happened before, it’s happening now, and it’s sure to happen again.

 

 

Email Keeps Us Hanging On

Adobe just released their Consumer Email Survey Report. And one line from it immediately jumped out:

“We’ve seen a 28 percent decrease in consumers checking email messages from bed in the morning (though 26 percent still do it),”

Good for you, you 28 percent who have a life. I, unfortunately, fall into the pathetic 26 percent.

So, what is it about email that still makes it such a dominant part of our digital lives? It’s been 46 years since the first email was sent, from Ray Tomlinson to himself. Yet it’s never gone out of vogue. In fact, according to this survey, the majority of us (about 85%) see our use of email staying the same or increasing over the next two years. Even the rebellious Generation Z – the post-Millennials who are rewriting the book on tech behaviors – color inside the lines when it comes to email: 41% of them predict their use of email will increase at work, and 30% foresee themselves using email more in their personal lives.

Email is the most commonly used communication channel for work – beating actually talking to other people by a full 11 points.

What was interesting to me was when and where email was used:


From Adobe’s Email Survey Report, used with permission

This suggests some interesting modality variations. I’ve talked about modality before, including in a column a few weeks ago about devices. Personally, as a UX geek, I find the whole idea of modality fascinating. Here’s the best way I can think of to illustrate how modality applies to behavior: you have to stay late at work to fire an employee who has become a train wreck, increasingly hostile to management and bullying her co-workers. It does not go well, but you get it done. Unfortunately, it makes you late for your 10-year-old daughter’s birthday party. Consider the seismic shifting of mental frameworks required so you don’t permanently traumatize a roomful of giggling pre-teens. That’s modality in action. It becomes essential when we’re talking about technology because, as we step into different roles to accomplish different objectives, we seem to have pre-determined technologies already assigned to the tasks required.

Email seems perfectly suited to certain behavioral modes: if you want a quick update on a project, are delivering feedback or are asking a brief question, email is the preferred communication channel. But for anything that requires more social finesse – asking for help, pitching a new idea, letting your boss know about an issue or even calling it quits – there’s no substitute for face to face.

Here we see why email has not faded in popularity – it’s the Occam’s razor of factual communication. It does just what it needs to do, without unnecessary complication. It allows both sender and receiver to communicate on their own timelines, without disruption. It provides an archival record of the exchange. And it’s already integrated into all our task flows – no extra steps are required. Many start-ups have promised to abolish the inbox. So far, none have succeeded.

What email doesn’t do very well is convey emotion. Emails have a habit of blowing up in our faces in delicate situations, for all the same reasons stated above. But that’s okay. We know that. That’s why most of us don’t use it for that purpose. (Note: even in delicate situations, email is usually still the next most popular choice after face to face. 11% of survey respondents would still choose email to tell their bosses to take a flying leap.)

As email approaches its half-century birthday, logic tells us that someday it will become obsolete. But it’s outlasted VCRs, fax machines, 8-tracks and a veritable junk heap of other discarded technologies. In fact, it’s hard to think of anything else that has changed so little over the decades and is still such an integral part of our lives. Say what you want about email – it does appear to have legs.

Addicted to Tech

A few columns ago, I mentioned one of the aspects of technology that troubles me: the shallowness of social media. I noted at the time that there were other aspects that were equally troubling. Here’s one:

Technology is addictive – and it’s addictive by design.

Let’s begin by looking at the definition of addiction:

Persistent compulsive use of a substance known by the user to be harmful

So, let’s break it down. I don’t think you can quibble with the persistent, compulsive use part. When’s the last time you had your iPhone in your hand? And we can simply swap out “substance” for “device” or “technology.” That leaves us with the last qualifier, “known by the user to be harmful” – and there are two parts to this: is it harmful, and does the user know it’s harmful?

First, let’s look at the neurobiology of addiction. What causes us to use something persistently and compulsively? Here, dopamine is the culprit. Our reward center uses dopamine and the pleasurable sensation it produces as a positive reinforcement to cause us to pursue activities which over many hundreds of generations have proven to be evolutionarily advantageous. But Dr. Gary Small, from the UCLA Brain Research Institute, warns us that this time could be different:

“The same neural pathways in the brain that reinforce dependence on substances can reinforce compulsive technology behaviors that are just as addictive and potentially destructive.”

We like to think of big tobacco as the most evil of all evil empires – guilty of promoting addiction to a harmful substance – but is there really that much separating it from the purveyors of tech – Facebook or Google, for instance? According to Tristan Harris, there may be a very slippery slope between the two. I’ve written about Tristan before. He’s the former Google product manager who launched the Time Well Spent non-profit, devoted to stopping “tech companies from hijacking our minds.” Harris points the finger squarely at the big Internet players for building platforms that are intentionally designed to soak up as much of our time as possible. And there’s empirical evidence to back up Harris’s accusations. Researchers at Michigan State University and two universities in the Netherlands found that merely seeing the Facebook logo can trigger a conditioned response in a social media user that starts the dopamine cycle spinning. We start jonesing for a social media fix.

So, what if our smartphones and social media platforms seduce us into using them compulsively? What’s the harm, as long as they’re not hurting us? That’s the second part of the addiction equation – is the thing we’re using actually harmful? After all, it’s not like tobacco, which was proven to cause lung cancer.

Ah, but that’s the thing, isn’t it? We smoked cigarettes for almost a hundred years before we finally found out they were bad for us. Sometimes it takes a while for the harmful effects of an addiction to appear. The same could be true for our tech habit.

Tech addiction plays out at many different levels of cognition, and it could be much more sinister than the simple waste of time that Tristan Harris is worried about. There’s mounting evidence that overuse of tech can dramatically alter our ability to socialize effectively with other humans. The debate, which I’ve talked about before, comes when we substitute screen-to-screen interaction for face-to-face. Supporters say this is simply another type of social bonding – one that comes with additional benefits. The naysayers worry that we’re just not built to communicate through screens and that – sooner or later – there will be a price to be paid for our obsessive use of digital platforms.

Dr. Jean Twenge, professor of psychology at San Diego State University, researches generational differences in behavior. It’s here where the full impact of the introduction of a disruptive environmental factor can be found. She found a seismic shift in behaviors between Millennials and the generation that followed them. It was a profound difference in how these generations viewed the world and where they spent their time. And it started in 2012 – the year when the proportion of Americans who owned a smartphone surpassed 50 percent. She sums up her concern in unequivocal terms:

“The twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.”

Not only are we less happy, we may be becoming less smart. As we become more reliant on technology, we do something called cognitive off-loading. We rely on Google rather than our memories to retrieve facts. We trust our GPS more than our own wayfinding strategies to get us home. Cognitive off-loading is a way to move beyond the limits of our own minds, but there may be an unacceptable trade-off here. Brains are like muscles – if we stop using them, they begin to atrophy.

Let’s go back to that original definition and the three qualifying criteria:

  • Persistent, compulsive use
  • Harmful
  • We know it’s harmful

In the case of tech, let’s not wait a hundred years to put check marks after all of these.

 

 

The Calcification of a Columnist

First: the Caveat. I’m old and grumpy. That is self-evident. There is no need to remind me.

But even with this truth established, the fact is that I’ve noticed a trend. Increasingly, when I come to write this column, I get depressed. The more I look for a topic to write about, the more my mood spirals downward.

I’ve been writing for MediaPost for over 12 years now. Together, between the Search Insider and Online Spin, that’s close to 600 columns. Many – if not most – of those have been focused on the intersection between technology and human behavior. I’m fascinated by what happens when evolved instincts meet technological disruption.

When I started this gig I was mostly optimistic. I was amazed by the possibilities and – somewhat naively it turns out – believed it would make us better. Unlimited access to information, the ability to connect with anyone – anywhere, new ways to reach beyond the limits of our own DNA; how could this not make humans amazing?

Why, then, do we seem to be going backwards? What I didn’t realize at the time is that technology is like a magnifying glass: yes, it can make the good of human nature better, but it can also make the bad worse. Not only that, but technology has a nasty habit of throwing in unintended consequences – little gotchas we never saw coming that have massive moral implications. Disruption can be a good thing, but it can also rip apart in a trice things that took centuries of careful and thoughtful building to put in place. Black Swans have little regard for ethics or morality.

I have always said that technology doesn’t change behaviors; it enables behaviors. When it comes to the things that matter – our innate instincts and beliefs – we are not perceptibly different from our distant ancestors. We are driven by the same drives. Increasingly, as I look at how we use the outcomes of science and innovation to pursue those drives, I realize that while technology can enable love, courage and compassion, it can also engender more hate, racism and misogyny. It makes us better even as it makes us worse. We are becoming caricatures of ourselves.


Everett Rogers, 1962

Everett Rogers plotted the diffusion of technology through the masses on a bell curve and divided us into innovators, early adopters, early majority, late majority and laggards. The categorization was defined by our acceptance of innovation. Inevitably, then, there is a correlation between that acceptance and our sense of optimism about the possibilities of technology. Early adopters naturally see how technology could enable us to be better. But as diffusion rolls through the curve, we eventually hit those for whom technology is just there – another entitlement, a factor of our environment, oxygen. There is no special magic or promise here. Technology simply is.
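As an aside, those adopter categories aren’t arbitrary: in Rogers’ model they fall out of standard-deviation cut points on that bell curve. Here’s a quick sketch (using SciPy; the cut points are the conventional ones from Rogers’ model, not figures from this column):

```python
from scipy.stats import norm  # standard normal curve for adoption timing

# Rogers defines each category by how far adoption time falls from the mean.
shares = {
    "innovators":     norm.cdf(-2),                 # ~2.3% (usually rounded to 2.5%)
    "early adopters": norm.cdf(-1) - norm.cdf(-2),  # ~13.6%
    "early majority": norm.cdf(0)  - norm.cdf(-1),  # ~34.1%
    "late majority":  norm.cdf(1)  - norm.cdf(0),   # ~34.1%
    "laggards":       1 - norm.cdf(1),              # ~15.9%
}
for group, share in shares.items():
    print(f"{group:>14}: {share:.1%}")
```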

So, to recap, I’m old and grumpy. As I started to write yet another column, I was submerged in a wave of weariness. I have to admit – I have been emotionally beat up by the last few years. I’m tired of writing about how technology is making us stupider, lazier and less tolerant when it should be making us great.

But another thing usually comes with age: perspective. This isn’t the first time humans and disruptive technology have crossed paths; that’s been the story of our existence. Perhaps we should zoom out a bit from our current situation. Let’s set aside for a moment our navel-gazing about fake news, clickbait, viral hatred, connected xenophobia and the erosion of public trust. Let’s look at the bigger picture.

History isn’t sketched in straight lines. History is plotted on a curve. Correction: history is plotted in a series of waves. We are constantly correcting course. Disruption tends to swing a pendulum one way until a gathering of opposing force swings it the other. It takes us a while to absorb disruption, but we do – eventually.

I suspect that if I were writing this in 1785, I’d be disheartened by the industrial blight that was enveloping the world. Then, like now, technology was plotting a new course for us. But in that case, we have the advantage of hindsight to put things in perspective. Consider this one fact: between 1200 and 1600, the life span of a British noble didn’t go up by even a single year. But between 1800 and today, life expectancy for white males in the West doubled, from thirty-eight years to seventy-six. Technology made that possible.

Technology, when viewed on a longer timeline, has also made us better. If you doubt that, read psychologist and author Steven Pinker’s “The Better Angels of Our Nature.” His exhaustively researched and reasoned book leads you to the inescapable conclusion that we are better now than we have ever been. We are less violent, less cruel and more peaceful than at any time in history. Technology also made that possible.

It’s okay to be frustrated by the squandering of the promise of technology. But it’s not okay to just shrug and move on. You are the opposing force that can cause the pendulum to change direction. Because, in the end, it’s not technology that makes us better. It’s how we choose to use that technology.

 

 

 

Why Millennials are so Fascinating

When I was growing up, there was a lot of talk about the Generation Gap. This referred to the ideological gap between my generation – the Baby Boomers – and our parents’ generation, the Silent Generation (1923–1944).

But in terms of behavior, there was a significant gap even between early Baby Boomers and those who came at the tail end of the boom – like myself. Generations are products of their environment, and there was a significant change in our environment over the 20-year run of the Baby Boomers, from 1945 to 1964. During that time, TV came into most of our homes. Later boomers, like myself, were raised with TV. And I believe the adoption of that one technology created an unbridgeable ideological gap that is still impacting our society.

The adoption of ubiquitous technologies – like TV and, more recently, connective platforms like mobile phones and the Internet – inevitably triggers massive environmental shifts. This is especially true for generations that grow up with the technology. Our brain goes through two phases in which it literally rewires itself to adapt to its environment. One of those phases happens from birth to about 2 or 3 years of age, and the other happens during puberty, from roughly 14 to 20 years of age. A generation that goes through both of those phases while exposed to a new technology will inevitably be quite different from the generation that preceded it.

The two phases of our brain’s restructuring – also called neuroplasticity – are quite different in their goals. The first period – right after birth – rewires the brain to adapt to its physical environment. We learn to adapt to external stimuli and to interact with our surroundings. The second phase is perhaps even more influential in terms of who we will eventually be. This is when our brain creates its social connections. It’s also when we set our ideological compasses. Technologies we spend a huge amount of time with will inevitably impact both those processes.

That’s what makes Millennials so fascinating. It’s probably the first generation since my own that straddles the adoption of a massively influential technological change. Most definitions of this generation have it starting in the early ’80s and extending to 1996 or ’97. This means the early Millennials grew up in an environment not all that different from that of the generation that preceded them. The technologies undergoing massive adoption in the early ’80s were VCRs and microwaves – hardly earth-shaking in terms of environmental change. But late Millennials, like my daughters, grew up during the rapid adoption of three massively disruptive technologies: mobile phones, computers and the Internet. So we have a completely different environment to which the brain must adapt, not only from generation to generation, but within the generation itself. This makes Millennials a very complex generation to pin down.

To try to understand this, let’s go back to my generation – the Baby Boomers – to see how environmental adaptation can alter the face of society. Boomers who grew up in the late ’40s and early ’50s were much different from boomers who grew up just a few years later. Early boomers probably didn’t have a TV; only the wealthiest families would have been able to afford one. In 1951, only 24% of American homes had a TV. But by 1960, almost 90% did.

Whether we like to admit it or not, the values of my generation were shaped by TV. But this was not a universal process. The impact of TV depended on household income, which would have been correlated with education. So TV impacted the societal elite first and then trickled down. This elite segment would have also been the one most likely to attend college. So, in the mid-’60s, you had a segment of a generation whose values and worldview were at least partially shaped by TV – and its creation of a “global village” – and who suddenly came together at a time and place (college) when we build the personas we will inhabit for the rest of our lives. You had another segment of the generation that didn’t have the same exposure and didn’t pursue a post-secondary education. The Vietnam War didn’t create the counter-cultural revolution. It just gave it a handy focal point that highlighted the ideological rift, not only between two generations but also within the Baby Boomers themselves. At that point in history, part of our society turned right and part turned left.

Is the same thing happening with Millennials now? Certainly the worldview of at least the younger Millennials has been shaped by exposure to connected media. When polled, they inevitably have dramatically different opinions about things like religion, politics, science – well, pretty much everything. But even within the Millennial camp, their views often seem incoherent and confusing. Perhaps another intra-generational divide is forming. The fact is, it’s probably too early to tell. These things take time to play out. But if this plays out the way it did the last time, the impact will still be felt half a century from now.

The Bermuda Triangle of Advertising

In the past few weeks, via the comments I’ve received on my two (1,2) columns looking at the possible future of media selection and targeting, it’s become apparent to me that we’re at a crisis point when it comes to advertising. I’ve been fortunate enough to have some of the brightest minds and sharpest commentators in the industry contributing their thoughts on the topic. In the middle of all these comments lies a massive gap. This gap can be triangulated by looking at three comments in particular:

Esther Dyson: “Ultimately, what the advertisers want is sales…  attention, engagement…all these are merely indicators for attribution and waypoints on the path to sales.”

Doc Searls: “Please do what you do best (and wins the most awards): make ads that clearly sponsor the content they accompany (we can actually appreciate that), and are sufficiently creative to induce positive regard in our hearts and minds.”

Ken Fadner: “I don’t want to live in a world like this one” (speaking of the hyper targeted advertising scenario I described in my last column).

These three comments are all absolutely right (with the possible exception of Searls’, which I’ll come back to in a minute), and they draw a path around the gaping hole that is the future of advertising.

So let’s strip this back to the basics to try to find solid ground from which to move forward again.

Advertising depends on a triangular value exchange: we want entertainment and information, which are delivered via various media. These media need funding, which comes from advertising. Advertisers want exposure to the media audience. So, if we boil that down, we put up with advertising in return for access to entertainment and information. This is the balance deemed “OK” by Doc Searls and other commenters.

The problem is that this is no longer the world we live in – if we ever did. The value exchange requires all three sides to agree that the value is sufficient for us to keep participating. The relatively benign and balanced model of advertising laid out by Searls just doesn’t exist anymore.

The problem is the value exchange triangle is breaking down on two sides – for advertisers and the audience.

As I explained in an earlier Online Spin, value exchanges depend on scarcity, and for the audience there is no longer a scarcity of information and entertainment. There are also new models of information and entertainment delivery that disrupt our assessment of this value exchange. The cognitive context that made us accepting of commercials has been broken. Where once we sat passively and consumed advertising, we now have subscription contexts that are entirely commercial-free. That makes the appearance of advertising all the more frustrating. Our brains have been trained to no longer accept ads. The other issue is that ads used to appear only in contexts where we were passively engaged. Now, ads appear when we’re actively engaged. That’s an entirely different mental model, with different expectations of acceptability.

This traditional value exchange is also breaking down for advertisers. The inefficiencies of the previous model have been exposed, and more accountable and effective models have emerged. Dyson’s point is probably the most constant bearing we can navigate by: companies want sales. They also want more effective advertising. And much as we may hate the clutter and crap that litters the current digital landscape, when digital targeting works well, it does promise to deliver a higher degree of efficiency.

So, we have the previous three-sided value exchange collapsing on two of its sides, bringing the third side – media – down with it.

Look, we can bitch about digital all we want. I share Searls’ frustration with digital in general and Fadner’s misgivings about the creepy and ineffective execution of digital targeting in particular. But this horse has already left the barn. Digital is more than just the flavor of the month. It’s the thin edge of a massive wedge of change in content distribution and consumption. For reasons far too numerous to name, we’ll never return to the benign world of clearly sponsored content and creative ads. First of all, that benign world never worked that well. Secondly, two sides of the value-exchange triangle have gotten a taste of something better: virtually unlimited content delivered without advertising strings attached, and a much more effective way to deliver advertising.

Is digital working very well now? Absolutely not. Fadner and Searls are right about that. It’s creepy, poorly targeted, intrusive and annoying. And it’s all these things for the very same reason Esther Dyson identified: companies want sales, and they’ll try anything that promises to deliver them. But we’re at the very beginning of a huge disruptive wave. Stuff isn’t supposed to work very well at this point. That comes with maturity and an inevitable rebalancing. Searls may rail against digital, just as people railed against television, the telephone and horseless carriages. But it’s just too early to tell what a more mature model will look like. Corporate greed will dictate the trying of everything. We will fight back by blocking the hijacking of our attention. A sustainable balance will emerge somewhere in between. But we can’t see it yet from our vantage point.

What Would a “Time Well Spent” World Look Like?

I’m worried about us. And it’s not just because we seem bent on death by ultra-conservative parochialism and xenophobia. I’m worried because I believe we’re spending all our time doing the wrong things. We’re fiddling while Rome burns.

Technology is our new drug of choice and we’re hooked. We’re fascinated by the trivial. We’re dumping huge gobs of time down the drain playing virtual games, updating social statuses, clicking on clickbait and watching videos of epic wardrobe malfunctions. Humans should be better than this.

It’s okay to spend some time doing nothing. The brain needs some downtime. But something, somewhere has gone seriously wrong. We are now spending the majority of our lives doing useless things. TV used to be the biggest time suck, but in 2015, for the first time ever, the boob tube was overtaken by time spent with mobile apps. According to a survey conducted by Flurry, in the second quarter of 2015 we spent about 2.8 hours per day watching TV, and 3.3 hours on mobile apps. That’s a grand total of 6.1 hours per day, or roughly one third of the time we spend awake. Yes, both things can happen at the same time, so there is undoubtedly overlap, but still – that’s a scary-assed statistic!

And it’s getting worse. In a previous Flurry poll, conducted in 2013, we spent a combined 298 minutes per day between TV and mobile apps, versus 366 minutes in 2015. That’s a 22.8% increase in just two years. We’re spending way more time doing nothing. And those totals don’t even include things like time spent in front of a gaming console. For kids, tack on an average of another 10 hours per week – and you can double that for hard-core male gamers. Our addiction to gaming has even led to death in extreme cases.
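For what it’s worth, here’s the arithmetic behind those figures, assuming (as the 6.1-hour total above implies) that Flurry’s combined numbers are minutes per day:

```python
# Figures quoted above: 2.8 hours of TV plus 3.3 hours of mobile apps per day in Q2 2015.
total_2015_minutes = (2.8 + 3.3) * 60   # 366 minutes, i.e. 6.1 hours per day
total_2013_minutes = 298                # combined per-day total from the 2013 poll
increase = (total_2015_minutes - total_2013_minutes) / total_2013_minutes
print(f"{total_2015_minutes:.0f} vs {total_2013_minutes} minutes: +{increase:.1%}")
# -> 366 vs 298 minutes: +22.8%
```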

Even in the wildest stretches of imagination, this can’t qualify as “time well spent.”

We’re treading on very dangerous and very thin ice here. And we no longer have history to learn from; it’s the first time we’ve ever encountered this. Technology is now only one small degree of separation from plugging directly into the pleasure center of our brains. And science has shown that a good shot of self-administered dopamine can supersede everything – water, food, sex. True, those experiments were performed on rats – primarily because it would be unethical to go too far in replicating them with humans – but are you willing to risk the entire future of mankind on the bet that we’re really that much smarter than rats?

My fear is that technology is becoming a slightly more sophisticated lever we push to get that dopamine rush. And developers know exactly what they’re doing. They are making that lever as addictive as possible. They are pushing us towards the brink of death by technological lobotomization. They’re lulling us into a false sense of security by offering us the distraction of viral videos, infinitely scrolling social notification feeds and mobile game apps. It’s the intellectual equivalent of fast food – quite literally “brain candy.”

Here the hypocrisy of for-profit interest becomes evident. The corporate response typically rests on individual freedom of choice and the consumer’s ability to exercise will power. “We are just giving them what they’re asking for,” touts the stereotypical PR flack. But if you have an entire industry with reams of developers and researchers all aiming to hook you on their addictive product and your only defense is the same faulty neurological defense system that has already fallen victim to fast food, porn, big tobacco, the alcohol industry and the $350 billion illegal drug trade, where would you be placing your bets?

Technology should be our greatest achievement. It should make us better, not turn us into a bunch of lazy, screen-addicted louts. And it certainly could be this way. What would it mean if technology helped us spend our time well? That’s the hope behind the Time Well Spent Manifesto. Tristan Harris – the former Google design ethicist and product philosopher I mentioned earlier – is one of the co-directors. Here is an excerpt from the manifesto:

We believe in a new kind of design, that lets us connect without getting sucked in. And disconnect, without missing something important.

And we believe in a new kind of economy that’s built to help us spend time well, where products compete to help us live by our values.

I believe in the Manifesto. I believe we’re being willingly led down a scary and potentially ruinous path. Worst of all, I believe there is nothing we can – or will – do about it. Problems like this are seldom solved by foresight and good intentions. Things only change after we drive off the cliff.

The problem is that most of us never see it coming. And we never see it coming because we’re too busy watching a video of masturbating monkeys on YouTube.