Whatever Happened to the Google of 2001?

Having lived through it, I can say that the decade from 2000 to 2010 was an exceptional time in corporate history. I was reminded of this as I was reading media critic and journalist Ken Auletta’s book, “Googled: The End of the World as We Know It.” Auletta, along with many others, sensed a seismic disruption in the way media worked. A ton of books came out on this topic in the same time frame, and Google was the company most often singled out as the cause of the disruption.

Auletta’s book was published in 2009, near the end of this decade, and it’s interesting reading it in light of the decade plus that has passed since. There was a sort of breathless urgency in the telling of the story, a sense that this was ground zero of a shift that would be historic in scope. The very choice of Auletta’s title reinforces this: “The End of the World as We Know It.”

So, with 10-plus years of hindsight, was he right? Did the world we knew end?

Well, yes. And Google certainly contributed to this. But it probably didn’t change in quite the way Auletta hinted at. If anything, Facebook ended up having a more dramatic impact on how we think of media, but not in a good way.

At the time, we all watched Google take its first steps as a corporation with a mixture of incredulous awe and not a small amount of schadenfreude. Larry Page and Sergey Brin were determined to do it their own way.

We in the search marketing industry had front row seats to this. We attended social mixers on the Google campus. We rubbed elbows at industry events with Page, Brin, Eric Schmidt, Marissa Mayer, Matt Cutts, Tim Armstrong, Craig Silverstein, Sheryl Sandberg and many others profiled in the book. What they were trying to do seemed a little insane, but we all hoped it would work out.

We wanted a disruptive and successful company to not be evil. We welcomed its determination — even if it seemed naïve — to completely upend the worlds of media and advertising. We even admired Google’s total disregard for marketing as a corporate priority.

But there was no small amount of hubris at the Googleplex — and for this reason, we also hedged our hopeful bets with just enough cynicism to be able to say “we told you so” if it all came crashing down.

In that decade, everything seemed so audacious and brashly hopeful. It seemed like ideological optimism might — just might — rewrite the corporate rulebook. If a revolution did take place, we wanted to be close enough to golf clap the revolutionaries onward without getting directly in the line of fire ourselves.

Of course, we know now that what took place wasn’t nearly that dramatic. Google became a business: a very successful business with shareholders, a grown-up CEO and a board of directors, but still a business not all that dissimilar to other Fortune 100 examples. Yes, Google did change the world, but the world also changed Google. What we got was more evolution than revolution.

The optimism of 2000 to 2010 would be ground down in the next 10 years by the same forces that have been driving corporate America for the past 200 years: the need to expand markets, maximize profits and keep shareholders happy. The brash ideologies of founders would eventually morph to accommodate ad-supported revenue models.

As we now know, the world was changed by the introduction of ways to make advertising even more pervasively influential and potentially harmful. The technological promise of 20 years ago has been subverted to screw with the very fabric of our culture.

I didn’t see that coming back in 2001. I probably should have known better.

The Privacy War Has Begun

It started innocently enough….

My iPhone just upgraded itself to iOS 14.6, and the privacy protection purge began.

In late April, Apple added App Tracking Transparency (ATT) to iOS (actually in 14.5, but for reasons mentioned in this Forbes article, I hadn’t noticed the change until the most recent update). Now, whenever I launch an app that is part of the online ad ecosystem, I’m asked whether I want to share data to enable tracking. I always opt out.
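
For the technically curious, here is roughly what that gate looks like from the app developer’s side. It’s a minimal, illustrative Swift sketch using Apple’s AppTrackingTransparency framework, not any particular app’s actual code; the wording you see in the system prompt comes from a string the app supplies in its Info.plist (the NSUserTrackingUsageDescription key).

    import AppTrackingTransparency
    import AdSupport

    // Illustrative only: how an iOS 14+ app asks permission to track.
    // Until the user taps "Allow," the IDFA is unavailable to the app.
    func requestTrackingPermission() {
        ATTrackingManager.requestTrackingAuthorization { status in
            switch status {
            case .authorized:
                // User opted in: the IDFA can be used for cross-app ad tracking.
                let idfa = ASIdentifierManager.shared().advertisingIdentifier
                print("Tracking allowed, IDFA: \(idfa)")
            case .denied, .restricted, .notDetermined:
                // User opted out (or never answered): the IDFA comes back as all zeroes.
                print("Tracking not allowed")
            @unknown default:
                print("Unknown authorization status")
            }
        }
    }

When you tap “Ask App Not to Track,” that completion handler comes back as denied, and the identifier the ad networks rely on is effectively useless.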

These alerts have been generally benign. They reference benefits like “more relevant ads,” a “customized experience” and “helping to support us.” Some assume you’re opting in, and make opting out a much more circuitous and time-consuming process. Most also avoid the words “tracking” and “privacy.” One referred to it in these terms: “Would you allow us to refer to your activity?”

My answer is always no. Why would I want to customize an annoyance and make it more relevant?

All in all, it’s a deceptively innocent wrapper to put on what will prove to be a cataclysmic event in the world of online advertising. No wonder Facebook is fighting it tooth and nail, as I noted in a recent post.

This shot across the bow of online advertising marks an important turning point for privacy. It’s the first time that someone has put users ahead of advertisers. Everything up to now has been lip service from the likes of Facebook, telling us we have complete control over our privacy while knowing that actually protecting that privacy would be so time-consuming and convoluted that the vast majority of us would do nothing, thus keeping the profits flowing through the pipeline.

The simple fact of the matter is that without its ability to micro-target, online advertising just isn’t that effective. Take away the personal data, and online ads are pretty non-engaging. Also, given our continually improving ability to filter out anything that’s not directly relevant to whatever we’re doing at the time, these ads are very easy to ignore.

Advertisers need that personal data to stand any chance of piercing our non-attentiveness long enough to get a conversion. It’s always been a crapshoot, but Apple’s ATT just stacked the odds very much against the advertiser.

It’s about time. Facebook and online ad platforms have had little to no real pushback against the creeping invasion of our privacy for years now. We have no idea how extensive and invasive this tracking has been. The only inkling we get is when the targeting nails the ad delivery so well that we swear our phone is listening to our conversations. And, in a way, it is. We are constantly under surveillance.

In addition to Facebook’s histrionic bitching about Apple’s ATT, others have started to find workarounds, as reported by 9to5Mac. ATT specifically targets the IDFA (Identifier for Advertisers), which enables cross-app tracking via a unique identifier. Chinese ad networks backed by the state-endorsed Chinese Advertising Association were encouraging the adoption of CAID identifiers as an alternative to IDFA. Apple has gone on record as saying ATT will be globally implemented and enforced. While CAID can’t be policed at the OS level, Apple has said that apps that track users without their consent by any means, including CAID, could be removed from the App Store.

We’ll see. Apple doesn’t have a very consistent track record when it comes to holding the line against Chinese app providers. WeChat, for one, has been granted exceptions to Apple’s developer restrictions that have not been extended to anyone else.

For its part, Google has taken a tentative step toward following Apple’s lead with its new privacy initiative on Android devices, as reported by SlashGear. Google Play has asked developers to share what data they collect and how they use that data. At this point, it won’t require opt-in prompts as Apple does.

All of this marks a beginning. If it continues, it will throw a Kong-sized monkey wrench into the works of online advertising. The entire ecosystem is built on ad-supported models that depend on collecting and storing user data. Apple has begun nibbling away at that foundation.

The toppling has begun.

Second Thoughts about the Social Dilemma

Watched “The Social Dilemma” yet? I did, a few months ago. The Netflix documentary sets off all kinds of alarms about social media and how it’s twisting the very fabric of our society. It’s a mix of standard documentary fodder — a lot of tell-all interviews with industry insiders and activists — with an (ill-advised, in my opinion) dramatization of the effects of social media addiction in one particular family.

The one most affected is a male teenager who is suddenly drawn, zombie-like, by his social media feed into an ultra-polarized political activist group. Behind the scenes, operating in a sort of evil-empire control room setting, there are literally puppet masters pulling his strings.

It’s scary as hell. But should we be scared? Or — at least — should we be that scared?

Many of us are sounding alarms about social media and how it nets out to be a bad thing. I’m one of the worst. I am very concerned about the impact of social media, and I’ve said so many, many times in this column. But I also admit that this is a social experiment playing out in real time, so it’s hard to predict what the outcome will be. We should keep our minds open to new evidence.

I’ve also said that younger generations seem to be handling this in stride. At least, they seem to be handling it better than those in my generation. They’re quicker to adapt and to use new technologies natively to function in their environments, rather than fumble as we do, searching for some corollary to the world we grew up in.

I’ve certainly had pushback on this observation. Maybe I’m wrong. Or maybe, like so many seemingly disastrous new technological trends before it, social media may turn out to be neither bad nor good. It may just be different.

That certainly seems to be the case if you read a new study from the Institute for Family Studies and the Wheatley Institution at Brigham Young University.

One of the lead authors of the study, Jean Twenge, previously rang the alarm bells about how technology was short-circuiting the mental wiring of our youth. In a 2017 article in The Atlantic titled “Have Smartphones Destroyed a Generation?” she made this claim:

“It’s not an exaggeration to describe iGen as being on the brink of the worst mental-health crisis in decades. Much of this deterioration can be traced to their phones.”

The article describes a generation of zombies mentally hardwired to social media through their addiction to their iPhone. One of the more startling claims was this:

“Teens who spend three hours a day or more on electronic devices are 35 percent more likely to have a risk factor for suicide, such as making a suicide plan.”

Again, scary as hell, right? This sounds frighteningly similar to the scenarios laid out in “The Social Dilemma.”

But what if you take this same group and this same author, fast-forward three years to the middle of the worst pandemic in our lifetimes, and check in with over 1,500 teens to see how they’re doing at a time when they have every right to be depressed? Not only are they locked inside, they’re also processing societal upheavals and existential threats like systemic racial inequality, alt-right political populism and climate change. If 2017-2018 was scary for them, 2020 is a dumpster fire.

Surprisingly, those same teens appear to be doing better than they were two years ago. The study had four measures of ill-being: loneliness, life dissatisfaction, unhappiness and depression. The results were counterintuitive, to say the least. The number of teens indicating they were depressed actually dropped substantially, from 27% in 2018 to 17% among those quarantined during the school year in 2020. Fewer said they were lonely as well.

The study indicated that the reasons for this could be because teens were getting more sleep and were spending more time with family.

But what about smartphones and social media? Wouldn’t a quarantined teen (a Quaran-teen?) be spending even more time on his or her phone and social media? 

Well, yes – and no. The study found screen time didn’t really go up, but the way that time was spent did shift. Surprisingly, time spent on social media went down, but time spent on video chats with friends or watching online streaming entertainment went up.

As I shared in my column a few weeks ago, this again indicates that it’s not how much time we spend on social media that determines our mental state. It’s how we spend that time. If we spend it looking for connection, rather than obsessing over social status, it can be a good thing. 

Another study, from the University of Oxford, examined data on more than 300,000 adolescents and found that increased screen time has no more impact on teenagers’ mental health than eating more potatoes. Or wearing glasses.

If you’re really worried about your teen’s mental health, make sure they have breakfast. Or get enough sleep. Or just spend more time with them. All those things are going to have a lot more impact than the time they spend on their phone.

To be clear, this is not me becoming a fan of Facebook or social media in general. There are still many things to be concerned about. But let’s also realize that technology — any technology — is a tool. It is not inherently good or evil. Those qualities can be found in how we choose to use technology.

Tired of Reality? Take 2 Full-Strength Schitt’s Creeks

“Schitt’s Creek” stormed the Emmys by winning awards in every comedy series category — a new record. It was co-creators Dan and Eugene Levy’s gift to the world: a warm bowl of hot cultural soup, brimming with life-affirming values, acceptance and big-hearted Canadian corniness.

It was the perfect entertainment solution to an imperfect time. It was good for what ails us.

It’s not the first time we’ve turned to entertainment for comfort. In fact, if there is anything as predictable as death and taxes, it’s that during times of trial, we need to be entertained.

There is a direct correlation between feel-good fantasy and feeling-shitty reality. The worse reality gets, the more we want to escape it.

But the ways we choose to be entertained have changed. And maybe — just maybe — the media channels we’re looking to for our entertainment are adding to the problem. 

The Immersiveness of Media

A medium’s ability to distract us from reality depends on how much it removes us from that reality.

Our media channels have historically been quite separate from the real world. Each channel offered its own opportunity to escape. But as the technology we rely on to be entertained has become more capable of doing multiple things, that escape from the real world has become more difficult.

Books, for example, require a cognitive commitment unlike any other form of entertainment. When we read a book, we — in effect — enter into a co-work partnership with the author. Our brains have to pick up where theirs left off, and we together build a fictional world to which we can escape. 

As the science of interpreting our brain’s behavior has advanced, we have discovered that our brains actually change while we read.

Maryanne Wolf explains in her book, “Proust and the Squid: The Story and Science of the Reading Brain”: “Human beings invented reading only a few thousand years ago. And with this invention, we rearranged the very organization of our brain, which in turn expanded the ways we were able to think, which altered the intellectual evolution of our species. . . . Our ancestors’ invention could come about only because of the human brain’s extraordinary ability to make new connections among its existing structures, a process made possible by the brain’s ability to be reshaped by experience.”

Even movies, which dramatically lowered the bar for cognitive commitment by supplying content specifically designed for two of our senses, still remove us from reality by immersing us in a dedicated, single-purpose environment. The distraction of the real world is locked outside the theater doors.

But today’s entertainment media platforms not only live in the real world, they are the very same platforms we use to function in said world. They are our laptops, our tablets, our phones and our connected TVs.

It’s hard to ignore that world when the flotsam and jetsam of reality is constantly bumping into us. And that brings us to the problem of the multitasking myth.

Multitasking Anxiety

The problem is not so much that we can’t escape from the real world for a brief respite in a fictional one. It’s that we don’t want to.

Even if we’re watching our entertainment in our home theater room on a big screen, the odds are very good that we have a small screen in our hands at the same time. We mistakenly believe we can successfully multitask, and our mental health is paying the price for that mistake.

Research has found that trying to multitask brings on a toxic mix of social anxiety, depression, a lessening of our ability to focus attention, and a sociopsychological impairment that impacts our ability to have rewarding relationships. 

When we use the same technology to be entertained that we use to stay on top of our social networks, we fall prey to the fear of missing out.

It’s called Internet Communication Disorder, and it’s an addictive need to continually scroll through Facebook, Twitter, WhatsApp and our other social media platforms. It’s these same platforms that are feeding us a constant stream of the very things we’re looking to escape from. 

It may be that laughter is the best medicine, but the efficacy of that medicine is wholly dependent on where we get our laughs.

The ability of entertainment to smooth the jagged edges of reality depends on our being able to shift our minds off the track that leads to chronic anxiety and depression — and successfully escape into a fictional kinder, gentler, funnier world.

For entertainment to be a beneficial distraction, we first have to mentally disengage from the real world, and then fully engage in the fictional one.

That doesn’t work nearly as well when our entertainment delivery channel also happens to be the same addictive channel that is constantly tempting us to tiptoe through the anxiety-strewn landscape that is our social media feed. 

In other words, before going to “Schitt’s Creek,” unpack your other shit and leave it behind. I guarantee it will be waiting for you when you get back.

Looking Back at a Decade That’s 99.44% Done

Remember 2010? For me that was a pretty important year. It was the year I sold my digital marketing business. While I would continue to actively work in the industry for another 3 years, for me things were never the same as they were in 2010. And – looking back – I realize that’s pretty well true for most of us. We were more innocent and more hopeful. We still believed that the Internet would be the solution, not the problem.

In 2010, two big trends were jointly reshaping our notions of being connected. Early in the year, former Morgan Stanley analyst Mary Meeker laid them out for us in her “State of the Internet” report. Back then, just three years after the introduction of the iPhone, internet usage from mobile devices hadn’t even reached double digits as a percentage of overall traffic. Meeker knew this was going to change, and quickly. She saw mobile adoption on track to be the steepest tech adoption curve in history. She was right. Today, over 60% of internet usage comes from a mobile device.

The other defining trend was social media. Even then, Facebook had about 600 million users, or just under 10% of the world’s population. When you had a platform that big – connecting that many people – you just knew the consequences would be significant. There were some pretty rosy predictions for the impact of social media.

Of course, it’s the stuff you can’t predict that will bite you. Like I said, we were a little naïve.

One trend that Meeker didn’t predict was the nasty issue of data ownership. We were just starting to become aware of the looming spectre of privacy.

The biggest Internet-related story of 2010 was WikiLeaks. Over the course of the year, Julian Assange’s site released a massive trove of classified material sent to it by Chelsea Manning, a US soldier stationed in Iraq, including some 250,000 sensitive diplomatic cables. According to the governments of the world, this was an illegal release, tantamount to an act of espionage. According to public opinion, this was shit finally rolling uphill. We revelled in the revelations. WikiLeaks and Julian Assange were taking it to the man.

That budding sense of optimism continued throughout the year. By December of 2010, the Arab Spring had begun. This was our virtual vindication – the awesome power of social media was a blinding light shining into the darkest nooks and crannies of despotism and tyranny. The digital future was clear and bright. We would triumph thanks to technology. The Internet had helped put Obama in the White House. It had toppled corrupt regimes.

A decade later, we’re shell shocked to discover that the Internet is the source of a whole new kind of corruption.

The rigidly digitized ideals of Zuckerberg, Page, Brin et al. seemed to be a call to arms: transparency, the elimination of bureaucracy, a free, open, friction-free digital market, the sharing economy, a vast social network that would connect humanity in ways never imagined, connected devices in our pockets – in 2010, all things seemed possible. And we were naïve enough to believe that those things would all be good and moral and in our best interests.

But soon, we were smelling the stench that came from Silicon Valley. Those ideals were subverted into an outright attack on our privacy. Democratic elections were sold to the highest bidder. Ideals evaporated under the pressure of profit margins and expanding power. Those impossibly bright, impossibly young billionaire CEOs of ten years ago are now testifying in front of Congress. The corporate culture of many tech companies reeks like a frat house on Sunday morning.

Is there a lesson to be learned? I hope so. I think it’s this. Technology won’t do the heavy lifting for us. It is a tool that is subject to our own frailty. It amplifies what it is to be human. It won’t eliminate greed or corruption unless we continually steer it in that direction. 

And I use the term “we” deliberately. We have to hold tech companies to a higher standard. We have to be more discerning of what we agree to. We have to start demanding better treatment and not be willing to trade our rights away with the click of an accept button. 

A lot of what could have been slipped through our fingers in the last 10 years.  It shouldn’t have happened. Not on our watch.

The Social Acceptance of Siri

There was a time, not too long ago, when I did a fairly exhaustive series of posts on the acceptance of technology. The psychology of how and when we adopted disruptive tech fascinated me. So Laurie Sullivan’s article on how more people are talking to their phone caught my eye.

If you look at tech acceptance, there’s a bucketful of factors you have to consider. Utility, emotions, goals, ease of use, cost and our own attitudes all play a part. But one of the biggest factors is social acceptance. We don’t want to look like a moron in front of friends and family. It was this, more than anything else, that killed Google Glass the first time around. Call it the Glasshole factor.

So, back to Laurie’s article and the survey she referred to in it. Which shifts in the social universe are making it more acceptable to shoot the shit with Siri?

The survey has been done for the last three years by Stone Temple, so we’re starting to see some emerging trends. And here are the things that caught my attention. First of all, the biggest shifts from 2017 to 2019, in terms of percentage, are: at the gym, in public restrooms and in the theatre. Usage at home has actually slipped a little (one might assume that these conversations have migrated to Alexa and other home-based digital assistants). If we’re looking at acceptance of technology and the factors driving it, one thing jumps out from the survey. All the shifts have to do with how comfortable we feel talking to our phones in publicly visible situations. There is obviously a moving threshold of acceptability here.

As I mentioned, the three social “safe zones” – those instances where we wouldn’t be judged for speaking to our phones – have shown little movement in the last three years. These are “Home Alone”, “Home with Friends” (public but presumably safe from social judgment), and “Office Alone.” As much as possible in survey-based research, this isolates the social factor from all the other variables rather nicely and shows its importance in our collective jumping on the voice technology bandwagon.

This highlights an important lesson in the acceptance of new technologies: you have to budget in the time required for society to absorb and accept new technologies. The more that the technology will be utilized in visibly social situations, the more time you need to budget. Otherwise, the tech will only be adopted by a tiny group of socially obtuse techno-weenies and will be stranded on the wrong side of the bleeding edge. As technology becomes more personal and tags along with us in more situations, the designers and marketers of that tech will have to understand this.

This places technology acceptance in a whole new ballpark. As the tech we use increasingly becomes part of our own social-facing brand, our carefully constructed personas and the social norms we have in place become key factors that determine the pace of acceptance.

This becomes a delicate balancing act. How do you control social acceptance? As an example, let’s take out one of my favorite marketing punching bags – influencer marketing – and see if we could accelerate acceptance by seeding it with a few key social connectors. That same strategy failed miserably when it came to promoting Google Glass to the public. And there’s a perfectly irrational reason for it. It had nothing to do with rational stuff like use cases, aesthetics or technology. It had to do with Google picking the wrong influencers – the so-called Google Glass Explorers. As a group, they tended to be tech-obsessed, socially awkward and painfully uncool. They were the people you avoid getting stuck in the corner with at a party because you just aren’t up for a 90-minute conversation on the importance of regular hard drive hygiene. No one wants to be them.

If this survey tells us anything, it tells us that – sometimes – you just have to hope and wait. Ever since Everett Rogers first sketched it out in 1962, we’ve known that innovation diffusion happens on a bell curve. Some innovations get stranded on the upside of the slope and wither away to nothingness while some make it over the hump and become part of our everyday lives. Three years ago, there were certainly people talking to their phones on buses, in gyms and at movie theatres. They didn’t care if they were judged for it. But most of us did care. Today, apparently, the social stigma has disappeared for many of us. We were just waiting for the right time – and the right company.

Less Tech = Fewer Regrets

In a tech-ubiquitous world, I fear our reality is becoming more “tech” and less “world.” But how do you fight that? Well, if you’re Kendall Marianacci – a recent college grad – you ditch your phone and move to Nepal. In that process, she learned that “paying attention to the life in front of you opens a new world.”

In a recent post, she reflected on lessons learned by truly getting off the grid:

“Not having any distractions of a phone and being immersed in this different world, I had to pay more attention to my surroundings. I took walks every day just to explore. I went out of my way to meet new people and ask them questions about their lives. When this became the norm, I realized I was living for one of the first times of my life. I was not in my own head distracted by where I was going and what I needed to do. I was just being. I was present and welcoming to the moment. I was compassionate and throwing myself into life with whoever was around me.”

It’s sad and a little shocking that we have to go to such extremes to realize how much of our world can be obscured by a little 5-inch screen. Where did tech that was supposed to make our lives better go off the rails? And was the derailment intentional?

“Absolutely,” says Jesse Weaver, a product designer. In a post on Medium.com, he lays out – in alarming terms – our tech dependency and the trade-off we’re agreeing to:

“The digital world, as we’ve designed it, is draining us. The products and services we use are like needy friends: desperate and demanding. Yet we can’t step away. We’re in a codependent relationship. Our products never seem to have enough, and we’re always willing to give a little more. They need our data, files, photos, posts, friends, cars, and houses. They need every second of our attention.

We’re willing to give these things to our digital products because the products themselves are so useful. Product designers are experts at delivering utility.”

But are they? Yes, there is utility here, but it’s wrapped in a thick layer of addiction. What product designers are really good at is fostering addiction by dangling a carrot of utility. And, as Weaver points out, we often mistake utility for empowerment:

“Empowerment means becoming more confident, especially in controlling our own lives and asserting our rights. That is not technology’s current paradigm. Quite often, our interactions with these useful products leave us feeling depressed, diminished, and frustrated.”

That’s not just Weaver’s opinion. A new study from HumaneTech.com backs it up with empirical evidence. They partnered with Moment, a screen time tracking app, “to ask how much screen time in apps left people feeling happy, and how much time left them in regret.”

According to 200,000 iPhone users, here are the apps that make people happiest:

  1. Calm
  2. Google Calendar
  3. Headspace
  4. Insight Timer
  5. The Weather
  6. MyFitnessPal
  7. Audible
  8. Waze
  9. Amazon Music
  10. Podcasts

That’s three meditative apps, three utilitarian apps, one fitness app, one entertainment app and two apps that help you broaden your intellectual horizons. If you are talking human empowerment – according to Weaver’s definition – you could do a lot worse than this roundup.

But here were the apps that left their users with a feeling of regret:

  1. Grindr
  2. Candy Crush Saga
  3. Facebook
  4. WeChat
  5. Candy Crush
  6. Reddit
  7. Tweetbot
  8. Weibo
  9. Tinder
  10. Subway Surf

What’s even more interesting is the average time spent in these apps. For the first group, the average daily usage was 9 minutes. For the regret group, the average daily time spent was 57 minutes! We feel better about apps that do their job, add something to our lives and then let us get on with living that life. What we hate are time sucks that may offer a kernel of functionality wrapped in an interface that ensnares us like a digital spider web.

This study comes from the Center for Humane Technology, headed by ex-Googler Tristan Harris. The goal of the Center is to encourage designers and developers to create apps that move “away from technology that extracts attention and erodes society, towards technology that protects our minds and replenishes society.”

That all sounds great, but what does it really mean for you and me and everybody else who hasn’t moved to Nepal? It all depends on what revenue model is driving development of these apps and platforms. If it is anything that depends on advertising – in any form – don’t count on any nobly intentioned shifts in design direction anytime soon. More likely, it will mean some half-hearted placations like Apple’s new Screen Time warning that pops up on your phone every Sunday, giving you the illusion of control over your behaviour.

Why an illusion? Because things like Apple’s Screen Time are great for our pre-frontal cortex, the intent-driven part of our rational brain that puts our best intentions forward. They’re not so good for our Lizard brain, which subconsciously drives us to play Candy Crush and swipe our way through Tinder. And when it comes to addiction, the Lizard brain has been on a winning streak for most of the history of mankind. I don’t like our odds.

The developers’ escape hatch is always the same – they’re giving us control. It’s our own choice, and freedom of choice is always a good thing. But there is an unstated deception here. It’s the same lie that Mark Zuckerberg told last Wednesday when he laid out the privacy-focused future of Facebook. He’s putting us in control. But he’s not. What he’s doing is making us feel better about spending more time on Facebook. And that’s exactly the problem. The less we worry about the time we spend on Facebook, the less we will think about it at all. The less we think about it, the more time we will spend. And the more time we spend, the more we will regret it afterwards.

If that doesn’t seem like an addictive cycle, I’m not sure what does.

 

Is Live the New Live?

HQ Trivia – the popular mobile game app – seems to be going backwards. It’s an anachronism – going against all the things that technology promises. It tethers us to a schedule. It’s essentially a live game show broadcast (when everything works as it should, which is far from a sure bet) on a tiny screen. It also gets about a million players each and every time it plays, which is usually only twice a day.

My question is: Why the hell is it so popular?

Maybe it’s the Trivia Itself…

(Trivial Interlude – the word trivia comes from the Latin for the place where three roads come together. In medieval Latin, it was used to refer to the three foundations of a basic education – grammar, logic and rhetoric. The modern usage came from a book by Logan Pearsall Smith in 1902 – “Trivialities, bits of information of little consequence”. The singular of trivia is trivium.)

As a spermologist (that’s a person who loves trivia – seriously – apparently the “sperm” has something to do with “seeds of knowledge”) I love a trivia contest. It’s one thing I’m pretty good at – knowing a little about a lot of things that have absolutely no importance. And if you too fancy yourself a spermologist (which, by the way, is how you should introduce yourself at social gatherings) you know that we always want to prove we’re the smartest people in the room. In HQ Trivia’s case, that room usually holds about a million people. That’s the current number of participants in the average broadcast. So the odds of being the smartest person in the room are – well – about one in a million. And a spermologist just can’t resist those odds.

But I don’t think HQ’s popularity is based on some alpha-spermology complex. A simple list of rankings would take care of that. No, there must be more to it. Let’s dig deeper.

Maybe it’s the Simoleons…

(Trivial Interlude: Simoleons is sometimes used as slang for American dollars, as Jimmy Stewart did in “It’s a Wonderful Life.” The word could be a portmanteau of “simon” and “Napoleon” – which was a 20 franc coin issued in France. The term seems to have originated in New Orleans, where French currency was in common use at the turn of the last century.)

HQ Trivia does offer up cash for smarts. Each contest has a prize, which is usually $5000. But even if you make it through all 12 questions and win, by the time the prize is divvied up amongst the survivors, you’ll probably walk away with barely enough money to buy a beer. Maybe two. So I don’t think it’s the prize money that accounts for the popularity of HQ Trivia.

Maybe It’s Because it’s Live…

(Trivial Interlude – As a Canadian, I hold trivia near and dear to my heart. America’s favorite trivia quiz master, Alex Trebek, is Canadian, born in Sudbury, Ontario. Alex is actually his middle name. George is his first name. He is 77 years old. And Trivial Pursuit, the game that made trivia a household name in the ’80s, was invented by two Canadians, Chris Haney and Scott Abbott. It was created after the pair wanted to play Scrabble but found their game was missing some tiles. So they decided to create their own game. In 1984, more than 20 million copies of the game were sold.)

There is just something about reality in real time. Somehow, subconsciously, it makes us feel connected to something that is bigger than ourselves. And we like that. In fact, one of the other etymological roots of the word “trivia” itself is a “public place.”

The Hotchkiss Movie Choir Effect

If you want to choke up a Hotchkiss (or at least the ones I’m personally familiar with) just show us a movie where people spontaneously start singing together. I don’t care if it’s Pitch Perfect Twelve and a Half – we’ll still mist up. I never understood why, but I think it has to do with the same underlying appeal of connection. Dan Levitin, author of “This is Your Brain on Music,” explained what happens in our brain when we sing as part of a group in a recent interview on NPR:

“We’ve got to pay attention to what someone else is doing, coordinate our actions with theirs, and it really does pull us out of ourselves. And all of that activates a part of the frontal cortex that’s responsible for how you see yourself in the world, and whether you see yourself as part of a group or alone. And this is a powerful effect.”

The same thing goes for flash mobs. I’m thinking there has to be some type of psychological common denominator that HQ Trivia has somehow tapped into. It’s like a trivia-based flash mob. Even when things go wrong, which they do quite frequently, we feel that we’re going through it together. Host Scott Rogowsky embraces the glitchiness of the platform and commiserates with us. Misery – even when it’s trivial – loves company.

Whatever the reason for its popularity, HQ Trivia seems to be moving forward by taking us back to a time when we all managed to play nicely together.

 

Attention: Divided

I’d like you to give me your undivided attention. I’d like you to – but you can’t. First, I’m probably not interesting enough. Secondly, you no longer live in a world where that’s possible. And third, even if you could, I’m not sure I could handle it. I’m out of practice.

The fact is, our attention is almost never undivided anymore. Let’s take talking, for example. You know: old-fashioned, face-to-face, sharing-the-same-physical-space communication. It’s the one channel that most demands undivided attention. But when is the last time you had a conversation where you were giving it 100 percent of your attention? I actually had one this past week, and I have to tell you, it unnerved me. I was meeting with a museum curator and she immediately locked eyes with me and gave me the full breadth of her attention span. I faltered. I couldn’t hold her gaze. As I talked, I scanned the room we were in. It’s probably been years since someone did that to me. And nary a smartphone was in sight.

If this is true when we’re physically present, imagine the challenge in other channels. Take television, for instance. We don’t watch TV like we used to. When I was growing up, I would be verging on catatonia as I watched the sparks fly between Batman and Catwoman (the Julie Newmar version – with all due respect to Eartha Kitt and Lee Meriwether.) My dad used to call it the “idiot box.” At the time, I thought it was a comment on the quality of programming, but I now realize he was referring to my mental state. You could have dropped a live badger in my lap and not an eye would have been batted.

But that’s definitely not how we watch TV now. A recent study indicates that 177 million Americans have at least one other screen going – usually a smartphone – while they watch TV. According to Nielsen, there are only 120 million TV households. That means that 1.48 adults per household are definitely dividing their attention amongst at least two devices while watching Game of Thrones. My daughters and wife are squarely in that camp. Ironically, I now get frustrated because they don’t watch TV the same way I do – catatonically.

Now, I’m sure watching TV does not represent the pinnacle of focused mindfulness. But this could be a canary in a coalmine. We simply don’t allocate undivided attention to anything anymore. We think we’re multi-tasking, but that’s a myth. We don’t multi-task – we mentally fidget. We have the average attention span of a gnat.

So, what is the price we’re paying for living in this attention deficit world? Well, first, there’s a price to be paid when we do decide to communicate. I’ve already stated how unnerving it was for me when I did have someone’s laser-focused attention. But the opposite is also true. It’s tough to communicate with someone who is obviously paying little attention to you. Try presenting to a group that is more interested in chatting to each other. Research studies show that our ability to communicate effectively erodes quickly when we’re not getting feedback that the person or people we’re talking to are actually paying attention to us. Effective communication requires an adequate allocation of attention on both ends; otherwise it spins into a downward spiral.

But it’s not just communication that suffers. It’s our ability to focus on anything. It’s just too damned tempting to pick up our smartphone and check it. We’re paying a price for our mythical multitasking – Boise State professor Nancy Napier suggests a simple test to prove this. Draw two lines on a piece of paper. While having someone time you, write “I am a great multi-tasker” on one, then write down the numbers from 1 to 20 on the other. Next, repeat this same exercise, but this time, alternate between the two: write “I” on the first line, then “1” on the second, then go back and write “a” on the first, “2” on the second and so on. What’s your time? It will probably be double what it was the first time.

Every time we try to mentally juggle, we’re more likely to drop a ball. Attention is important. But we keep allocating thinner and thinner slices of it. And a big part of the reason is the smartphone that is probably within arm’s reach of you right now. Why? Because of something called intermittent variable rewards. Slot machines use it. And that’s probably why slot machines make more money in the US than baseball, movies and theme parks combined. Tristan Harris, who is taking technology to task for hijacking our brains, explains the concept: “If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”
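
To make that mechanic concrete, here’s a purely illustrative sketch of a variable-ratio reward schedule in code. The checkFeed() function and the 30% payoff rate are my own stand-ins, not anything from Harris or any real platform.

    import Foundation

    // A "pull of the lever": sometimes there's a payoff, usually there isn't.
    // The 30% hit rate is an arbitrary assumption for illustration.
    func checkFeed() -> String? {
        let somethingNew = Double.random(in: 0...1) < 0.3
        return somethingNew ? "a new like, match or message" : nil
    }

    // Because the payoff is unpredictable, it's the checking itself
    // that becomes compulsive.
    for pull in 1...10 {
        if let reward = checkFeed() {
            print("Check #\(pull): \(reward)!")
        } else {
            print("Check #\(pull): nothing new")
        }
    }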

Your smartphone is no different. In this case, the reward is a new email, Facebook post, Instagram photo or Tinder match. Intermittent variable rewards – together with the fear of missing out – make your smartphone as addictive as a slot machine.

I’m sorry, but I’m no match for all of that.

To Buy or Not to Buy: The Touchy Subject of Mobile ECommerce

A recent report from Akamai indicates that users have little patience when it comes to making purchases on a mobile device. Here are just a few of the stats:

  • While almost half of all consumers browse via their phones, only 1 in 5 complete transactions on mobile
  • Optimal load times for peak conversions ranged from 1.8 to 2.7 seconds across device types
  • Just a 100-millisecond delay in load time hurt conversion rates by up to 7%
  • Bounce rates were highest among mobile shoppers and lowest among those using tablets

But there may be more behind this than just slow load times. We also have to consider what modes we’re in when we’re interacting with our mobile device.

In 2010, Microsoft did a fascinating research project that looked at how user behaviors varied from desktop to tablet to smartphone. The research was headed by Jacquelyn Krones, who was a Search Product Manager at the time. Search was the primary activity examined, but there was a larger behavioral context that was explored. While the study is 7 years old, I think the core findings are still relevant. The researchers found that we tend to have three large buckets of behaviors: missions, explorations and excavations. Missions were focused tasks that usually involved looking for a specific piece of information – e.g., looking for an address or phone number. Explorations were more open-ended and less focused on a given destination – e.g., seeing if there was anything you wanted to do this Friday night. Excavations typically involved multiple tasks within an overarching master task – e.g., researching an article. In an interview with me, Krones outlined their findings:

“There’s clearly a different profile of these activities on the different platforms. On desktops and laptops, people do all three of the activities – they conduct missions and excavations and explorations.

“On their phones we expected to see lots of missions – usually when you use your mobile phone and you’re conducting a search, whatever you’re doing in terms of searching is less important than what’s going on with you in the real world – you’re trying to get somewhere, you’re having a discussion with somebody and you want to look something up quick or you’re trying to make a decision about where to go for dinner.

“But we were surprised to find that people are using their mobile phones for exploration. But once we saw the context, it made sense – people have a low tolerance for boredom. Their phone is actually pretty entertaining, much more entertaining than just looking at the head in front of you while you’re waiting in line. You can go check a sports score, read a story, or look at some viral video and have a more engaged experience.

“On tablets, we found that people are pretty much only using them for exploration today. I had expected to see more missions on tablets, and I think that that will happen in the future, but today people perceive their mobile phone as always with them, very personal, always on, and incredibly efficient for getting information when they’re in mission mode.”

Another study, coming out of the University of British Columbia’s Okanagan campus, also saw a significant difference in behavioral modality when it came to interacting with touch screens. Assistant Professor Ying Zhu was the principal author:

“The playful and fun nature of the touchscreen enhances consumers’ favour of hedonic products; while the logical and functional nature of a desktop endorses the consumers’ preference for utilitarian products,” explains Zhu.

“Zhu’s study also found that participants using touchscreen technology scored significantly higher on experiential thinking than those using desktop computers. However, those on desktops scored significantly higher on rational thinking.”

I think what we have here is an example of thinking, fast and slow. I suspect we’re compartmentalizing our activities, subconsciously setting some aside for completion on the desktop. I would suspect utilitarian-type purchasing would fall into this category. I know that’s certainly true in my case. As Dr. Zhu noted, we have a very right-brain relationship with touchscreens, while desktops tend to bring out our left brain. I have always been amazed at how our brains subconsciously prime us based on anticipating an operating environment. Chances are, we don’t even realize how much our behaviors change when we move from a smartphone to a tablet to a desktop. But I’d be willing to place a significant wager that it’s this subconscious techno-priming that’s causing some of these behavioural divides between devices.

Slow load times are never a good thing, on any device, but while they certainly don’t help with conversions, they may not be the only culprit sitting between a user and a purchase. The device itself could also be to blame.