Is Live the New Live?

HQ Trivia – the popular mobile game app – seems to be going backwards. It’s an anachronism, going against all the things that technology promises. It tethers us to a schedule. It’s essentially a live game show broadcast (when everything works as it should, which is far from a sure bet) on a tiny screen. It also draws about a million players each and every time it plays, which is usually only twice a day.

My question is: Why the hell is it so popular?

Maybe it’s the Trivia Itself…

(Trivial Interlude: The word trivia comes from the Latin for the place where three roads come together. Originally in Latin it was used to refer to the three foundations of basic education – grammar, logic and rhetoric. The modern usage came from a 1902 book by Logan Pearsall Smith: “Trivialities, bits of information of little consequence.” The singular of trivia is trivium.)

As a spermologist (that’s a person who loves trivia – seriously – apparently the “sperm” has something to do with “seeds of knowledge”) I love a trivia contest. It’s one thing I’m pretty good at – knowing a little about a lot of things that have absolutely no importance. And if you too fancy yourself a spermologist (which, by the way, is how you should introduce yourself at social gatherings) you know that we always want to prove we’re the smartest people in the room. In HQ Trivia’s case, that room usually holds about a million people. That’s the current number of participants in the average broadcast. So the odds of being the smartest person in the room are – well – about one in a million. And a spermologist just can’t resist those odds.

But I don’t think HQ’s popularity is based on some alpha-spermology complex. A simple list of rankings would take care of that. No, there must be more to it. Let’s dig deeper.

Maybe it’s the Simoleons…

(Trivial Interlude: Simoleons is sometimes used as slang for American dollars, as Jimmy Stewart did in “It’s a Wonderful Life.” The word could be a portmanteau of “simon” and “Napoleon” – which was a 20 franc coin issued in France. The term seems to have originated in New Orleans, where French currency was in common use at the turn of the last century.)

HQ Trivia does offer up cash for smarts. Each contest has a prize, which is usually $5000. But even if you make it through all 12 questions and win, by the time the prize is divvied up amongst the survivors, you’ll probably walk away with barely enough money to buy a beer. Maybe two. So I don’t think it’s the prize money that accounts for the popularity of HQ Trivia.

Maybe It’s Because It’s Live…

(Trivial Interlude: As a Canadian, I hold trivia near and dear to my heart. America’s favorite trivia quiz master, Alex Trebek, is Canadian, born in Sudbury, Ontario. Alex is actually his middle name; George is his first. He is 77 years old. And Trivial Pursuit, the game that made trivia a household name in the ’80s, was invented by two Canadians, Chris Haney and Scott Abbott, after the pair sat down to play Scrabble, found their set was missing some tiles, and decided to invent a game of their own. In 1984, more than 20 million copies of the game were sold.)

There is just something about reality in real time. Somehow, subconsciously, it makes us feel connected to something that is bigger than ourselves. And we like that. In fact, one of the other etymological roots of the word “trivia” itself is a “public place.”

The Hotchkiss Movie Choir Effect

If you want to choke up a Hotchkiss (or at least the ones I’m personally familiar with) just show us a movie where people spontaneously start singing together. I don’t care if it’s Pitch Perfect Twelve and a Half – we’ll still mist up. I never understood why, but I think it has to do with the same underlying appeal of connection. Dan Levitin, author of “This is Your Brain on Music,” explained what happens in our brain when we sing as part of a group in a recent interview on NPR:

“We’ve got to pay attention to what someone else is doing, coordinate our actions with theirs, and it really does pull us out of ourselves. And all of that activates a part of the frontal cortex that’s responsible for how you see yourself in the world, and whether you see yourself as part of a group or alone. And this is a powerful effect.”

The same thing goes for flash mobs. I’m thinking there has to be some type of psychological common denominator that HQ Trivia has somehow tapped into. It’s like a trivia-based flash mob. Even when things go wrong, which they do quite frequently, we feel that we’re going through it together. Host Scott Rogowsky embraces the glitchiness of the platform and commiserates with us. Misery – even when it’s trivial – loves company.

Whatever the reason for its popularity, HQ Trivia seems to be moving forward by taking us back to a time when we all managed to play nicely together.

 

Advertising Meets its Slippery Slope

We’ve now reached the crux of the matter when it comes to the ad biz.

For a couple of centuries now, we’ve been refining the process of advertising. The goal has always been to get people to buy stuff. But a perfect storm of forces is now converging that requires some deep navel-gazing on the part of us insiders.

It used to be that to get people to buy, all we had to do was inform. Pent-up consumer demand created by expanding markets and new product introductions would take care of the rest. We just had to connect the better mousetraps with the world, which would then duly beat a path to the respective door. Advertising equaled awareness.

But sometime in the waning days of the consumer orgy that followed World War Two, we changed our mandate. Not content with simply informing, we decided to become influencers. We slipped under the surface of the brain, moving from providing information for rational consideration to priming subconscious needs. We started messing with the wiring of our market’s emotional motivations.  We became persuaders.

Persuasion is like a mental iceberg – 90% of the bulk lies below the surface. Rationalization is typically the hastily added layer of ad hoc logic that happens after the decision is already made. This is true to varying degrees for almost any consumer category you can think of, including – unfortunately – our political choices.

This is why, a few columns ago – I said Facebook’s current model is unsustainable. It is based on advertising, and I think advertising may have become unsustainable. The truth is, advertisers have gotten so good at persuading us to do things that we are beginning to revolt. It’s getting just too creepy.

To understand how we got here, let’s break down persuasion. It requires the persuader to shift the beliefs of the persuadee. The bigger the shift required, the tougher the job of persuasion.  We tend to build irrational (aka emotional) bulwarks around our beliefs to preserve them. For this reason, it’s tremendously beneficial to the persuader to understand the belief structure of their target. If they can do this, they can focus on those whose belief structure is most conducive to the shift required.

When it comes to advertisers, the needle on our creative powers of persuasion hasn’t really moved that much in the last half century. There were very persuasive ads created in the 1960’s and there are still great ads being created. The disruption that has moved our industry to the brink of the slippery slope has all happened on the targeting end.

The world we used to live in was a bunch of walled and mostly unconnected physical gardens. Within each, we would have relevant beliefs but they would remain essentially private. You could probably predict with reasonable accuracy the religious beliefs of the members of a local church. But that wouldn’t help you if you were wondering whether the congregation leaned towards Ford or Chevy.  Our beliefs lived inside us, typically unspoken and unmonitored.

That all changed when we created digital mirrors of ourselves through Facebook, Twitter, Google and all the other usual suspects. John Battelle, author of The Search,  once called Google the Database of Intentions. It is certainly that. But our intent also provides an insight into our beliefs. And when it comes to Facebook, we literally map out our entire previously private belief structure for the world to see. That is why Big Data is so potentially invasive. We are opening ourselves up to subconscious manipulation of our beliefs by anyone with the right budget. We are kidding ourselves if we believe ourselves immune to the potential abuse that comes with that. Like I said, 90% of our beliefs are submerged in our subconscious.

We are just beginning to realize how effective the new tools of persuasion are. And as we do, we are beginning to feel that this is all very unfair. No one likes being manipulated, even if they have willingly laid the groundwork for that manipulation. Our sense of retroactive justice kicks in. We post-rationalize and point fingers. We blame Facebook, or the government, or some hackers in Russia. But these are all just participants in a new ecosystem that we have helped build. The problem is not the players. The problem is the system.

It’s taken a long time, but advertising might just have gotten to the point where it works too well.

 

Bose Planning to Add a Soundtrack to Our World

Bose is placing a big bet on AR…

Or more correctly: AAR.

When we think of AR (Augmented Reality) we tend to think of digital data superimposed on our field of vision. But Bose is sticking to their wheelhouse and bringing audio to our augmented world – hence AAR – Audio Augmented Reality.

As someone who started his career as a radio copywriter and producer, I find it an intriguing idea. And it just might be a perfect match for how our senses parse the world around us.

Sound tends to be underappreciated when we think about how we experience the world. But it packs a hell of an emotional wallop. Theme park designers have known this for years. They call it underscoring. That’s the music you hear when you walk down Main Street USA in Disneyland (which could be the Desecration Rag by Felix Arndt), or visit the Wizarding World of Harry Potter at Universal (perhaps Hedwig’s Theme by John Williams). You might not even be aware of it. But it bubbles just below the level of consciousness, wiring itself directly to your emotional hot buttons. Theme parks would be much less appealing without a soundtrack. The same is true for the world in general.

Cognitively, we process sound entirely differently than we process sights. Our eyes are our primary sensory portal, and because of this, vision tends to dominate our attentional focus. The brain has limited bandwidth to process conflicting visual stimuli, so if we layer additional information over our view of the world, as most AR does, we force it to make a context switch. Even with a heads-up display, the brain has to toggle between the two. We can’t concentrate on both at the same time.

But our brains can handle the job of combining sight and sound very nicely. It’s what we evolved to do. We automatically synthesize the two. Unlike layered visual information, which must borrow attention from something else, sight plus sound is not a zero-sum game.

Bose made their announcement at SXSW, but I first became aware of the plan just last week. And I became aware because Bose had bought Detour, a San Francisco start-up that produced immersive audio walking tours. I was using the Detour platform to create audio tours that could be done by bike. At the end of February, I received an email abruptly announcing that access to the Detour platform would end the very next day. I’ve been around the high-tech biz long enough to know that there was more to this than a simple discontinuation of the platform. There was another shoe yet to drop.

Last week, it dropped. The reason for the abrupt end was that Detour had been purchased by Bose.

Although Detour never gained the traction that I’m sure founder Andrew Mason (who also founded Groupon) hoped for, the tours were exceptionally well produced. I had the opportunity to take several of them while in San Francisco. It was my first real experience with audio augmented reality. I felt like I was walking through a documentary. At no time did I feel my attention was torn. For the most part, my phone stayed in my pocket. It was damned near seamless.

Regular readers of mine will know that I’m more than a little apprehensive about the whole area of Virtual and Augmented Reality. But I have to admit, Bose’s approach sounds pretty good so far.

 

Why Do Cities Work?

It always amazes me how cities just seem to work. Take New York, for example. How the hell does everything a city of nine million people needs to exist actually get done? Cities may be the best real-world example there is of complex adaptive systems at work. They may also be the answer to our future as the world becomes a more complex and connected place.

It’s not due to any centralized sense of communal collaboration. If anything, cities make us more individualistic. Small towns are much more collaborative. I feel more anonymous and autonomous in a big city than I ever do in a small town. It’s something else, more akin to Adam Smith’s Invisible Hand – but different. Millions of individual agents can all do their own thing based on their own requirements, but it works out okay for all involved.

Actually, according to Harvard economist Ed Glaeser, cities are more than just okay. He calls them mankind’s greatest invention. “So much of what humankind has achieved over the past three millennia has come out of the remarkable collaborative creations that come out of cities. We are a social species. We come out of the womb with the ability to sop up information from people around us. It’s almost our defining characteristic as creatures. And cities play to that strength. Cities enable us to learn from other people.”

Somehow, cities manage to harness the collective potential of their population without dipping into chaos. This is all the more amazing when you consider that cities aren’t natural for humans – at least, not in evolutionary terms. If evolution had its way, we would all live in clusters of 150 people – otherwise known as Dunbar’s number, the brain’s cognitive limit for keeping track of our own immediate social networks. If we’re looking for a magic number in terms of maximizing human cooperation and collaboration, that would be it. But somehow cities allow us to far surpass that number and still deliver exponential returns.

Most of our natural defense mechanisms are based on familiarity. Trust, in its most basic sense, is Pavlovian. We trust strangers who happen to resemble people we know and trust. We are wary of strangers who remind us of people who have taken advantage of us. We are primed to trust or distrust in a few milliseconds, far under the time threshold of rational thought. Humans evolved to live in communities where we keep seeing the same faces over and over – yet cities are the antithesis of this.

Cities work because it’s in everyone’s best interest to make cities work. In a city, people may not trust each other, but they do trust the system. And it’s that system – or rather – thousands of complementary systems, that makes cities work. We contribute to these systems because we have a stake in them. The majority of us avoid the Tragedy of the Commons because we understand that if we screw the system, the system becomes unsustainable and we all lose. There is an “invisible network of trust” that makes cities work.

The psychology of this trust is interesting. As I mentioned before, in evolutionary terms, the mechanisms that trigger trust are fairly rudimentary: Familiarity = Trust. But system trust is a different beast. It relies on social norms and morals – on our inherent need to conform to the will of the herd. In this case, there is at least one degree of separation between trust and the instincts that govern our behaviors. Think of it as a type of “meta-trust.” We are morally obligated to contribute to the system as long as we believe the system will increase our own personal well-being.

This moral obligation requires feedback. There needs to be some type of loop that shows us that our moral behaviors are paying off. As long as that loop is working, it creates a virtuous cycle. Moral behaviors need to lead to easily recognized rewards, both individually and collectively. As long as we have this loop, we will continue to be governed by social norms that maintain the systems of a city.

When we look to cities to provide us clues on how to maintain stability in a more connected world, we need to understand this concept of feedback. Cities provide feedback through physical proximity. When cities start to break down, the results become obvious to all who live there. But when it’s digital bonds rather than physical ones that link our networks, feedback becomes trickier. We need to ponder other ways of connecting cause, effect and consequences. As we move from physical communities to ideological ones, we have to overcome the numbing effects of distance.

 

Sorry, I Don’t Speak Complexity

I was reading about an interesting study from Cornell this week. Dr. Morten Christiansen, co-director of Cornell’s Cognitive Science Program, and his colleagues explored a linguistic paradox: languages that a lot of people speak – like English and Mandarin – have large vocabularies but relatively simple grammar. Languages that are smaller and more localized have fewer words but more complex grammatical rules.

The reason, Christiansen found, has to do with the ease of learning. It doesn’t take much to learn a new word. A couple of exposures and you’ve assimilated it. Because of this, new words become memes that tend to propagate quickly through the population. But the foundations of grammar are much more difficult to understand and learn. It takes repeated exposures and an application of effort to learn them.

Language is a shared cultural component that depends on the structure of a network. We get an inside view of network dynamics from investigating the spread of language. Let’s look at the complexity of a syntactic rule, for example. These are the rules that govern sentence structure, word order and punctuation. In terms of learnability, syntax offers much more complexity than simply understanding the definition of a word. In order to learn syntax, you need repeated exposures to it. And this is where the structure and scope of a network comes in. As Dr. Christiansen explains:

“If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population.”

This research seems to indicate that cultural complexity is first spawned in heavily interlinked and relatively intimate network nodes. For these memes – whether they be language, art, philosophies or ideologies – to bridge to and spread through the greater network, they are often simplified so they’re easier to assimilate.

If this is true, then we have to consider what might happen as our world becomes more interconnected. Will there be a collective “dumbing down” of culture? If current events are any indication, that certainly seems to be the case. The memes with the highest potential to spread are absurdly simple. No effort on the part of the receiver is required to understand them.

But there is a counterpoint to this that does hold out some hope. As Christiansen reminds us, “People can self-organize into smaller communities to counteract that drive toward simplification.” From this emerges an interesting yin and yang of cultural content creation. You have more highly connected nodes independent of geography that are producing some truly complex content. But, because of the high threshold of assimilation required, the complexity becomes trapped in that node. The only things that escape are fragments of that content that can be simplified to the point where they can go viral through the greater network. But to do so, they have to be stripped of their context.

This is exactly what caused the language paradox that the team explored. If you have a wide network – or a large population of speakers – there are a greater number of nodes producing new content. In this instance, the words are the fragments, which can be assimilated, and the grammar is the context that gets left behind.
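The dynamic described here – easy-to-learn words escaping their node while hard-to-learn grammar stays local – can be sketched as a toy simulation. To be clear, this is my own illustration, not the Cornell team’s model: the community sizes, encounter counts and exposure thresholds are invented. The idea is simply that a fixed pool of innovators gets diluted as the community grows, so anything that needs repeated exposure all but disappears:

```python
import random

def acquisition_prob(community_size, innovators=10, conversations=30,
                     threshold=1, trials=10000, seed=42):
    """Monte Carlo estimate of the chance a learner picks up a linguistic
    innovation requiring `threshold` exposures, given `conversations`
    random encounters in a community where `innovators` members use it."""
    rng = random.Random(seed)
    p_meet = innovators / community_size  # chance one encounter is with an innovator
    hits = 0
    for _ in range(trials):
        exposures = sum(rng.random() < p_meet for _ in range(conversations))
        if exposures >= threshold:
            hits += 1
    return hits / trials

for size in (50, 500, 5000):
    word = acquisition_prob(size, threshold=1)  # a word: one exposure is enough
    rule = acquisition_prob(size, threshold=4)  # a grammar rule: needs repetition
    print(f"community of {size:>4}: word {word:.2f}, grammar rule {rule:.2f}")
```

With these made-up numbers, a four-exposure rule is acquired most of the time in a community of 50 and essentially never in one of 5,000, while a one-exposure word degrades far more gracefully as the network widens.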

There is another aspect of this to consider. Because of these dynamics unique to a large and highly connected network, the simple and trivial naturally rises to the top. Complexity gets trapped beneath the surface, imprisoned in isolated nodes within the network. But this doesn’t mean complexity goes away – it just fragments and becomes more specific to the node in which it originated. The network loses a common understanding and definition of that complexity. We lose our shared ideological touchstones, which are by necessity more complex.

If we speculate on where this might go in the future, it’s not unreasonable to expect to see an increase in tribalism in matters related to any type of complexity – like religion or politics – and a continuing expansion of simple cultural memes.

The only time we may truly come together as a society is to share a video of a cat playing basketball.

 

Why Reality is in Deep Trouble

If 2017 was the year of Fake News, 2018 could well be the year of Fake Reality.

You Can’t Believe Your Eyes

I just saw Star Wars: The Last Jedi. When Carrie Fisher came on screen, I had to ask myself: Is this really her, or is that CGI? I couldn’t remember if she had the chance to do all her scenes before her tragic passing last year. When I had a chance to check, I found that it was actually her. But the very fact that I had to ask the question is telling. After all, Rogue One did resurrect Peter Cushing via CGI, and he passed away more than two decades ago.

CGI is not quite to the point where you can’t tell the difference between reality and computer generation, but it’s only a hair’s breadth away. It’s definitely to the point where you can no longer trust your eyes. And that has some interesting implications.

You Can Now Put Words in Anyone’s Mouth

Rogue One’s visual effects head, John Knoll, had to fend off some pointed questions about the ethics of bringing a dead actor back to life. He defended the move by saying, “We didn’t do anything Peter Cushing would have objected to.” Whether you agree or not, the bigger question here is that they could have. They could have made the Cushing digital doppelganger do anything – and say anything – they wanted.

But It’s Not Just Hollywood That Can Warp Reality

If fake reality comes out of Hollywood, we are prepared to cut it some slack. There is a long and slippery ethical slope that defines the entertainment landscape. In Rogue One’s case, the issue wasn’t using CGI, or even using CGI to represent a human – that describes a huge slice of today’s entertainment. It was using CGI to resurrect a dead actor and literally put words in his mouth. That seemed to cross some ethical line in our perception of what’s real. But at the end of the day, this questionable warping of reality was still embedded in a fictional context.

But what if we could put words in the manufactured mouth of a sitting US president? That’s exactly what a team at the University of Washington did with Barack Obama, building on face-manipulation techniques like Stanford’s Face2Face. They used a neural network to essentially create a lip-sync video of Obama, manipulating images of his face to sync his mouth to a sample of audio from another speech.

Being academics, they kept everything squeaky clean on the ethical front. All the words were Obama’s – it’s just that they were said at two different times. But those less scrupulous could easily synthesize Obama’s voice – or anyone’s – and sync it to video of them talking that would be indistinguishable from reality.

Why We Usually Believe Our Eyes

When it comes to a transmitted representation of reality, we accept video as the gold standard. Our brains believe what we see to be real. Of all our five senses, we trust sight the most to interpret what is real and what is fake. Photos used to be accepted as incontrovertible proof of reality, until Photoshop messed that up. Now, it’s video’s turn. Technology has handed us the tools that enable us to manufacture any reality we wish and distribute it in the form of video. And because it’s in that form, most everyone will believe it to be true.

Reality, Inc.

The concept of a universally understood and verifiable reality is important. It creates some type of provable common ground. We have always had our own ways of interpreting reality, but at the end of the day, there was typically someone – and some way – to empirically determine what was real, if we just bothered to look.

But we now run the risk of accepting manufactured reality as “good enough” for our purposes. In the past few years, we’ve discovered just how dangerous filtered reality can be. Whether we like it or not, Facebook, Google, YouTube and other mega-platforms are now responsible for how most of us interpret our world. These are for-profit organizations that really have no ethical obligation to attempt to provide a reasonable facsimile of reality. They have already outstripped the restraints of legislation and any type of ethical oversight. Now, these same platforms can be used to distribute media that are specifically designed to falsify reality. Of course, I should also mention that in return for access to all this, we give up a startling amount of information about ourselves. And that, according to UBC professor Taylor Owen, is deeply troubling:

“It means thinking very differently about the bargain that platforms are offering us. For a decade the deal has been that users get free services, and platforms get virtually unlimited collection of data about all aspects of our life and the ability to shape the information we consume. The answer isn’t to disengage, as these tools are embedded in our society, but instead to think critically about this bargain.

“For example, is it worth having Facebook on your mobile phone in exchange for the immense tracking data about your digital and offline behaviour? Or is the free children’s content available on YouTube worth the data profile that is being built about your toddler, the horrific content that gets algorithmically placed into your child’s feed, and the ways in which A.I. are creating content for them and shaping what they view? Is the Amazon smart speaker in your living room worth providing Amazon access to everything you say in your home? For me, the answer is a resounding ‘no’.”

2018 could be an interesting year…

Attention: Divided

I’d like you to give me your undivided attention. I’d like you to – but you can’t. First, I’m probably not interesting enough. Second, you no longer live in a world where that’s possible. And third, even if you could, I’m not sure I could handle it. I’m out of practice.

The fact is, our attention is almost never undivided anymore. Let’s take talking for example. You know; old-fashioned, face-to-face, sharing the same physical space communication. It’s the one channel that most demands undivided attention. But when is the last time you had a conversation where you were giving it 100 percent of your attention? I actually had one this past week, and I have to tell you, it unnerved me. I was meeting with a museum curator and she immediately locked eyes on me and gave me the full breadth of her attention span. I faltered. I couldn’t hold her gaze. As I talked I scanned the room we were in. It’s probably been years since someone did that to me. And nary a smart phone was in sight.

If this is true when we’re physically present, imagine the challenge in other channels. Take television, for instance. We don’t watch TV like we used to. When I was growing up, I would be verging on catatonia as I watched the sparks fly between Batman and Catwoman (the Julie Newmar version – with all due respect to Eartha Kitt and Lee Meriwether). My dad used to call it the “idiot box.” At the time, I thought it was a comment on the quality of programming, but I now realize he was referring to my mental state. You could have dropped a live badger in my lap and not an eye would have been batted.

But that’s definitely not how we watch TV now. A recent study indicates that 177 million Americans have at least one other screen going – usually a smartphone – while they watch TV. According to Nielsen, there are only 120 million TV households. That means that 1.48 adults per household are definitely dividing their attention amongst at least two devices while watching Game of Thrones. My daughters and wife are squarely in that camp. Ironically, I now get frustrated because they don’t watch TV the same way I do – catatonically.

Now, I’m sure watching TV does not represent the pinnacle of focused mindfulness. But this could be a canary in a coal mine. We simply don’t allocate undivided attention to anything anymore. We think we’re multitasking, but that’s a myth. We don’t multitask – we mentally fidget. We have the attention span of a gnat.

So, what is the price we’re paying for living in this attention-deficit world? Well, first, there’s a price to be paid when we do decide to communicate. I’ve already described how unnerving it was for me when I did have someone’s laser-focused attention. But the opposite is also true. It’s tough to communicate with someone who is obviously paying little attention to you. Try presenting to a group that is more interested in chatting with each other. Research shows that our ability to communicate effectively erodes quickly when we’re not getting feedback that the person or people we’re talking to are actually paying attention. Effective communication requires an adequate allocation of attention on both ends; otherwise it spins into a downward spiral.

But it’s not just communication that suffers. It’s our ability to focus on anything. It’s just too damned tempting to pick up our smartphone and check it. We’re paying a price for our mythical multitasking – Boise State professor Nancy Napier suggests a simple test to prove this. Draw two lines on a piece of paper. While having someone time you, write “I am a great multi-tasker” on one, then write down the numbers from 1 to 20 on the other. Next, repeat this same exercise, but this time, alternate between the two: write “I” on the first line, then “1” on the second, then go back and write “a” on the first, “2” on the second and so on. What’s your time? It will probably be double what it was the first time.
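Napier’s pen-and-paper exercise can be mimicked with a crude model of task switching. The half-second per item and the switch cost below are numbers I invented for illustration (real switch costs vary by task and person), but they show why alternating between the two lines roughly doubles the total time:

```python
def total_time(items, per_item=0.5, switch_cost=0.6, interleave=False):
    """Seconds to complete two lists of `items` entries each, either one
    task at a time or alternating, charging a fixed cost per mental switch."""
    work = 2 * items * per_item
    if not interleave:
        return work + switch_cost                 # one switch between the two tasks
    return work + (2 * items - 1) * switch_cost   # a switch before every item after the first

sequential = total_time(20)                    # write the sentence, then the numbers
interleaved = total_time(20, interleave=True)  # letter, number, letter, number...
print(f"sequential: {sequential:.1f}s  interleaved: {interleaved:.1f}s")
```

The raw writing work is identical in both runs; all of the extra time in the interleaved run is pure switching overhead.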

Every time we try to mentally juggle, we’re more likely to drop a ball. Attention is important. But we keep allocating thinner and thinner slices of it. And a big part of the reason is the smartphone that is probably within arm’s reach of you right now. Why? Because of something called intermittent variable rewards. Slot machines use it. And that’s probably why slot machines make more money in the US than baseball, movies and theme parks combined. Tristan Harris, who is taking technology to task for hijacking our brains, explains the concept: “If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”
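Harris’s last line – that addictiveness peaks when the reward rate is most variable – can be seen in a quick simulation. It compares a fixed-ratio schedule (a payoff every fourth pull) against a variable-ratio schedule with the same average payoff rate; the parameters are arbitrary. Both pay out equally often, but only the variable one keeps you guessing:

```python
import random
import statistics

def reward_gaps(pulls=10000, schedule="variable", p=0.25, every=4, seed=1):
    """Gaps (in pulls) between rewards under a fixed-ratio schedule
    (payoff every `every`-th pull) or a variable-ratio schedule with
    the same average rate (`p` = 1 / `every`)."""
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for i in range(1, pulls + 1):
        since_last += 1
        rewarded = (i % every == 0) if schedule == "fixed" else (rng.random() < p)
        if rewarded:
            gaps.append(since_last)
            since_last = 0
    return gaps

fixed = reward_gaps(schedule="fixed")
variable = reward_gaps(schedule="variable")
print(f"fixed:    mean gap {statistics.mean(fixed):.2f}, spread {statistics.pstdev(fixed):.2f}")
print(f"variable: mean gap {statistics.mean(variable):.2f}, spread {statistics.pstdev(variable):.2f}")
```

The mean gap is about four pulls either way, but the fixed schedule’s spread is zero while the variable schedule swings from instant jackpots to long droughts – and it’s that unpredictability that keeps the lever (or the phone) getting pulled.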

Your smartphone is no different. In this case, the reward is a new email, Facebook post, Instagram photo or Tinder match. Intermittent variable rewards – together with the fear of missing out – makes your smartphone as addictive as a slot machine.

I’m sorry, but I’m no match for all of that.