Sorry, I Don’t Speak Complexity

I was reading about an interesting study from Cornell this week. Dr. Morten Christiansen, co-director of Cornell’s Cognitive Science Program, and his colleagues explored a linguistic paradox: languages spoken by many people – like English and Mandarin – have large vocabularies but relatively simple grammar, while smaller, more localized languages have fewer words but more complex grammatical rules.

The reason, Christiansen found, has to do with the ease of learning. It doesn’t take much to learn a new word. A couple of exposures and you’ve assimilated it. Because of this, new words become memes that tend to propagate quickly through the population. But the foundations of grammar are much more difficult to grasp, requiring repeated exposure and an application of effort to learn.

Language is a shared cultural component that depends on the structure of a network, so investigating the spread of language gives us an inside view of network dynamics. Take the complexity of syntactic rules, for example – the rules that govern sentence structure, word order and punctuation. In terms of learnability, syntax offers much more complexity than simply understanding the definition of a word. In order to learn syntax, you need repeated exposures to it. And this is where the structure and scope of a network come in. As Dr. Christiansen explains,

“If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population.”

This research seems to indicate that cultural complexity is first spawned in heavily interlinked and relatively intimate network nodes. For these memes – whether they be language, art, philosophies or ideologies – to bridge to and spread through the greater network, they are often simplified so they’re easier to assimilate.

If this is true, then we have to consider what might happen as our world becomes more interconnected. Will there be a collective “dumbing down” of culture? If current events are any indication, that certainly seems to be the case. The memes with the highest potential to spread are absurdly simple. No effort on the part of the receiver is required to understand them.

But there is a counterpoint to this that does hold out some hope. As Christiansen reminds us, “People can self-organize into smaller communities to counteract that drive toward simplification.” From this emerges an interesting yin and yang of cultural content creation. You have highly connected nodes, independent of geography, that are producing some truly complex content. But, because of the high threshold of assimilation required, the complexity becomes trapped in that node. The only things that escape are fragments of that content that can be simplified to the point where they can go viral through the greater network. But to do so, they have to be stripped of their context.

This is exactly what caused the language paradox that the team explored. If you have a wide network – or a large population of speakers – there are a greater number of nodes producing new content. In this instance, the words are the fragments, which can be assimilated, and the grammar is the context that gets left behind.

There is another aspect of this to consider. Because of these dynamics unique to a large and highly connected network, the simple and trivial naturally rises to the top. Complexity gets trapped beneath the surface, imprisoned in isolated nodes within the network. But this doesn’t mean complexity goes away – it just fragments and becomes more specific to the node in which it originated. The network loses a common understanding and definition of that complexity. We lose our shared ideological touchstones, which are by necessity more complex.

If we speculate on where this might go in the future, it’s not unreasonable to expect to see an increase in tribalism in matters related to any type of complexity – like religion or politics – and a continuing expansion of simple cultural memes.

The only time we may truly come together as a society is to share a video of a cat playing basketball.

Fat Heads and Long Tails: Living in a Viral World

I, and the rest of the world, bought “Fire and Fury: Inside the Trump White House” last Friday. Forbes reports that in one weekend it climbed to the top of Amazon’s bestseller list, and that demand for the book is “unprecedented.”

We use that word a lot now. Our world seems to be a launching pad for “unprecedented” events. Nassim Nicholas Taleb’s black swans used to be the exception — that was the definition of the term. Now they’re becoming the norm. You can’t walk down the street without accidentally kicking one.

Our world is a hyper-connected feedback loop that constantly engenders the “unprecedented”: storms, blockbusters, presidents. In this world, historical balance has disappeared and all bets are off.

One of the many things that has changed is the distribution pattern of culture. In 2006, Chris Anderson wrote the book “The Long Tail,” explaining how online merchandising, digital distribution and improved fulfillment logistics created an explosion of choices. Suddenly, the distribution curve of pretty much everything — music, books, apps, video, varieties of cheese — grew longer and longer, creating Anderson’s “Long Tail.”

But let’s flip the curve and look at the other end. The curve has not just grown longer. The leading edge of it has also grown on the other axis. Heads are now fatter.

“Fire and Fury” has sold more copies in a shorter period of time than would ever have been possible at any other point in history. That’s partly because of the same factors that created the Long Tail: digital fulfillment and more efficient distribution. But the biggest factor is that our culture is now a digitally connected echo chamber that creates the perfect conditions for virality. Feeding frenzies are now an essential element of our content marketing strategies.

If ever there was a book written to go viral, it’s “Fire and Fury.” Every page should have a share button. Not surprisingly, given its subject matter, the book has all the subtlety and nuance of a brick to the head. This is a book built to be a blockbuster.

And that’s the thing about the new normal of virality: Blockbusters become the expectation out of the starting gate.

As I said last week, content producers have every intention of addicting their audiences, shooting for binge consumption of each new offering. Michael Wolff wrote this book to be consumed in one sitting.

As futurist (or “futuristorian”) Brad Berens writes, the book is “fascinating in an I-can’t-look-away-at-the-17-car-pileup-with-lots-of-ambulances way.” But there’s usually a price to be paid for going down the sensational path. “Fire and Fury” has all the staying power of a “bag of Cheetos.” Again, Berens hits the nail on the head: “You can measure the relevance of Wolff’s book in half-lives, with each half-life being about a day.”

One of the uncanny things about Donald Trump is that he always out-sensationalizes any attempt to sensationalize him. He is the ultimate “viral” leader, intentionally — or not — the master of the “Fat Head.” Today that head is dedicated to Wolff’s book. Tomorrow, Trump will do something to knock it out of the spotlight.

Social media analytics developer Tom Maiaroto found the average sharing lifespan of viral content is about a day. So while the Fat Head may indeed be Fat, it’s also extremely short-lived. This means that, increasingly, content intended to go viral — whether it be books, TV shows or movies — is intentionally developed to hit this short but critical window.

So what is the psychology behind virality? What buttons have to be pushed to start the viral cascade?

Wharton Marketing Professor Jonah Berger, who researched what makes things go viral, identified six principles: Social Currency, Memory Triggers, Emotion, Social Proof, Practical Value and Stories. “Fire and Fury” checks almost all these boxes, with the possible exception of practical value.

But it most strongly resonates with social currency, social proof and emotion. For everyone who thinks Trump is a disaster of unprecedented proportions, this book acts as kind of an ideological statement, a social positioner, an emotional rant and confirmation bias all rolled into one. It is a tribal badge in print form.

When we look at the diffusion of content through the market, technology has again acted as a polarizing factor. New releases are pushed toward the outlier extremes, either far down the Long Tail or squarely aimed at cashing in on the Fat Head. And if it’s the latter of these, then going viral becomes critical.

Expect more fire. Expect more fury.

Watching TV Through The Overton Window

Tell me, does anyone else have a problem with this recent statement by HBO CEO Richard Plepler: “I am trying to build addicts — and I want people addicted to something every week”?

I read this in a MediaPost column about a month ago. At the time, I filed it away as something vaguely troubling. I just checked and found no one else had commented on it. Nothing. We all collectively yawned as we checked out the next series to binge watch. That’s just what we do now.

When did enabling addiction become a goal worth shooting for? What made the head of a major entertainment corporation think it was OK to use a term that is defined as “persistent, compulsive use of a substance known to the user to be harmful” to describe a strategic aspiration? And, most troubling of all, when did we all collectively decide that that was OK?

Am I overreacting? Is bulk consuming an entire season’s worth of “Game of Thrones” or “Big Little Lies” over a 48-hour period harmless?

Speaking personally, when I emerge from my big-screen basement cave after watching more than two episodes of anything in a row, I feel like crap. And there’s growing evidence that I’m not alone. I truly believe this is not a healthy direction for us.

But my point here is not to debate the pros and cons of binge watching. My point is that Plepler’s statement didn’t cause any type of adverse reaction. We just accepted it. And that may be because of something called the Overton Window.

The Overton Window was named after Joseph Overton, who developed the concept at a libertarian think tank — the Mackinac Center for Public Policy — in the mid-1990s.

Typically, the term is used to talk about the range of policies acceptable to the public in the world of politics. In the middle of the window lies current policy. Moving out from the center in both directions (right and left) are the degrees of diminishing acceptability. In order, these are: Popular, Sensible, Acceptable, Radical and Unthinkable.

The window can move, with ideas that were once unthinkable eventually becoming acceptable, or even popular, due to the shifting threshold of public acceptance. The concept, which has roots going back over 150 years, has again bubbled to the top of our consciousness thanks to Trumpian politics, which make “extreme things look normal,” according to a post on Vox.

Political strategists have embraced and leveraged the concept to try to bring their own agendas within the ever-moving window. Because here’s the interesting thing about the Overton Window: If you want to move it substantially, the fastest way to do it is to float something outrageous to the public and ask them to consider it. Once you’ve set a frame of consideration towards the outliers, it tends to move the window substantially in that direction, bringing everything less extreme suddenly within the bounds of the window.

This has turned The Overton Window into a political strategic tug of war, with the right and left battling to shift the window by increasingly moving to the extremes.

What’s most intriguing about the Overton Window is how it reinforces the idea that much of our social sensibility is relative rather than absolute. Our worldview is shaped not only by what we believe, but what we believe others will find acceptable. Our perspective is constantly being framed relative to societal norms.

Perhaps — just perhaps — the CEO of HBO can now use the word “addict” when talking about entertainment because our perspective has been shifted toward an outlying idea that compulsive consumption is OK, or even desirable.

But I have to call bullshit on that. I don’t believe it’s OK. It’s not something we as an industry — whether that industry is marketing or entertainment — should be endorsing. It’s not ennobling us; it’s enabling us.

There’s a reason why the word “addict” has a negative connotation. If our “window” of acceptability has shifted to the point where we just blithely accept these types of statements and move on, perhaps it’s time to shift the window in the opposite direction.

Why Reality is in Deep Trouble

If 2017 was the year of Fake News, 2018 could well be the year of Fake Reality.

You Can’t Believe Your Eyes

I just saw Star Wars: The Last Jedi. When Carrie Fisher came on screen, I had to ask myself: Is this really her, or is that CGI? I couldn’t remember if she had the chance to do all her scenes before her tragic passing last year. When I had a chance to check, I found that it was actually her. But the very fact that I had to ask the question is telling. After all, Star Wars: Rogue One did resurrect Peter Cushing via CGI, and he passed away back in 1994.

CGI is not quite to the point where you can’t tell the difference between reality and computer generation, but it’s only a hair’s breadth away. It’s definitely to the point where you can no longer trust your eyes. And that has some interesting implications.

You Can Now Put Words in Anyone’s Mouth

The Rogue One visual effects supervisor, John Knoll, had to fend off some pointed questions about the ethics of bringing a dead actor back to life. He defended the move by saying, “We didn’t do anything Peter Cushing would have objected to.” Whether you agree or not, the bigger point here is that they could have. They could have made the Cushing digital doppelganger do anything – and say anything – they wanted.

But It’s Not just Hollywood That Can Warp Reality

If fake reality comes out of Hollywood, we are prepared to cut it some slack. A long and slippery ethical slope defines the entertainment landscape. In Rogue One’s case, the issue wasn’t using CGI, or even using CGI to represent a human – that describes a huge slice of today’s entertainment. It was using CGI to resurrect a dead actor and literally put words in his mouth. That seemed to cross some ethical line in our perception of what’s real. But at the end of the day, this questionable warping of reality was still embedded in a fictional context.

But what if we could put words in the manufactured mouth of a sitting US president? That’s exactly what a team at the University of Washington did with Barack Obama, using Stanford’s Face2Face technology. They used a neural network to essentially create a lip-sync video of Obama, with the computer manipulating images of his face to sync it to a sample of audio from another speech.

Being academics, they kept everything squeaky clean on the ethical front. All the words were Obama’s – it’s just that they were said at two different times. But those less scrupulous could easily synthesize Obama’s voice – or anyone’s – and sync it to video of them talking that would be indistinguishable from reality.

Why We Usually Believe Our Eyes

When it comes to a transmitted representation of reality, we accept video as the gold standard. Our brains believe what we see to be real. Of our five senses, we trust sight the most to interpret what is real and what is fake. Photos used to be accepted as incontrovertible proof of reality – until Photoshop messed that up. Now it’s video’s turn. Technology has handed us tools that enable us to manufacture any reality we wish and distribute it in the form of video. And because it’s in that form, most everyone will believe it to be true.

Reality, Inc.

The concept of a universally understood and verifiable reality is important. It creates some type of provable common ground. We have always had our own ways of interpreting reality, but at the end of the day, there was typically someone – and some way – to empirically determine what was real, if we just bothered to look for it.

But we now run the risk of accepting manufactured reality as “good enough” for our purposes. In the past few years, we’ve discovered just how dangerous filtered reality can be. Whether we like it or not, Facebook, Google, YouTube and other mega-platforms are now responsible for how most of us interpret our world. These are for-profit organizations that really have no ethical obligation to attempt to provide a reasonable facsimile of reality. They have already outstripped the restraints of legislation and any type of ethical oversight. Now, these same platforms can be used to distribute media that are specifically designed to falsify reality. Of course, I should also mention that in return for access to all this, we give up a startling amount of information about ourselves. And that, according to UBC professor Taylor Owen, is deeply troubling:

“It means thinking very differently about the bargain that platforms are offering us. For a decade the deal has been that users get free services, and platforms get virtually unlimited collection of data about all aspects of our life and the ability to shape the information we consume. The answer isn’t to disengage, as these tools are embedded in our society, but instead to think critically about this bargain.

“For example, is it worth having Facebook on your mobile phone in exchange for the immense tracking data about your digital and offline behaviour? Or is the free children’s content available on YouTube worth the data profile that is being built about your toddler, the horrific content that gets algorithmically placed into your child’s feed, and the ways in which A.I. are creating content for them and shaping what they view? Is the Amazon smart speaker in your living room worth providing Amazon access to everything you say in your home? For me, the answer is a resounding ‘no’.”

2018 could be an interesting year…

What Price Privacy?

As promised, I’m picking up the thread from last week’s column on why we seem okay with trading privacy for convenience. The simple – and most plausible – answer is that we’re really not being given a choice.

As MediaPost Senior Editor Joe Mandese pointed out in a very on-point comment, what is being created is a transactional marketplace where offers of value are exchanged for information:

“Like any marketplace, you have to have your information represented in it to participate. If you’re not “listed” you cannot receive bids (offers of value) based on who you are.”

Amazon is perhaps the most relevant example of this. Take Alexa and Amazon Web Services (AWS). Alexa promises to “make your life easier and more fun.” But this comes at a price. Because Alexa is voice-activated, it’s always listening. That means the privacy of anything we say in our homes has been ceded to Amazon through its terms of service. The same is true for Google Assistant and Apple’s Siri.

But Amazon is pushing the privacy envelope even further as they test their new in-home delivery service – Amazon Key. In exchange for the convenience of having your parcels delivered inside your home when you’re away, you literally give Amazon the keys to your home. Your front door will have a smart door lock that can be opened via the remote servers of AWS. Opt in to this and suddenly you’ve given Amazon the right to not only listen to everything you say in your home but also to enter your home whenever they wish.

How do you feel about that?

This becomes the key question: How do we feel about the convenience/privacy exchange? It turns out that our response depends in large part on how the question is framed. In a 2015 study conducted by the Annenberg School for Communication at the University of Pennsylvania, researchers gathered responses from participants probing their sensitivity around the trading of privacy for convenience. Here is a sampling of the results:

  • 55% of respondents disagreed with the statement: “It’s OK if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me.”
  • 71% disagreed with: “It’s fair for an online or physical store to monitor what I’m doing online when I’m there, in exchange for letting me use the store’s wireless internet, or Wi-Fi, without charge.”
  • 91% disagreed that: “If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing.”

Here, along the spectrum of privacy pushback, we start to see what the real problem is. We’re willing to exchange private information, as long as we’re aware of all that is happening and feel in control of it. But that, of course, is unrealistic. We can’t control it. And even if we could, we’d soon learn that the overhead required to do so is unmanageable. It’s why Vint Cerf said we’re going to have to learn to live with transparency.

Again, as Mr. Mandese points out, we’re really not being given a choice. Participating in the modern economy requires us to ante up personal information. If we choose to remain totally private, we cut ourselves off from a huge portion of what’s available. And we are already at the point where the vast majority of us really can’t opt out. We all get pissed off when we hear of a security breach à la the recent Equifax debacle. Our privacy sensitivities are heightened for a day or two and we give lip service to outrage. But unless we go full-on Old Order Amish, what are our choices?

We may rationalize the trade-off by saying the private information we’re exchanging for services is not really that sensitive. But that’s where the potential threat of Big Data comes in. Gather enough seemingly innocent data, and soon you can start predicting with startling accuracy the aspects of our lives that we are sensitive about. We run headlong into the Target pregnant-teen dilemma. And that particular dilemma becomes thornier as the walls break down between data silos and our personal information becomes a commodity on an open market.

The potential risk of trading away our privacy becomes an escalating aspect here – it’s the frog in boiling water syndrome. It starts innocently but can soon develop into a scenario that will keep most anyone up at night with the paranoiac cold sweats. Let’s say the data is used for targeting – singling us out of the crowd for the purpose of selling stuff to us. Or – in the case of governments – seeing if we have a proclivity for terrorism. Perhaps that isn’t so scary if Big Brother is benevolent and looking out for our best interests. But what if Big Brother becomes a bully?

There is another important aspect to consider here, one that may have dire unintended consequences. When our personal data is used to make our world more convenient for us, that requires a “filtering” of that world by some type of algorithm, which removes anything it determines to be irrelevant or uninteresting to us. Essentially, the entire physical world is “targeted” to us. And this can go horribly wrong, as we saw in the last presidential election. Increasingly, we live in a filtered “bubble” determined by things beyond our control. Our views get trapped in an echo chamber and our perspective narrows.

But perhaps the biggest red flag is the fact that in signing away our privacy by clicking accept, we often also sign away any potential protection when things do go wrong. In another study, called “The Biggest Lie on the Internet,” researchers found that when students were presented with a fictitious terms of service and privacy policy, 74% skipped reading it. And those who took the time to read didn’t take very much time – just 73 seconds on average. What almost no one caught were “gotcha” clauses about data sharing with the NSA and giving up your first-born child. While those were fictitious, real terms of service and privacy notifications often include clauses granting total control over the information gathered about you and waiving your right to sue if anything goes wrong. Even if you could sue, there might not be anyone left to sue. One analyst calculated that even if all the people who had their financial information stolen from Equifax won a settlement, it would actually amount to about $81.

Why We’re Trading Privacy for Convenience

In today’s world, increasingly quantified and tracked by the Internet of Things, we are talking a lot about privacy. When we stop to think about it, we are vociferously for privacy. But then we immediately turn around and click another “accept” box on a terms and conditions form that barters our personal privacy away, in increasingly large chunks. What we say and what we do are two very different things.

What is the deal with humans and privacy, anyway? Why do we say it is important to us, and why do we keep giving it away? Are we looking at the inevitable death of our concept of privacy?

Are We Hardwired for Privacy?

It does seem that – all things being equal – we favor privacy. But why?

There is an evolutionary argument for having some “me-time.” Privacy has an evolutionary advantage both when you’re most vulnerable to physical danger (on the toilet) and when you’re vulnerable to mating rivalry (having sex). If you can keep these things private, you’ll both live longer and have more offspring. So it’s not unusual for humans to be hardwired to desire a certain amount of privacy.

But our modern understanding of privacy actually conflates a number of concepts: there is protective privacy, there is the need for solitude, and finally there is our moral and ethical privacy. Each of these has different behavioral origins, but when we talk about our “right to privacy,” we don’t distinguish between them. This can muddy the waters when we dig deep into our relationship with our privacy.

Blame England…

Let’s start with the last of these – our moral privacy. This is actually a pretty modern concept. Until about 150 years ago, we as a species did pretty much everything communally. Our modern concept of privacy has its roots in the Industrial Revolution and Victorian England. There, the widespread availability of the patent lock and the introduction of the “private” room quickly led to a class-stratified quest for privacy. This was coupled with the moral rectitude of the time. Kate Kershner from HowStuffWorks explains:

“In the Victorian era, the “personal” became taboo; the gilded presentation of yourself and family was critical to social standing. Women were responsible for outward piety and purity, men had to exert control over inner desires and urges, and everyone was responsible for keeping up appearances.”

In Victorian England, privacy became a proxy for social status. Only the highest levels of the social elite could afford privacy. True, there was some degree of personal protection here that probably had evolutionary behavioral underpinnings, but it was all tied up in the broader evolutionary concept of social status. The higher your class, the more you could hide away the all-too-human aspects of your private life and thoughts. In this sense, privacy was not a right, but a status token that may be traded off for another token of equal or higher value. I suspect this is why we may say one thing but do another when it comes to our own privacy. There are other ways we determine status now.

Privacy vs Convenience

In a previous column, I wrote about how being busy is the new status symbol. We are defining social status differently, and I think how we view privacy might be caught between how we used to recognize status and how we do it today. In 2013, Google’s Vint Cerf said that privacy may be a historical anomaly. Social libertarians and legislators were quick to condemn Cerf’s comment, but it’s hard to argue with his logic. In Cerf’s words, transparency “is something we’re gonna have to live through.”

Privacy might still be a hot button topic for legislators but it’s probably dying not because of some nefarious plot against us but rather because we’re quickly trading it away. Busy is the new rich and convenience (or our illusion of convenience) allows us to do more things. Privacy may just be a tally token in our quest for social status and increasingly, we may be willing to trade it for more relevant tokens. As Greg Ferenstein, author of the Ferenstein Wire, said in an exhaustive (and visually bountiful) post on the birth and death of privacy,

“Humans invariably choose money, prestige or convenience when it has conflicted with a desire for solitude.”

If we take this view, then it’s not so much how we lose our privacy that becomes important but who we’re losing it to. We seem all too willing to give up our personal data as long as two prerequisites are met: 1) We get something in return; and, 2) We have a little bit of trust in the holder of our data that they won’t use it for evil purposes.

I know those two points raise the hackles of many amongst you, but that’s where I’ll have to leave it for now. I welcome you to have the next-to-last word (because I’ll definitely be revisiting this topic). Is privacy going off the rails and, if so, why?

The Retrofitting of Broadcasting

I returned to my broadcast school for a visit last week. Yes, it was nostalgic, but it was also kind of weird.

Here’s why…

I went to broadcast school in the early 80’s. The program I attended, at the Northern Alberta Institute of Technology, had just built brand new studios, outfitted with the latest equipment. We were the first group of students to get our hands on the stuff. Some of the local TV stations even borrowed our studio to do their own productions. SCTV – with the great John Candy, Catherine O’Hara, Eugene Levy, Rick Moranis and Andrea Martin – was produced just down the road at ITV. It was a heady time to be in TV. I don’t want to brag, but yeah, we were kind of a big deal on campus.

That was then. This was now. I went back for my first visit in 35 years, and nothing had really changed physically. The studios, the radio production suites, the equipment racks, the master control switcher – it was all still there – in all its bulky, behemoth-like glory. They hadn’t even changed the lockers. My old one was still down from Equipment Stores and right across from one of the classrooms.

The disruption of the past four decades was instantly crystallized. None of today’s students touched any of that 80’s era technology – well, except for the locker. That was still functional. The rows and rows of switches, rotary pots, faders and other doodads hadn’t been used in years. The main switching board served as a makeshift desk for a few computer monitors and a keyboard. The radio production suites were used to store old office chairs. And the main studio, where we once taped interviews, music videos, multi-camera dramas, sketch comedies and even a staged bar fight? Yep, more storage.

The campus news show was still shot in the corner, but the rest of that once state-of-the-art studio was now a very expensive warehouse. The average iPhone today has more production capability than the sum total of all that analog wizardry. Why use a studio when all you need is a green wall?

I took the tour with my old friend Daryl, who is still in broadcasting. He is the anchor of the local 6 o’clock news. Along the way we ran into a couple of other old schoolmates who were now instructors. And we did what middle-aged guys do. We reminisced about the glory days. We roamed our old domain like dinosaurs ambling towards our own twilight.

When we entered the program, it was the hottest ticket in town. They had 10 potential students vying for every program seat available. Today, on a good year, it’s down to 2 to 1. On a bad year, everyone who applies gets in. The program has struggled to remain relevant in an increasingly digital world and now focuses on those who actually want to work in television news. All the other production we used to do has been moved to a digital production program.

We couldn’t know it at the time, but we were entering broadcasting just as it reached the apex of its arc. You still needed bulk to be a broadcaster: an ENG (electronic news gathering) camera weighed in at a hefty 60-plus pounds, not including the extra battery belt. Now, all you need is a smartphone and a YouTube account. The only thing produced at most local stations is the news. And the days are numbered for even that.

If you are middle-aged like I am, your parents depend on TV for their news. For you, it’s an option – one of many places you can get it. You probably watch the 6 o’clock news more out of habit than anything. And your kids never watch it. I know mine don’t. According to the Pew Research Center, only 27% of those 18-29 turn to TV for their news; half of them get their news online. In my age group, 72% of us still get our news from TV, with 29% turning online. The TV news audience is literally aging to death.

My friend Daryl sees the writing on the wall. Everybody in the business does. When I met his co-anchor and told her that I had taken the digital path, she said, “Ah, an industry with a future.”

Perhaps, but then again, I never got my picture on the side of a bus.