What Price Privacy?

As promised, I’m picking up the thread from last week’s column on why we seem okay with trading privacy for convenience. The simple – and most plausible – answer is that we’re really not being given a choice.

As Mediapost Senior Editor Joe Mandese pointed out in a very on-point comment, what is being created is a transactional marketplace where offers of value are exchanged for information:

“Like any marketplace, you have to have your information represented in it to participate. If you’re not “listed” you cannot receive bids (offers of value) based on who you are.”

Amazon is perhaps the most relevant example of this. Take Alexa and Amazon Web Services (AWS). Alexa promises to "make your life easier and more fun." But this comes at a price. Because Alexa is voice activated, it's always listening. That means the privacy of anything we say in our homes has been ceded to Amazon through its terms of service. The same is true for Google Assistant and Apple's Siri.

But Amazon is pushing the privacy envelope even further as they test their new in-home delivery service – Amazon Key. In exchange for the convenience of having your parcels delivered inside your home when you’re away, you literally give Amazon the keys to your home. Your front door will have a smart door lock that can be opened via the remote servers of AWS. Opt in to this and suddenly you’ve given Amazon the right to not only listen to everything you say in your home but also to enter your home whenever they wish.

How do you feel about that?

This becomes the key question: how do we feel about the convenience/privacy exchange? It turns out that our response depends in large part on how that question is framed. In a study conducted in 2015 by the Annenberg School for Communication at the University of Pennsylvania, researchers gathered responses from participants probing their sensitivity around the trading of privacy for convenience. Here is a sampling of the results:

  • 55% of respondents disagreed with the statement: “It’s OK if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me.”
  • 71% disagreed with: "It's fair for an online or physical store to monitor what I'm doing online when I'm there, in exchange for letting me use the store's wireless internet, or Wi-Fi, without charge."
  • 91% disagreed that: "If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing."

Here, along the spectrum of privacy pushback, we start to see what the real problem is. We’re willing to exchange private information, as long as we’re aware of all that is happening and feel in control of it. But that, of course, is unrealistic. We can’t control it. And even if we could, we’d soon learn that the overhead required to do so is unmanageable. It’s why Vint Cerf said we’re going to have to learn to live with transparency.

Again, as Mr. Mandese points out, we're really not being given a choice. Participating in the modern economy requires us to ante up personal information. If we choose to remain totally private, we cut ourselves off from a huge portion of what's available. And we are already at the point where the vast majority of us really can't opt out. We all get pissed off when we hear of a security breach a la the recent Equifax debacle. Our privacy sensitivities are heightened for a day or two and we give lip service to outrage. But unless we go full-on Old Order Amish, what are our choices?

We may rationalize the trade-off by saying the private information we're exchanging for services is not really that sensitive. But that's where the potential threat of Big Data comes in. Gather enough seemingly innocent data and soon you can start predicting, with startling accuracy, the aspects of our lives that we are sensitive about. We run headlong into the Target Pregnant Teen dilemma. And that particular dilemma becomes thornier as the walls break down between data silos and your personal information becomes a commodity on an open market.
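To make that Target-style inference concrete, here is a minimal sketch of the general technique – not Target's actual model, whose features and methods were never published. The purchase signals, synthetic data and labels below are invented purely for illustration; the point is that an off-the-shelf classifier can turn a handful of individually innocent data points into a confident guess about something sensitive.

```python
# Hypothetical illustration: inferring a sensitive attribute (e.g. pregnancy)
# from individually innocuous purchase signals. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Each row: [unscented_lotion, prenatal_vitamins, large_tote_bag, cotton_balls]
# 1 = purchased recently, 0 = not purchased.
purchases = [
    [1, 1, 1, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
]
# Labels known for a small seed group (e.g. from baby-registry signups).
is_pregnant = [1, 1, 1, 0, 0, 0, 0, 1]

model = LogisticRegression().fit(purchases, is_pregnant)

# A new shopper who only bought lotion, cotton balls and a tote bag:
new_shopper = [[1, 0, 1, 1]]
# Estimated probability of the sensitive attribute for that shopper.
print(model.predict_proba(new_shopper)[0][1])
```

None of those four purchases is revealing on its own; the discomfort comes from the aggregation – and from the fact that, once the silos break down, anyone buying the combined data can run the same kind of model.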

The potential risk of trading away our privacy escalates over time – it's the frog-in-boiling-water syndrome. It starts innocently but can soon develop into a scenario that will keep almost anyone up at night with the paranoiac cold sweats. Let's say the data is used for targeting – singling us out of the crowd for the purpose of selling stuff to us. Or – in the case of governments – seeing if we have a proclivity for terrorism. Perhaps that isn't so scary if Big Brother is benevolent and looking out for our best interests. But what if Big Brother becomes a bully?

There is another important aspect to consider here, and one that may have dire unintended consequences. When our personal data is used to make our world more convenient for us, that requires a "filtering" of that world by some type of algorithm, which removes anything it determines to be irrelevant or uninteresting to us. Essentially, the entire physical world is "targeted" to us. And this can go horribly wrong, as we saw in the last presidential election. Increasingly we live in a filtered "bubble" determined by things beyond our control. Our views get trapped in an echo chamber and our perspective narrows.
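As a rough sketch of the kind of relevance filtering being described – a toy example, not any particular platform's ranking system – the loop below scores candidate content against a profile built from what you have already engaged with and silently drops everything below a threshold. The topics, items and threshold are invented; the structural point is the feedback loop that narrows what you see.

```python
# Toy relevance filter: illustrative only, not any real platform's ranking code.
from collections import Counter

def build_profile(engaged_items):
    """Profile = topic counts from items the user already engaged with."""
    return Counter(topic for item in engaged_items for topic in item["topics"])

def filter_feed(candidates, profile, threshold=1):
    """Keep only items whose topics overlap 'enough' with the profile."""
    visible = []
    for item in candidates:
        score = sum(profile[t] for t in item["topics"])
        if score >= threshold:          # everything else is silently removed
            visible.append(item)
    return visible

history = [{"topics": ["politics_a", "sports"]}, {"topics": ["politics_a"]}]
candidates = [
    {"title": "More of what you like", "topics": ["politics_a"]},
    {"title": "The other side's view", "topics": ["politics_b"]},  # never shown
]
profile = build_profile(history)
print([i["title"] for i in filter_feed(candidates, profile)])
# -> ['More of what you like']  # the feed narrows with every pass
```

The echo chamber isn't a deliberate feature of code like this; it's an emergent property of optimizing for predicted interest.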

But perhaps the biggest red flag is the fact that in signing away our privacy by clicking accept, we often also sign away any potential protection when things do go wrong. In another study called "The Biggest Lie on the Internet," researchers found that when students were presented with a fictitious terms of service and privacy policy, 74% skipped reading it. And those who took the time to read didn't take very much time – just 73 seconds on average. What almost no one caught were "gotcha clauses" about data sharing with the NSA and giving up your first-born child. While these were fictitious, real terms of service and privacy notifications often include clauses granting the company total control over the information gathered about you and waiving your right to sue if anything goes wrong. Even if you could sue, there might not be anyone left to sue. One analyst calculated that even if all the people who had their financial information stolen from Equifax won a settlement, it would actually amount to about $81.

 

Why We’re Trading Privacy for Convenience

In today’s world, increasingly quantified and tracked by the Internet of Things, we are talking a lot about privacy. When we stop to think about it, we are vociferously for privacy. But then we immediately turn around and click another “accept” box on a terms and conditions form that barters our personal privacy away, in increasingly large chunks. What we say and what we do are two very different things.

What is the deal with humans and privacy anyway? Why do we say it is important to us, and why do we keep giving it away? Are we looking at the inevitable death of our concept of privacy?

Are We Hardwired for Privacy?

It does seem that – all things being equal – we favor privacy. But why?

There is an evolutionary argument for having some "me-time". Privacy has an evolutionary advantage both when you're most vulnerable to physical danger (on the toilet) and when you're most exposed to mating rivalry (having sex). If you can keep these things private, you'll both live longer and have more offspring. So it's not surprising that humans are hardwired to desire a certain amount of privacy.

But our modern understanding of privacy actually conflates a number of concepts. There is protective privacy, there is the need for solitude, and finally there is our moral and ethical privacy. Each of these has different behavioral origins, but when we talk about our "right to privacy" we don't distinguish between them. This can muddy the waters when we dig deep into our relationship with our privacy.

Blame England…

Let’s start with the last of these – our moral privacy. This is actually a pretty modern concept. Until 150 years ago, we as a species did pretty much everything communally. Our modern concept of privacy had its roots in the Industrial Revolution and Victorian England. There, the widespread availability of the patent lock and the introduction of the “private” room quickly led to a class-stratified quest for privacy. This was coupled with the moral rectitude of the time. Kate Kershner from howstuffworks.com explains:

“In the Victorian era, the “personal” became taboo; the gilded presentation of yourself and family was critical to social standing. Women were responsible for outward piety and purity, men had to exert control over inner desires and urges, and everyone was responsible for keeping up appearances.”

In Victorian England, privacy became a proxy for social status. Only the highest levels of the social elite could afford privacy. True, there was some degree of personal protection here that probably had evolutionary behavioral underpinnings, but it was all tied up in the broader evolutionary concept of social status. The higher your class, the more you could hide away the all-too-human aspects of your private life and thoughts. In this sense, privacy was not a right, but a status token that may be traded off for another token of equal or higher value. I suspect this is why we may say one thing but do another when it comes to our own privacy. There are other ways we determine status now.

Privacy vs Convenience

In a previous column, I wrote about how being busy is the new status symbol. We are defining social status differently, and I think how we view privacy might be caught between how we used to recognize status and how we do it today. In 2013, Google's Vint Cerf said that privacy may be a historical anomaly. Social libertarians and legislators were quick to condemn Cerf's comment, but it's hard to argue with his logic. In Cerf's words, transparency "is something we're gonna have to live through."

Privacy might still be a hot-button topic for legislators, but it's probably dying not because of some nefarious plot against us but rather because we're quickly trading it away. Busy is the new rich, and convenience (or our illusion of convenience) allows us to do more things. Privacy may just be a tally token in our quest for social status and, increasingly, we may be willing to trade it for more relevant tokens. As Greg Ferenstein, author of the Ferenstein Wire, said in an exhaustive (and visually bountiful) post on the birth and death of privacy:

“Humans invariably choose money, prestige or convenience when it has conflicted with a desire for solitude.”

If we take this view, then it’s not so much how we lose our privacy that becomes important but who we’re losing it to. We seem all too willing to give up our personal data as long as two prerequisites are met: 1) We get something in return; and, 2) We have a little bit of trust in the holder of our data that they won’t use it for evil purposes.

I know those two points raise the hackles of many amongst you, but that’s where I’ll have to leave it for now. I welcome you to have the next-to-last word (because I’ll definitely be revisiting this topic). Is privacy going off the rails and, if so, why?

Ex Machina’s Script for Our Future

One of the more interesting movies I’ve watched in the past year has been Ex Machina. Unlike the abysmally disappointing Transcendence (how can you screw up Kurzweil – for God’s sake), Ex Machina is a tightly directed, frighteningly claustrophobic sci-fi thriller that peels back the moral layers of artificial intelligence one by one.

If you haven’t seen it, do so. But until you do, here’s the basic set up. Caleb Smith (Domhnall Gleeson) is a programmer at a huge Internet search company called Blue Book (think Google). He wins a contest where the prize is a week spent with the CEO, Nathan Bateman (Oscar Isaac) at his private retreat. Bateman’s character is best described as Larry Page meets Steve Jobs meets Larry Ellison meets Charlie Sheen – brilliant as hell but one messed up dude. It soon becomes apparent that the contest is a ruse and Smith is there to play the human in an elaborate Turing Test to determine if the robot Ava (Alicia Vikander) is capable of consciousness.

About halfway through the movie, Bateman confesses to Smith the source of Ava's intelligence "software." It came from Blue Book's own search data:

"It was the weird thing about search engines. They were like striking oil in a world that hadn't invented internal combustion. They gave too much raw material. No one knew what to do with it. My competitors were fixated on sucking it up, and trying to monetize via shopping and social media. They thought engines were a map of what people were thinking. But actually, they were a map of how people were thinking. Impulse, response. Fluid, imperfect. Patterned, chaotic."

As a search behavior guy, that sounded like more fact than fiction. I've always thought search data could reveal much about how we think. That's why John Motavalli's recent column, Google Looks Into Your Brain And Figures You Out, caught my eye. Here, it seemed, fiction was indeed becoming fact. And that fact is, when we use one source for a significant chunk of our online lives, we give that source the ability to capture a representative view of our related thinking. Google and our searching behaviors or Facebook and our social behaviors both come immediately to mind.

Motavalli's reference to Dan Ariely's post about micro-moments is just one example of how Google can peek under the hood of our noggins and start to suss out what's happening in there. What makes this either interesting or scary as hell, depending on your philosophic bent, is that Ariely's area of study is not our logical, carefully processed thoughts but our subconscious, irrational behaviors. And when we're talking artificial intelligence, it's that murky underbelly of cognition that is the toughest nut to crack.

I think Ex Machina's writer/director Alex Garland may have tapped something fundamental in the little bit of dialogue quoted above. If the data we willingly give up in return for online functionality provides a blueprint for understanding human thought, that's a big deal. A very big deal. Ariely's blog post talks about how a better understanding of micro-moments can lead to better ad targeting. To me, that's kind of like using your new Maserati to drive across the street and visit your neighbor – it seems a total waste of horsepower. I'm sure there are higher things we can aspire to than figuring out a better way to deliver a hotels.com ad. Both Google and Facebook are full of really smart people. I'm pretty sure someone there is capable of connecting the dots between true artificial intelligence and their own brand of world domination.

At the very least, they could probably whip up a really sexy robot.


The Era of Amplification

First published in Mediapost’s Search Insider, May 1, 2014

Mediapost columnist Joseph Jaffe wrote a great piece Tuesday on the Death of Anonymity. He shows how anonymity in the digital era has become both a blessing and a curse, leading to an explosion of cowardly, bone-headed comments and cyber-bullying. This reinforces something I've said repeatedly: technology doesn't change human behavior; it just enables it in new ways. Heroes will find new ways to be heroes, and idiots will find new ways to be idiots.

But there is something important happening here. It’s not that technology is making us meaner, more cowardly or more stupid. I grew up with bullies, my father grew up with bullies and his father grew up with bullies. You could trace a direct line of bullies going back to the first time our ancestors walked erect, and probably further than that. So what’s different today? Why do we now need laws against cyber-bullying?

It's because we now live in a time of increased amplification. The waves that spread from an individual's actions go farther than ever before. Technology increases the consequences of those actions. A heroic act can spread through a network and activate other heroes, creating a groundswell of heroism. Unfortunately, the flip side is also true – bullying can beget more bullying. The viral spread of bullying that technology enables can make the situation hopeless for the victim.

Consider the case of Amanda Todd, a grade 10 student from Port Coquitlam, BC, Canada. Todd had been bullied for over a year by a guy who wanted "a show". She finally relented and flashed her breasts. While not advisable, Todd's actions were not that unusual. She wasn't the first 15-year-old to experiment with a little sexual promiscuity after prolonged male pleading. It certainly shouldn't have turned into a death sentence for Todd. But it did – because of amplification.

First of all, Todd's tormentor was a man who lived thousands of miles away, in Holland. They never met. Secondly, Todd's indiscretion was captured in a digital picture and was soon circulated worldwide. As teenagers have been since time began, Todd was mercilessly teased. But it wasn't just at the hands of a small circle of bullies at her high school. Taunts came from jerks around the world who jumped on the bandwagon. A teenager's psyche is typically a fragile thing, and the amplitude of that teasing was psychologically crushing for Todd. Desperate for escape, she first recorded a plea for understanding that she posted online, and then took her own life. The act that started all this should have been added to that pile of minor regrets we all assemble in our adolescence. It should not have ended the way it did. Unfortunately, Todd was a victim of amplification.

My wife and I have two daughters, one of whom is about the same age as Todd. Because they grew up in the Era of Amplification, we pounded home the fact that anything captured online can end up anywhere. You just can't be careless, not even for the briefest of moments. But, of course, teenagers are occasionally careless. It's part of the job description. They're testing the world as a place to live in – experimenting with what it means to be an adult – and mistakes are inevitable. Unfortunately, the potential price to be paid for those mistakes has been raised astronomically.

Here's perhaps the most frightening thing about this. Todd's YouTube video has been seen over 17 million times, so it too has been amplified by technology. Amanda's story has spread through the world online. The vast majority of comments are those you would hope to see – expressions of sympathy, support, understanding and caring. But there are a handful of hateful comments of the sort that drove Todd to suicide. Technology allows us to sort and filter for negativity. In other words, technology allows bullies to connect to bullies.

In social networks, there is something called "threshold-limited spreading." Essentially, it means that for a behavior to spread through a network, the number of people already engaging in it around any given individual needs to reach a certain threshold. In the case of bullying, as in the case of rioting or social movements, the threshold depends on the connections between like-minded individuals. If bullies can connect in a cluster, they draw courage from each other. This can then trigger a cascade effect, encouraging those "on the margin" to also engage in bullying. Technology, because of its unique ability to enable connections between those who think alike, can trigger these cascades of bullying. It doesn't matter if the ratio of positive to negative is ten to one or even one hundred to one. All that matters is that there are a sufficient number of negative comments for the would-be bully to feel that he or she has support.
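Here is a minimal sketch of a threshold cascade of the kind described above – a simplified, Granovetter-style threshold model with a made-up network and made-up thresholds, not a model of any real platform. Each person starts bullying once the number of their connections already doing it meets their personal threshold, so one connected instigator can tip people "on the margin."

```python
# Toy threshold cascade: each person activates once the number of their
# connections already active meets their personal threshold. Illustrative only;
# the graph and thresholds below are invented.

# Hypothetical friendship graph (adjacency list) and per-person thresholds.
graph = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
    "d": ["c", "e"], "e": ["d", "f"], "f": ["e"],
}
threshold = {"a": 1, "b": 1, "c": 2, "d": 1, "e": 1, "f": 1}

def cascade(seeds, graph, threshold):
    active = set(seeds)
    changed = True
    while changed:                      # keep sweeping until no one new activates
        changed = False
        for person, friends in graph.items():
            if person in active:
                continue
            active_friends = sum(1 for f in friends if f in active)
            if active_friends >= threshold[person]:
                active.add(person)      # "courage" drawn from an active cluster
                changed = True
    return active

print(sorted(cascade({"a"}, graph, threshold)))
# A single instigator tips the whole chain: a -> b -> c -> d -> e -> f
```

Note that activation in a model like this depends on the absolute count of active connections, not on the ratio of supportive to hateful voices – which is exactly the point about ten-to-one or hundred-to-one ratios above.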

I don't know what the lasting impact of the Era of Amplification will be. I do know that technology has made the world a much more promising place than it was when I was born. I also know it's made it much crueler and more frightening. And it's not because of any changes in who we are. It's because the ripples of our actions now can spread further than we can even imagine.

Who Owns Your Data (and Who Should?)

First published January 23, 2014 in Mediapost's Search Insider

Last week, I talked about a backlash to wearable technology. Simon Jones, in his comment, pointed to a recent post where he raised the very pertinent point – your personal data has value. Today, I'd like to explore this further.

I think we’re all on the same page when we say there is a tidal wave of data that will be created in the coming decade. We use apps – which create data. We use/wear various connected personal devices – which create data. We go to online destinations – which create data. We interact with an ever-increasing number of wired “things” – which create data. We interact socially through digital channels – which create data.  We entertain ourselves with online content – which creates data. We visit a doctor and have some tests done – which creates data. We buy things, both online and off, and these actions also create data. Pretty much anything we do now, wherever we do it, leaves a data trail. And some of that data, indeed, much of it, can be intensely personal.

As I said some weeks ago, all this data is creating an ecosystem that is rapidly multiplying and, in its current state, is incredibly fractured and chaotic. But, as Simon Jones rightly points out, there is significant value in that data. Marketers will pay handsomely to have access to it.

But what, or who, will bring order to this chaotic and emerging market? The value of the data compounds quickly when it's aggregated, filtered, cross-tabulated for correlations and then analyzed. As I said before, the captured data in its fragmented state is akin to a natural resource. To get to a more usable end state, you need to add a value layer on top of it. This value layer will provide the required additional steps to extract the full worth of that data.

So, to retrace my logic, data has value, even in its raw state. Data also has significant privacy implications. And right now, it's not really clear who owns what data. To move forward into a data market that we can live with, I think we need to set some basic ground rules.

First of all, most of us who are generating data have implicitly agreed to a quid pro quo arrangement – we’ll let you collect data from us if we get an acceptable exchange of something we value. This could be functionality, monetary compensation (usually in the form of discounts and rewards), social connections or entertainment. But here’s the thing about that arrangement – up to now, we really haven’t quantified the value of our personal data. And I think it’s time we did that. We may be trading away too much for much too little.

To this point we haven't worried much about what we traded off, and to whom, because any data trails we left have been fragmented and specific to one context. But as that data gains more depth and, more importantly, as it combines with other fragments to provide much more information about who we are, what we do, where we go, who we connect with, what we value and how we think, it becomes more and more valuable. It represents an asset for those marketers who want to persuade us, but more critically, that data – our digital DNA – becomes vitally important to us. In it lies the quantifiable footprint of our lives and, like all data, it can yield insights we may never gain elsewhere. In the right hands, it could pinpoint critical weaknesses in our behavioral patterns, red flags in our lifestyle that could develop into future health crises, financial opportunities and traps, and ways to allocate time and resources more efficiently. As the digitally connected world becomes denser, deeper and more functional, that data profile will act as our key to it. All the potential of a new fully wired world will rely on our data.

There are millions of corporations that are more than happy to warehouse their respective data profiles of you and sell them back to you on demand as you need them to access their services or tools. They will also be happy to sell them to anyone else who may need them for their own purposes. Privacy issues aside (at this point, data is commonly aggregated and anonymized), a more fundamental question remains – whose data is this? Whose data should it be? Is this the reward they reap for harvesting the data? Or, because this represents you, should it remain your property, with you deciding who uses it and for what?

This represents a slippery slope we may already be starting down. And, if you believe this is your data and should remain so, it also marks a significant change from what's currently happening. Remember, the value is not really in the fragments. It's in bringing them together to create a picture of who you are. And we should be asking the question – who should have the right to create that picture of you: you, or a corporate data marketplace that exists beyond your control?

The Inevitable Wearable Technology Backlash

First published January 16, 2014 in Mediapost’s Search Insider

Okay, I've gone on record – I think wearable technology is a huge disruptive wave currently bearing down on us. Accept it.

And I’ve also said that stupid wearable technology is inevitable. Accept that as well.

It appears that this dam is beginning to burst.

Catharine Taylor had a humorous and totally on-point reaction to the “tech-togs” that were unveiled at CES. Her take: “Thanks but no thanks”

Maarten Albarda had a similar reaction to his first go-around with Google Glass – "Huh?"

Look – don't get me wrong. Wearable technology, together with the "web of everything," will eventually change our lives, but most of us won't be going willingly. We're going to have to get through the "bubble of silliness" first. Some of this stuff will make sense and elicit a well-earned "Cool" (or "Dope" or "Sick" or whatever generational thumbs-up is appropriate). Other things will garner an equally well-earned WTF? And some will be eminently sensible but will still end up being tossed out with the bathwater anyway.

Rob Garner always says "adoption follows function." This is true, but each of us has a different threshold for what we deem to be functional. If technology starts moving that bar, we know, thanks to the work of Everett Rogers and others, that the audience's acceptance of it will follow the inevitable bell curve. Functionality is not equal in the eyes of all beholders.

The other problem with these new interfaces with technology is that function is currently scattered around like a handful of grass clippings in the wind. Sure, there are shards of usefulness, but unless you’re willing to wear more layers of wearable tech than your average early adopting Eskimo (or, as we say here in the politically correct north – Inuit), it’s difficult to see how this can significantly improve our day-to-day lives.

The other thing we have to grapple with is what I would call the WACF – the Weird and Creepy Factor. How exactly do we feel about having the frequency of our butt imprinting our sofa, our bank balance, our blood pressure and our body fat percentage beamed up to the data center of a startup we'd never heard of before last Friday? I'm an admitted early adopter and I have to confess – I'm not ready to make that leap right now.

It's not just the privacy of my personal data that's holding me back, although that is certainly a concern. Part of this goes back to something I talked about a few columns back – the redefinition of what it means to "be" online rather than "go" online. With wearable technology, we're always "on" – plugged into the network and sharing data whether we're aware of it or not. This presents us with a philosophical loss of control. Chances are that we haven't given this a lot of rational consideration, but it contributes to that niggling WACF that may be keeping us from donning the latest piece of wearable tech.

Eventually, the accumulated functionality of all this new technology will overcome all these barriers to adoption, but we will all have differing thresholds marking our surrender to the inevitable.  Garner’s assertion that adoption follows function is true, but it’s true of the functional wave as a whole and in that wave there will be winners and losers. Not all functional improvements get adopted. If all adoption followed all functional improvements, I’d be using a Dvorak keyboard right now. Betamax would have become the standard for videocassettes. And we’d be conversing in Esperanto. All functional improvements – all casualties to an audience not quite ready to embrace them.

Expect more to come.

Viewing the World through Google Colored Glass

First published March 7, 2013 in Mediapost’s Search Insider

Let’s play “What If” for a moment. For the last few columns, I’ve been pondering how we might more efficiently connect with digital information. Essentially, I see the stripping away of the awkward and inefficient interfaces that have been interposed between that information and us. Let’s imagine, 15 years from now, that Google Glass and other wearable technology provides a much more efficient connection, streaming real-time information to us that augments our physical world. In the blink of an eye, we can retrieve any required piece of information, expanding the capabilities of our own limited memories beyond belief. We have perfect recall, perfect information — we become omniscient.

To facilitate this, we need to move our cognitive abilities to increasingly subterranean levels of processing – taking advantage of the “fast and dirty” capabilities of our subliminal mind. As we do this, we actually rewire our brains to depend on these technological extensions. Strategies that play out with conscious guidance become stored procedures that follow scripts written by constant repetition. Eventually, overtraining ingrains these procedures as habits, and we stop thinking and just do. Once this happens, we surrender much of our ability to consciously change our behaviors.

Along the way, we build a “meta” profile of ourselves, which acts as both a filter and a key to the accumulated potential of the “cloud.” It retrieves relevant information based on our current context and a deep understanding of our needs, it unlocks required functionality, and it archives our extended network of connections. It’s the “Big Data” representation of us, condensed into a virtual representation that can be parsed and manipulated by the technology we use to connect with the virtual world.

In my last column, Rob Schmultz and Randy Kirk wondered what a world full of technologically enhanced Homo sapiens would look like. Would we all become the annoying guy in the airport who can't stop talking on his Bluetooth headset? Would we become so enmeshed in our digital connections that we ignore the physical ones that lie in front of our own noses? Would Google Glass truly augment our understanding of the world, or would it make us blind to its charms? And what about the privacy implications of a world where our every move could instantly be captured and shared online — a world full of digital voyeurs?

I have no doubt that technology can take us to this not-too-distant future as I've envisioned it. Much of what's required already exists. Implantable hardware, heads-up displays, sub-vocalization, bio-feedback — it's all very doable. What I wonder about is not the technology, but rather us. We move at a much slower pace. And we may not recognize any damage that's done until it's too late.

The Darwinian Brain

At an individual level, our brains have a remarkable ability to absorb technology. This is especially true if we're exposed to that technology from birth. The brain represents a microcosm of evolutionary adaptation, through a process called synaptic pruning. Essentially, the brain builds and strengthens neural pathways that are used often, and "prunes" away those that aren't. In this way, the brain literally wires itself to be in sync with our environment.

The majority of this neural wiring happens when we're still children. So, if our childhood environment happens to include technologies such as heads-up displays, implantable chips and other direct interfaces to digital information, our brains will quickly adapt to maximize the use of those technologies. Adults will also adapt to these new technologies, but because our brains are less "plastic" than those of children, the adaptation won't be as quick or complete.

The Absorption of Technology by Society

I don’t worry about our brain’s ability to adapt. I worry about the eventual impact on our society. With changes this portentous, there is generally a social cost. To consider what might come, it may be beneficial to look at what has been. Take television, for example.

If a technology is ubiquitous and effective enough to spread globally, like TV did, there is the issue of absorption. Not all sectors of society will have access to the technology at the same time. As the technology is absorbed at different rates, it can create imbalances and disruption. Think about the societal divide caused by the absorption of TV, which resulted in a completely different information distribution paradigm. One can't help thinking that TV played a significant role in much of the political change we saw sweep over the world in the past three decades.

And even if our brains quickly adapt to technology, that doesn't mean our social mores and values will move as quickly. As our brains rewire to adapt to new technologies, our cultural frameworks also need to shift. With different generations and segments of society at different places on the absorption curve, this can create further tensions. If you take the timeline of societal changes documented by Robert Putnam in "Bowling Alone" and overlay the timing of the adoption of TV, the correlation is striking and not a little frightening.

Even if our brains have the ability to adapt to technology, it isn't always a positive change. For example, there is research suggesting that early exposure to TV may have contributed to the recent explosion of diagnosed ADHD and possibly even autism.

Knowing Isn’t Always the Same as Understanding

Finally, we have the greatest fear of Nicholas Carr:  maybe this immediate connection to information will have the “net” effect of making us stupid — or, at least, more shallow thinkers. If we’re spoon-fed information on demand, do we grow intellectually lazy? Do we start to lose the ability to reason and think critically? Will we swap quality for quantity?

Personally, I’m not sure Carr’s fears are founded on this front. It may be that our brains adapt and become even more profound and capable. Perhaps when we offload the simple journeyman tasks of retrieving information and compiling it for consideration to technology, our brains will be freed up to handle deeper and more abstract tasks. The simple fact is, we won’t know until it happens. It could be another “Great Leap Forward,” or it may mark the beginning of the decline of our species.

The point is, we’ve already started down the path, and it’s highly unlikely we’ll retreat at this point. I suppose we have no option but to wait and see.