Damn You Technology…

Quit batting your seductive visual sensors at me. You know I can't resist. But I often wonder what I'm giving up when I give in to your temptations. That's why I was interested in reading Tom Goodwin's take on the major theme at SXSW – the Battle for Humanity. He broke this down into three sub-themes. I agree with them; in fact, I've written on all of them in the past. They were:

Data Trading – We're creating a market for data. But when you're the one who generated that data, who should own it?

Shift to No Screens – An increasing number of connected devices will change our concept of what it means to be online.

Content Tunnel Vision – As the content we see is increasingly filtered based on our preferences, what does that do to our perception of what is real?

But while we’re talking about our imminent surrender to the machines, I feel there are some other themes that also merit some discussion. Let’s limit it to two today.

A New Definition of Connection and Community


Robert Sapolsky

A few weeks ago I read a fascinating article by neuroendocrinologist and author Robert Sapolsky. In it, he posits that understanding Capgras Syndrome is the key to understanding the Facebook society. Capgras, first identified by French psychiatrist Joseph Capgras, is a disorder in which we can recognize a person's face but can't retrieve the feelings of familiarity that go with it. Those afflicted can identify the face of a loved one but swear it's actually an identical impostor. Recognition of a person and retrieval of the emotions attached to that person are handled by two different parts of the brain. When the connection between them is broken, Capgras Syndrome is the result.

This bifurcation of how we identify people is interesting. There is the yin and yang of cognition and emotion. The fusiform gyrus cognitively “parses” the face and then the brain retrieves the emotions and memories that are associated with it. To a normally functioning brain, it seems seamless and connected, but because two different regions (or, in the case of emotion, a network of regions) are involved, they can neurologically evolve independently of each other. And in the age of Facebook, that could mean a significant shift in the way we recognize connections and create “cognitive communities.” Sapolsky elaborates:

Through history, Capgras syndrome has been a cultural mirror of a dissociative mind, where thoughts of recognition and feelings of intimacy have been sundered. It is still that mirror. Today we think that what is false and artificial in the world around us is substantive and meaningful. It’s not that loved ones and friends are mistaken for simulations, but that simulations are mistaken for them.

As I said in a column a few months back, we are substituting surface cues for familiarity. We are rushing into intimacy without all the messy, time-consuming work of understanding and shared experience that generally accompanies it.

Brains do love to take shortcuts. They're not big on heavy lifting. Here's another example of that…

Free Will Is Replaced with an Algorithm


Yuval Harari

In a conversation with historian Yuval Harari, author of the bestseller Sapiens, Derek Thompson of The Atlantic explored "The Post-Human World." One of the topics they discussed was the End of Individualism.

Humans (or, at least, most humans) have believed that our decisions come from a mystical soul – a transcendental something that lives above our base biology and is in control of our will. Wrapped up in this is the concept of us as individuals and our importance in the world as free-thinking agents.

In the past few decades, there has been a growing realization that our notion of "free will" is just the result of a cascade of biochemical processes. There is nothing magical here; there is just a chain of synaptic switches being thrown. And that being the case – if a computer can process things faster than our brains, should we simply relegate our thinking to a machine?

In many ways, this is already happening. We trust Google Maps or our GPS device more than we trust our own ability to find our way. We trust Google Search more than our own memory. We're on the verge of trusting our wearable fitness trackers more than our own body's feedback. And in all these cases, our trust in tech is justified: these things are right more often than we are. But when it comes to humans vs. machines, they represent a slippery slope that we're already well down. Harari speculates about what might be at the bottom:

What really happens is that the self disintegrates. It’s not that you understand your true self better, but you come to realize there is no true self. There is just a complicated connection of biochemical connections, without a core. There is no authentic voice that lives inside you.

When I lie awake worrying about technology, these are the types of things that I think about. The big question is: is humanity an outmoded model? The fact is that we evolved to be successful in a certain environment. But here's the irony: we were so successful that we changed that environment into one where the tools we've created, not their creators, are the most successful adaptation. We may have made ourselves obsolete. And that's why really smart humans, like Bill Gates, Elon Musk and Stephen Hawking, are so worried about artificial intelligence.

"It would take off on its own, and re-design itself at an ever-increasing rate," said Hawking in a recent interview with the BBC. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Worried about a machine taking your job? That may be the least of your worries.


Drowning in a Sea of Tech

The world is becoming a pretty technical place. The Internet of Things is surrounding us. Which sounds exciting. Until the Internet of Things doesn’t work.

Then what?

I know all these tech companies have scores of really smart people working to make their own individual tech as trouble-free as possible. Although the term has lost its contextual meaning, we're all still aiming for "plug and play." For people of a certain age – me, for example – this used to refer to a physical context: being able to plug stuff into a computer and have it simply start working. Now, we plug technology into our lives and hope it plays well with all the other technology it finds there.

But that isn't always the case – is it? Sometimes, as MediaPost IoT Daily editor Chuck Martin recently related, technology refuses to play nicely together. And because we now have so much technology interacting in so many hidden ways, it becomes very difficult to root out the culprit when something goes wrong.

Let me give you an example. My wife has been complaining for some time that her iPhone has been unable to take a picture because it has no storage available, even though it’s supposed to magically transport stuff off to the “Cloud”. This past weekend, I finally dug in to see what the problem was. The problem, as it turned out, was that the phone was bloated with thousands of emails and Messenger chats that were hidden and couldn’t be deleted. They were sucking up all the available storage. After more than an hour of investigation, I managed to clear up the Messenger cache but the email problem – which I’ve traced back to some issues with configuration of the account at her email provider – is still “in progress.”

We – and by “we” I include me and all you readers – are a fairly tech savvy group. With enough time and enough Google searches, we can probably hunt down and eliminate most bugs that might pop up. But that’s us. There are many more people who are like my wife. She doesn’t care about incorrectly configured email accounts or hidden caches. She just wants shit to work. She wants to be able to take a picture of my nephew on his 6th birthday. And when she can’t do that, the quality of my life takes a sudden downturn.

The more that tech becomes interconnected, the more likely it is that stuff will stop working for some arcane reason that only a network or software engineer can figure out. It's getting to the point where all of us are going to need a full-time IT tech just to keep our households running. And I don't know about you, but I don't know where they're going to sleep. Our guest room is full of broken-down computers and printers right now.

For most of us, there is a triage sequence of responses to tech-related pains in the ass:

  1. First, we ignore the problem, hoping it will go away.
  2. Second, we reboot every piece of tech related to the problem, hoping it will go away.
  3. If neither of the above works, we marginalize the problem, working around it and hoping that eventually it will go away.
  4. If none of this works, we try to upgrade our way out of the problem, buying newer tech hoping that by tossing our old tech baby out the window, the problem will be flushed out along with the bath water.
  5. Finally, in rare cases (with the right people), we actually dig into the problem and try to resolve it.

By the way, it hasn’t escaped my notice that there’s a pretty significant profit motive in point number 4 above. A conspiracy, perchance? Apple, Microsoft and Google wouldn’t do that to us, would they?

I’m all for the Internet of Things. I’m ready for self-driving cars, smart houses and bio-tech enhanced humans. But my “when you get a chance could you check…” list is getting unmanageably long. I’d be more than happy to live the rest of my life without having to “go into settings” or “check my preferences.”

Just last night I dreamt that I was trying to swim to a deserted tropical island, but I kept drowning in a sea of Apple Watches. I called for help, but the only person who could hear me was Siri. And she just kept saying, "I'm really sorry about this but I cannot take any requests right now. Please try again later…"

Do you think it means anything?


What Would a “Time Well Spent” World Look Like?

I’m worried about us. And it’s not just because we seem bent on death by ultra-conservative parochialism and xenophobia. I’m worried because I believe we’re spending all our time doing the wrong things. We’re fiddling while Rome burns.

Technology is our new drug of choice and we’re hooked. We’re fascinated by the trivial. We’re dumping huge gobs of time down the drain playing virtual games, updating social statuses, clicking on clickbait and watching videos of epic wardrobe malfunctions. Humans should be better than this.

It's okay to spend some time doing nothing. The brain needs some downtime. But something, somewhere has gone seriously wrong. We now spend a staggering share of our lives doing useless things. TV used to be the biggest time suck, but in 2015, for the first time ever, the boob tube was overtaken by time spent with mobile apps. According to a survey conducted by Flurry, in the second quarter of 2015 we spent about 2.8 hours per day watching TV – and 3.3 hours on mobile apps. That's a grand total of 6.1 hours per day, or more than a third of the time we spend awake. Yes, both things can happen at the same time, so there is undoubtedly overlap, but still – that's a scary-assed statistic!

And it's getting worse. In a previous Flurry poll, conducted in 2013, we spent a total of 298 minutes per day between TV and mobile apps, versus 366 minutes in 2015. That's a 22.8% increase in just two years. We're spending way more time doing nothing. And those totals don't even include things like time spent in front of a gaming console. For kids, tack on an average of another 10 hours per week – and you can double that for hard-core male gamers. Our addiction to gaming has even led to death in extreme cases.
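For the record, the arithmetic holds up – a quick back-of-the-envelope check, assuming Flurry's figures are per-day averages:

\[
2.8\,\text{h} + 3.3\,\text{h} = 6.1\,\text{h} = 366\ \text{minutes}, \qquad \frac{366 - 298}{298} \approx 0.228 \approx 22.8\%
\]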

Even in the wildest stretches of imagination, this can’t qualify as “time well spent.”

We're treading on very dangerous and very thin ice here. And we no longer have history to learn from – it's the first time we've ever encountered this. Technology is now only one small degree of separation from plugging directly into the pleasure center of our brains. And science has shown that a good shot of self-administered dopamine can supersede everything – water, food, sex. True, these experiments were conducted on rats – primarily because it would be unethical to go that far in replicating them with humans – but are you willing to risk the entire future of mankind on the bet that we're really that much smarter than rats?

My fear is that technology is becoming a slightly more sophisticated lever we push to get that dopamine rush. And developers know exactly what they're doing. They are making that lever as addictive as possible. They are pushing us toward the brink of death by technological lobotomization. They're lulling us into a false sense of security by offering us the distraction of viral videos, infinitely scrolling social notification feeds and mobile game apps. It's the intellectual equivalent of fast food – brain candy.

Here the hypocrisy of for-profit interest becomes evident. The corporate response typically rests on individual freedom of choice and the consumer's ability to exercise willpower. "We are just giving them what they're asking for," touts the stereotypical PR flack. But if you have an entire industry, with reams of developers and researchers, all aiming to hook you on their addictive product, and your only defense is the same faulty neurological defense system that has already fallen victim to fast food, porn, big tobacco, the alcohol industry and the $350 billion illegal drug trade, where would you place your bets?

Technology should be our greatest achievement. It should make us better, not turn us into a bunch of lazy, screen-addicted louts. And it certainly could be this way. What would it mean if technology helped us spend our time well? This is the hope behind the Time Well Spent Manifesto. Tristan Harris, a design ethicist and product philosopher at Google, is one of its co-directors. Here is an excerpt from the manifesto:

We believe in a new kind of design, that lets us connect without getting sucked in. And disconnect, without missing something important.

And we believe in a new kind of economy that's built to help us spend time well, where products compete to help us live by our values.

I believe in the Manifesto. I believe we’re being willingly led down a scary and potentially ruinous path. Worst of all, I believe there is nothing we can – or will – do about it. Problems like this are seldom solved by foresight and good intentions. Things only change after we drive off the cliff.

The problem is that most of us never see it coming. And we never see it coming because we're too busy watching a video of masturbating monkeys on YouTube.

Luddites Unite…

Throw off the shackles of technology. Rediscover the true zen of analog pleasures!

The Hotchkisses had a tech-free Christmas holiday – mostly. The most popular activity around our home this year was adult coloring. Whodathunkit?

There were no electronic gadgets, wired home entertainment devices or addictive apps exchanged. No personal tech, no connected platforms, no internet of things (with one exception). There were small appliances, real books printed on real paper, various articles of clothing – including designer socks – and board games.

As I mentioned, I did give one techie gift, but with a totally practical intention. I gave everyone Tiles to keep track of the crap we keep losing with irritating regularity. Other than that, we were surprisingly low tech this year.

Look, I'm the last person in the world who could be considered a digital counter-revolutionary. I love tech. I eat, breathe and revel in stuff that causes my wife's eyes to repeatedly roll. But this year – nada. Not once did I sit down with a Chinglish manual telling me, "When the unit not work, press 'C' and hold on until you hear (you should loose your hands after you hear each sound)…"

This wasn’t part of any pre-ordained plan. We didn’t get together and decide to boycott tech this holiday. We were just technology fatigued.

Maybe it’s because technology is ceasing to be fun. Sometimes, it’s a real pain in the ass. It nags us. It causes us to fixate on stupid things. It beeps and blinks and points out our shortcomings. It can lull us into catatonic states for hours on end. And this year, we just said “Enough!” If I’m going to be catatonic, it’s going to be at the working end of a pencil crayon, trying to stay within the lines.

Even our holiday movie choice was anti-tech, in a weird kind of way. We, along with the rest of the world, went to see Star Wars: The Force Awakens. Yes, it's a sci-fi movie, but no one goes to see this movie for its special effects or CGI gimcrackery. Like the best space opera entries, we want to get reacquainted with the people in the story. The Force Awakens' appeal is that it's a long-awaited (32 years!) family reunion. We want to see if Luke Skywalker got bald and fat, despite the Force stirring within him.

I doubt that this is part of any sustained move away from tech. We are tech-dependent. But maybe that's the point. It used to be that tech gadgets separated us from the herd. They made us look coolly nerdish and cutting-edge. But when the whole world is wearing an Apple Watch, the way to assert your independence is to use a pocket watch. Or maybe a sundial.

And you know what else we discovered? Turning away from tech usually means you turn towards people. We played board games together – actual board games, with cards and dice and boards that were made of pasteboard, not integrated circuits. We were in the same room together. We actually talked to each other. It was a form of communication that – for once – didn’t involve keyboards, emojis or hashtags.

I know this was a fleeting anomaly. We’re already back to our regular tech-dependent habits, our hands nervously seeking the nearest connected device whenever we have a millisecond to spare.

But for a brief, disconnected moment, it was nice.

Talking Back to Technology

The tech world seems to be leaning heavily toward voice-activated devices – Siri, Amazon Echo, Facebook M, "OK Google" – as well as pretty much every vehicle in existence. It should make sense that we would want to speak to our digital assistants. After all, that's how we communicate with each other. So why, then, do I feel like such a dork when I say, "Siri, find me an Indian restaurant"?

I almost never use Siri as my interface to my iPhone. On the very rare occasions when I do, it's when I'm driving. By myself. With no one to judge me. And even then, I feel unusually self-conscious.

I don't think I'm alone. No one I know uses Siri, except on the same occasions and in the same way I do. This should be the most natural thing in the world. We've been talking to each other for several millennia. It's so much more elegant than hammering away on a keyboard. But I keep seeing the same scenario play out over and over again. We give voice navigation a try. It sometimes works. When it does, it seems very cool. We try it again. And then we don't do it any more. I base this on admittedly anecdotal evidence. I'm sure there are those who chat merrily away to the nearest device. But not me. And not anyone I know, either. So, given that voice activation seems to be the way devices are going, I have to ask why we're dragging our heels on adoption.

In trying to judge the adoption of voice-activated interfaces, we have to account for mismatches in expected utility. Every time we ask for something – "Play Bruno Mars," for instance – and get the response, "I'm sorry, I can't find Brutal Cars," some frustration is natural. This is certainly part of it. But that's an adoption threshold that will eventually yield to sheer processing brute force. I suspect our reluctance to talk to an object lies in the fact that we're talking to an object. It doesn't feel right. It makes us look addle-minded. We make fun of people who speak when there's no one else in the room.

Our relationship with language is an intimately nuanced one. It’s a relatively newly acquired skill, in evolutionary terms, so it takes up a fair amount of cognitive processing. Granted, no matter what the interface, we currently have to translate desire into language, and speaking is certainly more efficient than typing, so it should be a natural step forward in our relationship with machines. But we also have to remember that verbal communication is the most social of things. In our minds, we have created a well-worn slot for speaking, and it’s something to be done when sitting across from another human.

Mental associations are critical to how we make sense of things. We are natural categorizers. And if we haven't found an appropriate category when we encounter something new, we adapt an existing one. I think voice activation may be creating cognitive dissonance in our mental categorization schema. Interaction with devices is a generally solitary endeavor. Talking is a group activity. Something here just doesn't seem to fit. We're finding it hard to reconcile our use of language with our interaction with machines.

I have no idea if I’m right about this. Perhaps I’m just being a Luddite. But given that my entire family, and most of my friends, have had voice activation capable phones for several years now and none of them use that feature except on very rare occasions, I thought it was worth mentioning.

By the way, let's just keep this between you and me. Don't tell Siri.

Can a Public Company Keep a Startup Attitude?


Google is possibly the most interesting company in the world right now. But being interesting does not necessarily equate with being successful. And therein lies the rub.

Case in point: Google is taking another crack at Google Glass. Glass has the potential to be a disruptive technology, and the way Google approached it was very much in the Google way of doing things: they put a beta version out there and asked for feedback from the public. Some of that feedback was positive, but much of it was negative. That's natural – it's the negative feedback you're looking for, because it shows what has to be changed. The problem is that Glass v0.9 is now pegged as a failure. So, as Laurie Sullivan reported, Google is trying a different approach, one that appears to be taken from Apple's playbook. They're developing under wraps, with a new product lead, and you probably won't see another version of Glass until it's ready to ship as a viable, market-ready product.

The problem here is that Google may have lost too much time. As Sullivan points out, Intel, Epson and Microsoft are all working on consumer versions of wearable visual interfaces. And they're not alone. A handful of aggressive start-ups are also going after Glass, including Meta, Vuzix, Optinvent, GlassUp and Recon. And none of them will attract the attention Google does, simply because they're not Google.

Did Google screw up with the first release of Google Glass? Probably not. In fact, if you read Eric Ries's The Lean Startup, they did a lot of things right. They got a minimum viable product in front of a market to test it and see what to improve. No, Google's problem wasn't with their strategy; it was with their speed. As Ries states,

“The goal of a startup is to figure out the right thing to build—the thing customers want and will pay for—as quickly as possible.”

Google didn't move fast enough with Glass. And I suspect that's because Google isn't a startup, so it can't act like one. Again, from Ries,

“The problem isn’t with the teams or the entrepreneurs. They love the chance to quickly get their baby out into the market. They love the chance to have the customer vote instead of the suits voting. The real issue is with the leaders and the middle managers.”

Google isn’t the only company to feel the constricting bonds of being a public company. There is a long list of world changing technologies that were pioneered at places like Xerox and Microsoft and were tagged as corporate failures, only to eventually change the world in someone else’s hands.

I suspect there are many days when Larry Page and Sergey Brin are sorry they ever decided to take Google public. Back then, they probably thought that the vast economic resources that would become available, combined with their vision, would make an unbeatable combination. But in the process of going public, they were forced to compromise on the very spirit that defined that vision. They want to do great things, but they still need to hit their quarterly targets and keep shareholders happy. The two things shouldn't be mutually exclusive, but sadly they almost always are.

It's probably no accident that Apple does their development in stealth mode. Apple has much more experience than Google at being a public company. They have probably realized that it's not the buying public you keep in the dark; it's the analysts and shareholders. Otherwise, they'll look at the early betas – an essential step in the development process – and pass judgment, tagging them as failures long before such judgments are justified. It would be like condemning a newborn baby as hopeless because it can't drive a car yet.

Google is dreaming big dreams. I admire that. I just worry that the structure of Google might not be the right vehicle in which to pursue those dreams.

The Apple Watch – More Than Just a Pretty Face

I just caught Tim Cook's live-streamed introduction of the Apple Watch (I guess they've given up the long-running "i" naming theme). What struck me most is how assiduously Apple has stuck with traditional touch points in introducing a totally new product category (well, new for Apple, anyway).

If you glanced quickly across the room at someone wearing Apple's new wonder, you probably wouldn't even know they were wearing technology. The Apple Watch looks a lot like an analog watch. There is even a Mickey Mouse face you can choose. The interchangeable bracelets smack of tradition. Jony Ive verified this point in the video that ran at the introduction, saying the team borrowed heavily from the "watchmaker's vocabulary" in the design process. They even consulted "horological experts from around the world" to provide a timekeeping experience rooted in cultural nuance. The primary interface to the watch is a modified version of the very old-fashioned watch-winding crown.

Now, appearances can be deceiving. As Cook, Ive and Kevin Lynch put the watch through its paces, it was clear that this is an impressive little piece of technology. Particular attention has been paid to making this an intimate device, with new advances in touch technology, biometric and motion sensors and the ability to personalize interfaces and hardware to make it uniquely yours. Watching, I couldn’t help but compare this to Google’s introduction of Google Glass. In many ways, Glass is the more revolutionary device. But the Apple Watch will have a much faster adoption path.

Google impresses first with sheer brute-force technological effort. Design is an afterthought. Google uses UI testing and design to try to corral a Pandora's box full of raw innovation into a usable package. Apple takes a much different approach. They look first at the user experience, and then they pick and choose the technologies required to deliver the intended experience. They lavish ridiculous amounts of time on seemingly minuscule design details, but the end result is typically nothing less than breathtaking. We're impressed with the technology, sure, but the overriding emotion is one of lust. We just have to have whatever the hell it is that's being introduced on the main stage of the Flint Center.

Despite the many who have said otherwise, including the late Steve Jobs, Apple has never really made a revolutionary device. Others have always been there first. What Apple has done, however, is take raw innovation and package it in a way that resonates with its audience at a deep and hormonal level. Apple products are stylish and sexy – the Gisele Bündchen of technology – yet attainable to mere mortals. They take the "next big thing" and push it past the tipping point by kindling lust in the hearts and wallets of the market. Google products, despite their geeky technical prowess, have a nasty habit of getting stuck on the wrong side of the adoption curve. They are – well, let's face it – they are the Larry Page of technology: smart, but considerably less sexy.

Apple times its entrance to the adoption curve to near perfection. They have a knack for positioning just ahead of the masses. Google's target is much further down the road. They release betas well ahead of any market demand. That's why most of us can't wait to wear an Apple Watch but wouldn't be caught dead in a pair of Google Glass.

One last thought on this week's introduction of the Apple Watch. Wearable technology is following an interesting path. Your smartphone now acts as a connected home base for more intimate pieces of tech like the Apple Watch or Google Glass. Increasingly, the actual user interfaces will live on these devices, but the heavy lifting will happen on a smartphone tucked into a pocket, purse or backpack. Expect special-purpose devices to proliferate, all connected to increasingly powerful MPUs (Mobile Processing Units) that will orchestrate the symphony of tech that you're wearing.
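To make that hub-and-spoke idea a little more concrete, here's a minimal sketch of the pattern – every name in it is hypothetical, invented for illustration, not any real Apple or Google API – in which the wearable owns the display and the phone owns the computation:

```swift
// Hypothetical hub-and-spoke sketch: the wearable owns the interface,
// the phone (the "MPU") owns the computation.

// A request the wearable can't answer on its own.
struct RouteRequest {
    let destination: String
}

// What comes back from the phone after the heavy lifting.
struct RouteResult {
    let summary: String
}

// The phone-side "Mobile Processing Unit" the column speculates about.
protocol MobileProcessingUnit {
    func process(_ request: RouteRequest) -> RouteResult
}

// A stand-in phone; in reality this is where GPS, maps and
// network calls would live.
struct Phone: MobileProcessingUnit {
    func process(_ request: RouteRequest) -> RouteResult {
        return RouteResult(summary: "Turn left in 200m toward \(request.destination)")
    }
}

// The wearable: a thin display surface that delegates all heavy lifting.
struct Watch {
    let hub: MobileProcessingUnit

    func ask(for destination: String) {
        let result = hub.process(RouteRequest(destination: destination))
        print("Watch face shows: \(result.summary)")
    }
}

let watch = Watch(hub: Phone())
watch.ask(for: "the Flint Center")
// Watch face shows: Turn left in 200m toward the Flint Center
```

The design choice is the whole point: keep the thing on your wrist thin and simple, and let the processing unit in your pocket do the actual thinking.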