What Would a “Time Well Spent” World Look Like?

I’m worried about us. And it’s not just because we seem bent on death by ultra-conservative parochialism and xenophobia. I’m worried because I believe we’re spending all our time doing the wrong things. We’re fiddling while Rome burns.

Technology is our new drug of choice and we’re hooked. We’re fascinated by the trivial. We’re dumping huge gobs of time down the drain playing virtual games, updating social statuses, clicking on clickbait and watching videos of epic wardrobe malfunctions. Humans should be better than this.

It’s okay to spend some time doing nothing. The brain needs some downtime. But something, somewhere, has gone seriously wrong. We are now spending the majority of our lives doing useless things. TV used to be the biggest time suck, but in 2015, for the first time ever, the boob tube was overtaken by time spent with mobile apps. According to a survey conducted by Flurry, in the second quarter of 2015 we spent about 2.8 hours per day watching TV, and 3.3 hours on mobile apps. That’s a grand total of 6.1 hours per day, or one third of the time we spend awake. Yes, both things can happen at the same time, so there is undoubtedly overlap, but still, that’s a scary-assed statistic!

And it’s getting worse. In a previous Flurry poll conducted in 2013, we spent a combined 298 minutes per day on TV and mobile apps, versus 366 minutes in 2015. That’s a 22.8% increase in just two years. We’re spending way more time doing nothing. And those totals don’t even include things like time spent in front of a gaming console. For kids, tack on an average of another 10 hours per week, and you can double that for hard-core male gamers. Our addiction to gaming has even led to death in extreme cases.
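
For what it’s worth, the arithmetic is easy to check. Here is a quick back-of-envelope sketch in Python; the 18-hour waking day is my own assumption, implied by the “one third” figure above.

```python
# Quick sanity check on the Flurry figures cited above.
tv, apps = 2.8, 3.3                                  # hours per day, Q2 2015
total = tv + apps
print(f"{total:.1f} hours per day on screens")       # 6.1
print(f"{total / 18:.0%} of an 18-hour waking day")  # 34%, roughly a third

minutes_2013, minutes_2015 = 298, 366                # combined TV + app minutes per day
growth = (minutes_2015 - minutes_2013) / minutes_2013
print(f"{growth:.1%} increase from 2013 to 2015")    # 22.8%
```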

Even in the wildest stretches of imagination, this can’t qualify as “time well spent.”

We’re treading on very dangerous and very thin ice here. And we no longer have history to learn from. It’s the first time we’ve ever encountered this. Technology is now only one small degree of separation from plugging directly into the pleasure center of our brains. And science has shown that a good shot of self-administered dopamine can supersede everything – water, food, sex. True, those experiments were conducted on rats – primarily because it would be unethical to replicate them fully with humans – but are you willing to risk the entire future of mankind on the bet that we’re really that much smarter than rats?

My fear is that technology is becoming a slightly more sophisticated lever we push to get that dopamine rush. And developers know exactly what they’re doing. They are making that lever as addictive as possible. They are pushing us towards the brink of death by technological lobotomization. They’re lulling us into a false sense of security by offering us the distraction of viral videos, infinitely scrolling social notification feeds and mobile game apps. It’s the intellectual equivalent of fast food – “brain candy.”

This is where the hypocrisy of for-profit interests becomes evident. The corporate response typically rests on individual freedom of choice and the consumer’s ability to exercise willpower. “We are just giving them what they’re asking for,” says the stereotypical PR flack. But if you have an entire industry with reams of developers and researchers all aiming to hook you on their addictive product, and your only defense is the same faulty neurological defense system that has already fallen victim to fast food, porn, big tobacco, the alcohol industry and the $350 billion illegal drug trade, where would you place your bets?

Technology should be our greatest achievement. It should make us better, not turn us into a bunch of lazy, screen-addicted louts. And it certainly could be this way. What would it mean if technology helped us spend our time well? This is the hope behind the Time Well Spent Manifesto. Tristan Harris, a design ethicist and product philosopher at Google, is one of the co-directors. Here is an excerpt from the manifesto:

We believe in a new kind of design, that lets us connect without getting sucked in. And disconnect, without missing something important.

And we believe in a new kind of economy that’s built to help us spend time well, where products compete to help us live by our values.

I believe in the Manifesto. I believe we’re being willingly led down a scary and potentially ruinous path. Worst of all, I believe there is nothing we can – or will – do about it. Problems like this are seldom solved by foresight and good intentions. Things only change after we drive off the cliff.

The problem is that most of us never see it coming. And we never see it coming because we’re too busy watching a video of masturbating monkeys on YouTube.

Luddites Unite…

A cartoon showing a Luddite leader dressed as a woman. This is possibly a Tory caricature comparing the Luddites to the mobs in the French Revolution, whose leaders dressed as women at the storming of the Bastille and the march on Versailles in order to avoid being beaten down by the Royal soldiers. (Photo by Henry Guttmann/Getty Images)

Throw off the shackles of technology. Rediscover the true zen of analog pleasures!

The Hotchkisses had a tech-free Christmas holiday – mostly. The most popular activity around our home this year was adult coloring. Whodathunkit?

There were no electronic gadgets, wired home entertainment devices or addictive apps exchanged. No personal tech, no connected platforms, no internet of things (with one exception). There were small appliances, real books printed on real paper, various articles of clothing – including designer socks – and board games.

As I mentioned, I did give one techie gift, but with a totally practical intention. I gave everyone Tiles to keep track of the crap we keep losing with irritating regularity. Other than that, we were surprisingly low tech this year.

Look, I’m the last person in the world who could be considered a digital counter-revolutionary. I love tech. I eat, breathe and revel in stuff that causes my wife’s eyes to repeatedly roll. But this year – nada. Not once did I sit down with a Chinglish manual that told me, “When the unit not work, press ‘C’ and hold on until you hear (you should loose your hands after you hear each sound).”

This wasn’t part of any pre-ordained plan. We didn’t get together and decide to boycott tech this holiday. We were just technology fatigued.

Maybe it’s because technology is ceasing to be fun. Sometimes, it’s a real pain in the ass. It nags us. It causes us to fixate on stupid things. It beeps and blinks and points out our shortcomings. It can lull us into catatonic states for hours on end. And this year, we just said “Enough!” If I’m going to be catatonic, it’s going to be at the working end of a pencil crayon, trying to stay within the lines.

Even our holiday movie choice was anti-tech, in a weird kind of way. We, along with the rest of the world, went to see Star Wars: The Force Awakens. Yes, it’s a sci-fi movie, but no one is going to see this movie for its special effects or CGI gimcrackery. As with the best space opera, we go to get reacquainted with the people in the story. The Force Awakens’ appeal is that it is a long-awaited (32 years!) family reunion. We want to see if Luke Skywalker went bald and fat, despite the Force stirring within him.

I doubt that this is part of any sustained move away from tech. We are tech-dependent. But maybe that’s the point. It used to be that tech gadgets separated us from the herd. They made us look coolly nerdish and cutting edge. But when the whole world is wearing an Apple Watch, the way to assert your independence is to use a pocket watch. Or maybe a sundial.

And you know what else we discovered? Turning away from tech usually means you turn towards people. We played board games together – actual board games, with cards and dice and boards that were made of pasteboard, not integrated circuits. We were in the same room together. We actually talked to each other. It was a form of communication that – for once – didn’t involve keyboards, emojis or hashtags.

I know this was a fleeting anomaly. We’re already back to our regular tech-dependent habits, our hands nervously seeking the nearest connected device whenever we have a millisecond to spare.

But for a brief, disconnected moment, it was nice.

Talking Back to Technology

The tech world seems to be leaning heavily towards voice-activated devices. Siri – Amazon Echo – Facebook M – “OK Google” – as well as pretty much every vehicle in existence. It should make sense that we would want to speak to our digital assistants. After all, that’s how we communicate with each other. So why – then – do I feel like such a dork when I say “Siri, find me an Indian restaurant”?

I almost never use Siri as my interface to my iPhone. On the very rare occasions when I do, it’s when I’m driving. By myself. With no one to judge me. And even then, I feel unusually self-conscious.

I don’t think I’m alone. No one I know uses Siri, except on the same occasions and in the same way I do. This should be the most natural thing in the world. We’ve been talking to each other for several millennia. It’s so much more elegant than hammering away on a keyboard. But I keep seeing the same scenario play out over and over again. We give voice navigation a try. It sometimes works. When it does, it seems very cool. We try it again. And then, we don’t do it any more. I base this on admittedly anecdotal evidence. I’m sure there are those who chat merrily away to the nearest device. But not me. And not anyone I know either. So, given that voice activation seems to be the way devices are going, I have to ask why we’re dragging our heels on adoption.

In trying to judge the adoption of voice-activated interfaces, we have to account for the gap between what we expect and what we actually get. Every time we ask for something – “Play Bruno Mars,” for instance – and get the response, “I’m sorry, I can’t find Brutal Cars,” some frustration is natural. This is certainly part of it. But that’s an adoption threshold that will eventually yield to sheer brute processing strength. I suspect our real reluctance to talk to an object is found in the fact that we’re talking to an object. It doesn’t feel right. It makes us look addle-minded. We make fun of people who speak when there’s no one else in the room.
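
That recognition-error frustration has a simple mechanical root: the assistant has to map whatever it thinks it heard onto something it can actually do, and it returns the nearest match with full confidence whether it’s right or not. Here is a toy sketch of that idea – the catalogue and the misheard phrase are invented, and no real assistant resolves requests this crudely:

```python
import difflib

# Hypothetical catalogue of things the assistant can actually play.
catalogue = ["Bruno Mars", "Brutal Cars", "Bruce Springsteen", "Burna Boy"]

def resolve(heard: str) -> tuple[str, float]:
    """Return the catalogue entry most similar to the transcribed request."""
    best = max(catalogue,
               key=lambda title: difflib.SequenceMatcher(None, heard, title).ratio())
    return best, difflib.SequenceMatcher(None, heard, best).ratio()

print(resolve("Bruno Mars"))    # a clean transcript resolves correctly (score 1.0)
print(resolve("Brutal Mars"))   # a garbled one resolves, just as confidently, to "Brutal Cars"
```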

Our relationship with language is an intimately nuanced one. It’s a relatively newly acquired skill, in evolutionary terms, so it takes up a fair amount of cognitive processing. Granted, no matter what the interface, we currently have to translate desire into language, and speaking is certainly more efficient than typing, so it should be a natural step forward in our relationship with machines. But we also have to remember that verbal communication is the most social of things. In our minds, we have created a well-worn slot for speaking, and it’s something to be done when sitting across from another human.

Mental associations are critical for how we make sense of things. We are natural categorizers. And, if we haven’t found an appropriate category when we encounter something new, we adapt an existing one. I think voice activation may be creating cognitive dissonance in our mental categorization schema. Interaction with devices is a generally solitary endeavor. Talking is a group activity. Something here just doesn’t seem to fit. We’re finding it hard to reconcile our use of language with our interaction with machines.

I have no idea if I’m right about this. Perhaps I’m just being a Luddite. But given that my entire family, and most of my friends, have had voice activation capable phones for several years now and none of them use that feature except on very rare occasions, I thought it was worth mentioning.

By the way, let’s just keep this between you and me. Don’t tell Siri.

Can a Public Company Keep a Start-Up Attitude?

Google is possibly the most interesting company in the world right now. But being interesting does not necessarily equate with being successful. And therein lies the rub.

Case in point. Google is taking another crack at Google Glass. Glass has the potential to be a disruptive technology. And the way Google approached it was very much in the Google way of doing things. They put a beta version out there and asked for feedback from the public. Some of that feedback was positive, but much of it was negative. That is natural. It’s the negative feedback you’re looking for, because it shows what has to be changed. The problem is that Glass V 0.9 is now pegged as a failure. So as Laurie Sullivan reported, Google is trying a different approach, which appears to be taken from Apple’s playbook. They’re developing under wraps, with a new product lead, and you probably won’t see another version of Glass until it’s ready to ship as a viable market-ready product.

The problem here is that Google may have lost too much time. As Sullivan points out, Intel, Epson and Microsoft are all working on consumer versions of wearable visual interfaces. And they’re not alone. A handful of aggressive start-ups are also going after Glass, including Meta, Vuzix, Optinvent, GlassUp and Recon. And none of them will attract the kind of attention Google does, simply because they’re not Google.

Did Google screw up with the first release of Google Glass? Probably not. In fact, if you read Eric Ries’s The Lean Startup, they did a lot of things right. They got a minimum viable product in front of a market to test it and see what to improve. No, Google’s problem wasn’t with their strategy; it was with their speed. As Ries states,

“The goal of a startup is to figure out the right thing to build—the thing customers want and will pay for—as quickly as possible.”

Google didn’t move fast enough with Glass. And I suspect it was because Google isn’t a start-up, so it can’t act like one. Again, from Ries,

“The problem isn’t with the teams or the entrepreneurs. They love the chance to quickly get their baby out into the market. They love the chance to have the customer vote instead of the suits voting. The real issue is with the leaders and the middle managers.”

Google isn’t the only company to feel the constricting bonds of being a public company. There is a long list of world-changing technologies that were pioneered at places like Xerox and Microsoft and were tagged as corporate failures, only to eventually change the world in someone else’s hands.

I suspect there are many days when Larry Page and Sergey Brin are sorry they ever decided to take Google public. Back then, they probably thought that the vast economic resources that would become available, combined with their vision, would make an unbeatable combination. But in the process of going public, they were forced to compromise on the very spirit that vision defined. They want to do great things, but they still need to hit their quarterly targets and keep shareholders happy. The two things shouldn’t be mutually exclusive, but sadly they almost always are.

It’s probably no accident that Apple does their development in stealth mode. Apple has much more experience than Google in being a public company. They have probably realized that it’s not the buying public that you keep in the dark, it’s the analysts and shareholders. Otherwise, they’ll look at the early betas, an essential step in the development process, and pass judgment, tagging them as failures long before such judgments are justified. It would be like condemning a newborn baby as hopeless because they can’t drive a car yet.

Google is dreaming big dreams. I admire that. I just worry that the structure of Google might not be the right vehicle in which to pursue those dreams.

The Apple Watch – More Than Just a Pretty Face

I just caught Tim Cook’s live-streamed introduction of the Apple Watch (I guess they’ve given up the long-running “i” naming theme). What struck me most is how assiduously Apple has stuck with traditional touch points in introducing a totally new product category (well, new for Apple, anyway).

If you glanced quickly across the room at someone wearing Apple’s new wonder, you probably wouldn’t even know they’re wearing technology. The Apple Watch looks a lot like an analog watch. There is even a Mickey Mouse face you can choose. The interchangeable bracelets smack of tradition. Jony Ive confirmed this point in the video that ran at the introduction, saying Apple borrowed heavily from the “watchmaker’s vocabulary” in the design process. They even consulted “horological experts from around the world” to provide a timekeeping experience rooted in cultural nuance. The primary interface to the watch is a modified version of the very old-fashioned watch-winding crown.

Now, appearances can be deceiving. As Cook, Ive and Kevin Lynch put the watch through its paces, it was clear that this is an impressive little piece of technology. Particular attention has been paid to making this an intimate device, with new advances in touch technology, biometric and motion sensors and the ability to personalize interfaces and hardware to make it uniquely yours. Watching, I couldn’t help but compare this to Google’s introduction of Google Glass. In many ways, Glass is the more revolutionary device. But the Apple Watch will have a much faster adoption path.

Google impresses first with sheer brute-force technological effort. Design is an afterthought. Google uses UI testing and design to try to corral a Pandora’s box full of raw innovation into a usable package. Apple takes a much different approach. They look first at the user experience, and then they pick and choose the technologies required to deliver the intended experience. They lavish ridiculous amounts of time on seemingly minuscule design details, but the end result is typically nothing less than breathtaking. We’re impressed with the technology, sure, but the overriding emotion is one of lust. We just have to have whatever the hell it is that is being introduced on the main stage of the Flint Center.

Despite the many who have said otherwise, including the late Steve Jobs, Apple has never really made a revolutionary device. Others have always been there first. What they have done, however, is take raw innovation and package it in a way that resonates with its audience at a deep and hormonal level. Apple products are stylish and sexy – the Gisele Bündchen of technology – yet attainable to mere mortals. They take the “next big thing” and push it past the tipping point by kindling lust in the hearts and wallets of the market. Google products, despite their geeky technical prowess, have a nasty habit of getting stuck on the wrong side of the adoption curve. They are – well, let’s face it – they are the Larry Page of technology: smart, but considerably less sexy.

Apple times its entrance onto the adoption curve to near perfection. They have a knack for positioning just ahead of the masses. Google’s target is much further down the road. They release betas well ahead of any market demand. That’s why most of us can’t wait to wear an Apple Watch, but wouldn’t be caught dead in a pair of Google Glass.

One last thought on this week’s introduction of the Apple Watch. Wearable technology is following an interesting path. Your smartphone now acts as a connected main base for more intimate pieces of tech like the Apple Watch or Google Glass. Increasingly, the actual user interfaces will live on these types of devices, but the heavy lifting will happen on a smartphone tucked into a pocket, purse or backpack. Expect special-purpose devices to proliferate, all connected to increasingly powerful MPUs (Mobile Processing Units) that will orchestrate the symphony of tech that you’re wearing.
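
To make that division of labour concrete, here is a rough sketch of the hub-and-spoke model described above. Every class and method name is my own invention – a toy, not a description of any real wearable platform: thin endpoints capture readings and display summaries, while the phone holds the state and does the computation.

```python
from dataclasses import dataclass, field

@dataclass
class Wearable:
    """A thin endpoint: captures input, displays whatever the hub sends back."""
    name: str

    def read_sensor(self) -> dict:
        # Stand-in for an accelerometer, heart-rate monitor, GPS chip, etc.
        return {"device": self.name, "heart_rate": 72, "steps": 140}

@dataclass
class PhoneHub:
    """The 'MPU' in the pocket: registers devices, stores history, does the math."""
    devices: list = field(default_factory=list)
    history: list = field(default_factory=list)

    def register(self, device: Wearable) -> None:
        self.devices.append(device)

    def collect(self) -> str:
        readings = [d.read_sensor() for d in self.devices]
        self.history.extend(readings)          # storage and heavy lifting stay here
        avg_hr = sum(r["heart_rate"] for r in readings) / len(readings)
        # Only a lightweight summary goes back out to the wearable's screen.
        return f"Average heart rate across {len(readings)} devices: {avg_hr:.0f} bpm"

hub = PhoneHub()
hub.register(Wearable("watch"))
hub.register(Wearable("glasses"))
print(hub.collect())
```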

Who Owns Your Data (and Who Should?)

First published January 23, 2014 in Mediapost’s Search Insider

Last week, I talked about a backlash to wearable technology. Simon Jones, in his comment, pointed to a recent post where he raised the very pertinent point – your personal data has value. Today, I’d like to explore this further.

I think we’re all on the same page when we say there is a tidal wave of data that will be created in the coming decade. We use apps – which create data. We use/wear various connected personal devices – which create data. We go to online destinations – which create data. We interact with an ever-increasing number of wired “things” – which create data. We interact socially through digital channels – which create data.  We entertain ourselves with online content – which creates data. We visit a doctor and have some tests done – which creates data. We buy things, both online and off, and these actions also create data. Pretty much anything we do now, wherever we do it, leaves a data trail. And some of that data, indeed, much of it, can be intensely personal.

As I said some weeks ago, all this data is creating an ecosystem that is rapidly multiplying and, in its current state, is incredibly fractured and chaotic. But, as Simon Jones rightly points out, there is significant value in that data. Marketers will pay handsomely to have access to it.

But what, or who, will bring order to this chaotic and emerging market? The value of the data compounds quickly when it’s aggregated, filtered, cross-tabulated for correlations and then analyzed. As I said before, the captured data, in its fragmented state, is akin to a natural resource. To get to a more usable end state, you need to add a value layer on top of it. This value layer provides the additional steps required to extract the full worth of that data.
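
To make that “value layer” idea concrete, here is a minimal sketch of the aggregate-then-correlate step. Every source, field name and segment label in it is invented for illustration; the point is simply that the value appears when fragments are joined on a common key.

```python
from collections import defaultdict

# Invented fragments from three unrelated sources, all keyed to the same user.
fragments = [
    {"user": "u42", "source": "fitness_app", "resting_heart_rate": 58},
    {"user": "u42", "source": "grocery_loyalty", "weekly_snack_spend": 31.50},
    {"user": "u42", "source": "streaming", "bedtime_hours_past_midnight": 1.7},
]

# Step 1 of the value layer: aggregate the fragments into a single profile.
profiles = defaultdict(dict)
for fragment in fragments:
    fields = {k: v for k, v in fragment.items() if k not in ("user", "source")}
    profiles[fragment["user"]].update(fields)

# Step 2: look across sources for a correlation no single fragment reveals.
profile = profiles["u42"]
if profile["weekly_snack_spend"] > 25 and profile["bedtime_hours_past_midnight"] > 1:
    profile["segment"] = "late-night impulse snacker"   # the bit marketers pay for

print(profile)
```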

So, to retrace my logic: data has value, even in its raw state. Data also has significant privacy implications. And right now, it’s not really clear who owns what data. To move forward into a data market that we can live with, I think we need to set some basic ground rules.

First of all, most of us who are generating data have implicitly agreed to a quid pro quo arrangement – we’ll let you collect data from us if we get an acceptable exchange of something we value. This could be functionality, monetary compensation (usually in the form of discounts and rewards), social connections or entertainment. But here’s the thing about that arrangement – up to now, we really haven’t quantified the value of our personal data. And I think it’s time we did that. We may be trading away too much for much too little.

To this point, we haven’t worried much about what we traded off, and to whom, because any data trails we left have been so fragmented and specific to one context. But as that data gains more depth and, more importantly, as it combines with other fragments to provide much more information about who we are, what we do, where we go, who we connect with, what we value and how we think, it becomes more and more valuable. It represents an asset for those marketers who want to persuade us, but more critically, that data – our digital DNA – becomes vitally important to us. In it lies the quantifiable footprint of our lives and, like all data, it can yield insights we may never gain elsewhere. In the right hands, it could pinpoint critical weaknesses in our behavioral patterns, red flags in our lifestyle that could develop into future health crises, financial opportunities and traps, and ways to allocate time and resources more efficiently. As the digitally connected world becomes denser, deeper and more functional, that data profile will act as our key to it. All the potential of a new fully wired world will rely on our data.

There is no shortage of corporations that are more than happy to warehouse their respective data profiles of you and sell them back to you on demand as you need them to access their services or tools. They will also be happy to sell them to anyone else who may need them for their own purposes. Privacy issues aside (at this point, data is commonly aggregated and anonymized), a more fundamental question remains – whose data is this? Whose data should it be? Is this the reward they reap for harvesting the data? Or, because this data represents you, should it remain your property, with you deciding who uses it and for what?

This represents a slippery slope we may already be starting down. And, if you believe this is your data and should remain so, it also marks a significant change from what’s currently happening. Remember, the value is not really in the fragments. It’s in bringing them together to create a picture of who you are. And we should be asking the question: who should have the right to create that picture of you – you, or a corporate data marketplace that exists beyond your control?

The Inevitable Wearable Technology Backlash

First published January 16, 2014 in Mediapost’s Search Insider

Okay, I’ve gone on record – I think wearable technology is a huge disruptive wave currently bearing down on us. Accept it.

And I’ve also said that stupid wearable technology is inevitable. Accept that as well.

It appears that this dam is beginning to burst.

Catharine Taylor had a humorous and totally on-point reaction to the “tech-togs” that were unveiled at CES. Her take: “Thanks, but no thanks.”

Maarten Albarda had a similar reaction to his first go-around with Google Glass – “Huh?”

Look – don’t get me wrong. Wearable technology, together with the “web of everything,” will eventually change our lives, but most of us won’t be going willingly. We’re going to have to get through the “bubble of silliness” first. Some of this stuff will make sense and elicit a well-earned “Cool” (or “Dope” or “Sick” or whatever generational thumbs-up is appropriate). Other things will garner an equally well-earned WTF? And some will be eminently sensible but will still end up being tossed out with the bathwater anyway.

Rob Garner always says “adoption follows function.” This is true, but each of us has a different threshold for what we deem to be functional. If technology starts moving that bar, we know, thanks to the work of Everett Rogers and others, that the audience’s acceptance will follow the inevitable bell curve. Functionality is not equal in the eyes of all beholders.
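
One way to picture that: give every person a personal “functional enough” threshold drawn from a bell curve, and cumulative adoption traces the familiar S-curve as the technology improves. The numbers in this toy simulation are invented purely for illustration; it sketches Rogers’ intuition rather than his actual model.

```python
import random

random.seed(1)

# Each person adopts once a technology's functionality clears their personal bar.
# Thresholds are drawn from a bell curve (mean 50, sd 15) on an arbitrary 0-100 scale.
population = [random.gauss(50, 15) for _ in range(10_000)]

for functionality in range(10, 101, 10):
    adopted = sum(threshold <= functionality for threshold in population)
    print(f"functionality {functionality:3d}: {adopted / len(population):6.1%} adopted")
# Adoption crawls at first, surges through the middle of the curve, then flattens.
```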

The other problem with these new interfaces to technology is that function is currently scattered around like a handful of grass clippings in the wind. Sure, there are shards of usefulness, but unless you’re willing to wear more layers of wearable tech than your average early-adopting Eskimo (or, as we say here in the politically correct north, Inuit), it’s difficult to see how this can significantly improve our day-to-day lives.

The other thing we have to grapple with is what I would call the WACF – The Weird and Creepy Factor. How exactly do we feel about having the frequency of our butt imprinting our sofa, our bank balance, our blood pressure and our body fat percentage beamed up to the data center of a start up we’d never heard of before last Friday? I’m an admitted early adopter and I have to confess – I’m not ready to make that leap right now.

It’s not just the privacy of my personal data that’s holding me back, although that is certainly a concern. Part of this goes back to something I talked about a few columns back – the redefinition of what it means to “be” online rather than “go” online. With wearable technology, we’re always “on” – plugged into the network and sharing data whether we’re aware of it or not. This confronts us with a philosophical loss of control. Chances are that we haven’t given this a lot of rational consideration, but it contributes to that niggling WACF that may be keeping us from donning the latest piece of wearable tech.

Eventually, the accumulated functionality of all this new technology will overcome all these barriers to adoption, but we will all have differing thresholds marking our surrender to the inevitable.  Garner’s assertion that adoption follows function is true, but it’s true of the functional wave as a whole and in that wave there will be winners and losers. Not all functional improvements get adopted. If all adoption followed all functional improvements, I’d be using a Dvorak keyboard right now. Betamax would have become the standard for videocassettes. And we’d be conversing in Esperanto. All functional improvements – all casualties to an audience not quite ready to embrace them.

Expect more to come.