Attention: Divided

I’d like you to give me your undivided attention. I’d like you to – but you can’t. First, I’m probably not interesting enough. Second, you no longer live in a world where that’s possible. And third, even if you could, I’m not sure I could handle it. I’m out of practice.

The fact is, our attention is almost never undivided anymore. Let’s take talking, for example. You know: old-fashioned, face-to-face, sharing-the-same-physical-space communication. It’s the one channel that most demands undivided attention. But when was the last time you had a conversation where you were giving it 100 percent of your attention? I actually had one this past week, and I have to tell you, it unnerved me. I was meeting with a museum curator and she immediately locked eyes with me and gave me the full breadth of her attention. I faltered. I couldn’t hold her gaze. As I talked, I scanned the room we were in. It’s probably been years since someone did that to me. And nary a smartphone was in sight.

If this is true when we’re physically present, imagine the challenge in other channels. Take television, for instance. We don’t watch TV like we used to. When I was growing up, I would be verging on catatonia as I watched the sparks fly between Batman and Catwoman (the Julie Newmar version – with all due respect to Eartha Kitt and Lee Meriwether). My dad used to call it the “idiot box.” At the time, I thought it was a comment on the quality of programming, but I now realize he was referring to my mental state. You could have dropped a live badger in my lap and not an eye would have been batted.

But that’s definitely not how we watch TV now. A recent study indicates that 177 million Americans have at least one other screen going – usually a smartphone – while they watch TV. According to Nielsen, there are only 120 million TV households. That means an average of 1.48 adults per household are dividing their attention across at least two devices while watching Game of Thrones. My wife and daughters are squarely in that camp. Ironically, I now get frustrated because they don’t watch TV the same way I do – catatonically.
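
For what it’s worth, the math on that checks out. Here’s the back-of-the-envelope version, sketched in Python purely to make the figures quoted above explicit (nothing here beyond those two numbers):

```python
# Back-of-the-envelope check of the second-screen figures quoted above.
second_screeners = 177_000_000  # Americans with at least one other screen going while watching TV
tv_households = 120_000_000     # Nielsen's count of US TV households

per_household = second_screeners / tv_households
print(f"Second-screeners per TV household: {per_household:.2f}")  # about 1.48
```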

Now, I’m sure watching TV does not represent the pinnacle of focused mindfulness. But this could be a canary in a coal mine. We simply don’t allocate undivided attention to anything anymore. We think we’re multi-tasking, but that’s a myth. We don’t multi-task – we mentally fidget. We have the attention span of a gnat.

So, what is the price we’re paying for living in this attention-deficit world? Well, first, there’s a price to be paid when we do decide to communicate. I’ve already described how unnerving it was for me when I did have someone’s laser-focused attention. But the opposite is also true. It’s tough to communicate with someone who is obviously paying little attention to you. Try presenting to a group that is more interested in chatting with each other. Research shows that our ability to communicate effectively erodes quickly when we’re not getting feedback that the person or people we’re talking to are actually paying attention to us. Effective communication requires an adequate allocation of attention on both ends; otherwise it spins into a downward spiral.

But it’s not just communication that suffers. It’s our ability to focus on anything. It’s just too damned tempting to pick up our smartphone and check it. We’re paying a price for our mythical multitasking. Boise State professor Nancy Napier suggests a simple test to prove this. Draw two lines on a piece of paper. While having someone time you, write “I am a great multi-tasker” on one line, then write the numbers from 1 to 20 on the other. Next, repeat the exercise, but this time alternate between the two: write “I” on the first line, then “1” on the second, then go back and write “a” on the first, “2” on the second, and so on. What’s your time? It will probably be double what it was the first time.
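
If you’d rather see the logic of the test than run it with a stopwatch, here’s a toy sketch in Python. To be clear, this isn’t Napier’s test itself, and the per-character and per-switch timings are numbers I’ve invented purely for illustration; the point is simply that the alternating version pays a switching penalty on almost every step.

```python
# Toy model of Napier's two-line exercise.
# All timing values are invented for illustration, not measured.
CHAR_TIME = 0.4     # assumed seconds to write one letter or number
SWITCH_COST = 0.5   # assumed extra seconds each time attention shifts between lines

letters = len("Iamagreatmultitasker")  # 20 characters on line one
numbers = 20                           # the numbers 1 to 20 on line two

# Version 1: finish line one, then line two (a single switch in the middle).
sequential = (letters + numbers) * CHAR_TIME + 1 * SWITCH_COST

# Version 2: alternate letter, number, letter, number... (a switch after every character).
switches = letters + numbers - 1
interleaved = (letters + numbers) * CHAR_TIME + switches * SWITCH_COST

print(f"Sequential:  {sequential:.1f} s")   # 16.5 s with these made-up numbers
print(f"Interleaved: {interleaved:.1f} s")  # 35.5 s, roughly double, all of it switch cost
```

The absolute numbers are meaningless; the shape of the result, with the interleaved time ballooning to roughly double, is the whole point.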

Every time we try to mentally juggle, we’re more likely to drop a ball. Attention is important. But we keep allocating thinner and thinner slices of it. And a big part of the reason is the smartphone that is probably within arm’s reach of you right now. Why? Because of something called intermittent variable rewards. Slot machines use it. And that’s probably why slot machines make more money in the US than baseball, movies and theme parks combined. Tristan Harris, who is taking technology to task for hijacking our brains, explains the concept: “If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”
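
If you want to see how little machinery that mechanic actually requires, here’s a minimal sketch in Python. The payout rate and the “rewards” are made-up illustrative values, not anything taken from a real slot machine or app:

```python
import random

# A minimal sketch of an intermittent variable-reward schedule.
# The 30% payout rate and the reward list are made-up illustrative values.
def pull_lever(reward_probability=0.3):
    """One 'pull': sometimes an enticing reward, usually nothing."""
    if random.random() < reward_probability:
        return random.choice(["a match!", "a like!", "a new email!", "a prize!"])
    return None

random.seed(42)  # fixed seed so the example is repeatable
for pull in range(1, 11):
    reward = pull_lever()
    print(f"Pull {pull:2d}: {reward or 'nothing'}")
```

The unpredictability of which pull pays off is the whole trick; swap “pull” for “refresh” and you have the feed on your phone.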

Your smartphone is no different. In this case, the reward is a new email, Facebook post, Instagram photo or Tinder match. Intermittent variable rewards – together with the fear of missing out – make your smartphone as addictive as a slot machine.

I’m sorry, but I’m no match for all of that.

Damn You Technology…

Quit batting your seductive visual sensors at me. You know I can’t resist. But I often wonder what I’m giving up when I give in to your temptations. That’s why I was interested in reading Tom Goodwin’s take on the major theme at SXSW – the Battle for Humanity. He broke this down into three sub-themes. I agree with them. In fact, I’ve written on all of them in the past. They were:

Data Trading – We’re creating a market for data. But when you’re the one that generated that data, who should own it?

Shift to No Screens – An increasing number of connected devices will change our concept of what it means to be online.

Content Tunnel Vision – As the content we see is increasingly filtered based on our preferences, what does that do to our perception of what is real?

But while we’re talking about our imminent surrender to the machines, I feel there are some other themes that also merit some discussion. Let’s limit it to two today.

A New Definition of Connection and Community


Robert Sapolsky

A few weeks ago I read a fascinating article by neuroendocrinologist and author Robert Sapolsky. In it, he posits that understanding Capgras Syndrome is the key to understanding the Facebook society. Capgras, first identified by French psychiatrist Joseph Capgras, is a disorder in which we can recognize a person’s face but can’t retrieve any feeling of familiarity. Those afflicted can identify the face of a loved one but swear that it’s actually an identical imposter. Recognition of a person and retrieval of the emotions attached to that person are handled by two different parts of the brain. When the connection between them is broken, Capgras Syndrome is the result.

This bifurcation of how we identify people is interesting. There is the yin and yang of cognition and emotion. The fusiform gyrus cognitively “parses” the face and then the brain retrieves the emotions and memories that are associated with it. To a normally functioning brain, it seems seamless and connected, but because two different regions (or, in the case of emotion, a network of regions) are involved, they can neurologically evolve independently of each other. And in the age of Facebook, that could mean a significant shift in the way we recognize connections and create “cognitive communities.” Sapolsky elaborates:

Through history, Capgras syndrome has been a cultural mirror of a dissociative mind, where thoughts of recognition and feelings of intimacy have been sundered. It is still that mirror. Today we think that what is false and artificial in the world around us is substantive and meaningful. It’s not that loved ones and friends are mistaken for simulations, but that simulations are mistaken for them.

As I said in a column a few months back, we are substituting surface cues for familiarity. We are rushing into intimacy without all the messy, time-consuming work of understanding and shared experience that generally accompanies it.

Brains do love to take shortcuts. They’re not big on heavy lifting. Here’s another example of that…

Free Will is Replaced with An Algorithm


Yuval Harari

In a conversation with historian Yuval Harari, author of the bestseller Sapiens, Derek Thompson from The Atlantic explored “The Post-Human World.” One of the topics they discussed was the End of Individualism.

Humans (or, at least, most humans) have believed our decisions come from a mystical soul – a transcendental something that lives above our base biology and is in control of our will. Wrapped up in this is the concept of ourselves as individuals and our importance in the world as free-thinking agents.

In the past few decades, there has been a growing realization that our notion of “free will” is just the result of a cascade of biochemical processes. There is nothing magical here; there is just a chain of synaptic switches being thrown. And that being the case – if a computer can process things faster than our brains, should we simply relegate our thinking to a machine?

In many ways, this is already happening. We trust Google Maps or our GPS device more than we trust our ability to find our own way. We trust Google Search more than our own memory. We’re on the verge of trusting our wearable fitness trackers more than our own body’s feedback. And in all these cases, our trust in tech is justified. These things are right more often than we are. But when it comes to humans vs. machines, they represent a slippery slope that we’re already well down. Harari speculates what might be at the bottom:

What really happens is that the self disintegrates. It’s not that you understand your true self better, but you come to realize there is no true self. There is just a complicated connection of biochemical connections, without a core. There is no authentic voice that lives inside you.

When I lie awake worrying about technology, these are the types of things that I think about. The big question is: is humanity an outmoded model? The fact is that we evolved to be successful in a certain environment. But here’s the irony: we were so successful that we changed that environment into one where the tools we’ve created, not their creators, are the most successful adaptation. We may have made ourselves obsolete. And that’s why really smart humans, like Bill Gates, Elon Musk and Stephen Hawking, are so worried about artificial intelligence.

“It would take off on its own, and re-design itself at an ever increasing rate,” said Hawking in a recent interview with BBC. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Worried about a machine taking your job? That may be the least of your worries.

 

 

Why Millennials Are So Fascinating

When I was growing up, there was a lot of talk about the Generation Gap. This referred to the ideological gap between my generation – the Baby Boomers – and our parents’ generation – the Silent Generation (1923–1944).

But in terms of behavior, there was a significant gap even between early Baby Boomers and those who came at the tail end of the boom – like me. Generations are products of their environment, and there was a significant change in our environment over the 20-year run of the Baby Boomers, from 1945 to 1964. During that time, TV came into most of our homes. Later boomers, like me, were raised with TV. And I believe the adoption of that one technology created an unbridgeable ideological gap that is still impacting our society.

The adoption of ubiquitous technologies – like TV and, more recently, connective platforms like mobile phones and the Internet – inevitably triggers massive environmental shifts. This is especially true for generations that grow up with the technology. Our brain goes through two phases where it literally rewires itself to adapt to its environment. One of those phases happens from birth to about 2 to 3 years of age, and the other happens during puberty – from 14 to 20 years of age. A generation that goes through both of those phases while exposed to a new technology will inevitably be quite different from the generation that preceded it.

The two phases of our brain’s restructuring – also called neuroplasticity – are quite different in their goals. The first period – right after birth – rewires the brain to adapt to its physical environment. We learn to adapt to external stimuli and to interact with our surroundings. The second phase is perhaps even more influential in terms of who we will eventually be. This is when our brain creates its social connections. It’s also when we set our ideological compasses. Technologies we spend a huge amount of time with will inevitably impact both those processes.

That’s what makes Millennials so fascinating. It’s probably the first generation since my own that bridges the adoption of a massively influential technological change. Most definitions of this generation have it starting in the early ’80s and extending to 1996 or ’97. This means the early Millennials grew up in an environment that was not all that different from that of the generation that preceded them. The technologies undergoing massive adoption in the early ’80s were VCRs and microwaves – hardly earth-shaking in terms of environmental change. But late Millennials, like my daughters, grew up during the rapid adoption of three massively disruptive technologies: mobile phones, computers and the Internet. So we have a completely different environment to which the brain must adapt, not only from generation to generation, but within the generation itself. This makes Millennials a very complex generation to pin down.

In terms of trying to understand this, let’s go back to my generation – the Baby Boomers – to see how environmental adaptation can alter the face of society. Boomers who grew up in the late ’40s and early ’50s were much different from boomers who grew up just a few years later. Early boomers probably didn’t have a TV; only the wealthiest families would have been able to afford one. In 1951, only 24% of American homes had a TV. But by 1960, almost 90% did.

Whether we like to admit it or not, the values of my generation were shaped by TV. But this was not a universal process. The impact of TV depended on household income, which would have been correlated with education. So TV impacted the societal elite first and then trickled down. This elite segment would also have been the one most likely to attend college. So, in the mid-’60s, you had a segment of a generation whose values and worldview were at least partially shaped by TV – and its creation of a “global village” – and who suddenly came together at a time and place (college) when we build the persona foundations we will inhabit for the rest of our lives. You had another segment of the generation that didn’t have this same exposure and didn’t pursue a post-secondary education. The Vietnam War didn’t create the counter-cultural revolution. It just gave it a handy focal point that highlighted the ideological rift not only between two generations but also within the Baby Boomers themselves. At that point in history, part of our society turned right and part turned left.

Is the same thing happening with Millennials now? Certainly the worldview of at least the younger Millennials has been shaped through exposure to connected media. When polled, they inevitably have dramatically different opinions about things like religion, politics, science – well – pretty much everything. But even within the Millennial camp, their views often seem incoherent and confusing. Perhaps another intra-generational divide is forming. The fact is it’s probably too early to tell. These things take time to play out. But if it plays out like it did last time this happened, the impact will still be felt a half century from now.

NBC’s Grip on Olympic Gold Slipping

When it comes to benchmarking stuff, nothing holds a candle to the quadrennial sports-statzapooloza we call the Summer Olympics. After 3 years, 11 months and 13 days of not giving a crap about sports like team pursuit cycling or half-heavyweight judo, we suddenly get into fistfights over three one-hundredths of a second or an unawarded yuko.

But it’s not just sports that are thrown into comparative focus by the Olympic Games. They also provide a chance to take a snapshot of media consumption trends. The Olympics is probably the biggest show on earth. With the possible exception of the World Cup, it’s the time when the highest number of people on the planet are all watching the same thing at the same time. This makes it advertising nirvana.

Or it should.

Over the past few Olympics, the way we watch various events has been changing because of the nature of the Games themselves. There are 306 separate events in 35 recognized sports spread over 16 days of competition. The Olympics play to a global audience, which means that coverage has to span 24 time zones. At any given time, on any given day, there could be 6 or 7 events running simultaneously. In fact, as I’m writing this, diving, volleyball, men’s omnium cycling, Greco-Roman wrestling, badminton, field hockey and boxing are all happening at the same time.

This creates a challenge for network TV coverage. The Olympics are hardly a one-size-fits-all spectacle. So, if you’re NBC and you’ve shelled out $1.6 billion to provide coverage, you have a dilemma: how do you assemble the largest possible audience to show all those really expensive ads to? How do you keep all those advertisers happy?

NBC’s answer, it seems, is to repackage the Olympics as a scripted mini-series. It means throttling down real time streaming or live broadcast coverage on some of the big events so these can be assembled into packaged stories during their primetime coverage. NBC’s chief marketing officer, John Miller, was recently quoted as saying, “The people who watch the Olympics are not particularly sports fans. More women watch the games than men, and for the women, they’re less interested in the result and more interested in the journey. It’s sort of like the ultimate reality show and miniseries wrapped into one.”

So, how is this working out for NBC? Not so well, as it turns out.

Ratings are down, with NBC posting the lowest primetime numbers since 1992. The network has come under heavy fire for what is quite possibly the worst Olympic coverage in the history of the Games. Let’s ignore for a moment their myopic focus on US contestants and a handful of superstars like Usain Bolt (which may not be irritating unless you’re an international viewer like me). Their heavy-handed attempt to control and script the fragmented and emergent drama of any Olympic Games has stumbled out of the blocks and fallen flat on its face.

I would categorize this as an “RTU/WTF.” The first three letters stand for “Research tells us…” I think you can figure out the last three. I’m sure NBC did their research to figure out what they thought the audience really wanted in Olympic coverage. I’m positive there was a focus group somewhere that told the network what they wanted to hear: “Screw real-time results. What we really want is for you to tell us – with swelling music, extreme close-ups and completely irrelevant vignettes – the human drama that lies behind the medals…” And, in the collective minds of NBC executives, they quickly added, “…with a zillion commercial breaks and sponsorship messages.”

But it appears that this isn’t what we want. It’s not even close. We want to see the sports we’re interested in, on our device of choice and at the time that best suits us.

This, in a nutshell, is the disruption that is broadsiding the advertising industry at full ramming speed. It’s exactly what I was talking about in my last column. NBC may have been able to play their game when they were our only source of information and we were held captive by that scarcity. But over the past three Olympic Games, starting in Athens in 2004, technology has essentially erased that scarcity. The reality no longer fits NBC’s strategy. Coverage of the Olympics is now a multi-channel affair. What we’re looking for is a way to filter the coverage based on what is most interesting to us, not to be spoon-fed the coverage that NBC feels has the highest revenue potential.

It’s a different world, NBC. If you’re planning to compete in Tokyo, you’d better change your game plan, because you’re still playing like it’s 1996.

 

 

 

What Would a “Time Well Spent” World Look Like?

I’m worried about us. And it’s not just because we seem bent on death by ultra-conservative parochialism and xenophobia. I’m worried because I believe we’re spending all our time doing the wrong things. We’re fiddling while Rome burns.

Technology is our new drug of choice and we’re hooked. We’re fascinated by the trivial. We’re dumping huge gobs of time down the drain playing virtual games, updating social statuses, clicking on clickbait and watching videos of epic wardrobe malfunctions. Humans should be better than this.

It’s okay to spend some time doing nothing. The brain needs some downtime. But something, somewhere has gone seriously wrong. We now spend the majority of our lives doing useless things. TV used to be the biggest time suck, but in 2015, for the first time ever, the boob tube was overtaken by time spent with mobile apps. According to a survey conducted by Flurry, in the second quarter of 2015 we spent about 2.8 hours per day watching TV. And we spent 3.3 hours on mobile apps. That’s a grand total of 6.1 hours per day, or more than a third of the time we spend awake. Yes, both things can happen at the same time, so there is undoubtedly overlap, but still – that’s a scary-assed statistic!

And it’s getting worse. In a previous Flurry poll conducted in 2013, we spent a total of 298 minutes per day between TV and mobile apps, versus 366 minutes in 2015. That’s a 22.8% increase in just two years. We’re spending way more time doing nothing. And those totals don’t even include things like time spent in front of a gaming console. For kids, tack on an average of another 10 hours per week, and you can double that for hard-core male gamers. Our addiction to gaming has even led to death in extreme cases.
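
Since I’m throwing a lot of numbers around, here’s a quick sanity check of the arithmetic in Python, using only the figures quoted above (note that the 298 and 366 totals only line up with the 6.1-hour figure if they’re read as minutes per day, which is how I’ve rendered them):

```python
# Sanity check of the Flurry screen-time figures quoted above.
tv_hours_2015 = 2.8    # hours per day watching TV, Q2 2015
app_hours_2015 = 3.3   # hours per day in mobile apps, Q2 2015
combined_2013 = 298    # combined TV + app minutes per day, 2013 poll

combined_2015 = (tv_hours_2015 + app_hours_2015) * 60
print(f"2015 total: {tv_hours_2015 + app_hours_2015:.1f} hours/day "
      f"({combined_2015:.0f} minutes)")                    # 6.1 hours, 366 minutes

increase = (combined_2015 - combined_2013) / combined_2013
print(f"Increase from 2013 to 2015: {increase:.1%}")        # about 22.8%
```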

Even in the wildest stretches of imagination, this can’t qualify as “time well spent.”

We’re treading on very dangerous and very thin ice here. And we no longer have history to learn from. It’s the first time we’ve ever encountered this. Technology is now only one small degree of separation from plugging directly into the pleasure center of our brains. And science has shown that a good shot of self-administered dopamine can supersede everything – water, food, sex. True, those experiments were conducted on rats – primarily because it would be unethical to go too far in replicating them with humans – but are you willing to risk the entire future of mankind on the bet that we’re really that much smarter than rats?

My fear is that technology is becoming a slightly more sophisticated lever we push to get that dopamine rush. And developers know exactly what they’re doing. They are making that lever as addictive as possible. They are pushing us towards the brink of death by technological lobotomization. They’re lulling us into a false sense of security by offering us the distraction of viral videos, infinitely scrolling social notification feeds and mobile game apps. It’s the intellectual equivalent of fast food – quite literally “brain candy.”

Here the hypocrisy of for-profit interest becomes evident. The corporate response typically rests on individual freedom of choice and the consumer’s ability to exercise will power. “We are just giving them what they’re asking for,” touts the stereotypical PR flack. But if you have an entire industry with reams of developers and researchers all aiming to hook you on their addictive product and your only defense is the same faulty neurological defense system that has already fallen victim to fast food, porn, big tobacco, the alcohol industry and the $350 billion illegal drug trade, where would you be placing your bets?

Technology should be our greatest achievement. It should make us better, not turn us into a bunch of lazy screen-addicted louts. And it certainly could be this way. What would it mean if technology helped us spend our time well? This is the hope behind the Time Well Spent Manifesto. Tristan Harris, a design ethicist and product philosopher at Google, is one of its co-directors. Here is an excerpt from the manifesto:

We believe in a new kind of design, that lets us connect without getting sucked in. And disconnect, without missing something important.

And we believe in a new kind of economy that’s built to help us spend time well, where products compete to help us live by our values.

I believe in the Manifesto. I believe we’re being willingly led down a scary and potentially ruinous path. Worst of all, I believe there is nothing we can – or will – do about it. Problems like this are seldom solved by foresight and good intentions. Things only change after we drive off the cliff.

The problem is that most of us never see it coming. And we never see it coming because we’re too busy watching a video of masturbating monkeys on YouTube.

Do We Really Want Virtual Reality?

Facebook bought Oculus. Their goal is to control the world you experience while wearing a pair of modified ski goggles. Mark Zuckerberg is stoked. Netflix is stoked. Marketers the world over are salivating. But, how should you feel about this?

Personally, I’m scared. I may even be terrified.

First of all, I don’t want anyone, especially not Mark Zuckerberg, controlling my sensory world.

Secondly, I’m pretty sure we’re not built to be virtually real.

I understand the human desire to control our environment. It’s part of the human hubris. We think we can do a better job than nature. We believe introducing control and predictability into our world is infinitely better than depending on the caprices of nature. We’ve thought so for many thousands of years. And – Oh Mighty Humans Who Dare to be Gods – just how is that working out for us?

Now that we’ve completely screwed up our physical world, we’re building an artificial version. Actually, it’s not really “we” – it’s “they.” And “they” are for-profit organizations that see an opportunity. “They” are only doing it so “they” can control our interface to consciousness.

Personally, I’m totally comfortable giving a profit driven corporation control over my senses. I mean, what could possibly happen? I’m sure anything they may introduce to my virtual world will be entirely for my benefit. I’m sure they would never take the opportunity to use this control to add to their bottom line. If you need proof, look how altruistically media – including the Internet – has evolved under the stewardship of corporations.

Now, their response would be that we can always decide to take the goggles off. We stay in control, because we have an on/off switch. What they don’t talk about is the fact that they will do everything in their power to keep us from switching their VR world off. It’s in their best interest to do so, and by best interest, I mean that the more time we spend in their world, as opposed to the real one, the more profitable it is for them. They can hold our senses hostage and demand ransom in any form they choose.

How will they keep us in their world? By making it addictive. And this brings us to my second concern about Virtual Reality – we’re just not built for it.

We have billions of neurons that are dedicated to parsing and understanding a staggeringly complex and dynamic environment. Our brain is built to construct a reality from thousands and thousands of external cues. To manage this, it often takes cognitive shortcuts to bring the amount of processing required down to a manageable level. We prefer pleasant aspects of reality. We are alerted to threats. Things that could make us sick disgust us. The brain manages the balance by a judicious release of neurochemicals that make us happy, sad, disgusted or afraid. Emotions are the brain’s way of effectively guiding us through the real world.

A virtual world, by necessity, will have a tiny fraction of the inputs we would find in the real world. Our brains will get an infinitesimal slice of the sensory bandwidth they’re used to. Further, the inputs they do get will have the subtlety of a sledgehammer. Ham-fisted programmers will try to push our emotional hot buttons, all in the search for profit. This means a few sections of our brain will be cued far more frequently and violently than they were ever intended to be. Additionally, huge swaths of our environmental processing circuits will remain dormant for extended periods of time. I’m not a neurologist, but I can’t believe that will be good for our cognitive health.

We were built to experience the world fully, through all our senses. We have evolved to deal with a dynamic, complex and often unexpected environment. We are supposed to interact with the serendipity of nature. That is what it means to be human. I don’t know about you, but I never, ever want to auction off this incredible gift to a profit-driven corporation in return for a plastic, programmed, three-dimensional interface.

I know this plea is too late. Pandora’s box has been opened. The barn door is open. The horse is long gone. But like I said, I’m scared.

Make that terrified.

Can a Public Company Keep a Start-Up Attitude?


Google is possibly the most interesting company in the world right now. But being interesting does not necessarily equate with being successful. And therein lies the rub.

Case in point: Google is taking another crack at Google Glass. Glass has the potential to be a disruptive technology. And the way Google approached it was very much the Google way of doing things. They put a beta version out there and asked for feedback from the public. Some of that feedback was positive, but much of it was negative. That is natural. It’s the negative feedback you’re looking for, because it shows what has to be changed. The problem is that Glass v0.9 is now pegged as a failure. So, as Laurie Sullivan reported, Google is trying a different approach, which appears to be taken from Apple’s playbook. They’re developing under wraps, with a new product lead, and you probably won’t see another version of Glass until it’s ready to ship as a viable, market-ready product.

The problem here is that Google may have lost too much time. As Sullivan points out, Intel, Epson and Microsoft are all working on consumer versions of wearable visual interfaces. And they’re not alone. A handful of aggressive start-ups are also going after Glass, including Meta, Vuzix, Optinvent, Glassup and Recon. And none of them will attract the kind of attention Google did, simply because they’re not Google.

Did Google screw up with the first release of Google Glass? Probably not. In fact, if you read Eric Ries’s The Lean Startup, they did a lot of things right. They got a minimum viable product in front of a market to test it and see what to improve. No, Google’s problem wasn’t with their strategy; it was with their speed. As Ries states,

“The goal of a startup is to figure out the right thing to build—the thing customers want and will pay for—as quickly as possible.”

Google didn’t move fast enough with Glass. And I suspect it was because Google isn’t a startup, so it can’t act like one. Again, from Ries,

“The problem isn’t with the teams or the entrepreneurs. They love the chance to quickly get their baby out into the market. They love the chance to have the customer vote instead of the suits voting. The real issue is with the leaders and the middle managers.”

Google isn’t the only company to feel the constricting bonds of being a public company. There is a long list of world-changing technologies that were pioneered at places like Xerox and Microsoft, tagged as corporate failures, only to eventually change the world in someone else’s hands.

I suspect there are many days when Larry Page and Sergey Brin are sorry they ever decided to take Google public. Back then, they probably thought that the vast economic resources that would become available, combined with their vision, would make an unbeatable combination. But in the process of going public, they were forced to compromise on the very spirit that vision defined. They want to do great things, but they still need to hit their quarterly targets and keep shareholders happy. The two things shouldn’t be mutually exclusive, but sadly they almost always are.

It’s probably no accident that Apple does their development in stealth mode. Apple has much more experience than Google in being a public company. They have probably realized that it’s not the buying public you keep in the dark; it’s the analysts and shareholders. Otherwise, they’ll look at the early betas, an essential step in the development process, and pass judgment, tagging them as failures long before such judgments are justified. It would be like condemning a newborn baby as hopeless because it can’t drive a car yet.

Google is dreaming big dreams. I admire that. I just worry that the structure of Google might not be the right vehicle in which to pursue those dreams.