This week, for some reason (largely to do with thinking I could still handle caffeine and being horribly wrong), a number of pieces fell into place for me when it came to looking at how we might interact with computers and the Internet in the future. I began to sketch that out in my SearchInsider column today (more details about the caffeine episode are in it), but quickly found that I had hit my editorial limit and there were a lot of pieces of the vision that I wasn’t able to draw together. So I promised to put a post on this blog going into a little more detail.
The ironic thing about this vision was that although I’d never seen it fully described before, as I thought about it I realized a lot of the pieces to make it happen are already in development. So obviously, somewhere out there, somebody has also seen the same vision, or at least pieces of it. The other thing that struck me was that it all made sense as a logical extension of how I interact with computers today. Obviously there’s a lot of technology being developed, but if you take each of those vectors and follow it forward into the future, they all seem to converge on a similar picture.
Actually, the most commonly referenced rendering of the future that I’ve seen is the world that Spielberg imagined in his movie Minority Report. Although anchored in pop culture, the way that Spielberg arrived at his vision is interesting to note. He took the original short story by Philip K. Dick and fleshed it out by assembling a group of futurists, including philosophers, scientists and artists, and putting them together in a think tank. Together they came up with a vision of the future that was both chilling and intriguing.
I mention Minority Report because certain aspects of the future as I see it seem to mirror what Spielberg came up with for his. So, let me flesh out the individual components and provide links to technology currently under development that seems to point this way.
First of all, what will the web become? There’s been a lot of talk about Web 2.0 and Web 3.0, or the Semantic Web envisioned by Tim Berners-Lee. Seth Godin had a particularly interesting post (referenced in my column) that he called Web4. All these visions of the Web’s future share common elements. In Godin’s version, “Web4 is about making connections, about serendipity and about the network taking initiative”. This Web knows what we’re doing, knows what we have to do in the future, knows where we are at any given time, knows what we want and works as our personal assistant to tie all those pieces together and make our lives easier. More than that, it connects us in new ways, creating the ad hoc communities that I talked about in my earlier post, Brain Numbing Ideas on Friday afternoon.
For the sake of this post, I’m calling my version of the new Web “the Cloud”, borrowing some language from Microsoft. For me the Cloud is all about universal access, functionality, connection and information. The Cloud becomes the repository where we put all our information, both that which we want to make publicly accessible and that which we want to keep private. Initially this will cause some concern, as we wrestle with the change of thinking required to understand that physical ownership of data does not always equal security of that same data. We’ll have to gain a sense of comfort that data stored in online repositories can still remain private.
Another challenge will be understanding where we, ourselves, draw the line between the data we choose to make publicly accessible and the data we want to keep for our own personal use. There will be inevitable mistakes of an embarrassing nature as we learn where to put up our own firewalls. But the fascinating part about the Cloud is that it completely frees us physically. We can take all the data we need to keep our lives on track, stored in the Cloud, and have it accessible to us wherever we are. What’s more, everyone else is doing the same thing. So within the Cloud, we’ll be able to find anything that anyone chooses to share with us. This could include the music they create, the stories they write, or on a more practical level, what our favorite store currently has in stock, or what our favorite restaurant has on for its special tonight. Flight schedules, user manuals, technical documentation, travel journals…the list is endless. And it all resides in the Cloud, accessible to us if we choose.
The other really interesting aspect of the Cloud is the functionality it can offer as we begin to build true applications into the web, through Web 2.0 technology. We start to imagine a world where any functionality we could wish for is available when we need it, and where we can buy access as required. The Cloud becomes a rich source of all the functionality we could ever want. Some of that functionality we use daily, to create our own schedules, to communicate, to connect with others and to manage our finances. Some of that functionality we may use once or twice in a lifetime. It really doesn’t matter because it’s always there for us when we need it.
The functionality of the Cloud is already under development. The two most notable examples can be found in Microsoft’s new Office Live Suite and in the collection of applications that Google is assembling. Although both are early in their development cycles, one can already see where they could go in the future.
The final noteworthy aspect of the Cloud is that it will create the basic foundation for all communication in the future. Our entertainment options will be delivered through the Cloud. We will communicate with each other through the Cloud, either by talking, writing or seeing each other. We will access all our information through the Cloud.
For the Cloud to work, it has to be ubiquitous. This represents possibly the single greatest challenge at the current time. The Cloud is already being built, but our ability to access it still depends on the speed of our connection, and the fact is that, right now, our wireless infrastructure doesn’t allow for a robust enough connection to really leverage what the Cloud has to offer. But universal wireless access is currently being rolled out in more and more locations, so the day is drawing near when access will cease to be a problem.
So, when the Cloud exists, the next question is how do we access it? Let’s start with the two access points that are most common today: at home and at work.
The Home Box
The Home Box becomes the nerve center of our home. It acts as a control point for all the functionality and communication we need when we’re not at work. The Home Box consists of a central unit, which doubles as our main entertainment center, and a number of “smart pods” located throughout the home, each connected to a touch screen.
So, what would the Home Box do? Well first of all, it would inform and entertain us. The pipeline that funnels our entertainment options to us would be directly connected to the Cloud. We would choose what we want to see, so the idea of channels becomes obsolete. All entertainment options exist in the Cloud and we pick and choose what we want, when we want.
Also, the Home Box makes each one of those entertainment options totally interactive. We can engage with the programming and shape it as we see fit. We can manipulate the content to match our preferences. The Home Box can watch four or five sporting events and assemble a customized highlight reel based on what we want to see. The Home Box can scan the Cloud for new works by artists, whether they be visual artists, musicians or video artists, and notify us when new content is ready for us to enjoy. If an interest suddenly develops in one particular area, for instance a location that we want to visit on an upcoming vacation, the Home Box assembles all the information that exists, sorted by our preferences, and brings it back to us. And at any time, while watching a video about a particular destination, we can tag items of interest within the video for further reference. As soon as they’re tagged, a background application can start compiling information on whatever we indicated we were interested in. Advertising, in this manifestation, becomes totally interwoven into the experience. We indicate when we’re interested in something, and the connection to the advertiser is initiated by us with a quick click.
But the Home Box is much more than just a smarter TV set or stereo. It also runs our home. It monitors energy consumption levels and adjusts them as required. It monitors what’s currently in our fridge and our pantry (by the way, computers are already being built into fridges) and notifies us when we’re out of something. Or, if there’s a particular recipe we want to make, it will let us know what we currently have and what we need to go shopping for.
Microsoft already has the vision firmly in mind. Many of the components are already here. The limited success of Microsoft’s Windows Media Center has not dissuaded them from this vision of the future. Windows Media Center is now built into premium versions of the Vista operating system. And the Smart Pods I refer to? Each Xbox 360 has the ability to tap right into Windows Media Center. The technology is already in place.
The Work Box
Probably the least amount of change that I see in the future is in how we access the Internet at work. For those of us who work in an office environment, we’re already fairly well connected to the Internet. The primary difference in this case would be where the data resides. Eventually, as we gain comfort with the security protocols that exist within the Cloud, we will realize the benefits that come with hosting our corporate data where it’s accessible to all members of the organization, no matter where they are physically located.
But consider what happens for the workers who don’t work in an office environment. Access to the Cloud now allows them to substantially increase their connectivity and functionality while they’re mobile. You could instantly access the inventory of any retail location within the chain. You could see if a part is in stock at the warehouse. You could access files and documents from anywhere, at any time. And you could tap into the core functionality of your office applications as you wish, wherever you happen to be.
Once again, much of the functionality that would enable this is already in place or being developed. In the last year we at Enquiro have started to realize the capabilities of Microsoft Exchange Server and SharePoint services. Just today, Google announced new enterprise-level apps that would be available on the web. Increasingly, more and more collaborative tools that use the Internet as their common ground are being developed. The logical next step is to allow these to reside within the Cloud and to free them from the constraints of our own internal hardware and software infrastructure.
The Mobile Device
When we talk about tangible technology that will enable this future, hardware that we can see and touch, the mobile piece of the equation is the most critical. For us to truly realize the full functionality of the Cloud, we have to have universal access to it. It has to come with us as we live our lives. The new mobile device becomes a constant connection to the Cloud. Small, sleek, GPS-enabled, with extended communication capabilities, the new handheld device will become our computing device of choice. All the data and the functionality that we could require at any time exists in the Cloud. The handheld device acts as our primary connection to the Cloud. We pull down the information that we need, we rent functionality as required, we do what we have to do and then we move on with our lives.
Our mobile device comes with us and plugs into any environment that we’re in. When we’re at work, we plug it into a small docking station and all the files that we require are interchanged automatically. Work we did at home is automatically uploaded to the corporate section of the Cloud, our address books and appointment calendars are instantly updated, new communications are downloaded, and an accurate snapshot of our lives is captured and is available to us. When we get home again we dock our mobile device and the personal half of our lives is likewise updated.
Consider some practical applications of this:
When we go to the gym, our exercise equipment is now “Cloud” enabled. Our entire exercise program is recorded on our mobile device. As we move from station to station we quickly plug it into a docking station, the weights are automatically adjusted, the number of reps is uploaded, and as we do our exercises, appropriate motivating music and messages are heard in our ear. At the same time, our heart rate and other biological signals are being monitored and are being fed back to the exercise equipment, maximizing our workout.
When we’re at home, we quickly plug our mobile device into the Smart Pod in the kitchen, and everything we need to get on our upcoming shopping trip is instantly uploaded. What’s more, with the functionality built into the Cloud, the best specials on each of the items are instantly determined, the best route to pick up all the items is sent to our GPS navigation module, and our shopping trip is efficiently laid out for us. While we’re there, the built-in bar code scanner allows us to comparison shop on any item, in the geographic radius we choose.
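The “best route” piece is really just a stop-ordering problem. As a rough illustration of how the Cloud might sequence the errands (every store name and coordinate here is invented for the example, and a real service would use road distances, not straight lines), a simple nearest-neighbor heuristic looks like this:

```python
from math import hypot

def nearest_neighbor_route(start, stops):
    """Greedily visit whichever remaining stop is closest.

    start: (x, y) starting point; stops: dict of name -> (x, y).
    A quick heuristic, not an optimal route."""
    route = []
    here = start
    remaining = dict(stops)
    while remaining:
        name = min(remaining, key=lambda n: hypot(remaining[n][0] - here[0],
                                                  remaining[n][1] - here[1]))
        here = remaining.pop(name)
        route.append(name)
    return route

# Hypothetical errands; coordinates are made up
stops = {"grocer": (2, 1), "pharmacy": (5, 5), "bakery": (1, 3)}
print(nearest_neighbor_route((0, 0), stops))  # ['grocer', 'bakery', 'pharmacy']
```

Nearest-neighbor can miss the truly shortest tour, but for a handful of shopping stops it gives a sensible ordering almost instantly, which is all this scenario needs.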
As I fly back from San Francisco, a flight delay means that I may miss my connecting flight in Seattle. My mobile device notes this, adjusts my schedule accordingly, automatically notifies my wife and scans airline schedules to see if an alternative flight might still get me home without an unexpected layover near SeaTac Airport. If there’s no way I can make it back, it books me a room at my preferred hotel.
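The decision the device makes in that scenario is simple to sketch. This is a hypothetical sketch only (the `Flight` record, the 30-minute minimum connection time and the times themselves are all my own assumptions, not any airline’s API): find the earliest-arriving onward flight I can still make, and if there is none, fall back to booking the hotel.

```python
from dataclasses import dataclass

MIN_CONNECTION = 30  # assumed minimum connection time, in minutes

@dataclass
class Flight:
    departs: int  # minutes since midnight
    arrives: int

def plan_connection(inbound, candidates):
    """Return the earliest-arriving onward flight still makeable,
    or None if every option is missed (time to book that hotel)."""
    feasible = [f for f in candidates
                if f.departs >= inbound.arrives + MIN_CONNECTION]
    return min(feasible, key=lambda f: f.arrives, default=None)

# Delayed arrival 18:40; onward options depart 18:55, 19:30 and 21:00
inbound = Flight(departs=960, arrives=1120)
options = [Flight(1135, 1260), Flight(1170, 1290), Flight(1260, 1380)]
print(plan_connection(inbound, options))  # the 19:30 departure still works
```

The 18:55 departure is ruled out by the connection buffer, so the planner picks the 19:30 flight; drop that option too and it returns `None`, the hotel-booking branch.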
The Missing Pieces
I happen to think this is a pretty compelling vision of the future. And as it started to come together for me, I was surprised by how many of the components already exist or are currently being developed. As I said in the beginning, it seems like a puzzle with a lot of the pieces already in place. There are some things, however, that still need to come together for this vision to become real. Here are the challenges as I see them.
More Computing Horsepower
For the mobile device that I envisioned to become a reality, we have to substantially up the ante on computing horsepower. The story that led to my writing of the SearchInsider column was one about the new research chip that is currently under development at Intel. Right now the super chips are being developed for a new breed of supercomputer, but the trickle-down effects are inevitable. Just to give you an idea of the quantum leap in performance we’re talking about, the chip is designed to deliver teraflop performance; a teraflop is a trillion calculations per second. The first time teraflop performance was achieved was in 1997, on a supercomputer that took up more than 2,000 square feet and was powered by 10,000 Pentium Pro processors. With the new development, that same performance is achieved on a single multi-core chip about the size of a fingernail. This opens the door to dramatic new performance capabilities, including a new level of artificial intelligence, instant video communications, photorealistic games, multimedia data mining and real-time speech recognition.
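To put that leap in rough numbers, using only the figures quoted above (a back-of-envelope comparison, nothing more):

```python
# Back-of-envelope math from the figures above
teraflop = 1e12            # one teraflop = a trillion calculations per second
processors_1997 = 10_000   # Pentium Pro CPUs in the 1997 supercomputer

# The whole 1997 machine delivered ~1 TFLOPS, so each CPU contributed roughly:
flops_per_cpu_1997 = teraflop / processors_1997  # ~100 million calcs/sec

# One fingernail-sized research chip vs. one 1997 processor:
improvement = teraflop / flops_per_cpu_1997
print(f"Each 1997 CPU: {flops_per_cpu_1997:.0e} FLOPS; "
      f"the new chip packs roughly {improvement:,.0f}x that on a single die")
```

In other words, one chip now does the work of the entire room-sized 1997 machine, which is exactly the kind of headroom a pocket device connected to the Cloud would need.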
A descendant of this prototype chip could make our mobile device several orders of magnitude more powerful than today’s most powerful desktop box. And when implanted in our Home Box, this new super chip would allow us to scan any video file and pick out specific items of interest. You could scan the top 100 movies of any year to see how many of them reference the city of Cleveland, Ohio (not exactly sure why you’d want to do this), or include a product placement for Apple.
Better Speech Recognition
One of the biggest challenges with mobile computing is the input/output part of the problem. Small just does not lend itself to being user-friendly when it comes to getting information in and out of the device. We struggle with tiny keyboards and small screens. But simply talking has proven to be a remarkably efficient communication tool for us for thousands of years. The keyboard was a necessary evil because speech recognition wasn’t an option for us in the past. We can talk much faster than we can type.
I was recently introduced to Dragon NaturallySpeaking for the first time. I’ve been trying it for about three weeks now. Although it’s still getting to know me and I’m still getting to know it, when it works it works very well. I found it a much more efficient way to interact with my computer. It would certainly make interacting with a mobile device infinitely more satisfying. The challenges right now are that speech recognition requires a fairly quiet environment, you’re constantly talking to yourself, and mobile devices just don’t have enough computing power to handle it.
We’ve already dealt with the computing horsepower problem above. So how do we deal with the challenge of getting our vocal commands recognized by our mobile device? Let me introduce you to the subvocalization mic. The mic actually picks up the vibrations from our vocal cords, even if we’re only whispering, and renders recognizable speech without all the background noise. New prototype sensors can detect subvocal or silent speech. We could speak quietly (even silently) to ourselves, no matter how noisy the environment, and our mobile device would be able to understand what we’re saying.
Better Visual Displays
The other challenge with a mobile device is in freeing ourselves from the tiny 2.5″ x 2.5″ screen. It just does not produce a very satisfying user experience. One of the biggest frustrations I hear about many mobile apps is a lack of functionality that comes simply from not having enough screen real estate. This is where a heads-up display could make our lives much, much easier. Right now they’re still pretty cumbersome and make us look like cyborgs, but you just know we’re not far from the day when they could easily be built into a pair of non-intrusive eyeglasses. Then the output from our mobile device can be as large as we want it to be.
Going one step further, let’s borrow a scene from Spielberg’s Minority Report. We have the heads-up display which creates a virtual 3-D representation of the interface. We could also have sensors on our hands that would turn that display into a virtual 3-D touchscreen experience. We could “touch” different things within the display and interact with our computing device in this way. Combined with subvocalization speech commands, this could create the ultimate user interface. Does this sound far-fetched? Microsoft has already developed much of the technology and has licensed it to a company called EON Reality. Like I said, no matter what the mind can envision, it’s probably already under development. As I started down this path, it particularly struck me how many of the components under development had the Microsoft brand on them.
If you can fill in other pieces of the puzzle, or you have your own vision of the future, make sure you take a few moments to comment.