Our personal computing future

Long live the PC…

I believe we are on the cusp of the largest change in computing since the invention of the PC. It will change the way we interact with computers at home, school, and work. This change is going to affect whole industries: PC makers, chip manufacturers, software developers, internet service providers, digital carrier services, and online service providers (i.e. cloud services).

I see the convergence of five technologies at the core of this coming change:

  1. Tablets / Mobile computing devices
  2. Cloud services
  3. HTML 5 on the web browser
  4. Universal WiFi / WAN
  5. Voice recognition

The PC is dead; long live the PC. It’s not portable (don’t even start arguing about laptops). PCs cost a lot of money and have a lot of power, but most users only do one of five things:

  1. surf
  2. e-mail
  3. play games
  4. listen to music
  5. post on their social network accounts

At the office, it is used for e-mail, spreadsheets, word processing, and specialty apps pertinent to the employee’s job function. None of these actions truly requires a 32-core, 11 nm, 3.5 GHz processor with 64 GB of memory. What most people really want is a simple-to-use tablet that connects them to the internet. It’s small and thin. It’s easy to transport. It acts like a smart piece of paper. It starts instantly (i.e. it never really turns off), and you operate it by directly interacting with your data on the screen. Basically, it looks really cool.

So, users just need to look at stuff and select things. Good voice-to-text should take care of most of our typing needs (although time will tell on this piece of technology). It’s the last piece of the technology puzzle that hasn’t fully emerged yet, though Siri is a good example of where things are headed.

So, everyone is going to be running around with a tablet and a Bluetooth stereo headset. You’ll listen to your music and make calls wirelessly (yes, your tablet will have mobile-phone capability).

The future is going to be like this, or some variant of it. I’m not saying I’ve divined the future with 100% accuracy, but I hope you see what I’m getting at: the personal computer, that large metal box/keyboard-mouse/monitor sitting on your desk, is a dinosaur, and this tech convergence is the bright, shining comet that is going to bury it in the new K-T boundary layer.

Maybe I’ve finally drunk the Kool-Aid, or I’ve taken notice of all the current memes, or it’s just obvious.

So what’s next…

Your tablet is just a display device. HTML5 will be the new front-end UI for everything. This is a return to a vision of computing from the dawn of the information age: the dumb terminal. Computing used to be done on a huge, expensive mainframe maintained by geeks in white lab coats who hung out all day in chilly white rooms. They didn’t let you near it. There was security, and locked doors. The end-user sat in a room far away from the computer and stared at a screen. They looked at stuff, made choices, and entered data.

All computing will be done in the cloud… and it ain’t gonna be free. You’re gonna pay, and pay, and pay.

  1. Hard cash – subscription fees, memberships, there’s an app for that
  2. Advertising
  3. Personal information mining

So, what does this mean…?

Your keyboard-mouse/monitor is now your tablet. The ugly metal box that no one understands is going to move to the cloud, where it will be tended and properly cared for by dedicated geeks. 32-core, 11 nm, 3.5 GHz Intel processors with 256 GB of memory will live in the cloud. ARM processors will take over the tablets.

The O/S will become irrelevant. Arguing about whether Windows, Linux, or Macintosh is best will be like arguing whether vanilla is better than chocolate or strawberry. The fact of the matter is that HTML5 will make all apps Neapolitan, and they’ll run on any O/S. The O/S is dead. No one will care.

You’re going to have a “fun” relationship with your data carrier until they are replaced. AT&T, Verizon, T-Mobile, and Sprint are going to suck us dry. They will gouge us on every kilobyte transferred. Apple / Google / Amazon / Facebook, i.e. AGAF (or someone we haven’t seen yet), will come to the rescue and develop a new business model for sharing data wirelessly. It’ll be a hell of a fight, but take a look at Kodak. Remember Kodak? If you’re under 15 years old, you probably haven’t got a clue what I’m talking about. “Now there’s a Kodak moment” probably doesn’t even register with you. What we’re talking about here is a multi-billion-dollar-a-year company that ended up as the deer caught in the headlights. Kodak is officially information-age roadkill.

Bottom line: AGAF has more money (and more talent) than the telecoms. Once they figure out how the game is played and throw enough money around Washington in the right way, they’ll get the laws changed and silence the FCC through presidential appointments. I believe once the end-user gets a taste of this tech convergence, nothing will be able to stand in its way. The first company (it’ll be Apple) that groks this simple fact will win the adoration and dollars of billions of mobile data users.

End-users will become dumber, so software developers will have to become smarter. The only things that users will be doing with their tablet/mobile devices are looking at stuff, selecting stuff, and voicing information. That means software developers are going to have to get very creative about how data is organized and displayed. UIs are going to be page-driven and very fine-grained. They have to be if we’re going to do anything on a 10″ screen. There’s a lot of R&D to be done here, and a lot of money to be made by the folks who figure out how to do it right.

Update: March 27, 2013

It looks like the Kool-Aid is starting to be sipped. I just read an article that very closely mirrors my predictions: The Building is the New Server. I missed the part about ARM making inroads into the data centers, but most of the rest remains valid.



Context…, it’s everything

So, I’m in the shower yesterday morning and I start thinking about that Dragon NaturallySpeaking commercial I’ve been seeing all over the place on TV. “How does it know to bold the previous word when the guy says, ‘bold that’?” I think to myself. “It has to know the context…. Well, it must know a couple of dozen commands. It sensed a pause and dropped into command-sensing mode. It took the next words it heard and checked them against its list of known commands. One of them was a match. Hmm, would the human brain do it that way?”
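
Just to make that shower guess concrete, here’s a toy sketch of the pause-then-match idea (the command list and the exact-phrase matching are pure invention on my part, not how Dragon actually works):

```python
# Toy sketch: after a pause is sensed, treat the next words heard as a
# possible command and check them against a small list of known commands.
KNOWN_COMMANDS = {
    "bold that": "apply_bold_to_previous_word",
    "delete that": "delete_previous_word",
    "new paragraph": "insert_paragraph_break",
}

def handle_words_after_pause(words):
    """Return the matched command's action, or None to treat it as dictation."""
    phrase = " ".join(words).lower()
    return KNOWN_COMMANDS.get(phrase)

print(handle_words_after_pause(["bold", "that"]))    # apply_bold_to_previous_word
print(handle_words_after_pause(["hello", "world"]))  # None -- just dictation
```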

That’s when I envisioned this whole array of agents riding above the part of our brain that turns sounds into word symbols. Each agent listens to the symbols. There must be thousands of them. Each agent has an output that signals how close it considers the symbol stream to be to its command. There’s another layer above that listens to all the outputs. When the monitoring layer receives a strong signal from one of its agents, it knows what command has been given.

It probably doesn’t work that way in the human brain. That’s more like how I’d program it on my computer, but it gets me thinking more about context.
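
If I did program it that way, it might look something like this (the agents, the threshold, and the word-overlap scoring are all made up, just to see the shape of the idea):

```python
# Each "agent" rides above the sound-to-symbol layer and signals how close
# the incoming word symbols are to its own command. A monitoring layer
# listens to every agent's output and fires on a sufficiently strong signal.

class CommandAgent:
    def __init__(self, command):
        self.command = command
        self.words = set(command.split())

    def signal(self, symbols):
        """Output strength: fraction of this agent's command words heard."""
        return len(self.words & set(symbols)) / len(self.words)

class MonitoringLayer:
    def __init__(self, agents, threshold=1.0):
        self.agents = agents
        self.threshold = threshold

    def listen(self, symbols):
        """Return the command whose agent signals strongest, if strong enough."""
        best = max(self.agents, key=lambda a: a.signal(symbols))
        return best.command if best.signal(symbols) >= self.threshold else None

agents = [CommandAgent(c) for c in ("bold that", "delete that", "new paragraph")]
monitor = MonitoringLayer(agents)
print(monitor.listen(["bold", "that"]))    # bold that
print(monitor.listen(["hello", "there"]))  # None -- no command heard
```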

How does the brain know what the context of the command is? What’s involved in defining a context? How can I write software that could build contexts through training from an external environment? What structures do I need to have in place to allow contexts to be built? Hey, what does a plain and simple context look like? How does the context shift as the situation or environment changes?

Context: a series of conditions that define a known state? Is that too simple?
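
Even the too-simple version can be written down (a toy definition of my own, of course):

```python
# A context as nothing more than a set of named conditions that must all
# hold against the current state of the world.
class Context:
    def __init__(self, name, conditions):
        self.name = name
        self.conditions = conditions  # condition name -> required value

    def matches(self, state):
        """True when every condition agrees with the current state."""
        return all(state.get(k) == v for k, v in self.conditions.items())

dictation = Context("dictation", {"mic_on": True, "pause_sensed": False})
command = Context("command mode", {"mic_on": True, "pause_sensed": True})

state = {"mic_on": True, "pause_sensed": True}
print([c.name for c in (dictation, command) if c.matches(state)])  # ['command mode']
```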

If the human mind has a symbolic model of the world within it, and our senses attempt to keep that model in sync with the external environment, then context would be our current belief about the state of our world. I imagine that there is also a model of ourselves inside our mind. Maybe that’s how we hear ourselves think. Anyway, as we grow and learn, are we constantly monitoring the inputs from the environment (or our own mind) and building new contexts? There must be a mechanism within our brain that decides when the conditions are right to create a new context. I don’t care if this is actually how the human brain works; I’m just interested in whether it is practical for building a machine that can learn.

So, what if there are lower, basic contexts whose outputs are sent to another layer of higher contexts, and so on…. Can you build a complex understanding with that? What do you do with a context, anyway? If it is a set of conditions that define a state, then those conditions must be important information. Maybe a context is also like a gateway. When the inputs meet certain conditions, the inputs are amplified and sent on to another layer, or even to other sections, for more processing?
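
The gateway idea, sketched in layers (the gain factor and the wiring are arbitrary; I just want to see whether the shape of it makes sense):

```python
# A context as a gateway: when its conditions are met by the inputs,
# the numeric inputs are amplified and passed up to the next layer.

def gateway(conditions, inputs, gain=2.0):
    """If every condition holds, pass the inputs on amplified; else stay closed."""
    if all(inputs.get(k) == v for k, v in conditions.items()):
        return {k: v * gain for k, v in inputs.items() if isinstance(v, float)}
    return None  # the gate stays closed

# Layer 1: a low, basic context fires on the raw inputs.
raw = {"loudness": 0.4, "pitch": 0.7, "pause_sensed": True}
layer1 = gateway({"pause_sensed": True}, raw)
print(layer1)  # {'loudness': 0.8, 'pitch': 1.4}

# Layer 2: a higher context only ever sees what the lower gateway let through.
layer2 = gateway({}, layer1) if layer1 is not None else None  # no conditions: always open
print(layer2)  # {'loudness': 1.6, 'pitch': 2.8}
```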

There must be a lot of context conditions: millions, maybe billions. Hmm, I wonder if this is even the way the brain processes information. Is there any practical idea here?

I think I’ve lost my context.


A journey of a thousand miles begins with a single step

In this space I intend to express my insights into the wonderful and weird world around us. So much of what I see interests me. Until now, it has all been kept in my head. This is my attempt to put it down in words.

Enjoy
