
Docks, transformers, computing cores and taking it all with you

Back in the mists of history -- probably the late '90s or early '00s -- I remember reading a blog post. I'm afraid I have been unable to find it again, so you'll have to take my reminiscing on faith (but please leave a comment if you know what I'm talking about). This post dissected and analyzed a collection of freshly granted IBM patents which, taken together, painted a picture of the future of personal computing that has stayed with me ever since.

In essence, they called for each person to be carrying around a personal "computing core" -- a device we'd recognize today as a modern smartphone, although it was close to science fiction back then -- that could be docked into a variety of shells to become other devices, such as a laptop or a desktop. While Apple's PowerBook Duo subnotebooks were designed to transform into desktop computers when docked with their base units, they didn't quite meet the pocketable part of the 'computing core' definition.

I was reminded of this recently when reading Anandtech's review of the clumsily named Asus Eee Pad Transformer TF101. If you're unfamiliar with it, the Eee Pad looks, at first glance, like Yet Another Identikit Android Tablet, as it has very similar specs to the rest of them -- Android Honeycomb software, a dual-core Nvidia Tegra 2 system-on-a-chip, 1 GB of RAM and so forth.

The Asus, however, has two key things in its favor. Firstly, for the baseline Wi-Fi/16 GB configuration, it's $100 cheaper than the iPad. Secondly, it works with a $150 laptop dock accessory that turns it into a netbook.

Now, we've seen this before, notably with Motorola's Atrix 4G Android smartphone. That device also has a "Lapdock" add-on, but Engadget thought the hardware was overpriced and the software disappointing.

According to several reviews, however, the Eee Pad differs in a key way: it's not awful to use. The keyboard isn't terrible, for one, and it includes a touchpad so that you don't have to keep moving your hands back and forth between the keyboard (for typing) and the screen (for UI interaction). It has a big battery in the keyboard bit -- increasing the capacity of the docked tablet to a mammoth 48 watt-hours (Whr). The iPad 2, for comparison, has a 25 Whr battery, and the 13" MacBook Air has a 50 Whr one. The keyboard "slice" also includes USB ports and an SD card slot. It's a bit thick when it's in netbook mode, and the software isn't perfect, but the overall tone of most of the reviews I've seen has been one of pleasant surprise that this hybrid design works passably well.

I'm not suggesting this is the Android tablet to put a dent in the iPad's terrifying market share (although it's worth noting that early sales have been strong), but I do think that for the first time, we are seeing devices that might start to deliver on IBM's decade-old vision of computers that morph to our needs. Indeed, Anand Lal Shimpi opens his review by talking about the problems of device synchronization -- like walking away from his desktop PC with a tablet in hand and wishing he could bring over the half a dozen open browser tabs, say, or the draft of an article he's working on. (A different answer to this problem is cloud syncing; there'll be more on that in a moment.)

Now, it's worth noting that the IBM patents didn't work this way, with one powerful but pocketable "hub" device. Given how weak mobile devices were back then, there was no feasible way to use the smartphone's CPU for all tasks; hence, the patents describe a complex scheme whereby each shell that the smartphone was plugged into would have its own storage, CPU, RAM and GPU. The software would dynamically and automatically expand to accommodate these new bits and pieces as you moved the core between different shells. In practice, making the device's operating system cope with this sort of real-time shift in resources is very difficult, and that was a large part of why these ideas have lain dormant for a decade.

So what's changed? Simply, the power in mobile devices today is approaching the level where they could, without compromise, run an entire desktop system for a mainstream user. The iPad 2's dual-core 1 GHz processor is a pretty capable beast, and the next iPhone is widely expected to share it (as indeed the iPhone 4 and the first iPad share their core chips, albeit at a slightly slower clock speed in the iPhone). Now, it's important to point out that these low-power ARM chips are quite a bit less powerful, clock for clock, than the Intel processors in desktop Macs. The iPad 2 sounds almost as powerful as the poverty-spec 11" MacBook Air with its dual-core 1.4 GHz processor, but that isn't actually true; in practice, the Air will still be somewhat ahead. However, performance is only going to go up; Nvidia is already talking about a quad-core Tegra chip, and we can assume that Apple has similar projects in the labs. Incidentally, the Tegra and the A5 chip are close cousins; they are both derived from the same ARM Cortex-A9 reference design.

To the cloud!

The problem of accessing your stuff from more than one device can also be addressed, to one degree or another, by use of cloud syncing services (which were no more than an unimaginable dream when the IBM patents were written). A few examples from a field of many: Dropbox, Handoff, Firefox Mobile and Seamless all attempt to link some piece of data -- files, or browser state, or music playback -- between devices of all shapes and sizes. Surely this is preferable to the idea of clumsy device mating?

Well, perhaps, but I'm not convinced it's a slam-dunk. At the moment, cloud syncing is far from complete, largely because it's not built deep into mobile operating systems. Dropbox, for example, works with (say) the Nebulous Notes text editor I am using to type this post, but not with the game of Death Rally I was playing earlier -- so I can't move my save game back and forth between my iPad and my iPhone (although our own Chris Rawson has a hack for that). Similarly, browser state syncing only works between certain desktop and certain mobile browsers, e.g., between Firefox 4 and Firefox Mobile. What's needed is more akin to what Josh Topolsky proposed at Engadget: the Continuous Client.

Basically, until it's everywhere and integrated into everything -- and until all devices have permanent, ubiquitous internet connections -- cloud syncing will always be limited in what it can offer. And the ubiquitous internet isn't as trivial as it might seem; here in Wales, for example, I don't need to venture very far into the countryside before my iPhone's data connection is down to 9.8 kbit/sec GPRS. For transferring anything beyond short text files, that's highly impractical -- especially as cloud sync involves moving the file across a cellular data interface twice (say, once to upload from iPhone to Dropbox, then again to download from Dropbox to iPad). Low population density, and therefore a poor return on investment for cell network buildout, means there'll always be at least some unfortunates living on the edges of the grid.
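To put rough numbers on just how impractical that GPRS link is for cloud sync, here's a quick back-of-envelope sketch. The 9.8 kbit/sec figure is the one quoted above; the 5 MB file size is purely an illustrative assumption (roughly one photo, or a modest email attachment).

```python
# Back-of-envelope transfer times for cloud sync over a slow GPRS link.
# 9.8 kbit/s is the data rate mentioned above; the 5 MB file is a
# hypothetical example, not a figure from any review.

link_kbit_per_sec = 9.8      # GPRS data rate
file_megabytes = 5           # assumed file size

file_kbits = file_megabytes * 8 * 1024              # MB -> kbit
one_way_minutes = file_kbits / link_kbit_per_sec / 60

# Cloud sync crosses the cellular link twice: once to upload the file
# to the service, and again to download it onto the second device.
round_trip_minutes = one_way_minutes * 2

print(f"One-way transfer:   {one_way_minutes:.0f} minutes")
print(f"Upload + download:  {round_trip_minutes:.0f} minutes")
```

That works out to roughly 70 minutes each way -- well over two hours before a single 5 MB file has made it from one device to the other.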

Even when those difficult problems are solved, there'll always be some data for which cloud storage is inappropriate. Would you trust, say, your employer's confidential secrets to Dropbox despite these security concerns? Having spent some time working in secure areas (nuclear power plants and defence contractors) where camera phones and USB sticks were verboten, I can attest there'll always be room for some non-cloud solutions to these problems.

Even if you're happy with any security ramifications, there's also the issue of file size to consider. It's already possible to buy an iPad with more storage space than the baseline Dropbox tier (64 GB versus 50 GB), for example. I'm only a below-average photographer, but I've managed to accumulate almost 150 GB of RAW files in my Aperture library in the 16 months since I bought a grown-up digital camera. Even if Dropbox could accommodate me (which it cannot, although, notably, Amazon's Cloud Drive could), we're a long, long way from a world where domestic internet connections offer upload speeds fast enough to make working with files of that size feasible.
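To illustrate the scale of that problem, here's another rough sketch. The 1 Mbit/sec upstream rate is an assumption on my part -- a fairly generous figure for a typical domestic ADSL line -- not a number taken from any provider.

```python
# How long would the initial upload of a 150 GB photo library take over
# a typical domestic upstream link? The 1 Mbit/s upload rate is an
# assumed figure for an ordinary ADSL connection; adjust to taste.

library_gigabytes = 150
upload_mbit_per_sec = 1.0    # assumed upstream bandwidth

library_megabits = library_gigabytes * 1024 * 8    # GB -> Mbit
upload_seconds = library_megabits / upload_mbit_per_sec
upload_days = upload_seconds / (60 * 60 * 24)

print(f"Initial upload: roughly {upload_days:.0f} days of non-stop uploading")
```

At that rate, the first sync alone would take around two weeks of continuous uploading, before you ever edit a photo and have to push the changes back up.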

Who might want to take computing in this direction?

I've already mentioned that Nvidia is actively pushing the idea that a single smartphone will become the literal center of all your computing. When you trawl through the archives, you'll find that most major computing firms have looked at the idea at one point or another; for example, this Microsoft patent from January 2009 for a smartphone dock. Patents, of course, should always be taken with a pinch of salt; they may hint at huge new product directions or merely be the failed offshoot of a shelved research project.

So what about Apple? Might it have plans in this direction? Famously, it's never shown much interest in transformable computing up until now, much to the annoyance of people who'd really, really like a dock for their MacBooks. You have to go all the way back to the PowerBook Duo to find a laptop-to-desktop solution, even though docks are fairly common on mainstream business-oriented PC laptops. Personally, I'm hoping for a docking box for a future MacBook Air that connects to a single Thunderbolt port on the laptop side and offers downstream ports for DisplayPort, audio, Ethernet and USB. In fact, as I'm dreaming anyway, throw eSATA on, too. I'm not holding my breath, though.

A trawl through Patently Apple's extensive archives didn't reveal much that is directly related, although this 2008 patent for a laptop that docked into an iMac frame is an exception.

Apple has something huge to gain from such a play. The most obvious difference between iOS and OS X is the App Store; specifically, how it's the only way to get software onto an iOS device and how Apple takes a cut of every transaction. This means it's quite clearly in Apple's interests to push iOS further upmarket, if the opportunity presents itself, and there seem to be no insoluble technical barriers to using a future iOS device as your main computing platform if you are willing to live within iOS's walled garden. Now, I'm not suggesting OS X is dead -- there'll always be users who need, or simply want, unfettered access to their own computers outside of the tight sandboxing that iOS applies. But will that always apply to everyone, or even most folks? I'd argue the popularity of the iPad shows that's not the case.

Wrapping up

Viewed now, a decade later, IBM's vision seems less compelling than it did at the time. Even as the march of progress has resolved many of the technical challenges, it's also given us arguably superior cloud-based solutions to the problems it was trying to address. Nevertheless, I think it's too early to call it just yet; it's still possible our future gadgets will offer more than meets the eye.