Primed goes in-depth on the technobabble you hear on Engadget every day -- we dig deep into each topic's history and how it benefits our lives. You can follow the series here. Looking to suggest a piece of technology for us to break down? Drop us a line at primed *at* engadget *dawt* com.
Welcome to one of the most unnecessarily complicated questions in the world of silicon-controlled gadgets: should a savvy customer care about the underlying nature of the processor in their next purchase? Theoretically at least, the answer is obvious. Whether it's a CPU, graphics card, smartphone or tricorder, it'll always receive the Holy Grail combo of greater performance and reduced power consumption if it's built around a chip with a smaller fabrication process. That's because, as transistors get tinier and more tightly packed, electrons don't have to travel so far when moving between them -- saving both time and energy. In other words, a phone with a 28-nanometer (nm) processor ought to be fundamentally superior to one with a 45nm chip, and a PC running on silicon with features etched at 22nm should deliver more performance-per-watt than a 32nm rival.
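To put a rough number on that intuition: under idealized scaling, the number of transistors you can fit in a given area goes up with the inverse square of the feature size. Here's a quick back-of-the-envelope sketch (our own illustration, not any chipmaker's figures -- real-world gains are always messier than this):

```python
# Idealized scaling math: transistor density varies with the inverse
# square of the feature size, so a full node shrink roughly doubles it.

def density_gain(old_nm: float, new_nm: float) -> float:
    """Idealized density improvement from shrinking the feature size."""
    return (old_nm / new_nm) ** 2

# The article's comparisons, in idealized terms:
print(density_gain(45, 28))  # ~2.58x the transistors per unit area
print(density_gain(32, 22))  # ~2.12x
```

In practice, nobody hits these ideal numbers -- which, as we'll see, is exactly the point of contention.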
But if that's true, isn't it equally sensible to focus on the end results? Instead of getting bogged down in semiconductor theory, we may as well let Moore's Law churn away in the background while we judge products based on their overall user experience. Wouldn't that make for an easier life? Well, maybe, but whichever way you look at it, it's hard to stop this subject descending into pure philosophy, on a par with other yawnsome puzzles like whether meat-eaters should visit an abattoir at least once, or whether it's better to medicate the ailment or the person. Bearing that in mind, we're going to look at how some key players in the silicon industry treat this topic, and we'll try to deliver some practical, offal-free information in the process.
When it comes to the importance (or otherwise) of the nanometer, journalists often don't help. Sometimes we make a big deal out of a processor and its innards, while other times we barely mention it. Usually this is because manufacturers themselves take an inconsistent attitude: they yell about it when their transistors are nice and small, but treat the whole topic as irrelevant when their transistors happen to be lumpier than the competition's.
A case in point: NVIDIA has some great products on the market right now, but in terms of transistor size it doesn't have much to gloat about. Its latest 600-series graphics cards use a mix of 28nm and 40nm chips, which are no better (in terms of transistor size) than AMD's latest 28nm graphics cards. It's a similar story with NVIDIA's Tegra 3 chips for smartphones and tablets, which, at 40nm, are numerically closer to 2011's processors than to newer 32nm and 28nm competitors. The result: NVIDIA's marketing machine doesn't spend much time talking about transistor size. Buried amidst paragraphs of flowery prose in the press release announcing the company's first 28nm graphics card, there was one short and almost reluctant mention of nanometers, and its significance was left unexplained:
Kepler is based on 28-nanometer (nm) process technology and succeeds the 40-nm NVIDIA Fermi architecture, which was first introduced into the market in March 2010.
On the processor side of things, AMD is in a similar boat. Its flagship Trinity processors lag 10nm behind Intel's finest. But rather than just be coy like NVIDIA, AMD has taken a more forceful tack in trying to dampen consumers' interest in transistor size. One of the company's most senior marketing guys, Sasa Marinkovic, recently took to the web to "set the record straight" about the entire notion that transistors need to get smaller if computers are to get better:
Today, people care about the experience their device delivers and not just the manufacturing process. Just ask anyone who has ever used an iPad – what technology was the chip powering the iPad built on? If they don't know, they shouldn't feel bad about it. Most of us don't give much thought to it.
Now, Sasa gave us the same message when we interviewed him a while back, and he might actually have a point. Even though Trinity doesn't have smaller transistors, it still packs 100 million more of them than the preceding generation of Llano chips. What's more, thanks to improvements in Trinity's design -- specifically in its tweaked Piledriver cores -- it was able to deliver this extra computing grunt while consuming less power. Strictly speaking, this type of progress fails to meet the requirements of Moore's Law, which originally set the precedent that the density of transistors on a chip (i.e. not just their total number) ought to double every two years. Nevertheless, if Trinity yields better computing, surely consumers should acknowledge that?
Marinkovic's anti-nanometer stance is inherently sensible, and we'd have subscribed to it wholeheartedly were it not for his untimely mention of the iPad. Just recently, a new production run of the iPad 2 (identified as the "iPad2,4") has actually served to increase people's interest in transistor size. How come? Because AnandTech discovered that Apple switched to smaller 32nm transistors, compared to the 45nm chips it used in previous runs of the iPad 2. Those who have managed to pick up an iPad2,4 have benefited from a 16 percent battery life increase as a result -- and whether they're aware of it or not, the iPad can no longer be used to suggest that nanometers don't matter.
So, let's turn to a company that's immune to any accusations of being a sore loser. Moore's Law is part of Intel's genetic make-up, not just because Gordon E. Moore founded the company, but also because the maxim has been explicitly adopted as a corporate goal. Chipzilla currently boasts the world's smallest and most advanced "3D" transistors in a mass-produced product -- its 22nm Ivy Bridge processors for laptops and desktops -- and it's already gearing up for 14nm production next year. Read one of Intel's early press releases on Ivy Bridge and you'll get the impression that transistors are the be-all and end-all of what computing is about. The term "22nm" appears no fewer than 10 times in a single page, which starts with the over-arching claim that "22nm chips have an unprecedented combination of power and performance gains."
That all sounds great. In fact, Ivy Bridge is great, but perhaps not to the revolutionary extent that the press release describes. Intel's own benchmarks point to a 6 to 8 percent improvement in computational tasks, with most of the extra transistors being dedicated to the HD 4000 integrated graphics -- an area where Intel is still playing catch-up with AMD. And that raises an uncomfortable question: is it possible that Intel is using 22nm as a substitute for good, old-fashioned ingenuity? In other words, if you took AMD's underdog attitude and merged it with Intel's silicon clout, wouldn't we get computers that are far superior to what's currently on the market? If that's true, then rewarding Intel with our business simply because it shrank its transistors could be a mistake.
At this point we need to turn to a more trustworthy source. John Biggs is a co-founder of ARM, the Cambridge-based chip design company that licenses its IP to many of the biggest mobile chip manufacturers, including Qualcomm, Samsung, NVIDIA and others. Although he's obviously aligned with ARM, Biggs is first and foremost an engineer -- and he has strong opinions on this subject. Considering how many generations of processors he's seen in his career, and how ARM's chips have steadily become smaller, cheaper, faster and more efficient over time as a result, we honestly expected him to side with Intel. But what he actually said is:
"I can't see any reason for the ordinary person to care about nanometers. If you're buying a car, you're looking for practical and tangible benefits, not technology for technology's sake. The same applies when you buy a phone: you want a long battery life and the processing power that you need. Transistor size is just a means to an end."
I can't see any reason for the ordinary person to care about nanometers.
It's not that Biggs denies the advantages of smaller transistors. On the contrary, he acknowledges the traditional view that smaller transistors are a win-win for everybody -- or even, technically, a win-win-win-win, since the speed, efficiency, size and cost have all improved with each step down. What concerns him, though, is that these benefits are no longer guaranteed in the future, which means gadget buyers should look for clear evidence of improvement rather than just taking it for granted.
Why does Biggs strike this note of caution now, after decades in the industry? Because, he says, 45nm is a very approximate threshold at which further shrinkage becomes harder to translate into real-world gains:
"Right now is the crucial time, when we go from having seen these problems on the horizon, to discovering that they're definitely here."
As we move to 28nm, 22nm and less, transistors become "imperfect switches, which can drip like a leaking tap," potentially offsetting efficiency gains. This necessitates complicated solutions like power-gating: bigger and simpler transistors that are used to switch the power to smaller transistors on or off, to stop them from leaking when not in use. In turn, these more complicated designs require ever more expensive R&D and silicon foundries to manufacture them, reducing the number of rivals in the market and potentially reversing any of the savings that came from needing fewer raw materials.
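To see why that "leaking tap" matters, here's a deliberately simplified model of our own (the wattages are invented for illustration): a block's average power is its switching power plus its leakage, and power-gating mostly removes the leakage that would otherwise accrue while the block sits idle.

```python
# Toy power model -- our own illustration, not ARM's numbers.
# Dynamic power is only paid while the block is working; leakage, by
# contrast, is paid constantly unless the block is power-gated off.

def total_power(dynamic_w: float, leakage_w: float,
                active_fraction: float, power_gated: bool) -> float:
    """Average power for a block that's busy `active_fraction` of the time."""
    dynamic = dynamic_w * active_fraction
    if power_gated:
        # A gated-off block leaks (almost) nothing while idle.
        leakage = leakage_w * active_fraction
    else:
        # Without gating, the block leaks whether or not it's working.
        leakage = leakage_w
    return dynamic + leakage

# A hypothetical block: 1 W of switching, 0.4 W of leakage, busy 25% of the time.
print(total_power(1.0, 0.4, 0.25, power_gated=False))  # 0.65
print(total_power(1.0, 0.4, 0.25, power_gated=True))   # 0.35
```

The gap between those two figures is why designers tolerate the extra complexity of gating transistors -- and why that complexity keeps driving up R&D costs.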
In an effort to see whether Biggs' predicted slow-down is already apparent in current phones and tablets, we've plotted the graph below. It covers a random sample of mostly Android products we've reviewed since 2011, and it deliberately ignores every single characteristic except the fabrication process size of each device's processor and its SunSpider benchmark score. SunSpider is a useful metric because it's cross-platform and it measures web-browsing speed -- an activity that all smartphone users are likely to care about.
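For the curious, the number-crunching behind a graph like this is straightforward: group the scores by process node and compare the averages. The sketch below uses invented placeholder scores rather than our actual review data, purely to show the shape of the analysis (lower SunSpider times are better):

```python
# Sketch of the per-node grouping behind the graph. The (nm, ms) pairs
# below are made-up placeholders, not real review scores.
from collections import defaultdict
from statistics import mean

samples = [  # (process node in nm, SunSpider time in ms)
    (45, 2200), (45, 1400), (45, 3100), (45, 1800),
    (40, 1700), (32, 1350), (28, 1500), (28, 1250),
]

by_node = defaultdict(list)
for nm, score in samples:
    by_node[nm].append(score)

# Average score per node, largest (oldest) process first.
for nm in sorted(by_node, reverse=True):
    print(nm, round(mean(by_node[nm])))
```

Note how, even in this toy data, the 45nm scores are spread all over the place while the smaller nodes cluster tighter -- the same pattern our real chart shows.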
You can see right away that this is a pretty messy set of data. On the face of it, it backs up what Biggs said. The scores for 45nm phones cover almost the full gamut from great to terrible, which means that transistor size would have been a largely irrelevant criterion in most buying dilemmas. If you've been stuck comparing the Galaxy Nexus against the Galaxy Note, or the Droid RAZR against the HTC Rezound, then finding out silicon sizes would not have helped you.
We can also see the first inkling of what Biggs said about shrinkage below 45nm getting harder: whereas the 45nm scatter is bustling, there are currently only a handful of devices at 40nm, 32nm and 28nm. Of course this is bound to change, but the implication is that you're likely to end up paying a premium for one of these next-gen handsets, which are mostly flagship devices like the HTC One X, Samsung Galaxy S III and Transformer Prime. That contrasts with the promise that more finely-etched silicon is meant to make everything cheaper.
On the other hand, it's hard to deny that the sub-40nm data clusters are linked with top-level performance. If those few dots on the far left of the graph are representative of what sub-45nm phones and tablets will deliver over the next two years -- the central points of a scatter that will grow over time -- then it's hard to see how we'd take 45nm seriously in a couple of years. Furthermore, although the sub-45nm handsets may be pricier than their 45nm counterparts today, they're not exorbitant: you're still looking at around $200 on contract for a 28nm or 32nm phone.
Now, let's mix things up even more and ignore every aspect of a device except its fabrication process size and battery life. This time we'll only look at phones, rather than tablets, because the latter have an unfair advantage when it comes to battery size:
It's the same story: the 45nm cluster is all over the place, but its average is still worse than phones with smaller transistors -- particularly those at 32nm and below. The only 45nm devices to deliver battery life in excess of nine hours were the Samsung Stratosphere and the Rugby Smart -- and guess what? Those devices happen to lie right at the bottom of the performance chart. In other words, the 45nm phones have either great performance or solid battery life, while the more recent phones with smaller transistors generally deliver top-level performance and battery life simultaneously.
So, these graphs actually support the idea that nanometers are still relevant when buying a smartphone -- though based on this small amount of evidence, we can't broaden it out any further than that.
You might argue that it's not transistor size that's pushing the newer phones to the front, but perhaps some other aspect of their new processors -- i.e. the design rather than scale of their architecture. However, that's quite a stretch. We put the question to Raj Talluri from Qualcomm, creator of the left-most processors on the graphs, and his opinion was unequivocal: it's not the way Snapdragon has been designed that makes it so fast and efficient, but the way it's been designed for 28nm:
"We worked really hard to achieve 28nm. It was our deliberate strategy and it took a lot of time, because we had to shrink not just the CPU, but also the graphics component and the modem and every other part of our chip. But we wanted 28nm because it's a huge advantage for our OEMs and our users."
We wanted 28nm because it's a huge advantage
Qualcomm made an enormous wager on 28nm and is now lugging its chips to the cashier's cage. Tell Talluri that nanometers don't matter and he'd no doubt smile and buy you a drink. By shrinking every component down so small, his company has not only delivered great performance and battery life, but it's also managed to squeeze in more components than anyone else in the market. It's now the go-to chipmaker for flagship LTE devices, after HTC abandoned NVIDIA's processors in favor of the LTE-equipped Snapdragon and Samsung did the same with its Exynos Quad.
On the whole, and for the average gadget buyer, we'd have to agree with AMD, NVIDIA and John Biggs on this issue: nanometers are kind of interesting, but they cannot be relied upon to pinpoint your next gadget. So long as a buyer weighs up online reviews in the usual manner, and hunts down the features that they need the most, then remaining ignorant of transistor sizes probably isn't going to do them much harm. The iPad2,4 saga is an exception that tests the rule, but the rule just about survives -- we doubt many people would want to return their iPad just because they discovered it was a 2,3.
Nanometers are kind of interesting, but they cannot be relied upon to pinpoint your next gadget
On the other hand, if you're proud to be a geek, then understanding what chips were used in which products, and what fabrication process was used for which chip, will reveal an intricate, behind-the-scenes world that can feed into the way you evaluate different devices. This doesn't mean you should favor the smaller transistor every time: you might still choose the NVIDIA GeForce GT 645 graphics card (40nm) over the GT 640 (28nm), but at least you'll be aware of what you're doing. Equally, you might use your understanding of fabrication to acknowledge when a company is struggling to invest in its future, when it's showing ingenuity in improving its products even without shrinking its silicon, or when it's being complacent in offering smaller transistors and little else.
Finally, we're going to stick our head above the parapet here and make a point specifically about cutting-edge smartphones, and especially those aimed at people who demand both performance and good battery life: our admittedly humble accumulation of evidence suggests that the old semiconductor laws still apply in this field, such that choosing a flagship phone with a significantly smaller fabrication process will generally lead you in the right direction. A difference of a few nanometers won't tell you anything -- for example, the performance and battery life of the Samsung Galaxy S III (32nm) are no worse than those of an HTC One X (28nm), and in some benchmarks the GS III is actually vastly superior. But for bigger gaps, we'd say it's good to pay attention. If you had a choice between an NVIDIA-powered HTC One X (40nm) or a Qualcomm-powered one on AT&T (28nm), the nanometers tell you the answer. Similarly, would you be entirely comfortable buying a Galaxy Nexus (45nm) today, knowing that the US version of the Galaxy S III (28nm) is just around the corner and will start at $200 on contract? We wouldn't.
[Image credits: Scientist with microscope by Allegra Boverman / MIT, Gordon Moore photo by Justin Sullivan / Getty Images]