cores

  • Google Compute Engine brings Linux virtual machines 'at Google scale'

    by Richard Lawler, 06.28.2012

    As anticipated, Google has just launched its cloud service for businesses at Google I/O 2012, called Google Compute Engine. Starting today, Urs Holzle announced, "anyone with large-scale computing needs" can access the infrastructure and efficiency of Google's datacenters. The company is promising both performance and stability -- Amazon EC2, they're coming for you -- claiming "this is how infrastructure as a service is supposed to work." It's also promising "50 percent more computes per dollar" than competitors. Beta testers will be on hand at later sessions to give their impressions of the service, if you want to know how running your apps on 700,000 (and counting) cores feels. During the presentation we got a demo of a genome app, and we're sure that if we understood what was going on, it would have been impressive. Hit the source links below for more details on "computing without limits" or to sign up for a test yourself. Update: Looking for more info? Check out the hour-long video from Google I/O dedicated to the technical details, embedded after the break. Check out our full coverage of the Google I/O 2012 developer conference at our event hub!

  • Researchers create ultra-fast '1,000 core' processor, Intel also toys with the idea

    by Donald Melanson, 12.28.2010

    We've already seen field programmable gate arrays (or FPGAs) used to create energy-efficient supercomputers, but a team of researchers at the University of Glasgow led by Dr. Wim Vanderbauwhede now says it has "effectively" created a 1,000-core processor based on the technology. To do that, the researchers divvied up the millions of transistors in the FPGA into 1,000 mini-circuits that are each able to process their own instructions -- which, while still a proof of concept, has already proven to be about twenty times faster than "modern computers" in some early tests. Interestingly, Intel has also been musing about the idea of a 1,000-core processor recently, with Timothy Mattson of the company's Microprocessor Technology Laboratory saying that such a processor is "feasible." He's referring to Intel's Single-chip Cloud Computer (or SCC, pictured here), which currently packs a whopping 48 cores, but could "theoretically" scale up to 1,000 cores. He does note, however, that there are a number of other complicating factors that could limit the number of cores that are actually useful -- namely, Amdahl's law (see below) -- but he says that Intel is "looking very hard at a range of applications that may indeed require that many cores." [Thanks, Andrew]
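Amdahl's law is the limit Mattson is alluding to: if a fraction p of a workload parallelizes and the rest is serial, speedup on n cores is S = 1 / ((1 - p) + p/n), which saturates at 1/(1 - p) no matter how many cores you add. A quick sketch (the 95-percent figure below is an illustrative assumption, not anything Intel quoted):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Maximum speedup from Amdahl's law: S = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even a workload that is 95% parallelizable tops out below 20x,
# so going from 48 cores to 1,000 buys surprisingly little:
for n in (48, 100, 1000):
    print(n, round(amdahl_speedup(0.95, n), 1))
```

This is why "how many cores are actually useful" depends far more on the serial fraction of the software than on the hardware's core count.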

  • AMD's Bobcat and Bulldozer, 2011 flagship CPU cores, detailed today

    by Vlad Savov, 08.24.2010

    One of these days AMD is gonna have to stop talking about its Atom-killing Bobcat and Xeon-ending Bulldozer cores and finally release them. But, until that happy moment arrives in 2011 (fingers crossed), we'll have to content ourselves with more presentation slides. First up, the Bobcat core is AMD's long overdue play for the netbook/ultrathin market. Pitched as having 90 percent of the performance of current-gen, K8-based mainstream chips, AMD's new mobility core will require "less than half the area and a fraction of the power" of its predecessors. That sounds like just the recipe to make the company relevant in laptop purchasing decisions again, while a touted ability for the core to run on less than one watt of power (by lowering operating frequencies and voltages, and therefore performance) could see it appear in even smaller form factors, such as MIDs. The Bobcat's now all set to become the centerpiece of the Ontario APU -- AMD's first Fusion chip, ahead of Llano -- which will be ramping up production late this year, in time for an early 2011 arrival. The Bulldozer also has a future in the Fusion line, but its earliest role will be as a standalone CPU product for servers and high-end consumer markets. The crafty thing about its architecture is that every Bulldozer module will be counted as two cores. This is because AMD has split its internal processing pipelines into two (while sharing as many internal components as possible), resulting in a sort of multicore-within-the-core arrangement. The way the company puts it, it's multithreading done right. Interlagos is the codename of the first Opteron chips to sport this new core, showing up at some point next year in a 16-core arrangement (that's 8 Bulldozers, if you're keeping score at home) and promising 50 percent better performance than the current Magny-Cours flagship. Big words, AMD. Now let's see you stick to a schedule for once.

  • IBM creates a chip-sized supercomputer

    by Joshua Topolsky, 12.06.2007

    Good news, everybody! Those super-geniuses over at IBM have whipped up a new interconnect technology that uses pulses of light instead of electricity to move data between cores on a chip. The new technology -- which is one hundred times faster than current speeds -- is called silicon nanophotonics, and if implemented, could downsize supercomputers to laptop stature. The invention is unhindered by common problems with electrical chips, such as overheating and signal degradation, allowing data to pass unmolested over greater distances. Using this process, data can be moved a few centimeters while requiring one-tenth as much power, resulting in lower operational costs for supercomputers. Will Green, a researcher at IBM, says that the company's creation will "be able to have hundreds or thousands of cores on a chip," resulting in huge speed boosts. Unfortunately, the project isn't expected to bear fruit for another 10 to 12 years, which leaves a lot of time to ponder whether the chips will play Doom.

  • Intel demonstrates 80-core processor

    by Conrad Quilty-Harper, 02.11.2007

    Now that the Megahertz race has faded into the distance (we hear it was a myth), Intel is well and truly kicking off a multi-core war with the demonstration of an 80-core research processor in San Francisco last week. It's not the first multi-core processor to reach double figures -- a company called ClearSpeed put 96 cores onto one of its CPUs -- but it's the first to be accompanied by the aim of making it generally available, an aim that Intel hopes to realize within a five-year timeframe. The long timeframe is required because current operating systems and software don't take full advantage of multi-core processors. In order for Intel to successfully market chips with more than, say, four cores, there needs to be an equal effort from software programmers, which is why producing an 80-core processor is only half the battle. On paper, 80 cores sounds impressive, but when the software isn't doing anything imaginative with them it's actually rather disappointing: during a demonstration, Intel could only manage to get 1 teraflop out of the chip, a figure that many medium- to high-end graphics cards can easily match. The multi-core war may have begun, but the battle will be fought with software -- although that's not to say that the hardware side has already been won: apparently the test chip is much larger than comparable chips -- 275 square millimeters, versus a typical Core 2 Duo's 143 -- and Intel currently has no way to hook up memory to the chip. Hopefully half a decade will be long enough to sort out these "issues." [Thanks, Michael]
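For perspective on the demo figure, a quick back-of-the-envelope calculation (this assumes the 1 teraflop was an aggregate number across all 80 cores, which the demonstration reports suggest but don't spell out):

```python
# Implied per-core throughput of Intel's 80-core demo chip,
# assuming the 1 teraflop figure is the aggregate for all cores.
total_flops = 1e12   # 1 teraflop, as demonstrated
cores = 80
per_core_gflops = total_flops / cores / 1e9
print(per_core_gflops)  # 12.5 GFLOPS per core
```

That 12.5 GFLOPS per core is respectable for 2007, which underlines the article's point: the disappointment isn't the silicon, it's that no mainstream software of the era could keep 80 cores busy at once.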