ParallelProcessors

Latest

  • NVIDIA open sources CUDA compiler, shares its LLVM-based love with everyone

    by 
    Michael Gorman
    12.14.2011

A few years back, Intel prognosticated that NVIDIA's CUDA technology was destined to be a "footnote" in computing history. Since that time, NVIDIA's LLVM-based CUDA compiler has more than proven its worth in several supercomputers, and now the company has released the compiler's source code to further spread the parallel computing gospel. This move opens up the code to be used with more programming languages and processors (x86 or otherwise) than ever before, which the company hopes will spur development of "next-generation higher performance computing platforms." Academics and chosen developers can get their hands on the code by registering with NVIDIA at the source below, so head on down and get started -- petaflop parallel processing supercomputers don't build themselves, you know.

  • NVIDIA VP says 'Moore's law is dead'

    by 
    Sean Hollister
    05.03.2010

NVIDIA and Intel haven't been shy about their differing visions of the future of computing in the past year or so, but it looks like Team GPU just upped the rhetoric a little -- a Forbes column by NVIDIA VP Bill Dally argues that "Moore's law is dead." Given that Moore's law is arguably the foundation of Intel's entire business, such a statement is a huge shot across the bow; though other companies like AMD are guided by the doctrine, Intel's relentless pursuit of Gordon Moore's vision has become a focal point and rallying cry for the world's largest chipmaker. So what's Dally's solution to the death of Moore's law? For everyone to buy into parallel computing, where -- surprise, surprise -- NVIDIA's GPUs thrive. Dally says that dual-, quad- and hex-core solutions are inefficient -- he likens multi-core chips to "trying to build an airplane by putting wings on a train," and says that only ground-up parallel solutions designed for energy efficiency will bring back the golden age of doubling performance every two years. That sounds fantastic, but as far as power consumption is concerned, well, perhaps NVIDIA had best lead by example.

  • NVIDIA Tesla 20-series GPUs promise to dramatically cut supercomputing costs

    by 
    Donald Melanson
    11.16.2009

Sure, you've been hearing NVIDIA toss around names like CUDA, Fermi and Tesla for what seems like ages now, but we're guessing this is the sort of thing that'll get most folks to really take notice: a promise to cut supercomputing costs by a factor of ten. That rather impressive feat comes courtesy of the company's new Tesla 20-series GPUs, which come in the form of both single-GPU PCI-Express Gen-2 cards and full-fledged GPU computing systems, and promise a whole host of cost-saving benefits for everything from ray tracing to 3D cloud computing to data analytics. Of course, we're still talking about "cheap" in supercomputing terms -- look for these to run between $2,499 and $18,995 when they roll out sometime in the second quarter of 2010.