Parallel Computing

Latest

  • Parallella 'supercomputers' headed to early backers, 16-core boards up for general pre-order

    by Jamie Rigg
    07.23.2013

    Following its successful Kickstarter campaign, Adapteva showed off the production versions of its Parallella "supercomputer" boards in April, penciling in a loose summer delivery date. Today, the company reports that the first "beta" units have begun winding their way to backers who pledged at the DEVELOPER, 64-CORE-PLUS and ROLF levels. Other backers should receive their boards by summer's end "after some final refinements." For those who missed the crowd-funding window, you too can get a Parallella, as Adapteva has now opened up general pre-orders for the 16-core version on its website. While all Kickstarter-bought boards will bear a Zynq-7020 SoC, new pre-orders are configured with a 7010 as standard, though you can upgrade to the 7020 if you lay down a little more dough. On the upside, newcomers will be treated to "Gen-1" boards, which offer slight improvements over earlier versions, such as reduced power consumption and an added three-pin serial port header. You'll find the basic 16-core board going for $99 over at Adapteva's store, with an expected October delivery date. The company tells us the 64-core version will also be available for public consumption, with pre-orders beginning in Q4 this year.

  • Insert Coin: The Parallella project dreams of $99 supercomputers

    by Jamie Rigg
    09.28.2012

    In Insert Coin, we look at an exciting new tech project that requires funding before it can hit production. If you'd like to pitch a project, please send us a tip with "Insert Coin" as the subject line. Parallel computing is normally reserved for supercomputers way out of the reach of average users -- at least for the moment. Adapteva wants to challenge that with its Parallella project, designed to bring mouth-watering power to a board similar in size to the Raspberry Pi for as little as $99. It hopes to deliver up to 45GHz (in total) using its Epiphany multicore accelerators, which, crucially, chug just 5 watts of juice under normal conditions. Such goliath speeds currently mean high costs, which is why Adapteva needs your funds to move out of the prototype stage and start cheap mass production. Specs for the board are as follows: a dual-core ARM Cortex-A9 CPU running Ubuntu as standard, 1GB of RAM, a microSD slot, two USB 2.0 ports, HDMI, Ethernet and a 16- or 64-core accelerator, with each core housing a 1GHz RISC processor, all linked "within a single shared memory architecture." An overriding theme of the Parallella project is the openness of the platform. When finalized, the full board design will be released, and each one will ship with free, open-source development tools and runtime libraries. In addition, full architecture and SDK documentation will be published online if and when the Kickstarter project reaches its funding goal of $750,000. That's pretty ambitious, but we're reminded of another crowd-funded venture which completely destroyed an even larger target. However, that sum will only be enough for Adapteva to produce the 16-core board, which reportedly hits 13GHz and 26 gigaflops, and is expected to set you back a measly $99. A speculative $3 million upper goal has been set for work to begin on the $199 64-core version, topping out at 45GHz and 90 gigaflops. Pledge options range from $99 to $5,000-plus, distinguished mainly by how soon you'll get your hands on one. Big spenders will also be the first to receive a 64-core board when they become available. Adapteva's Andreas Olofsson talks through the Parallella project in a video after the break, but if you're already sold on the tiny supercomputer, head over to the source link to contribute before the October 27th closing date.

  • NVIDIA open sources CUDA compiler, shares its LLVM-based love with everyone

    by Michael Gorman
    12.14.2011

    A few years back, Intel prognosticated that NVIDIA's CUDA technology was destined to be a "footnote" in computing history. Since that time, Jen-Hsun Huang's parallel computing platform has more than proven its worth in several supercomputers, and now NVIDIA has released the source code of its LLVM-based CUDA compiler to further spread the parallel computing gospel. This move opens up the code to be used with more programming languages and processors (x86 or otherwise) than ever before, which the company hopes will spur development of "next-generation higher performance computing platforms." Academics and select developers can get their hands on the code by registering with NVIDIA at the source below, so head on down and get started -- petaflop parallel processing supercomputers don't build themselves, you know.
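
    For the uninitiated, that compiler's job is turning CUDA C -- ordinary C/C++ plus a few GPU keywords -- into massively parallel machine code. The toy program below is our own minimal sketch of such source (illustrative names, not NVIDIA's sample code): it adds two big arrays by handing each GPU thread a single element.

      #include <stdio.h>
      #include <stdlib.h>
      #include <cuda_runtime.h>

      // Each GPU thread handles exactly one array element.
      __global__ void vecAdd(const float *a, const float *b, float *c, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n)
              c[i] = a[i] + b[i];
      }

      int main(void)
      {
          const int n = 1 << 20;              // a million elements
          const size_t bytes = n * sizeof(float);

          // Host-side input and output buffers.
          float *ha = (float *)malloc(bytes);
          float *hb = (float *)malloc(bytes);
          float *hc = (float *)malloc(bytes);
          for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

          // Device-side copies of the data.
          float *da, *db, *dc;
          cudaMalloc(&da, bytes);
          cudaMalloc(&db, bytes);
          cudaMalloc(&dc, bytes);
          cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
          cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

          // Launch enough 256-thread blocks to cover all n elements.
          const int threads = 256;
          vecAdd<<<(n + threads - 1) / threads, threads>>>(da, db, dc, n);

          cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
          printf("c[0] = %.1f\n", hc[0]);     // expect 3.0

          cudaFree(da); cudaFree(db); cudaFree(dc);
          free(ha); free(hb); free(hc);
          return 0;
      }

    Build it with NVIDIA's nvcc and that one launch line spins up over a million threads -- the flavor of data parallelism the whole CUDA pitch rests on, and which the open-sourced compiler can now bring to other languages and processors.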

  • Barcelona readies hybrid ARM-based supercomputer, uses NVIDIA GPUs for heavy lifting

    by Mat Smith
    11.14.2011

    NVIDIA has announced that it'll be providing CUDA GPUs for the Barcelona Supercomputing Center's new hybrid machine, which pairs energy-sipping ARM processors with GPUs for the heavy lifting and will be detailed later this week at the SC11 Conference in Seattle. While the words "low power" and "energy efficiency" are a bit of a buzz kill in the high-octane, high-MFLOP world of supercomputing, the BSC thinks it'll use 15 to 30 times less power than current systems. Dubbed the Mont Blanc Project, it's aiming to multiply those energy savings by four to ten times by 2014. While other supercomputers eat their way through megawatts of the electric stuff, hopefully a drop in power demands won't affect this machine's supercomputing scores.

  • IBM rig doesn't look like much, scans 10 billion files in 43 minutes

    by Sharif Sakr
    07.22.2011

    Someone ought to gift these IBM researchers a better camera, because their latest General Parallel File System rig is a back-slapping 37 times faster than their last effort back in 2007. The rig combines ten IBM System x servers with Violin Memory SSDs that hold 6.5 terabytes of metadata relating to 10 billion separate files. Every single one of those files can be analyzed and managed using policy-guided rules in under three quarters of an hour. That kind of performance might seem like overkill, but it's only just barely in step with what IBM's Doug Balog describes as a "rapidly growing, multi-zettabyte world." No prizes for guessing who their top customer is likely to be. Full details in the PR after the break.

  • AppleCrate II parallel computer made from Apple IIe motherboards

    by Michael Grothaus
    05.04.2011

    In what's probably the coolest custom-built machine I've seen in ages, computer enthusiast Michael J. Mahon has built a parallel computer out of 17 Apple IIe motherboards. As you can see, the "AppleCrate II" looks like a big crate of motherboards stacked on top of each other -- and that's pretty much what it is, along with some very clever networking and custom boot code. This is actually his second computer built from old Apple motherboards. The first, which we covered five years ago, also used spare Apple IIe boards, although those were a slightly different version. I'll let the image of Mahon's latest creation speak for itself, but if you've got some extra motherboards and some free time, Mahon details how he built his latest wonder over on his website. What's it good for? Well, in addition to blinking its status lights like a Cylon, it can play Beatles songs in 16-part polyphony -- so that's something. [Via Boing Boing]

  • NVIDIA teams with PGI for CUDA-x86, gifts its brand of parallelism to the world

    by Sean Hollister
    09.21.2010

    NVIDIA's GPU Technology Conference 2010 just kicked off in San Jose, and CEO Jen-Hsun Huang has shared something interesting with us on stage -- thanks to a partnership with The Portland Group, the company is bringing its CUDA parallel computing framework to x86. Previously limited to NVIDIA GPUs, the framework -- the linchpin of NVIDIA's argument for GPGPU computing -- will now let CUDA applications run on "any computer, or any server in the world." Except those based on ARM, we suppose. Still no word on NVIDIA's x86 CPU.

  • NVIDIA VP says 'Moore's law is dead'

    by Sean Hollister
    05.03.2010

    NVIDIA and Intel haven't been shy about their differing visions of the future of computing over the past year or so, but it looks like Team GPU just upped the rhetoric a little -- a Forbes column by NVIDIA VP Bill Dally argues that "Moore's law is dead." Given that Moore's law is arguably the foundation of Intel's entire business, such a statement is a huge shot across the bow; though other companies like AMD are guided by the doctrine too, Intel's relentless pursuit of Gordon Moore's vision has become a focal point and rallying cry for the world's largest chipmaker. So what's Dally's solution to the death of Moore's law? For everyone to buy into parallel computing, where -- surprise, surprise -- NVIDIA's GPUs thrive. Dally says that dual-, quad- and hex-core solutions are inefficient -- he likens multi-core chips to "trying to build an airplane by putting wings on a train," and says that only ground-up parallel solutions designed for energy efficiency will bring back the golden age of doubling performance every two years. That sounds fantastic, but as far as power consumption is concerned, well, perhaps NVIDIA had best lead by example.

  • NVIDIA Tesla 20-series GPUs promise to dramatically cut supercomputing costs

    by Donald Melanson
    11.16.2009

    Sure, you've been hearing NVIDIA toss around names like CUDA, Fermi and Tesla for what seems like ages now, but we're guessing this is the sort of thing that'll get most folks to really take notice: a promise to cut supercomputing costs by a factor of ten. That rather impressive feat comes courtesy of the company's new Tesla 20-series GPUs, which come in the form of both single-GPU PCI-Express Gen-2 cards and full-fledged GPU computing systems, and promise a whole host of cost-saving benefits for everything from ray tracing to 3D cloud computing to data analytics. Of course, we're still talking about "cheap" in supercomputing terms -- look for these to run between $2,499 and $18,995 when they roll out sometime in the second quarter of 2010.

  • ATI Stream goes fisticuffs with NVIDIA's CUDA in epic GPGPU tussle

    by Darren Murph
    08.10.2009

    It's a given that the GPGPU (or general-purpose graphics processing unit) has a long, long way to go before it can make a dent in the mainstream market, but given that ATI was talking up Stream nearly three whole years ago, we'd say a battle royale between it and its biggest rival was definitely in order. As such, the benchmarking gurus over at PC Perspective saw fit to pit ATI's Stream and NVIDIA's CUDA technologies against one another in a knock-down-drag-out for the ages, essentially looking to see which system took the most strain away from the CPU during video encoding and which produced more visually appealing results. We won't bother getting into the nitty-gritty (that's what the read link is for), but we will say this: in testing, ATI's contraption managed to relieve the most stress from the CPU, though NVIDIA's alternative seemed to pump out the highest quality materials. In other words, you can't win for losin'.
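
    To make "taking strain away from the CPU" concrete: GPGPU-accelerated encoding farms uniform per-pixel math out to the graphics card, one lightweight thread per pixel. The sketch below is our own illustrative CUDA example -- not PC Perspective's test code or either vendor's encoder -- of one such step, converting an RGB frame to the luma values an encoder chews on.

      #include <stdio.h>
      #include <cuda_runtime.h>

      // One thread per pixel: compute the luma (Y) value an encoder works on,
      // sparing the CPU from touching every pixel of every frame.
      __global__ void rgbToLuma(const unsigned char *rgb, unsigned char *luma, int numPixels)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < numPixels) {
              float r = rgb[3 * i + 0];
              float g = rgb[3 * i + 1];
              float b = rgb[3 * i + 2];
              luma[i] = (unsigned char)(0.299f * r + 0.587f * g + 0.114f * b); // BT.601 weights
          }
      }

      int main(void)
      {
          const int w = 1920, h = 1080, n = w * h;   // one 1080p frame
          unsigned char *rgb, *luma;
          cudaMalloc(&rgb, 3 * n);
          cudaMalloc(&luma, n);
          cudaMemset(rgb, 128, 3 * n);               // stand-in for real frame data

          const int threads = 256;
          rgbToLuma<<<(n + threads - 1) / threads, threads>>>(rgb, luma, n);
          cudaDeviceSynchronize();

          unsigned char sample;
          cudaMemcpy(&sample, luma, 1, cudaMemcpyDeviceToHost);
          printf("luma of a mid-gray pixel: %d\n", sample); // expect 128
          cudaFree(rgb); cudaFree(luma);
          return 0;
      }

    At two-million-plus pixels per frame, dozens of frames per second, that's exactly the kind of embarrassingly parallel bulk work where a GPU (via CUDA or Stream alike) leaves a general-purpose CPU behind.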

  • Apple II cluster computer

    by Dave Caolo
    09.02.2005

    Eliot at our sister site Hack a Day has linked to a cluster computer built from Apple //e boards. Each of the eight boards in the "AppleCrate" cluster (purchased for $1 each) is netbooted by a separate, better-equipped //e. What is it used for, you may ask? The author writes: "I have written several programs. The first was a parallel work simulator. The simulator puts independent 'jobs' into a message queue of work to be done and receives 'results' from a message queue of completed work. The parallel simulator is described in The AppleCrate Parallel Work Simulator. The next program was really just the part of the parallel simulator that runs an Applesoft program on all slave machines, called PRUN (Parallel RUN)." I think just getting it to boot is cool enough. Well done, Michael. [Via Hack a Day]