ParallelProcessing

Latest

  • Samsung slips into AMD's HSA party, may seek parallel processing boost for Exynos

    by Sharif Sakr
    08.31.2012

    Trust us, this should ultimately make a lot of sense. As we already know, AMD recently set up the HSA Foundation to promote its vision for better parallel processing -- and especially GPU compute -- in mobiles and PCs. Its semi-rival ARM was one of the first big players to join up, and now Samsung has decided to hop on board too. Why would it do that? For the simple reason that the Korean company still makes its own chips, based on ARM designs, and we've seen that GPU compute is going to be a big feature in its coming Exynos 5 processor with Mali-T604 graphics. Now, anything else at this point is pure speculation, since we only know about Samsung's HSA membership from the appearance of its logo on a relevant slide during AMD's keynote speech at IFA, and there's no official word on Samsung's intentions. At a bare minimum, the company could simply be firming up friendships and hedging its bets on the future of computing. We wouldn't be surprised, however, if Sammy is looking to work with ARM and AMD to bake further aspects of the HSA philosophy into even more advanced Exynos chips down the line -- chips able to use both GPU compute and smaller transistors to achieve leaps in performance while sipping less power.

  • NVIDIA open sources CUDA compiler, shares its LLVM-based love with everyone

    by Michael Gorman
    12.14.2011

    A few years back, Intel prognosticated that NVIDIA's CUDA technology was destined to be a "footnote" in computing history. Since that time, CUDA has more than proven its worth in several supercomputers, and now NVIDIA has released the source code of its LLVM (Low Level Virtual Machine) based CUDA compiler to further spread the parallel computing gospel. This move opens up the platform to more programming languages and processors (x86 or otherwise) than ever before, which the company hopes will spur development of "next-generation higher performance computing platforms." Academics and chosen developers can get their hands on the code by registering with NVIDIA at the source below, so head on down and get started -- petaflop parallel processing supercomputers don't build themselves, you know.
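For the uninitiated: CUDA's basic model is to launch one lightweight thread per array element and let the hardware schedule them in parallel. Here's a sketch of that idea in plain Python with the standard library's thread pool -- purely an analogy, not NVIDIA's actual API. `saxpy` (single-precision a*x + y) is the classic introductory CUDA kernel, reimplemented here as an ordinary function.

```python
from concurrent.futures import ThreadPoolExecutor

def saxpy(a, x, y):
    """Element-wise a*x + y, computed with one task per element --
    a rough analogy to launching one CUDA thread per array index."""
    result = [0.0] * len(x)

    def kernel(i):  # plays the role of the CUDA kernel body for "thread" i
        result[i] = a * x[i] + y[i]

    with ThreadPoolExecutor() as pool:
        # map the "kernel" over every index, like a CUDA grid launch
        list(pool.map(kernel, range(len(x))))
    return result

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```

On a real GPU the point is that thousands of these per-element "threads" run simultaneously; the Python version just shows the shape of the computation.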

  • IBM rig doesn't look like much, scans 10 billion files in 43 minutes

    by Sharif Sakr
    07.22.2011

    Someone ought to gift these IBM researchers a better camera, because their latest General Parallel File System is a back-slapping 37 times faster than their last effort back in 2007. The rig combines ten IBM System xSeries servers with Violin Memory SSDs that hold 6.5 terabytes of metadata relating to 10 billion separate files. Every single one of those files can be analyzed and managed using policy-guided rules in under three quarters of an hour. That kind of performance might seem like overkill, but it's only just barely in step with what IBM's Doug Balog describes as a "rapidly growing, multi-zettabyte world." No prizes for guessing who their top customer is likely to be. Full details in the PR after the break.
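Part of why a scan like that is feasible at all: GPFS's policy-guided rules run against file metadata held on fast SSDs, never the file contents themselves. A toy version of the concept -- with a made-up rule format, not IBM's actual policy language:

```python
from dataclasses import dataclass

@dataclass
class FileMeta:
    path: str
    size: int       # bytes
    age_days: int

def apply_policy(files, predicate, action):
    """Run `action` on every file whose metadata matches `predicate`.
    Only the metadata records are touched -- file data is never read,
    which is what makes billion-file scans tractable."""
    return [action(f) for f in files if predicate(f)]

files = [
    FileMeta("/data/a.log", 5_000_000, 400),
    FileMeta("/data/b.csv", 1_200, 10),
    FileMeta("/data/c.log", 9_000_000, 800),
]

# Example rule: flag logs older than a year for migration off the fast tier
migrated = apply_policy(
    files,
    predicate=lambda f: f.path.endswith(".log") and f.age_days > 365,
    action=lambda f: f.path,
)
print(migrated)  # ['/data/a.log', '/data/c.log']
```

IBM's system parallelizes this kind of metadata sweep across ten servers; the sketch above only shows the rule-matching idea on one.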

  • Tilera's new 100-core CPU elbows its way to the cloud, face-melt still included

    by Joseph Volpe
    06.21.2011

    Hundred-core chips might not be breaking news -- especially when the company announcing them is Tilera -- but what if that new multi-core CPU drew dramatically less power and set its sights on running a few cloud server farms? Well, that's exactly what chip maker Tilera has up its silicon sleeve. "Co-developed with the world's leading cloud computing companies" -- take a guess who that might include -- the new 64-bit TileGx-3100 clocks in at up to 1.5GHz while sucking down a comparatively light 48W. Line that up next to the current cloud favorite, Intel's Xeon, and power consumption is slashed nearly in half. Of course, the barrier to entry is high for the nascent chip developer, since most existing code is written for x86 -- meaning software would have to be ported to a whole new instruction set before data centers can play nice. Expect to see this face-melting monster sometime in early 2012, by which time you'll probably have your 50,000-strong music library synced to the cloud.

  • NVIDIA teams with PGI for CUDA-x86, gifts its brand of parallelism to the world

    by Sean Hollister
    09.21.2010

    NVIDIA's GPU Technology Conference 2010 just kicked off in San Jose, and CEO Jen-Hsun Huang has shared something interesting with us on stage -- thanks to a partnership with The Portland Group, it's bringing the CUDA parallel computing framework to x86. Previously limited to NVIDIA GPUs -- and the lynchpin of NVIDIA's argument for GPGPU computing -- CUDA applications will now run on "any computer, or any server in the world." Except those based on ARM, we suppose. Still no word on NVIDIA's x86 CPU.

  • Intel plans to stuff more than 8 cores, extra speed into 2011 server chips

    by Vlad Savov
    05.07.2010

    Yeah yeah, "more cores and faster speeds," you've heard it all before, right? That'd be our reaction too if we weren't talking about the successor to the Nehalem-EX, Intel's most gruesomely overpowered chip to date. Launched under the Xeon 7500 branding in March, it represents Intel's single biggest generational leap so far, and with its eight cores, sixteen threads, and 24MB of shared onboard cache, you can probably see why. Time waits for no CPU, though, and Intel's planned 32nm Westmere-EX successor will move things forward with an unspecified increase in both core count (speculated to be jumping up to 12) and operating frequencies, while keeping within the same power envelope. Given the current 2.26GHz default speed and 2.66GHz Turbo Boost option of the 7500, that means we're probably looking at a 2.4GHz to 2.5GHz 12-core, hyper-threaded processor, scheduled to land at some point next year. Time to write some apps that can use all that parallel processing power, no?
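Using a dozen hyper-threaded cores is mostly a matter of splitting work into independent chunks. A minimal sketch with Python's standard-library process pool -- the workload (`count_primes`) is just an invented CPU-bound stand-in, not anything Intel-specific:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- deliberately CPU-bound."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def parallel_prime_count(limit, workers=4):
    # Split [0, limit) into one chunk per worker; each chunk runs on its
    # own core, so more cores means more chunks finishing at once.
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(parallel_prime_count(10_000))  # 1229 primes below 10,000
```

The same pattern scales from four workers to twelve without touching the worker function -- which is exactly the kind of app a 12-core Westmere-EX box would reward.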

  • NVIDIA VP says 'Moore's law is dead'

    by Sean Hollister
    05.03.2010

    NVIDIA and Intel haven't been shy about their differing visions of the future of computing over the past year or so, but it looks like Team GPU just upped the rhetoric a little -- a Forbes column by NVIDIA VP Bill Dally argues that "Moore's law is dead." Given that Moore's law is arguably the foundation of Intel's entire business, such a statement is a huge shot across the bow; though other companies like AMD are guided by the doctrine too, Intel's relentless pursuit of Gordon Moore's vision has become a focal point and rallying cry for the world's largest chipmaker. So what's Dally's solution to the death of Moore's law? For everyone to buy into parallel computing, where -- surprise, surprise -- NVIDIA's GPUs thrive. Dally says that dual-, quad-, and hex-core solutions are inefficient -- he likens multi-core chips to "trying to build an airplane by putting wings on a train," and says that only ground-up parallel solutions designed for energy efficiency will bring back the golden age of doubling performance every two years. That sounds fantastic, but as far as power consumption is concerned, well, perhaps NVIDIA had best lead by example.

  • Tilera's 100-core Tile-GX processor won't boil the oceans, will still melt faces

    by Thomas Ricker
    10.26.2009

    Sixty-four, sixty-shmore... that's so 2007 in terms of processing cores found in a single CPU: one hundred cores is where the future of computing resides. This magnificent engineering feat isn't from AMD or even Intel; it's the latest Tile-GX series of chips from the two-year-old San Jose startup, Tilera. Its general purpose chips can run stand-alone or as co-processors alongside the x86 chips that usually ship in four-, six-, or now eight-core configurations, like Intel's upcoming Nehalem-EX. Tilera's 100-core chip pulls 55 watts at peak performance, while its 16-core chip draws as little as 5 watts. Tilera uses the same mesh architecture as its previous 64-core chip in order to overcome the performance degradation that accompanies data exchange on typical multi-core processors -- or so it says. Tilera's new 40nm-process chips have cranked the clock to 1.5GHz and include support for 64-bit processing. And while its processors could be applied to any number of computing scenarios, Tilera is focusing on lucrative markets like parallel processing, where its meager developer and marketing resources can extract a relatively quick payout. The fun begins in early 2011 with volume pricing set between $400 and $1,000.

    [Via PC World]

  • Intel and Microsoft fund $20M grant to reinvent computing: where do you want to go tomorrow?

    by Nilay Patel
    03.25.2008

    Although both Microsoft's and Intel's R&D departments have been responsible for some nifty futuristic tech, the two companies got together last week and announced a $20M grant to two universities to "start over" and develop next-gen computing systems based around parallel processing. The grant will fund Universal Parallel Computing Research Centers at UC Berkeley, which is kicking in another $7M, and the University of Illinois at Urbana-Champaign, which is donating $8M of its own. According to Marc Snir, head of the UIUC lab, the goal is to make "parallelism so easy to use that parallel programming becomes synonymous with programming" -- an increasingly important priority, as current multi-core processors aren't necessarily being fully utilized and 100-core processors aren't far off. That leads us to wonder: what to do with all that newly unlocked processing power? Virtual-reality Facebook? Real-time visual augmentation? Finally being able to run Crysis? We know you've got ideas -- sound off in the comments!

    [Thanks, Luke]

  • Researchers tout breakthrough in single chip parallel processing

    by Donald Melanson
    06.25.2007

    Researchers at the University of Maryland's A. James Clark School of Engineering have developed a prototype of what they say could be the "next generation" of personal computers, one that's apparently 100 times faster than current desktop PCs. That considerable feat was made possible through the use of parallel processing on a single chip -- in this case, cramming 64 processors onto a circuit board the size of a license plate. Just as importantly, the researchers also developed the necessary software to ensure all that computing muscle gets along, which they say makes the system "feasible for general-purpose computing tasks" for the first time. They don't appear to be content with things just yet, though, saying that the same principles could one day be applied to systems with 1,000 processors on a chip the size of a fingernail.