gpgpu

Latest

  • NVIDIA unveils Tesla K40 accelerator, teams with IBM on GPU-based supercomputing

    by Jon Fingas
    11.18.2013

    NVIDIA's Tesla GPUs are already mainstays in supercomputers that need specialized processing power, and they're becoming even more important now that the company is launching its first Tesla built for large-scale projects. The new K40 accelerator has only 192 more processing cores than its K20X ancestor (2,880, like the GeForce GTX 780 Ti), but it crunches analytics and science numbers up to 40 percent faster. A jump to 12GB of RAM, meanwhile, helps it handle data sets that are twice as big as before. The K40 is already available in servers from NVIDIA's partners, and the University of Texas at Austin plans to use it in Maverick, a remote visualization supercomputer that should be up and running by January. As part of the K40 rollout, NVIDIA has also revealed a partnership with IBM that should bring GPU-boosted supercomputing to enterprise-grade data centers. The two plan on bringing Tesla GPU support to IBM's Power8-based servers, including both apps and development tools. It's not clear when the deal will bear fruit, but don't be surprised if it turbocharges a corporate mainframe near you.

  • AMD unveils Radeon HD 8900M laptop graphics, ships them in MSI's GX70 (eyes-on)

    by Jon Fingas
    05.15.2013

    Did you think AMD showed all its mobile GPU cards when it launched the Radeon HD 8000M series in January? Think again. The company has just unveiled the 8900M series, an adaptation of its Graphics Core Next architecture for desktop replacement-class gaming laptops. To call it a big jump would be an understatement: compared to the 8800M, the flagship 8970M chip doubles the stream processors to 1,280, hikes the clock speed from 725MHz to 850MHz and bumps the memory speed slightly to 1.2GHz. The net effect is about 12 to 54 percent faster game performance than NVIDIA's current mobile speed champion, the GTX 680M, and up to four times the general computing prowess in OpenCL. The 8970M is more than up to the task of powering up to 4K on one screen, and it can handle up to six screens if there are enough ports. We'll see how long AMD's performance reign lasts, although we won't have to wait to try the 8970M -- MSI is launching the GPU inside the new GX70 laptop you see above. We got a brief, hands-off tease of the 17.3-inch successor to the GX60 at the 8900M's unveiling, and it's clear the graphics are the centerpiece. We saw it driving Crysis 3 very smoothly on one external display while powering 2D on two other screens, albeit through a bulky set of Mini DisplayPort, HDMI and VGA cables. Otherwise, the GX70 is superficially similar to its ancestor with that chunky profile, an unnamed Richland-based AMD A10 processor, Killer networking and a SteelSeries keyboard. More than anything, price should be the clincher: MSI is pricing the GX70 with the new Radeon at $1,100, which amounts to quite the bargain for anyone whose laptop has to double as a primary gaming PC.

  • NVIDIA rolls out Apex and PhysX developer support for the PlayStation 4

    by Jon Fingas
    03.07.2013

    Just because the PlayStation 4 centers on an AMD-based platform doesn't mean that NVIDIA is out of the picture. The graphics firm is updating the software development kits for both its Apex dynamics framework and PhysX physics modeling system to address Sony's new console, even if they won't have the full hardware acceleration that comes with using NVIDIA's own chipsets. The updates should take some of the guesswork out of creating realistic-looking games -- theoretically allowing more collisions, destructible objects and subtler elements like cloth and hair modeling. Most of us won't see the fruits of the updated SDKs until at least this holiday, but programmers looking for more plausible PS4 game worlds can hit the source links.
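
    For a feel of what studios actually touch in those kits, here's a minimal, hypothetical PhysX 3.x-style scene setup -- a generic sketch of the SDK's bread and butter rather than anything PS4- or Apex-specific, and the exact headers and version constants differ between SDK releases.

      // Minimal PhysX 3.x-style scene setup (illustrative sketch, not PS4-specific code).
      #include <PxPhysicsAPI.h>
      using namespace physx;

      static PxDefaultAllocator     gAllocator;
      static PxDefaultErrorCallback gErrorCallback;

      int main()
      {
          // Core SDK objects every PhysX application creates first.
          PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
          PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

          // A scene with gravity and a CPU dispatcher; the SDK decides where the work actually runs.
          PxSceneDesc sceneDesc(physics->getTolerancesScale());
          sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
          sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(2);
          sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
          PxScene* scene = physics->createScene(sceneDesc);

          // One simple prop: a dynamic box dropped from ten meters.
          PxMaterial* material = physics->createMaterial(0.5f, 0.5f, 0.1f);
          PxRigidDynamic* box = PxCreateDynamic(*physics, PxTransform(PxVec3(0.0f, 10.0f, 0.0f)),
                                                PxBoxGeometry(1.0f, 1.0f, 1.0f), *material, 1.0f);
          scene->addActor(*box);

          // Step the simulation at 60Hz for one second.
          for (int i = 0; i < 60; ++i) {
              scene->simulate(1.0f / 60.0f);
              scene->fetchResults(true);
          }
          return 0;
      }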

  • Lighty paints real lighting Photoshop-style, minus the overdone lens flare (video)

    by Jon Fingas
    03.07.2013

    It's not hard to find smart lightbulbs that bow to our every whim. Creating a well-coordinated light scheme can be difficult without tweaking elements one by one, however, which makes the Japan Science and Technology Agency's Lighty project that much more elegant. The approach lets would-be interior coordinators paint degrees of light and shadow through an app, much as they would create a magnum opus in Photoshop or a similar image editor. Its robotic lighting system sorts out the rest: a GPU-assisted computer steers a grid of gimbal-mounted lightbulbs until their positions and intensity match the effect produced on the screen. While Lighty currently exists just as a scale model, the developers plan to work with life-sized rooms, and potentially large halls, from now on. We're all for the newfound creativity in our lighting, as long as we can't mess it up with a Gaussian Blur filter.
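
    To make the "painting light" idea concrete, here's a toy sketch (ours, not the JST team's code, and with made-up numbers) of the underlying inverse problem: given a light-transport matrix describing how much each bulb contributes to each spot in the room, a simple gradient-descent loop nudges the bulb intensities until the result matches the painted target. The real system also has to steer the gimbals, and runs the heavy lifting on the GPU.

      // Toy version of the inverse-lighting problem: pick bulb intensities w so that
      // transport * w approximates the painted target brightness (least squares).
      #include <vector>
      #include <algorithm>
      #include <cstdio>

      int main()
      {
          const int pixels = 4, bulbs = 2;
          // transport[p][b]: how much bulb b contributes to pixel p at full power (assumed known).
          double transport[pixels][bulbs] = { {0.9, 0.1}, {0.7, 0.3}, {0.3, 0.7}, {0.1, 0.9} };
          double target[pixels] = { 0.8, 0.6, 0.6, 0.2 };   // the "painted" brightness per pixel
          std::vector<double> w(bulbs, 0.5);                 // bulb intensities, clamped to 0..1

          const double lr = 0.1;
          for (int iter = 0; iter < 2000; ++iter) {
              // Residual r = transport * w - target, then gradient step w -= lr * transport^T * r.
              double r[pixels];
              for (int p = 0; p < pixels; ++p) {
                  r[p] = -target[p];
                  for (int b = 0; b < bulbs; ++b) r[p] += transport[p][b] * w[b];
              }
              for (int b = 0; b < bulbs; ++b) {
                  double g = 0.0;
                  for (int p = 0; p < pixels; ++p) g += transport[p][b] * r[p];
                  w[b] = std::min(1.0, std::max(0.0, w[b] - lr * g));
              }
          }
          for (int b = 0; b < bulbs; ++b) std::printf("bulb %d intensity: %.2f\n", b, w[b]);
          return 0;
      }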

  • ARM claims new GPU has desktop-class brains, requests OpenCL certificate to prove it

    by Sharif Sakr
    08.02.2012

    It's been a while since ARM announced its next generation of Mali GPUs, the T604 and T658, but in the semiconductor business silence should never be confused with inactivity. Behind the scenes, the chip designers have been working with Khronos -- that great keeper of open standards -- to ensure the new graphics processors are fully compliant with OpenCL, so their silicon can be put to work on general compute tasks (AR, photo manipulation, video rendering, etc.) as well as on producing pretty visuals. Importantly, ARM isn't settling for the Embedded Profile version of OpenCL that has been "relaxed" for mobile devices, but is instead aiming for the same Full Profile OpenCL 1.1 found in compliant laptop and desktop GPUs. A tall order for a low-power processor, perhaps, but we have a strong feeling that Khronos's certification is just a formality at this point, and that today's news is a harbinger of real, commercial T6xx-powered devices coming before the end of the year. Even the souped-up Mali 400 in the European Galaxy S III can only reign for so long.
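
    That Full-versus-Embedded distinction is something you can check straight from host code. A minimal sketch, assuming an OpenCL 1.1 runtime and headers are installed (error handling omitted):

      // Report whether each OpenCL device claims FULL_PROFILE or EMBEDDED_PROFILE.
      #include <stdio.h>
      #include <CL/cl.h>

      int main(void)
      {
          cl_platform_id platform;
          cl_device_id devices[8];
          cl_uint ndev = 0;
          char name[256], profile[64];

          clGetPlatformIDs(1, &platform, NULL);
          clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &ndev);

          for (cl_uint i = 0; i < ndev; ++i) {
              clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
              clGetDeviceInfo(devices[i], CL_DEVICE_PROFILE, sizeof(profile), profile, NULL);
              printf("%s: %s\n", name, profile);  /* e.g. "FULL_PROFILE" on desktop-class parts */
          }
          return 0;
      }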

  • NVIDIA GeForce GTX 690 review roundup: (usually) worth the one grand

    by Jon Fingas
    05.03.2012

    Now that NVIDIA's GeForce GTX 690 is shipping through some vendors, gamers have been wondering if it's worth the wallet-busting $999 to get those higher frame rates. Surprisingly, the answer is "yes." As AnandTech notes, the GTX 690 is often nearly as fast as (or faster than) a pair of GTX 680s working together in SLI mode, all while drawing less power and running cooler and quieter thanks to its two 28-nanometer Kepler chips. Across multiple reviewers, though, the GTX 690 was sometimes slower than two Radeon HD 7970 boards using CrossFire. HotHardware and others found that it's definitely the graphics card of choice for Batman: Arkham City enthusiasts: problems with AMD's CrossFire mode leave a dual Radeon HD 7970 setup running at just half the frame rate of its NVIDIA-made challenger. Caveats? There are still some worries beyond the price tag, as the twin Radeon cards are as much as three times faster at general-purpose computing tasks than the latest and greatest GeForce. PC Perspective likewise warns that fans of joining three displays together for some 3D Vision Surround action will still take a big frame rate hit when they put the 3D glasses on. Still, the GTX 690 looks to be tops if you're looking to get the fastest single-card gaming on Earth, and as Legit Reviews adds, that trivalent chromium-plated aluminum makes it one of the "better looking" cards, to boot.
    Read - AnandTech
    Read - HotHardware
    Read - Legit Reviews
    Read - PC Perspective

  • Barcelona readies hybrid ARM-based supercomputer, uses NVIDIA GPUs for heavy lifting

    by Mat Smith
    11.14.2011

    NVIDIA has announced that it'll be providing CUDA GPUs to the Barcelona Supercomputing Center, with the facility looking to substantially boost its energy efficiency; the project is being detailed later this week at the SC11 Conference in Seattle. While the words "low power" and "energy efficiency" are a bit of a buzz kill in the high-octane, high-MFLOP world of supercomputing, the BSC thinks its system will use between 15 and 30 times less power than current machines. Dubbed the Mont Blanc Project, it's aiming to multiply those energy savings by four to ten times by 2014. While other supercomputers eat their way through megawatts of the electric stuff, hopefully a drop in power demands won't affect this machine's supercomputing scores.

  • WebCL scores first demos, GPU accelerated apps headed to your browser

    by Terrence O'Brien
    07.06.2011

    Look, WebGL is great and everything, but in the era of general-purpose GPU computing, we know our 3D chips are capable of much more than just pushing pixels. WebCL is a new standard that brings OpenCL processing to the browser, leveraging the power of your graphics card to perform complex computations. Samsung and Nokia have both released prototype plug-ins, with Sammy's running exclusively in Safari on OS X using NVIDIA chips and Nokia focusing on the 32-bit Windows version of Firefox 4 and AMD GPUs. At the moment, the young technology doesn't offer much to the average user, but demos (after the break) show just how much faster OpenCL can be than traditional JavaScript -- more than 10 times quicker on some tests. Let the countdown to Folding@Home the Web App begin -- we're starting a pool now.

  • Real-time 3D face reproduction demonstrated on video

    by Darren Murph
    12.15.2010

    Eager to be freaked right on out? If so, head past the break and mash play. There, you'll see a recent demonstration by Tohto C-Tech, where a 3D camera setup was used to capture a person's face and then reproduce it on a monitor (in 3D, no less) in real-time. We're told that an undisclosed GPGPU setup was used to pull it off, as typical CPUs just weren't quick enough to render the final product on their own. The camera setup actually captures the face from two different viewpoints, enabling the sides of the face to be shown in addition to the front. We warned you that copious amounts of freaky were involved.

  • ARM intros next-gen Mali-T604 embedded GPU, Samsung first to get it (update: video)

    by Chris Ziegler
    11.10.2010

    Promising "visually rich user experiences not previously seen in consumer electronics devices," ARM has introduced its latest embedded GPU architecture, Mali-T604, at its Technology Conference 2010 in California today. Though we're unlikely to see it in devices any time soon, the introduction means that the new design is available to ARM licensees -- and notably, the company points out that partner Samsung will be the first to get hooked up. Considering Sammy already competes in the high-end embedded system-on-chip space with its ARM-based Hummingbird line of cores, adding in the Mali-T604 is probably the next logical step. ARM says that it's designed "specifically" with the needs of general purpose GPU computing in mind and includes extensive support for both OpenCL and DirectX, so look for some insane number-crunching capabilities on your next-generation phone, tablet, and set-top box. Follow the break for ARM's press release.

    Update: We sat down with ARM's Jem Davies to get some more details about the new Mali, and discovered it's only the first of several potential next-gen GPUs to come as part of the Midgard platform -- while this particular processor is available with up to four shader cores, successors might have more. The T604 itself is no slouch, though, as it can theoretically deliver two to five times the performance of the company's existing Mali 400 GPUs core for core and clock for clock -- which themselves run circles around the PowerVR SGX 540 competition if you take ARM at its word. Davies told us that not only does the Mali-T604 do DirectX, it supports the game-friendly DirectX 11 as well as the always-popular OpenGL ES 2.0, and will appear in a system-on-a-chip together with an ARM Cortex-A15 "Eagle" CPU when both are eventually baked into silicon several years down the road. Of course, in the eyes of marketers the future is always now, so get a look at conceptual uses (hint: augmented reality) for ARM's new Mali right after the break. Additional reporting by Sean Hollister

  • NVIDIA teams with PGI for CUDA-x86, gifts its brand of parallelism to the world

    by Sean Hollister
    09.21.2010

    NVIDIA's GPU Technology Conference 2010 just kicked off in San Jose, and CEO Jen-Hsun Huang has shared something interesting with us on stage -- thanks to a partnership with The Portland Group, it's bringing the CUDA parallel computing framework to x86. Previously limited to NVIDIA GPUs -- and the lynchpin of NVIDIA's argument for GPGPU computing -- CUDA applications will now run on "any computer, or any server in the world." Except those based on ARM, we suppose. Still no word on NVIDIA's x86 CPU.
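
    For context, this is the kind of ordinary CUDA C source a tool like CUDA-x86 would retarget at a CPU -- a generic SAXPY kernel and launch of our own, not PGI-specific code:

      // SAXPY (y = a*x + y) in plain CUDA C: one thread per element.
      #include <cstdio>

      __global__ void saxpy(int n, float a, const float* x, float* y)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) y[i] = a * x[i] + y[i];
      }

      int main()
      {
          const int n = 1 << 20;
          float *x, *y;
          cudaMalloc((void**)&x, n * sizeof(float));
          cudaMalloc((void**)&y, n * sizeof(float));
          cudaMemset(x, 0, n * sizeof(float));
          cudaMemset(y, 0, n * sizeof(float));

          // Launch enough 256-thread blocks to cover all n elements.
          saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
          cudaDeviceSynchronize();

          cudaFree(x);
          cudaFree(y);
          std::printf("done\n");
          return 0;
      }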

  • NVIDIA VP says 'Moore's law is dead'

    by Sean Hollister
    05.03.2010

    NVIDIA and Intel haven't been shy about their differing visions of the future of computing in the past year or so, but it looks like Team GPU just upped the rhetoric a little -- a Forbes column by NVIDIA VP Bill Dally argues that "Moore's law is dead." Given that Moore's law is arguably the foundation of Intel's entire business, such a statement is a huge shot across the bow; though other companies like AMD are guided by the doctrine, Intel's relentless pursuit of Gordon Moore's vision has become a focal point and rallying cry for the world's largest chipmaker. So what's Dally's solution to the death of Moore's law? For everyone to buy into parallel computing, where -- surprise, surprise -- NVIDIA's GPUs thrive. Dally says that dual-, quad- and hex-core solutions are inefficient -- he likens multi-core chips to "trying to build an airplane by putting wings on a train," and says that only ground-up parallel solutions designed for energy efficiency will bring back the golden age of doubling performance every two years. That sounds fantastic, but as far as power consumption is concerned, well, perhaps NVIDIA had best lead by example.

  • NVIDIA unleashes GeForce GTX 480 and GTX 470 'tessellation monsters'

    by Vlad Savov
    03.26.2010

    Let's get the hard data out of the way first: 480 CUDA cores, 700MHz graphics and 1,401MHz processor clock speeds, plus 1.5GB of onboard GDDR5 memory running at 1,848MHz (for a 3.7GHz effective data rate). Those are the specs upon which Fermi is built, and those are the numbers that will seek to justify a $499 price tag and a spectacular 250W TDP. We attended a presentation by NVIDIA this afternoon, where the above GTX 480 and its lite version, the GTX 470, were detailed. The latter card will come with a humbler 1.25GB of memory plus 607MHz, 1,215MHz and 1,674MHz clocks, while dinging your wallet for $349 and straining your case's cooling with 215W of hotness. NVIDIA's first DirectX 11 parts are betting big on tessellation becoming the way games are rendered in the future, with the entire architecture being geared toward taking duties off the CPU and freeing up its cycles to deliver performance improvements elsewhere. This is perhaps best evidenced by the fact that both GTX models scored fewer 3DMarks than the Radeon HD 5870 and HD 5850 they're competing against, yet managed to deliver higher frame rates than their respective rivals in NVIDIA's in-game benchmarks. The final bit of major news here relates to SLI scaling, which is frankly remarkable. NVIDIA claims a consistent 90 percent performance improvement (over a single card) when running GTX 480s in tandem, which is as efficient as any multi-GPU setup we've yet seen. After the break you'll find a pair of tech demos and a roundup of the most cogent reviews.
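
    For the curious, here's how the memory numbers hang together: GDDR5 moves data on both clock edges, so the quoted 1,848MHz clock gives the 3.7GHz effective rate, and with the GTX 480's publicly listed 384-bit bus (a figure from the card's spec sheet, not this post) that works out to roughly 177GB/s of bandwidth.

      // Back-of-the-envelope check of the quoted memory figures.
      #include <cstdio>

      int main()
      {
          const double mem_clock_mhz  = 1848.0;                          // quoted GDDR5 clock
          const double effective_gtps = mem_clock_mhz * 2.0 / 1000.0;    // data on both edges -> ~3.7 GT/s
          const int    bus_width_bits = 384;                             // assumed from the GTX 480's public spec
          const double bandwidth_gbs  = effective_gtps * bus_width_bits / 8.0;
          std::printf("effective rate: %.2f GT/s, bandwidth: %.1f GB/s\n", effective_gtps, bandwidth_gbs);
          return 0;
      }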

  • AMD spells out the future: heterogeneous computing, Bulldozer and Bobcats galore

    by Darren Murph
    11.12.2009

    Believe it or not, it's just about time for AMD to start thinking about its future. We know -- you're still doing your best to wrap that noodle around Congos and Thubans, but now it's time to wonder how exactly Leo, Llano and Zambezi (to name a few) can fit into your already hectic schedule. At an Analyst Day event this week, the chipmaker took the wraps off its goals for 2010 and 2011, and while it's still focusing intently on Fusion (better described as heterogeneous computing, where "workloads are divided between the CPU and GPU"), it's the forthcoming platforms that really have us worked up. For starters, AMD is looking into Accelerated Processing Unit (APU) configurations, which "represent the combined capabilities of [practically any] two separate processors." We're also told that the firm may actually introduce its Bulldozer (architecture for mainstream machines) and Bobcat (architecture for low-power, ultrathin PCs) platforms more quickly than similar platforms have been rolled out in the past, which demonstrates an effort to really target the consumer market where Intel currently reigns. Frankly, we're jazzed about the possibilities, so hit the links below for a deep dive into what just might be powering your next (or next-next) PC. [Via Digitimes]

  • NVIDIA launches Fermi next-gen GPGPU architecture, CUDA and OpenCL get even faster

    by Nilay Patel
    10.01.2009

    NVIDIA had told us it would be accelerating its CUDA program to try to get an advantage over its competitors as OpenCL brings general-purpose GPU computing to the mainstream, and it looks like that effort's paying off -- the company just announced its new Fermi CUDA architecture, which will also serve as the foundation of its next-gen GeForce and Quadro products. The new features are all pretty technical -- the world's first true cache hierarchy in a GPU, anyone? -- but the big takeaway is that CUDA and OpenCL should run even faster on this new silicon, and that's never a bad thing. Hit up the read links for the nitty-gritty, if that's what gets you going.
    Read - NVIDIA Fermi site
    Read - Hot Hardware analysis
    Read - PC Perspective analysis
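
    For what it's worth, recent CUDA runtimes expose the size of that new on-chip L2 through the standard device-properties query; a minimal sketch of our own:

      // Print each CUDA device's compute capability and L2 cache size.
      #include <cstdio>
      #include <cuda_runtime.h>

      int main()
      {
          int count = 0;
          cudaGetDeviceCount(&count);
          for (int i = 0; i < count; ++i) {
              cudaDeviceProp prop;
              cudaGetDeviceProperties(&prop, i);
              std::printf("%s: compute %d.%d, L2 cache %d KB\n",
                          prop.name, prop.major, prop.minor, prop.l2CacheSize / 1024);
          }
          return 0;
      }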

  • ATI Stream goes fisticuffs with NVIDIA's CUDA in epic GPGPU tussle

    by Darren Murph
    08.10.2009

    It's a given that the GPGPU (or General-Purpose Graphics Processing Unit) has a long, long way to go before it can make a dent in the mainstream market, but given that ATI was talking up Stream nearly three whole years ago, we'd say a battle royale between it and its biggest rival was definitely in order. As such, the benchmarking gurus over at PC Perspective saw fit to pit ATI's Stream and NVIDIA's CUDA technologies against one another in a knock-down-drag-out for the ages, essentially looking to see which system took the most strain away from the CPU during video encoding and which produced more visually appealing results. We won't bother getting into the nitty-gritty (that's what the read link is for), but we will say this: in testing, ATI's contraption managed to relieve the most stress from the CPU, though NVIDIA's alternative seemed to pump out the highest quality materials. In other words, you can't win for losin'.

  • Intel shows Larrabee die shot in Germany, speculators go berserk

    by Darren Murph
    05.14.2009

    It's been right around a century since Intel provided any sort of hard evidence that Larrabee (a next-gen hybrid CPU / GPU) was more than a figment of anyone's imagination, but thanks to a die shot thrown up Will Ferrell-style at the Visual Computing Institute of Saarland University, we'd say the speculation is definitely back on. Intel's Chief Technology Officer, Justin Rattner, was responsible for the demo, but when PC Perspective pinged the company to inquire further, it suggested that the image we see above may not necessarily be indicative of the final shipping product, but that Larrabee was "healthy and in [its] labs right now." Sweet, so how about a date when that statement changes to "in shipping machines right now"? Hmm? [Via PC Perspective]

  • NVIDIA's GT300 specs outed -- is this the cGPU we've been waiting for?

    by Darren Murph
    04.26.2009

    NVIDIA's been dabbling in the CPU space behind closed doors for years now, but with Intel finally making a serious push into the GPU realm, it's about time the firm got serious about bringing the goods. BSN has it that the company's next-generation GT300 will be fundamentally different from the GT200 -- in fact, it's being hailed as the "first truly new architecture since SIMD (Single-Instruction Multiple Data) units first appeared in graphical processors." Beyond this, the technobabble runs deep, but the long and short of it is this: NVIDIA could be right on the cusp of delivering a single chip that can handle tasks that were typically split between the CPU and GPU, and we needn't tell you just how much your life could change should it become a reality. Now, if only NVIDIA would come clean and lift away some of the fog surrounding it (and the rumored GTX 380), that'd be just swell. [Thanks, Musouka]

  • NVIDIA dishes about OpenCL

    by Nilay Patel
    12.09.2008

    We spent some time on the phone with NVIDIA today in the wake of last night's official release of the OpenCL GPU-processing spec, and we learned some interesting things. NVIDIA thinks OpenCL is going to bring a lot more attention to general-purpose GPU computing, and it's planning on stoking the flames -- not only is it accelerating the CUDA release schedule, it's planning on working with Microsoft on DirectX 11 Compute. Hit the break for some more highlights!

  • OpenCL 1.0 spec released, GPUs everywhere to get a workout

    by Nilay Patel
    12.08.2008

    How time flies -- it was just a few weeks ago that the OpenCL spec was finalized and sent out for final legal review, and now it's here and ready to go. Over 20 partner companies (including AMD, NVIDIA, and, somewhat surprisingly, Intel) have signed on to the parallel programming standard originally proposed by Apple as part of Snow Leopard, and the final spec should allow apps to tap into multi-core CPUs, GPUs, DSPs and even variants of the Cell chip for everything from raw number crunching to interfacing with OpenGL. Sounds hot -- now we'll just have to see how Microsoft counters with the GPU acceleration expected to be built into Windows 7.
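
    To show what "tapping into" those devices looks like in practice, here's a minimal, self-contained OpenCL 1.x host program of our own (error checks stripped for brevity) that builds a tiny kernel from source and runs it on whatever default device the platform offers -- CPU or GPU alike:

      // Build and run a trivial vector-add kernel on the platform's default OpenCL device.
      #include <stdio.h>
      #include <CL/cl.h>

      static const char* src =
          "__kernel void add(__global const float* a, __global const float* b, __global float* c) {"
          "    size_t i = get_global_id(0);"
          "    c[i] = a[i] + b[i];"
          "}";

      int main(void)
      {
          enum { N = 1024 };
          float a[N], b[N], c[N];
          for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

          cl_platform_id platform; cl_device_id device;
          clGetPlatformIDs(1, &platform, NULL);
          clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

          cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
          cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

          // Device-side copies of the input and output arrays.
          cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(a), a, NULL);
          cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(b), b, NULL);
          cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, NULL);

          // Compile the kernel at runtime -- the same source runs on any compliant device.
          cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
          clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
          cl_kernel kernel = clCreateKernel(prog, "add", NULL);
          clSetKernelArg(kernel, 0, sizeof(da), &da);
          clSetKernelArg(kernel, 1, sizeof(db), &db);
          clSetKernelArg(kernel, 2, sizeof(dc), &dc);

          size_t global = N;
          clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
          clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

          printf("c[10] = %.1f\n", c[10]);  /* expect 30.0 */
          return 0;
      }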