gpucompute

Latest

  • OpenCL mod for the Kindle Fire HD reveals untapped graphics potential (hands-on video)

    by 
    Sharif Sakr
    01.04.2013

    As neat as the Kindle Fire HD already is, just a few dinky tweaks could turn it into so much more -- a platform for true physics-based gaming, for example, or even for surprisingly fast photo manipulation. How come? Because both the 8.9-inch and 7-inch versions of the Android-based slate come with a graphics engine that can handle OpenCL acceleration. It certainly won't work out of the box, but Amazon has been working quietly with Imagination Technologies -- the folks behind the tablet's PowerVR GPU -- to try it out. The demo after the break is subtle, perhaps, but it's fluid, detailed and goes far beyond anything that a stock device can achieve. It also proves that, in certain circumstances, OpenCL has the power to boost frame rates by 50 percent while simultaneously lowering power consumption by the same proportion. Read on for more.
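
    For a flavor of what that PowerVR silicon is being asked to do, here's a minimal sketch of an OpenCL C kernel -- a per-pixel brightness pass, the sort of data-parallel job behind fast photo manipulation. The kernel name and parameters are our own illustration, not code from Amazon or Imagination's demo:

    ```c
    /* Illustrative OpenCL C kernel: scale every pixel's brightness.
     * Each work-item handles one pixel -- the kind of embarrassingly
     * parallel job a PowerVR GPU can chew through under OpenCL. */
    __kernel void brighten(__global const uchar4 *src,  /* input RGBA pixels */
                           __global uchar4 *dst,        /* output RGBA pixels */
                           const float gain)            /* e.g. 1.5 = 50% brighter */
    {
        size_t i = get_global_id(0);                /* one work-item per pixel */
        float4 p = convert_float4(src[i]) * gain;   /* do the math in float */
        dst[i] = convert_uchar4_sat(p);             /* saturate back to 0..255 */
    }
    ```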

  • Adobe Premiere Pro CS6 now fully supports Retina MacBook Pro: both HiDPI and GPU compute

    by 
    Sharif Sakr
    09.06.2012

    Adobe's video editing application is already a lovely thing on the Retina MacBook Pro, but not visually -- only in terms of its raw performance on that Core i7 CPU. Until today's update -- 6.0.2 -- the software hasn't actually been able to make use of HiDPI itself, and neither has it been able to exploit the performance-boosting potential of GPU compute on the laptop's NVIDIA GeForce GT 650M graphics card. If you're lucky enough to own this combo of hardware and software, Adobe's official blog suggests that you go ahead and check for the update or apply it manually following the instructions at the source link below (it's actually within Bridge that you should check for the update, with other Adobe titles closed). We hope to apply it ourselves shortly and will report back on its impact.

    Update on the update: As expected, video thumbnails look sumptuous in the absence of pixelation, making this a worthy revision. That said, encoding a short timeline was still faster with the Mercury Playback Engine set to software mode rather than GPU compute. A 2:30 clip took 2:02 to encode with OpenCL, 2:00 with CUDA, but just 1:42 in software mode. No doubt people who do multi-cam editing or need to render complex effects in real-time may see a benefit -- please, let us know if you do!

    Update: Just had word from NVIDIA that may explain what's happening with our encoding times. We're told GPU compute will only shine through in performance terms if we enable "Maximum Render Quality," because maxing out the quality in software mode would slow it right down. So far we've only tried with default settings, so clearly there's room here for more experimentation.

  • Samsung slips into AMD's HSA party, may seek parallel processing boost for Exynos

    by 
    Sharif Sakr
    08.31.2012

    Trust us, this should ultimately make a lot of sense. As we already know, AMD recently set up the HSA Foundation to promote its vision for better parallel processing -- and especially GPU compute -- in mobiles and PCs. Its semi-rival ARM was one of the first big players to join up, and now Samsung has decided to hop onboard too. Why would it do that? For the simple reason that the Korean company still makes its own chips, based on ARM designs, and we've seen that GPU compute is going to be a big feature in its coming Exynos 5 processor with Mali T604 graphics. Now, anything else at this point is pure speculation, since we only know about Samsung's HSA membership from the appearance of its logo on a relevant slide at AMD's keynote speech at IFA, and there's no official word on Samsung's intentions. At a bare minimum, the company could simply be firming up friendships and hedging its bets on the future of computing. We wouldn't be surprised, however, if Sammy is looking to work with ARM and AMD to implement further aspects of the HSA philosophy into even more advanced Exynos chips down the line -- chips that are able to use both GPU compute and smaller transistors to achieve leaps in performance while also gobbling fewer volts.

  • Engadget Primed: The crazy science of GPU compute

    by 
    Sharif Sakr
    08.20.2012

    Primed goes in-depth on the technobabble you hear on Engadget every day -- we dig deep into each topic's history and how it benefits our lives. You can follow the series here. Looking to suggest a piece of technology for us to break down? Drop us a line at primed *at* engadget *dawt* com.

    As you're hopefully aware, this is a gadget blog. As a result, we're innately biased towards stuff that's new and preferably fandangled. More cores, more pixels, more lenses; just give it here and make us happy. The risk of this type of technological greed is that we don't make full use of what we already have, and nothing illustrates that better than the Graphics Processing Unit. Whether it sits in our desktops, laptops, tablets or phones, the GPU is cruelly limited by its history -- its long-established reputation as a dumb, muscular component that takes instructions from the main processor and translates them into pixels for us to gawp at. But what if the GPUs in our devices had some buried genius -- abilities that, if only we could tap into them, would yield hyper-realistic experiences and better all-round performance from affordable hardware? Well, the thing is, this hidden potential actually exists. We've been covering it since at least 2008 and, even though it still hasn't generated enough fuss to become truly famous, the semiconductor industry is making more noise about it now than ever before. So please, join us after the break as we endeavor to explain why the trend known as "GPU compute," aka "general-purpose computing on GPUs (GPGPU)," or simply "not patronizing your graphics processor," is still exciting despite having let us down in the past. We'll try to show why it's worth learning a few related concepts and terms to help provide a glossary for future coverage; and why, on the whole, your graphics chip is less Hasselhoff and more Hoffman than you may have imagined.
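
    If a single snippet can capture the shift the article describes, it's the one below: the same vector addition written for a CPU and as an OpenCL C kernel. Consider it a rough sketch rather than production code -- the function names are ours:

    ```c
    /* The same job, two ways. On the CPU, one core walks the whole
     * array; under GPU compute, thousands of work-items each handle
     * a single element at the same time. */

    /* CPU version: a sequential loop. */
    void add_cpu(const float *a, const float *b, float *c, int n)
    {
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    /* GPU version: an OpenCL C kernel. The loop disappears -- each
     * work-item reads its own index from get_global_id() instead. */
    const char *add_kernel_src =
        "__kernel void add_gpu(__global const float *a,\n"
        "                      __global const float *b,\n"
        "                      __global float *c)\n"
        "{\n"
        "    size_t i = get_global_id(0);\n"
        "    c[i] = a[i] + b[i];\n"
        "}\n";
    ```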

  • ARM claims new GPU has desktop-class brains, requests OpenCL certificate to prove it

    by 
    Sharif Sakr
    08.02.2012

    It's been a while since ARM announced its next generation of Mali GPUs, the T604 and T658, but in the semiconductor business silence should never be confused with inactivity. Behind the scenes, the chip designers have been working with Khronos -- that great keeper of open standards -- to ensure the new graphics processors are fully compliant with OpenCL and are therefore able to use their silicon for general compute tasks (AR, photo manipulation, video rendering, etc.) as well as for producing pretty visuals. Importantly, ARM isn't settling for the Embedded Profile version of OpenCL that has been "relaxed" for mobile devices, but is instead aiming for the same Full Profile OpenCL 1.1 found in compliant laptop and desktop GPUs. A tall order for a low-power processor, perhaps, but we have a strong feeling that Khronos's certification is just a formality at this point, and that today's news is a harbinger of real, commercial T6xx-powered devices coming before the end of the year. Even the souped-up Mali 400 in the European Galaxy S III can only reign for so long.
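
    For the curious, the Full-versus-Embedded distinction isn't just marketing -- it's a string any developer can query at runtime through the standard OpenCL API. A minimal sketch, assuming a single GPU device and skipping error handling:

    ```c
    /* Ask the first OpenCL GPU whether it reports the Full or
     * Embedded profile. Error checks trimmed for brevity. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        char profile[64], version[64];

        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        /* Returns "FULL_PROFILE" or "EMBEDDED_PROFILE". */
        clGetDeviceInfo(device, CL_DEVICE_PROFILE,
                        sizeof profile, profile, NULL);
        clGetDeviceInfo(device, CL_DEVICE_VERSION,
                        sizeof version, version, NULL);

        printf("%s (%s)\n", profile, version);
        return 0;
    }
    ```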

  • Rivals AMD and ARM unite, summon others to become 'heterogeneous'

    by 
    Sharif Sakr
    06.12.2012

    Rumors of a hook-up between AMD and ARM have been circulating ever since someone coined the phrase "the enemy of Intel is my friend." As of today, however, that alliance is real and cemented in the form of the HSA Foundation -- a non-profit organization dedicated to promoting the dark arts of Heterogeneous System Architecture. It's a relatively old concept in computing, but the Foundation's founding partners (AMD, ARM, Imagination Technologies, MediaTek and Texas Instruments) all stand to gain from its wider adoption. How come? Because it involves boosting a chip's performance by making it use its various components as co-processors, rather than treating them as specialized units that can never help each other out. In other words, while Intel pursues Moore's Law and packs ever-more sophisticated transistors into its CPUs, AMD, ARM and the other HSA pals want to achieve similar or better results through parallel computing. In most cases, that'll mean using the graphics processor on a chip not only for visuals and gaming, but also for general tasks and apps. This can already be achieved using a programming framework called OpenCL, but AMD believes it's too tricky to code and that this puts mainstream developers off. Equally, NVIDIA has long had its own platform for the same purpose, called CUDA, but it's proprietary. Whatever niche is left in the middle, the HSA Foundation hopes to fill it with an easier and more open standard that is not only cross-OS but also transcends the PC / mobile divide. If it works, it'll give us a noticeable surge in computational power in everyday apps by 2014. If it fails, these new-found friends can go back to the less awkward custom of ignoring each other.
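
    To see what AMD means by "too tricky," here's roughly the host-side ceremony OpenCL demands before a one-line kernel (like the add_gpu example under the Primed entry above) ever runs. It's a sketch with every error check omitted -- real code needs them all, which is rather the point:

    ```c
    #include <CL/cl.h>

    /* Run an "add_gpu" kernel over n floats. Every call below can
     * fail; checks are omitted here, but real code must make them. */
    void add_on_gpu(const char *src, const float *a, const float *b,
                    float *c, size_t n)
    {
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        /* Copy the inputs to the GPU and make room for the output. */
        size_t bytes = n * sizeof(float);
        cl_mem A = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                  bytes, (void *)a, NULL);
        cl_mem B = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                  bytes, (void *)b, NULL);
        cl_mem C = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, NULL, NULL);

        /* Compile the kernel source at runtime, then bind its arguments. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "add_gpu", NULL);
        clSetKernelArg(k, 0, sizeof(cl_mem), &A);
        clSetKernelArg(k, 1, sizeof(cl_mem), &B);
        clSetKernelArg(k, 2, sizeof(cl_mem), &C);

        /* Launch n work-items, then read the result back. */
        clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, C, CL_TRUE, 0, bytes, c, 0, NULL, NULL);
        /* A real program would also release every object created here. */
    }
    ```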

  • AMD launches R-Series chip for next-gen casinos, surveillance systems, distractions

    by 
    Sharif Sakr
    05.21.2012

    While others push for ever-smaller processors to power the so-called Internet of Things, AMD's new R-Series chips are designed to go the other way: upgrading devices that are already hooked up but could benefit from more graphical whizz. These embedded processors have the same Piledriver and Radeon HD 7000 internals as their Trinity cousins, but they're intended for digital billboards, casino gaming, payment systems and other applications that need to present a pretty picture to the end-user. In addition to visuals, they can also use their built-in GPUs to speed up encryption / decryption and support parallel-processing tasks like medical imaging, multi-camera surveillance and, you know, serious stuff. A number of manufacturers have already adopted the new chips, but perhaps the only time you'll know you're using one is if you happen to buy an R-Series-equipped mini-ITX motherboard.

  • AMD reveals Trinity specs, claims to beat Intel on price, multimedia, gaming

    by 
    Sharif Sakr
    05.15.2012

    Itching for the details of AMD's latest Accelerated Processing Units (APUs)? Then get ready to scratch: Trinity has arrived and, as of today, it's ready to start powering the next generation of low-power ultra-portables, laptops and desktops that, erm, don't run Intel. The new architecture boasts up to double the performance-per-watt of last year's immensely popular Llano APUs, with improved "discrete-class" integrated graphics and without adding to the burden on battery life. How is that possible? By how much will Trinity-equipped devices beat Intel on price? And will it play Crysis: Warhead? Read on to find out.