CUDA

Latest

  • NVIDIA's $99 Jetson Nano is an AI computer for DIY enthusiasts

    by Jon Fingas
    03.18.2019

    Sophisticated AI generally isn't an option for homebrew devices, since the mini computers behind them can rarely handle much more than the basics. NVIDIA thinks it can do better -- it's unveiling an entry-level AI computer, the Jetson Nano, aimed at "developers, makers and enthusiasts." NVIDIA claims that the Nano's 128-core Maxwell-based GPU and quad-core ARM Cortex-A57 processor can deliver 472 gigaflops of processing power for neural networks, high-res sensors and other robotics features while still consuming a miserly 5W. On the surface, at least, it could hit the sweet spot if you're looking to build your own robot or smart speaker.
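
    To put that 472-gigaflop figure in context, here's a rough back-of-the-envelope check. The 128 CUDA cores come straight from NVIDIA's pitch; the ~921.6MHz peak GPU clock and the doubled FP16 rate are published Jetson Nano specs we're assuming here, not numbers from the blurb above.

      // Rough sanity check on the Jetson Nano's quoted 472 GFLOPS (FP16).
      // Assumed specs: 128 CUDA cores, ~921.6MHz peak clock, 2x FP16 rate.
      #include <cstdio>

      int main() {
          const double cuda_cores    = 128;     // Maxwell GPU in the Jetson Nano
          const double gpu_clock_ghz = 0.9216;  // peak GPU clock (assumed published spec)
          const double flops_per_fma = 2;       // one fused multiply-add counts as 2 FLOPs
          const double fp16_rate     = 2;       // Tegra-class Maxwell doubles FP16 throughput

          double gflops = cuda_cores * gpu_clock_ghz * flops_per_fma * fp16_rate;
          printf("Peak FP16 throughput: ~%.0f GFLOPS\n", gflops);  // prints ~472
          return 0;
      }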

  • NVIDIA details how its Jetson development kit creates smart, seeing cars

    by Jon Fingas
    03.23.2013

    Developing a high-end in-car infotainment system can present challenges that don't exist on other platforms -- you're juggling core car systems, a myriad of sensors and media playback in a testbed on wheels. NVIDIA has just explained how it's uniting those elements with its new, lengthily titled Jetson Automotive Development Platform. While it looks like a single-DIN car stereo laid bare, the configurable kit incorporates a Tegra processor (for the usual infotainment functions), multiple car-friendly interfaces and a Kepler-based graphics chipset that can power car detection, lane departure warnings and other computer vision systems using CUDA or OpenCV code. The net effect should be a much simpler development process: automakers can consolidate some of their test hardware into one Jetson unit that they can upgrade or swap out if newer technology comes along. NVIDIA isn't naming the handful of designers and suppliers that are already building car electronics using Jetson, although history offers a few possible candidates.
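
    For a flavor of the kind of CUDA code a vision stack like that might run, here's a minimal sketch of a Sobel edge filter over a grayscale camera frame -- the sort of low-level kernel that typically feeds lane-marking detection. It's purely illustrative and doesn't reflect NVIDIA's actual Jetson software.

      // Sobel edge detection over a grayscale frame: one GPU thread per pixel.
      // Illustrative only -- not taken from the Jetson development kit.
      __global__ void sobelEdges(const unsigned char* in, unsigned char* out,
                                 int width, int height)
      {
          int x = blockIdx.x * blockDim.x + threadIdx.x;
          int y = blockIdx.y * blockDim.y + threadIdx.y;
          if (x < 1 || y < 1 || x >= width - 1 || y >= height - 1) return;

          // Horizontal and vertical gradients from the 3x3 neighborhood.
          int gx = -in[(y-1)*width + (x-1)] + in[(y-1)*width + (x+1)]
                   - 2*in[y*width + (x-1)] + 2*in[y*width + (x+1)]
                   - in[(y+1)*width + (x-1)] + in[(y+1)*width + (x+1)];
          int gy = -in[(y-1)*width + (x-1)] - 2*in[(y-1)*width + x] - in[(y-1)*width + (x+1)]
                   + in[(y+1)*width + (x-1)] + 2*in[(y+1)*width + x] + in[(y+1)*width + (x+1)];

          int mag = abs(gx) + abs(gy);               // cheap gradient magnitude
          out[y*width + x] = mag > 255 ? 255 : mag;  // clamp to 8 bits
      }

      // Launched with one thread per pixel, e.g.:
      //   dim3 block(16, 16);
      //   dim3 grid((width + 15) / 16, (height + 15) / 16);
      //   sobelEdges<<<grid, block>>>(d_in, d_out, width, height);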

  • NVIDIA launches GeForce GTX 650 Ti, Maingear says all custom desktop models now have it

    by Steve Dent
    10.09.2012

    No one can accuse Maingear of skipping the latest hardware cycles -- less than a month after offering PCs with NVIDIA GeForce GTX 650 and 660 graphics, the PC system builder has announced that you can get all its desktop machines with GTX 650 Ti graphics now, too. As it happens, NVIDIA has just launched that very product -- a friskier version of the GTX 650 with extra CUDA cores -- keeping Maingear in lockstep with its graphics board suppliers again. That card will fill the gap between the $229 GeForce GTX 660 and $109 GTX 650 and run $149 (estimated), while Maingear says it will offer special F131 and Potenza GTX 650 Ti systems at $150 off, along with a free copy of Assassin's Creed III. So, if you need all the fps you can get, but can't quite pony up for the more desirable GTX 660, check the PR for more details.

  • Adobe Premiere Pro CS6 now fully supports Retina MacBook Pro: both HiDPI and GPU compute

    by Sharif Sakr
    09.06.2012

    Adobe's video editing application is already a lovely thing on the Retina MacBook Pro, but not visually -- only in terms of its raw performance on that Core i7 CPU. Until today's update -- 6.0.2 -- the software hasn't actually been able to make use of HiDPI itself, and neither has it been able to exploit the performance-boosting potential of GPU compute on the laptop's NVIDIA GT 650M graphics card. If you're lucky enough to own this combo of hardware and software, Adobe's official blog suggests that you go ahead and check for the update or apply it manually following the instructions at the source link below (it's actually within Bridge that you should check for the update, with other Adobe titles closed). We're about to apply it ourselves and will report back on its impact.

    Update on the update: As expected, video thumbnails look sumptuous in the absence of pixelation, making this a worthy revision. That said, encoding a short timeline was still faster with the Mercury Playback Engine set to software mode rather than GPU compute: a 2:30 clip took 2:02 to encode with OpenCL, 2:00 to encode with CUDA, but just 1:42 in software mode. People who do multi-cam editing or need to render complex effects in real time may well see a benefit -- please, let us know if you do!

    Update: Just had word from NVIDIA that may explain what's happening with our encoding times. We're told GPU compute will only shine through in terms of performance if we enable "Maximum Render Quality," because enabling max quality in software mode would slow it down. So far we've only tried with default settings, so clearly there's room here for more experimentation.

  • Engadget Primed: The crazy science of GPU compute

    by Sharif Sakr
    08.20.2012

    Primed goes in-depth on the technobabble you hear on Engadget every day -- we dig deep into each topic's history and how it benefits our lives. You can follow the series here. Looking to suggest a piece of technology for us to break down? Drop us a line at primed *at* engadget *dawt* com.

    As you're hopefully aware, this is a gadget blog. As a result, we're innately biased towards stuff that's new and preferably fandangled. More cores, more pixels, more lenses; just give it here and make us happy. The risk of this type of technological greed is that we don't make full use of what we already have, and nothing illustrates that better than the Graphics Processing Unit. Whether it sits in our desktops, laptops, tablets or phones, the GPU is cruelly limited by its history -- its long-established reputation as a dumb, muscular component that takes instructions from the main processor and translates them into pixels for us to gawp at.

    But what if the GPUs in our devices had some buried genius -- abilities that, if only we could tap into them, would yield hyper-realistic experiences and better all-round performance from affordable hardware? Well, the thing is, this hidden potential actually exists. We've been covering it since at least 2008 and, even though it still hasn't generated enough fuss to become truly famous, the semiconductor industry is making more noise about it now than ever before.

    So please, join us after the break as we endeavor to explain why the trend known as "GPU compute," aka "general-purpose GPU (GPGPU) computing," or simply "not patronizing your graphics processor," is still exciting despite having let us down in the past. We'll try to show why it's worth learning a few related concepts and terms to help provide a glossary for future coverage; and why, on the whole, your graphics chip is less Hasselhoff and more Hoffman than you may have imagined.
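
    If you'd like a concrete taste of what "GPU compute" means in practice, below is a minimal CUDA sketch of the classic SAXPY operation (y = a*x + y), where every array element gets its own GPU thread instead of a spot in a CPU loop. The names and launch parameters are our own illustration, not tied to any particular product in this story.

      // SAXPY on the GPU: y = a*x + y, one element per thread.
      #include <cstdio>
      #include <cstdlib>
      #include <cuda_runtime.h>

      __global__ void saxpy(int n, float a, const float* x, float* y)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
          if (i < n) y[i] = a * x[i] + y[i];              // each thread handles one element
      }

      int main()
      {
          const int n = 1 << 20;                          // ~1 million elements
          const size_t bytes = n * sizeof(float);

          float* hx = (float*)malloc(bytes);
          float* hy = (float*)malloc(bytes);
          for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

          float *dx, *dy;                                 // device copies
          cudaMalloc(&dx, bytes);
          cudaMalloc(&dy, bytes);
          cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
          cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

          saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy); // launch ~1M threads
          cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

          printf("y[0] = %.1f (expect 5.0)\n", hy[0]);
          cudaFree(dx); cudaFree(dy); free(hx); free(hy);
          return 0;
      }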

  • NVIDIA GeForce GTX 660 Ti review roundup: impressive performance for around $300

    by Darren Murph
    08.16.2012

    No one's saying that $300 is "cheap," but compared to the GTX 670 and GTX 680 before it, the newly announced GeForce GTX 660 Ti is definitely in a more attainable category. The usual suspects have hashed out their reviews today, with the general consensus being one of satisfaction. A game-changer in the space it's not, but this Kepler-based GPU managed to go toe-to-toe with similarly priced Radeon GPUs while staying relatively power-efficient in the process. That said, AnandTech was quick to point out that, unlike Kepler reviews in the past, the 660 Ti wasn't able to simply blow away the competition; it found the card performing around 10 to 15 percent faster than AMD's 7870, while the 7950 put out roughly the same performance as the card on today's test bench. HotHardware mentioned that NVIDIA does indeed have another winner on its hands, noting that it'd be tough to do better right now for three Benjamins. Per usual, there's plenty of further reading available in the links below for those seriously considering the upgrade.

  • NVIDIA announces $299 GeForce GTX 660 Ti, lets Kepler walk among the people

    by Sharif Sakr
    08.16.2012

    It's taken NVIDIA a mighty long time to squeeze its Kepler GPU into something more affordable than the GTX 670, but it's finally happened -- the mid-range GTX 660 Ti is out and available to purchase for $299 on boards from EVGA, Gigabyte, ASUS and the usual suspects. Some buyers may complain that's $50 more than the 560 Ti, while others will no doubt be reeling off their CVV codes already. For its part, NVIDIA claims the 660 Ti is the "best card per watt ever made" and that it beats even AMD's higher-priced Radeon HD 7950 at 1920 x 1080. Check out the slide deck below for official stats, as well as for examples of what the card can do with its support for DirectX 11 tessellation, PhysX (particularly on Borderlands 2, which you may well find bundled free) and NVIDIA's TXAA anti-aliasing. We'll wait for independent benchmarks in our review round-up before making any judgment, but in the meantime it's fair to say that this 150-watt card comes fully featured. For a start, it has just as many 28nm CUDA cores as the GTX 670, the same base and GPU Boost clock speeds, the same 2GB of GDDR5 and indeed the same connectivity. The only sacrifice is memory bandwidth: all that computational performance is limited by a 192-bit memory bus, compared to the 256-bit width of the 670. Judging from those specs, we'd expect it to be almost 670-like in performance, and that's going to be pretty impressive.

  • NVIDIA unleashes GeForce GTX 690 graphics card, loads it with dual Kepler GPUs, charges $1k

    by Joe Pollicino
    04.29.2012

    Would you look at that? NVIDIA hinted it would be coming today, and it looks like the tease is living up to the hype. The company stormed into the weekend at its Shanghai Game Festival by unleashing its latest offering, the GeForce GTX 690 -- and oh yeah, it's packing two 28nm Kepler GPUs! Trumping the recently released GTX 680 as the "world's fastest graphics card," it's loaded with a whopping 3,072 CUDA cores. The outer frame is made from trivalent chromium-plated aluminum, while you'll find thixomolded magnesium alloy around the fan for vibration reduction and added cooling. Aiding in cooling even further, the unit also sports a dual vapor chamber and center-mounted fan. It'll cost you a spendy $1,000 to pick up one of these puppies come May 3rd, and you'll likely be tempted to double up -- two can run together in SLI as an effective quad-GPU setup. With that said, NVIDIA claims that a single 690 runs 4dB quieter than a duo of GTX 680s in SLI and handles about twice the framerate of a single GTX 680 -- impressive, but we'll reserve judgment until we see it for ourselves. Check out the press release after the break if you'd like more information in the meantime (...and yes, it runs Crysis -- 2 Ultra to be exact -- at 57.8fps, according to NVIDIA). [Thanks to everyone who sent this in]

  • NVIDIA open sources CUDA compiler, shares its LLVM-based love with everyone

    by Michael Gorman
    12.14.2011

    A few years back, Intel prognosticated that NVIDIA's CUDA technology was destined to be a "footnote" in computing history. Since that time, Jen-Hsun Huang's low-level virtual machine (LLVM)-based compiler has more than proven its worth in several supercomputers, and now NVIDIA has released the CUDA compiler source code to further spread the parallel computing gospel. This move opens up the code to be used with more programming languages and processors (x86 or otherwise) than ever before, which the company hopes will spur development of "next-generation higher performance computing platforms." Academics and chosen developers can get their hands on the code by registering with NVIDIA at the source below, so head on down and get started -- petaflop parallel processing supercomputers don't build themselves, you know.
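
    For context, the compiler being opened up here is the piece that turns device code like the sketch below into PTX, NVIDIA's virtual instruction set, which a backend then maps onto a target processor. The kernel, file name and command line are our own illustrative assumptions, not material from NVIDIA's release.

      // A trivially parallel kernel: one array element per thread.
      __global__ void scale(float* data, float factor, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) data[i] *= factor;
      }

      // Inspecting the intermediate output of the LLVM-based front end
      // (the file name is hypothetical):
      //   nvcc -ptx scale.cu -o scale.ptx
      // An open front end lets other languages emit the same PTX, and other
      // backends retarget it to non-NVIDIA processors -- which is the point
      // of the source release.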

  • NVIDIA's Tesla GPU powers Tsubame 2.0 to green supercomputer supremacy

    by Terrence O'Brien
    11.23.2011

    The Green500 might not be quite as well known as the Top500, but it's no less of an honor to be counted among the world's most energy-efficient supercomputers. NVIDIA is tooting its own horn for making it onto the list for the second year in a row as part of the "greenest" petaflop machine. The Tsubame 2.0 at the Tokyo Institute of Technology's Global Scientific Information and Computing Center is powered by Intel's Xeon CPUs, but NVIDIA's Tesla general-purpose GPUs do the vast majority of the number crunching, allowing it to deliver 1.19 petaflops of performance while consuming only 1.2 megawatts. That's roughly 958 megaflops per watt, a huge increase over the most efficient CPU-only supercomputer, the Cray-built Cielo, which gets only 278 megaflops per watt. The Tsubame 2.0 isn't the greenest machine on the planet, though; that honor belongs to IBM's Blue Gene/Q, which takes the top five spots on the Green500. Still, number ten ain't bad... right? Check out the PR after the break.

  • Barcelona readies hybrid ARM-based supercomputer, uses NVIDIA GPUs for heavy lifting

    by Mat Smith
    11.14.2011

    NVIDIA has announced at the SC11 Conference in Seattle this week that it'll be providing CUDA GPUs for the Barcelona Supercomputing Center, with the facility looking to substantially boost its energy efficiency. While the words "low power" and "energy efficiency" are a bit of a buzzkill in the high-octane, high-MFLOP world of supercomputing, the BSC thinks the system will use 15 to 30 times less power than current machines. Dubbed the Mont Blanc Project, it's aiming to multiply those energy savings by four to ten times by 2014. While other supercomputers eat their way through megawatts of the electric stuff, hopefully the drop in power demands won't affect this machine's supercomputing scores.

  • NVIDIA's Kepler GPU still (kinda, sorta) on schedule for 2011 debut

    by Terrence O'Brien
    08.06.2011

    Back in September of last year, NVIDIA pledged that Kepler, the successor to Fermi, would arrive in 2011. Since then, things have been rather quiet on the next-gen GPU front. In fact, rumors have started to circulate that the 28nm-based chip would be pushed back to 2012. Turns out those rumblings aren't entirely inaccurate. While the latest polygon-pushing silicon will start being churned out before it's time to buy a new calendar, final products won't start shipping until next year, a company rep told TechSpot. Kepler's descendant, Maxwell, is still expected to land sometime in either 2013 or 2014, but there's plenty of time for that timetable to slide back a bit, too.

  • NVIDIA announces GeForce GTX 580M and 570M, availability in the Alienware M18x and MSI GT780R (updated: MSI says no)

    by Dana Wollman
    06.28.2011

    We know you're going to be shocked -- shocked! -- to hear this, but NVIDIA's gone and refreshed its high-end line of GeForce GTX cards. The GTX 580M takes the place of the GTX 485M, and NVIDIA's bragging that it's the "fastest notebook GPU ever," capable, we're told, of besting the Radeon HD 6970M's tessellation performance by a factor of six. The new GTX 570M, meanwhile, promises a 20 percent speed boost over the last-generation 470M. Both 40-nanometer cards support DirectX 11, OpenCL, PhysX, CUDA, 3D Vision, Verde drivers, Optimus, SLI, and 3DTV Play. As for battery life, NVIDIA's saying that when coupled with its Optimus graphics switching technology, the 580M can last through five hours of Facebook, but last we checked, that's not why y'all are shelling out thousands for beastly gaming rigs. You can find the 580M in the Alienware M17x and M18x (pictured) starting today, though you might have to wait a week or so for them to ship. Meanwhile, the 570M is shipping in the MSI GT780R as you read this, and you'll also find the 580M in a pair of 3D-capable Clevo laptops: the P170HM3 and the SLI-equipped P270WN. Handy chart full o' technical details after the break.

    Update: An MSI rep has let us know that, contrary to earlier reports, the GT780R is not currently available with the 570M graphics card. The company added that it will offer some unspecified laptop with the 570M sometime in the "near" future. It's unclear if that laptop will, in fact, be the GT780R.

  • KFA2 NVIDIA GeForce GTX 460 WHDI graphics card is first to go wireless

    by Thomas Ricker
    01.14.2011

    What you're looking at is the world's first wireless graphics card, affectionately dubbed the KFA2 (aka Galaxy) GeForce GTX 460 WHDI 1024MB PCIe 2.0. The card uses five aerials to stream uncompressed 1080p video from your PC to your WHDI-enabled television (or any display, courtesy of the bundled 5GHz WHDI receiver) at a range of about 100 feet. Otherwise, it's the same mid-range GTX 460 card we've seen universally lauded, with 1024MB of onboard RAM helping to make the most of its 336 CUDA cores. Insane, yes, but we'd accept nothing less from our beloved graphics card manufacturers.

  • NVIDIA GeForce GTX 580 reviewed: 'what the GTX 480 should have been'

    by Vlad Savov
    11.09.2010

    You saw the key specs slip out a little ahead of time, and now it's the moment we've all been waiting for: the GeForce GTX 580 has been thoroughly benchmarked to see if its claim to being "the world's fastest DirectX 11 GPU" stands up to scrutiny. In short, yes it does. The unanimous conclusion reached among the reviewers was that the 580 cranks up performance markedly relative to the GTX 480 -- with some citing gains between 10 and 20 percent and others finding up to 30 percent improvements -- while power draw, heat emissions, and noise were lowered across the board. AMD's Radeon HD 5870 wasn't completely crushed by the newcomer, but it was consistently behind NVIDIA's latest pixel pusher. Priced at $499 (though current online prices are closer to $550), the GTX 580 is actually praised for offering good value, though its TDP of 244W might still require you to upgrade a few parts inside your rig to accommodate it. Anyhow, the pretty comparative bar charts await at the links below.

    Read - HardOCP
    Read - Tech Report
    Read - Legit Reviews
    Read - Bit-tech
    Read - PC Perspective
    Read - Hot Hardware

  • NVIDIA GeForce GTX 580 detailed: 512 CUDA cores, 1.5GB of GDDR5 on 'world's fastest DX 11 GPU' (update: video!)

    by Vlad Savov
    11.08.2010

    It might not be November 9 all around the world yet, but NVIDIA's GeForce GTX 580 has already had its spec sheet dished out to the world, courtesy of CyberPower's seemingly early announcement. The new chip will offer a 772MHz clock speed, 512 processing cores, and 192.4GBps of memory bandwidth, thanks to 1.5GB of GDDR5 clocked at an effective rate of 4GHz. CyberPower is strapping this beast into its finest rigs, and for additional overkill it'll let you SLI up to three of them within one hot and steamy case. Now let's just wait patiently for midnight to roll around and see what the reviewers thought of NVIDIA's next big thing.

    Update: CRN has a $499 price for us and a recital of NVIDIA's internal estimate that the GTX 580 bests the GTX 480 by between 20 and 35 percent. It seems, however, that the embargo for this hot new slice of silicon is set for early tomorrow morning, so check back then for the expert review roundup.

    Update 2: Lusting to see one on video? How about two side by side? Skip past the break for the eye candy. [Thanks, Rolly Carlos!]
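
    As a quick sanity check on that 192.4GBps figure: GDDR5 bandwidth is just the bus width in bytes multiplied by the effective transfer rate. The 384-bit bus and the 4008MT/s effective memory clock are published GTX 580 specs we're assuming here; the blurb above rounds the latter to 4GHz.

      // Theoretical memory bandwidth of the GTX 580 from its published specs.
      #include <cstdio>

      int main() {
          const double bus_width_bits     = 384;    // assumed spec: GTX 580 memory bus
          const double effective_gtps     = 4.008;  // GDDR5 effective rate in GT/s (quoted as "4GHz")
          const double bytes_per_transfer = bus_width_bits / 8.0;

          double gbps = bytes_per_transfer * effective_gtps;          // 48 bytes * 4.008 GT/s
          printf("Theoretical memory bandwidth: %.1f GB/s\n", gbps);  // ~192.4 GB/s
          return 0;
      }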

  • NVIDIA brings Fermi to the entry-level professionals with Quadro 600 and 2000 GPUs

    by Darren Murph
    10.05.2010

    NVIDIA's Fermi architecture has been around the block a time or two in the consumer universe, but it's making its way into the company's pro line today with the introduction of the entry-level Quadro 600 and mid-range Quadro 2000. Boasting 96 and 192 CUDA processor cores, respectively, these guys utilize the new Scalable Geometry Engine technology to "deliver dramatically higher performance across leading CAD and DCC applications such as SolidWorks and Autodesk 3ds Max." More interesting still, however, is the design of the Quadro 600 -- it touts a half-height form factor that can be crammed into just about anything. Oh, and both of these boards have 1GB of graphics memory and are compatible with 3D Vision Pro -- you know, in case you need a round of Avatar between research projects. The pair is available now in North America for $199 and $599, in order of mention, with plenty more of the nitty-gritty awaiting you beyond the break.

  • ElcomSoft turns your laptop into a one-touch WiFi cracking system

    by Thomas Ricker
    09.25.2010

    It's been a few years since we checked in with ElcomSoft's Wireless Security Auditor WiFi cracking software. As you'd expect, things have become easier -- much easier. ElcomSoft now has an all-in-one solution that will locate wireless networks, intercept data packets, and crack WPA/WPA2 PSK passwords from any modern laptop with a discrete AMD or NVIDIA graphics card. Here's the quote IT nerds will surely love: "Today, ElcomSoft is integrating a wireless sniffer into Elcomsoft Wireless Security Auditor. The integrated sniffer turns Elcomsoft Wireless Security Auditor into a one-button, all-in-one solution ready to be used by corporate security officers without specific experience in information security." Call us crazy, but if you're a C-level security officer with no specific information security experience, then maybe you shouldn't be sniffing people's data packets. Then again, we're sure ElcomSoft will happily sell its $1,199 pro software or $399 standard edition to any hacker willing to pay, white hat or not.

  • NVIDIA teams with PGI for CUDA-x86, gifts its brand of parallelism to the world

    by Sean Hollister
    09.21.2010

    NVIDIA's GPU Technology Conference 2010 just kicked off in San Jose, and CEO Jen-Hsun Huang has shared something interesting with us on stage -- thanks to a partnership with The Portland Group, it's bringing the CUDA parallel computing framework to x86. Previously limited to NVIDIA GPUs -- and the lynchpin of NVIDIA's argument for GPGPU computing -- CUDA applications will now run on "any computer, or any server in the world." Except those based on ARM, we suppose. Still no word on NVIDIA's x86 CPU.

  • NVIDIA GTX 470M highlights rollout of 400M mobile GPU series

    by Vlad Savov
    09.03.2010

    Not everybody needs the world's fastest mobile GPU, so NVIDIA is sagely trickling down its Fermi magic to more affordable price points today. The 400M family is being fleshed out with five new midrange parts -- GT 445M, GT 435M, GT 425M, GT 420M and GT 415M, to give them their gorgeous names -- and a pair of heavy hitters known as the GTX 470M and GTX 460M. Features shared across the new range include a 40nm fab process, DirectX 11, CUDA general-purpose computing skills, PhysX, and Optimus graphics switching. 3D Vision and 3DTV Play support will be available on all but the lowest two variants. NVIDIA claims that, on average, the 400M graphics cards are 40 percent faster than their 300M series counterparts, and since those were rebadges of the 200M series, we're most definitely willing to believe that assertion. Skip past the break for all the vital statistics, and look out for almost all of the big-time laptop vendors (HP is a notable absentee, while Apple is a predictable one) to have gear bearing the 4xxM insignia soon.