supercomputer

Latest

  • Stanford seizes 1 million processing cores to study supersonic noise

    by Zachary Lutz
    01.29.2013

    In short order, the Sequoia supercomputer and its 1.57 million processing cores will transition to a life of top-secret analysis at the National Nuclear Security Administration, but until that day comes, researchers are currently working to ensure its seamless operation. Most recently, a team from Stanford took the helm of Sequoia to run computational fluid dynamics simulations -- a process that requires a finely tuned balance of computation, memory and communication components -- in order to better understand engine noise from supersonic jets. As an encouraging sign, the team was able to successfully push the CFD simulation beyond 1 million cores, which is a first of its kind and bodes very well for the scalability of the system. This and other tests are currently being performed on Sequoia as part of its "shakeout" period, which allows its caretakers to better understand the capabilities of the IBM BlueGene/Q computer. Should all go well, Sequoia is scheduled to begin a life of government work in March. In the meantime, you'll find a couple views of the setup after the break.
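
    The post doesn't give the team's scaling data; as a rough illustration of why pushing a CFD code past a million cores is notable, Amdahl's law shows how even a sliver of serial or communication-bound work caps the achievable speedup. The parallel fractions below are made-up values for illustration, not Stanford's measurements.

        # Rough illustration (not the Stanford team's data): Amdahl's law shows why
        # even a tiny serial or communication fraction matters at a million cores.
        def amdahl_speedup(parallel_fraction, cores):
            return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

        for p in (0.999, 0.99999, 0.9999999):
            print(f"parallel fraction {p}: speedup on 1,000,000 cores = "
                  f"{amdahl_speedup(p, 1_000_000):,.0f}x")
        # Even with 99.9% of the work parallelized, the speedup tops out near 1,000x,
        # which is why balancing computation, memory and communication is the hard part.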

  • IBM supercomputer simulates 530 billion neurons and a whole lot of synapses

    by Mat Smith
    11.20.2012

    IBM Research, in collaboration with DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program, has reached another brain simulation milestone. Running its new TrueNorth system on the world's second fastest supercomputer, IBM was able to simulate 2.084 billion neurosynaptic cores and 100 trillion synapses -- all at a speed "only" 1,542 times slower than real life. The abstract explains that this isn't a biologically realistic simulation of the human brain, but rather a mathematically abstracted -- and a little more dour -- version steered towards maximizing function and minimizing cost. DARPA's SyNAPSE project aims to tie together supercomputing, neuroscience and neurotech for a future cognitive computing architecture far beyond what's running behind your PC screen at the moment. Want to know more? We've included IBM's video explanation of cognitive computing after the break.
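
    For a sense of scale, the headline's 530 billion neurons line up with the core count if each neurosynaptic core hosts 256 neurons -- the figure from IBM's published TrueNorth core design, which is an assumption here rather than something stated in the post.

        # Back-of-the-envelope check; the 256-neurons-per-core figure is an assumption
        # based on IBM's published TrueNorth core design, not stated in this post.
        cores = 2.084e9          # neurosynaptic cores simulated
        neurons_per_core = 256   # assumed, per TrueNorth's core layout
        synapses = 100e12        # synapses simulated

        neurons = cores * neurons_per_core
        print(f"neurons: {neurons:.3g}")                         # ~5.3e11, i.e. ~530 billion
        print(f"synapses per neuron: {synapses / neurons:.0f}")  # ~190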

  • Titan supercomputer leads latest Top 500 list, newly available Xeon Phi chip cracks top ten

    by Donald Melanson
    11.12.2012

    The supercomputer formerly known as Jaguar recently got an upgrade that was significant enough to earn it a new moniker, and it turns out that was also enough for it to claim the top spot on the latest Top 500 list of the world's most powerful supercomputers. Now known as Titan, the Cray-developed supercomputer at the Oak Ridge National Laboratory edged out the Lawrence Livermore National Laboratory's Sequoia supercomputer for the number one position, reaching 17.59 petaflops with the aid of 18,688 NVIDIA K20 GPUs and an equal number of AMD Opteron processors. As EE Times notes, however, the other big story with this list is the strong showing for Intel's new Xeon Phi co-processors, which have just started shipping to customers and have already found their way into seven of the supercomputers on the list, including one in the top ten (the Stampede at the Texas Advanced Computing Center at the University of Texas). You can see how your favorite supercomputer did at the link below.

  • Cray unleashes 100 petaflop XC30 supercomputer with up to a million Intel Xeon cores

    by Steve Dent
    11.08.2012

    Cray has just fired a nuclear salvo in the supercomputer wars with the launch of its XC30, a 100 petaflop-capable brute that can scale up to one million cores. Developed in conjunction with DARPA, the Cascade-codenamed system uses a new interconnect architecture called Aries alongside Intel Xeon E5-2600 processors to easily leapfrog its recent Titan sibling, the previous speed champ. That puts Cray well ahead of rivals like China's planned Tianhe-2, and the company will aim to keep that edge by supercharging future versions with Intel Xeon Phi coprocessors and NVIDIA Tesla GPUs. High-end research centers have placed $100 million worth of orders so far (though oddly, DARPA isn't one of them yet), and units are already shipping in limited numbers -- likely by the eighteen-wheeler-full, from the looks of it.
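
    For a rough sense of where the 100 petaflop figure comes from, Xeon E5-2600 (Sandy Bridge) cores retire eight double-precision flops per cycle with AVX, so a million of them alone fall well short of the headline number -- consistent with the accelerator roadmap mentioned above. The 2.6GHz clock below is an assumed typical value, not a figure from the post.

        # Rough estimate, not Cray's published math: peak DP flops from Xeon cores alone.
        cores = 1_000_000
        clock_ghz = 2.6          # typical Xeon E5-2600 base clock (assumed)
        flops_per_cycle = 8      # AVX double precision on Sandy Bridge

        peak_pflops = cores * clock_ghz * 1e9 * flops_per_cycle / 1e15
        print(f"Xeon-only peak: ~{peak_pflops:.0f} petaflops")  # ~21 PF
        # The remaining ground to 100 PF is what the Xeon Phi and Tesla upgrades are for.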

  • China's Tianhe-2 supercomputer could hit 100 petaflops in 2015, may have a race on its hands

    by Jon Fingas
    11.01.2012

    China's supercomputer development is as much driven by national reputation as by military prowess and science; the country chose to build the Sunway BlueLight MPP with domestic chips knowing that it wouldn't get the absolute performance crown. It won't be quite so modest the next time around. China's National University of Defense Technology wants the Tianhe-2 supercomputer due in 2015 to crack an extremely high 100 petaflops, or five times faster than the record-setting Titan over in the US and a whopping 40 times faster than the Tianhe-1A. Before we hand the crown over, though, Top 500 supercomputer chart keeper Jack Dongarra tells ITworld that China might have to sprint if it wants the symbolic title: the EU, Japan and US are all striving for the same benchmark, and they're not backing off anytime soon. The nation's trump card may have to be long-term plans for an exaflop-strength supercomputer by 2018, at which point we suspect the bragging will simmer down. For a while.
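
    A quick check of those multipliers, using the Linpack figures from the Top 500 list of the time (our assumption about which benchmark the comparison uses):

        # Quick check of the multipliers, using Linpack numbers from the Top 500 list
        # of the era (Titan ~17.59 PF, Tianhe-1A ~2.57 PF) -- assumptions, since the
        # post doesn't say which benchmark the comparison uses.
        target = 100.0   # petaflops
        titan = 17.59
        tianhe_1a = 2.57

        print(f"vs Titan:     {target / titan:.1f}x")      # ~5.7x, "five times faster"
        print(f"vs Tianhe-1A: {target / tianhe_1a:.1f}x")  # ~38.9x, "40 times faster"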

  • Cray's Jaguar supercomputer upgraded with NVIDIA Tesla GPUs, renamed Titan

    by Alexis Santos
    10.29.2012

    Cray's Jaguar (or XK7) supercomputer at Oak Ridge National Laboratory has been loaded up with the first shipping NVIDIA Tesla K20 GPUs and renamed Titan. Packing 18,688 of the Kepler-based K20s, Titan boasts a peak performance of more than 20 petaflops. Sure, the machine has an equal number of 16-core AMD Opteron 6274 processors as it does GPUs, but the Tesla hardware packs 90 percent of the entire processing punch. Titan is roughly ten times faster and five times more energy efficient than it was before the name change, yet it fits into the same 200 cabinets as its predecessor. Now that it's complete, the rig will analyze data and create simulations for scientific projects ranging from climate change to nuclear energy. The hardware behind Titan isn't meant to power your gaming sessions, but NVIDIA says lessons learned from supercomputer GPU development trickle back down to consumer-grade cards. For the full lowdown on the beefed-up supercomputer, hit the jump for a pair of press releases.
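
    A back-of-the-envelope split of that peak figure (treating "more than 20 petaflops" as roughly 20 for the arithmetic) shows what the 90 percent claim implies per GPU:

        # Rough split of Titan's quoted peak between GPUs and CPUs; the 20 PF figure
        # is an assumption, since the post only says "more than 20 petaflops".
        peak_pflops = 20.0
        gpu_share = 0.90          # "90 percent of the entire processing punch"
        gpus = 18_688

        per_gpu_tflops = peak_pflops * 1e3 * gpu_share / gpus
        print(f"~{per_gpu_tflops:.1f} TF per K20")   # ~1.0 TF, roughly in line with a
                                                     # Kepler Tesla card's DP peak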

  • Alt-week 10.6.12: supercomputers on the moon, hear the Earth sing and the future of sports commentary

    by James Trew
    10.06.2012

    Alt-week peels back the covers on some of the more curious sci-tech stories from the last seven days. Normally we try to encourage you to join us around the warm alt-week campfire by teasing you about what diverse and exotic internet nuggets we have for you inside. Sadly, this week that's not the case. There's nothing for you here, we're afraid. Not unless you like totally mind-blowing space videos, singing planets and AI / sports commentary-flavored cocktails, that is. Oh, you do? Well what do you know! Come on in... this is alt-week.

  • Insert Coin: The Parallella project dreams of $99 supercomputers

    by Jamie Rigg
    09.28.2012

    In Insert Coin, we look at an exciting new tech project that requires funding before it can hit production. If you'd like to pitch a project, please send us a tip with "Insert Coin" as the subject line. Parallel computing is normally reserved for supercomputers way out of the reach of average users -- at least at the moment, anyway. Adapteva wants to challenge that with its Parallella project, designed to bring mouth-watering power to a board similar in size to the Raspberry Pi for as little as $99. It hopes to deliver up to 45GHz of combined clock speed using its Epiphany multicore accelerators, which, crucially, chug only 5 watts of juice under normal conditions. These goliath speeds currently mean high costs, which is why Adapteva needs your funds to move out of the prototype stage and start cheap mass production. Specs for the board are as follows: a dual-core ARM A9 CPU running Ubuntu as standard, 1GB of RAM, a microSD slot, two USB 2.0 ports, HDMI, Ethernet and a 16- or 64-core accelerator, with each core housing a 1GHz RISC processor, all linked "within a single shared memory architecture."

    An overriding theme of the Parallella project is the openness of the platform. When finalized, the full board design will be released, and each one will ship with free, open-source development tools and runtime libraries. In addition, full architecture and SDK documentation will be published online if and when the Kickstarter project reaches its funding goal of $750,000. That's pretty ambitious, but we're reminded of another crowd-funded venture which completely destroyed an even larger target. However, that sum will only be enough for Adapteva to produce the 16-core board, which reportedly hits 13GHz and 26 gigaflops, and is expected to set you back a measly $99. A speculative $3 million upper goal has been set for work to begin on the $199 64-core version, topping out at 45GHz and 90 gigaflops.

    Pledge options range from $99 to $5,000-plus, distinguished mainly by how soon you'll get your hands on one. Big spenders will also be the first to receive a 64-core board when they become available. Adapteva's Andreas Olofsson talks through the Parallella project in a video after the break, but if you're already sold on the tiny supercomputer, head over to the source link to contribute before the October 27th closing date.
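
    Those aggregate gigahertz figures square with the gigaflop claims if each Epiphany core retires two floating-point operations per cycle (one fused multiply-add) -- an assumption on our part, not something stated above:

        # Sanity check of Adapteva's numbers; the 2-flops-per-cycle figure (one fused
        # multiply-add per core per clock) is assumed, not stated in the post.
        flops_per_cycle = 2

        for cores, aggregate_ghz in ((16, 13), (64, 45)):
            gflops = aggregate_ghz * flops_per_cycle
            per_core_mhz = aggregate_ghz / cores * 1000
            print(f"{cores}-core board: ~{gflops} gigaflops "
                  f"(~{per_core_mhz:.0f} MHz per core)")
        # 16-core: ~26 GFLOPS; 64-core: ~90 GFLOPS, matching the figures quoted above.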

  • IBM's Mira supercomputer tasked with simulating an entire universe in a fortnight

    by Daniel Cooper
    09.26.2012

    A universe that only exists in the mind of a supercomputer sounds a little far-fetched, but one is going to come to life at Argonne National Laboratory in October. A team of cosmologists is using IBM's Blue Gene/Q "Mira" supercomputer, the third fastest in the world, to run a simulation through the first 13 billion years after the big bang. It'll work by tracking the movement of trillions of particles as they collide and interact with each other, forming structures that could then transform into galaxies. As the project's only scheduled to last a fortnight, we're hoping it doesn't create any sentient characters clamoring for extra life -- we've seen Blade Runner enough times to know it won't end well.
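
    Tracking particles that pull on one another gravitationally is an N-body problem; the sketch below is a deliberately tiny direct-summation version, purely illustrative -- the actual Mira simulation relies on far more scalable tree and particle-mesh methods.

        import numpy as np

        # Toy direct-summation N-body step -- a minimal sketch, not the Mira code.
        def gravity_step(pos, vel, mass, dt, G=1.0, softening=1e-2):
            # pairwise displacement vectors: delta[i, j] = pos[j] - pos[i]
            delta = pos[None, :, :] - pos[:, None, :]
            dist2 = (delta ** 2).sum(-1) + softening ** 2
            inv_dist3 = dist2 ** -1.5
            np.fill_diagonal(inv_dist3, 0.0)   # no self-interaction
            acc = G * (delta * (mass[None, :, None] * inv_dist3[:, :, None])).sum(axis=1)
            vel = vel + acc * dt
            pos = pos + vel * dt
            return pos, vel

        rng = np.random.default_rng(0)
        n = 1_000                              # trillions on Mira; a thousand here
        pos = rng.normal(size=(n, 3))
        vel = np.zeros((n, 3))
        mass = np.full(n, 1.0 / n)
        pos, vel = gravity_step(pos, vel, mass, dt=1e-3)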

  • Supercomputer built from Raspberry Pi and Lego, managed by humans rather than Minifigs

    by Jon Fingas
    09.13.2012

    If you're a computational engineer, there's no question about what you do with the Raspberry Pi: you make a supercomputer cluster. Researchers at the University of Southampton have followed their instincts and built Iridis-Pi, a tiny 64-node cluster based on the Raspberry Pi's usual Debian Wheezy distribution and linked through Ethernet. While no one would mistake any one Raspberry Pi for a powerhouse, the sheer number of networked devices gives the design both some computing grunt and 1TB worth of storage in SD cards. Going so small also leads to some truly uncommon rackmounting -- team lead Simon Cox and his son James grouped the entire array in two towers of Lego, which likely makes it the most adorable compute cluster you'll ever see. There are instructions to help build your own Iridis-Pi at the source link, and the best part is that it won't require a university-level budget to run. Crafting the exact system you see here costs under £2,500 ($4,026), or less than a grown-up supercomputer's energy bill.
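
    The post doesn't include the cluster's software, but a 64-node Pi cluster like this is built for MPI-style jobs; here's a minimal sketch of that sort of workload (a Monte Carlo estimate of pi with mpi4py -- our example, not the Southampton team's code):

        # Minimal MPI sketch of the kind of job a 64-node Pi cluster runs; the actual
        # Iridis-Pi software stack isn't detailed in the post.
        # Run with e.g.:  mpiexec -n 64 python pi_estimate.py
        import random
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        samples_per_node = 1_000_000
        hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                   for _ in range(samples_per_node))

        total_hits = comm.reduce(hits, op=MPI.SUM, root=0)
        if rank == 0:
            print("pi ~", 4.0 * total_hits / (samples_per_node * size))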

  • IBM pushing System z, Power7+ chips as high as 5.5GHz, mainframes get mightier

    by Jon Fingas
    08.04.2012

    Ten-core, 2.4GHz Xeons? Pshaw. IBM is used to the kind of clock speeds and brute force power that lead to Europe-dominating supercomputers. Big Blue has no intentions of letting its guard down when it unveils its next generation processors at the upcoming Hot Chips conference: the company is teasing that the "zNext" chip at the heart of a future System z mainframe will ramp up to 5.5GHz -- that's faster than the still-speedy 5.2GHz z196 that has led IBM's pack since 2010. For those who don't need quite that big a sledgehammer, the technology veteran is hinting that its upcoming Power7+ processors will be up to 20 percent faster than the long-serving Power7, whose current 4.14GHz peak clock rate may seem quaint. We'll know just how much those extra cycles mean when IBM takes to the conference podium on August 29th, but it's safe to say that our databases and large-scale simulations won't know what hit them.

  • IBM's water-cooled supercomputer saves energy and helps with your heating bill (video)

    by Daniel Cooper
    06.19.2012

    IBM's SuperMUC has had a good week. Not only has the three-petaflop machine been listed as Europe's fastest supercomputer, but it's also apparently the first high-performance computer to be entirely water-cooled. Rather than filling rooms with air conditioning units, water is piped through veins in each component, removing heat 4,000 times more efficiently than air. The hot water is then used to heat the buildings of the Leibniz Supercomputing Centre where it lives, saving the facility $1.25 million per year. After the break we've got a video from Big Blue, unfortunately narrated by someone who's never learned how to pronounce the word "innovative."
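
    That "4,000 times" figure is roughly what you get by comparing the volumetric heat capacity of water with that of air -- a plausible basis for the claim, though IBM doesn't spell out its math, so treat the comparison below as our guess:

        # Rough check of the "4,000 times" claim using volumetric heat capacities;
        # the assumption that this is the basis of IBM's figure is ours.
        water = 4.18e6   # J per cubic metre per kelvin (approx.)
        air = 1.2e3      # J per cubic metre per kelvin at room conditions (approx.)
        print(f"water carries ~{water / air:,.0f}x more heat per unit volume")  # ~3,500x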

  • NNSA Sequoia supercomputer takes world's fastest title, prevents nuclear testing

    by Sean Buckley
    06.18.2012

    Fujitsu's 10.51 petaflop K supercomputer is pretty fast, but does it pack enough computational oomph to stave off underground nuclear testing? Probably -- but the NNSA's new sixteen-petaflop rig does it better. According to the National Nuclear Security Administration, a supercomputer at Lawrence Livermore National Laboratory, dubbed Sequoia, is now the fastest supercomputer on the planet, clocking in at 16.32 sustained petaflops. "Sequoia will provide a more complete understanding of weapons performance, notably hydrodynamics and properties of materials at extreme pressures and temperatures," says NNSA Director of Advanced Simulation and Computing Bob Meisner, explaining that supercomputer simulations will "support the effort to extend the life of aging weapons systems." Translation? Sequoia will help the NNSA keep the US' nuclear stockpile stable without resorting to nuclear testing, or, put simply: more computational power, fewer explosions. We can't think of a better thing to do with 98,304 compute nodes, 1.6 million cores and 1.6 petabytes of memory spread across 96 racks -- can you? Check out the official press release after the break.
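
    Those totals divide out neatly: the quoted core and memory counts come to about 16 cores and 16GB per compute node, which is exactly how a BlueGene/Q node is built.

        # Dividing the quoted totals by the node count (no assumptions beyond the
        # figures in the post itself).
        nodes = 98_304
        cores = 1.6e6
        memory_pb = 1.6

        print(f"cores per node:  ~{cores / nodes:.0f}")               # ~16
        print(f"memory per node: ~{memory_pb * 1e6 / nodes:.1f} GB")  # ~16 GB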

  • Supercomputer gets a memory boost with 380 petabytes of magnetic tape

    by James Trew
    05.25.2012

    Remember the Cray XK6 at the University of Illinois that drives the National Science Foundation's Blue Waters project? Well, it looks like it's getting a little memory upgrade, sorta. We're not talking a slick new SSD here, or even a sweet NAS, all that computational power requires nothing less than... tape. Okay, so it's actually a full storage infrastructure, and some of it -- 25 petabytes no less -- will be disk-based. The rest -- a not insignificant 380 petabytes -- will be the good old magnetic stuff. The idea is that the disk part will be used for instant access, with the tape section serving as "nearline" storage -- something between an archive and an online solution. Spectra Logic is providing the tape, and says it'll take a couple of years to implement the whole lot. Once complete, the system will support the supercomputer's lofty tasks, such as understanding how the cosmos evolved after the Big Bang and, y'know, designing new materials at the atomic level. And we thought we were excited about our next desktop.
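
    For the curious, a "nearline" tier simply holds data off spinning disk while keeping it recallable automatically on demand; the toy rule below is purely our illustration of the idea, not Spectra Logic's actual software:

        import time

        # Purely illustrative age-based tiering rule -- a sketch of what "nearline"
        # means, not the actual Blue Waters storage software.
        DISK_TIER, TAPE_TIER = "disk", "tape"
        STALE_AFTER = 30 * 24 * 3600   # migrate data untouched for 30 days (assumed)

        def choose_tier(last_access_ts, now=None):
            now = now or time.time()
            return TAPE_TIER if now - last_access_ts > STALE_AFTER else DISK_TIER

        print(choose_tier(time.time() - 90 * 24 * 3600))   # "tape"
        print(choose_tier(time.time()))                    # "disk"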

  • NVIDIA outs a pair of Tesla GPUs to electrify your supercomputer

    by Daniel Cooper
    05.16.2012

    NVIDIA's announced a pair of Tesla GPUs that'll give some extra pep to your supercomputing tasks. The K10 and K20 units harness the power of Kepler to add more muscle to the company's scientific and technical computing arm, which supplies gear to the Barcelona Supercomputing Center and Tokyo's Tsubame 2.0. Internal tests reveal that the hardware is around three times faster than the company's Fermi GPUs, with the K20 expected to arrive at the end of the year. The company didn't announce pricing, since it's aiming them squarely at big academic institutions, defense contractors and oil explorers -- but if your surname is Buffett or Abramovich, then they might sell you one at trade.

  • IBM's Holey Optochip transmits 1Tbps of data, is named awesomely

    by Darren Murph
    03.12.2012

    Be honest: was there any doubt whatsoever that something called a "Holey Optochip" would be anything short of mind-blowing? No. None. The whiz-kids over at IBM have somehow managed to transmit a staggering 1Tbps of data over a new optical chip, with the fresh prototype showing promise for ultra-high interconnect bandwidth to power future supercomputer and data center applications. For those who'd rather not deal with esoteric descriptions, that's around 500 HD movies being transferred each second, and it's enough to transfer the entire U.S. Library of Congress web archive in just 60 minutes. Needless to say, it's light pulses taking charge here, and researchers are currently hunting for ways to make use of optical signals within standard low-cost, high-volume chip manufacturing techniques. Getting the feeling that your own personal supercomputer is just a year or two away? Hate to burst your bubble, but IBM's been touting similar achievements since at least 2008. Actually, scratch that -- where there's hope, there's Holey.
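
    In more familiar units -- assuming the usual eight bits to the byte and ignoring any line-coding overhead -- a terabit per second is about 125GB every second, or roughly 450TB an hour:

        # Straight unit conversion of the 1Tbps figure; ignores any encoding overhead.
        tbps = 1.0
        gigabytes_per_second = tbps * 1e12 / 8 / 1e9
        terabytes_per_hour = gigabytes_per_second * 3600 / 1e3
        print(f"~{gigabytes_per_second:.0f} GB/s, ~{terabytes_per_hour:.0f} TB/hour")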

  • Cha-ching! IBM's Watson heads to Citigroup to meddle in human finances

    by Christopher Trout
    03.06.2012

    Watson's been a busy supercomputer since it took a couple of humans to school on Jeopardy last year -- what with its stint at Columbia and a recent foray into hunting patent trolls -- and now it's taking on the financial industry. IBM and Citigroup recently announced plans to explore how America's favorite supercomputer fits into the realm of digital banking. Under the agreement, Citi will examine Watson's ability to "help analyze customer needs and process vast amounts of up-to-the-minute financial, economic, product and client data," in the hopes of providing rapid, personalized banking solutions. According to Bloomberg, Watson's financial assistance will be provided as a "cloud-based service" and will earn IBM a portion of the revenue and savings it helps generate. The full press release (which makes no mention of a vacation for the overworked machine) can be found after the break.

  • Eyes-on the innards of Fujitsu's K supercomputer (updated)

    by Michael Gorman
    01.25.2012

    Fujitsu's K supercomputer was on our radar before it was even completed, and naturally, we let you know when it smoked the competition and became the supercomputing speed king. So, when we had the opportunity to see a piece of K at Fujitsu's North America Technology Forum today, we couldn't pass it up. In case you forgot, K is a massive machine powered by 864 racks, each housing 24 boards of SPARC64 CPUs. We got to see one of those boards, and Yuichiro Ajima -- who designed the interconnect chips (ICC) on them -- was gracious enough to give us some more info on this most super of supercomputers. As you can see in the gallery above, each board has extensive plumbing to keep the SPARC silicon running at a manageable 32 - 35 degrees Celsius (90 - 95 Fahrenheit) under load. Underneath that copper cooling system lie four processors interspersed between 32 memory modules (2GB per module) and four ICCs lined up next to the board's rack interconnect ports. Currently, the system takes 30 megawatts to do its thing, though Ajima informed us that K's theoretical max electricity consumption is about double that -- for perspective, that means K could consume the entire output of some solar power plants. When asked if there were plans to add more racks should Fujitsu's supercomputer lose its crown, Ajima-san said that while possible, there are no plans to do so -- we'll see if that changes should a worthy opponent present itself.

    Update: Turns out K's power consumption sits around 13 megawatts, with a max consumption of 16MW in its current configuration. The facility in Kobe, Japan where K resides can deliver up to 24 megawatts, so expansion is possible, but none is currently planned.
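
    Multiplying out the figures above gives the machine's totals: 864 racks of 24 boards with four processors apiece is 82,944 SPARC64 CPUs, and 32 modules of 2GB each is 64GB of memory per board, or 16GB per processor.

        # Totals implied by the figures in the post (no outside assumptions).
        racks, boards_per_rack, cpus_per_board = 864, 24, 4
        modules_per_board, gb_per_module = 32, 2

        cpus = racks * boards_per_rack * cpus_per_board
        print(f"CPUs: {cpus:,}")                                   # 82,944
        print(f"memory per board: {modules_per_board * gb_per_module} GB "
              f"({modules_per_board * gb_per_module // cpus_per_board} GB per CPU)")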

  • T-Platforms to build ten petaflop supercomputer for Moscow State University

    by Chris Barylick
    12.26.2011

    In post-Soviet Russia, massive supercomputer programs you. (Sorry, we had to.) Recently, Russia's Moscow State University contracted with high-performance computing company T-Platforms to create a ten petaflop cluster that'll be operational in 2013. The computer would fall just short of the fastest supercomputer on Earth (the Japanese K Computer, which is rated at 10.51 petaflops) and will incorporate a mixture of different node types to achieve the ten petaflops. T-Platforms will reportedly build the nodes from Sandy Bridge or Ivy Bridge Xeon processors and NVIDIA's next-generation Kepler GPU coprocessors, and Intel's Many Integrated Core (MIC) architecture could also be included if it's available during construction. The reason for the project? Unknown officially, but we're guessing it's just another reason for Putin to rip his shirt off and celebrate.

  • VT nears completion of HokieSpeed, world's 96th most powerful supercomputer

    by Zachary Lutz
    12.23.2011

    If basking in the presence of a powerful supercomputer is on your list of "must-haves" when selecting a proper university, then you may wish to fire off an admissions application to the Hokies at Virginia Tech. The school's HokieSpeed system, now in its final stages of testing, combines 209 separate computers, each powered by dual six-core Xeon E5645 CPUs and two NVIDIA M2050 / C2050 448-core GPUs, for a single-precision peak processing capability of 455 teraflops. To put things in perspective, HokieSpeed is now the 96th most powerful computer in the world, and yet it was built for a mere $1.4 million in loose change -- the majority of which came from a National Science Foundation grant. As a further claim to fame, HokieSpeed is the 11th most energy-efficient supercomputer in the world. Coming soon, the system will drive a 14-foot-wide by four-foot-tall visualization wall consisting of eight 46-inch Samsung 3D televisions humming in unison. After all, with virtually limitless potential, these scientists will need a fitting backdrop for all those Skyrim sessions. The full PR follows the break, complete with commentary from the system's mastermind, Professor Wu Feng.
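
    That 455 teraflop figure works out to a little under 2.2 single-precision teraflops per node, consistent with two Fermi-class Tesla cards at roughly a teraflop of single-precision peak apiece plus a modest contribution from the Xeons -- the per-card number is our assumption, not something quoted in the post.

        # Rough per-node breakdown; the ~1.03 TF single-precision peak per Fermi Tesla
        # card is an assumption based on NVIDIA's published specs, not the post.
        nodes = 209
        total_sp_tflops = 455
        gpu_sp_tflops = 1.03

        per_node = total_sp_tflops / nodes
        print(f"~{per_node:.2f} TF per node")                      # ~2.18 TF
        print(f"two GPUs supply ~{2 * gpu_sp_tflops:.2f} TF; "
              f"CPUs cover the remaining ~{per_node - 2 * gpu_sp_tflops:.2f} TF")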