SIGGRAPH

Latest

  • Shader Printer uses heat-sensitive 'paint' that can be erased with low temperatures (hands-on video)

    by Zach Honig
    08.08.2012

    Lovin' the bold look of those new Nikes? If you're up to date on the athletic shoe scene, you may notice that sneaker designs can give way long before your soles do. A new decaling technique could enable you to "erase" labels and other artworks overnight without a trace, however, letting you change up your wardrobe without shelling out more cash. A prototype device, called Shader Printer, uses a laser to heat a surface coated with a bi-stable color-changing material to 50 degrees Celsius (122 degrees Fahrenheit). When the laser reaches the "ink," it creates a visible design that can then be removed by leaving the object in a -10-degree Celsius (14-degree Fahrenheit) freezer overnight. The laser and freezer simply apply standard heat and cold, so you could theoretically add and remove designs using any source. For the purposes of a SIGGRAPH demo, the team, which includes members from the Japan Science and Technology Agency, Keio University, the University of Tokyo and MIT, used a hair dryer to apply heat to a coated plastic doll in only a few seconds -- that source doesn't exactly offer the precision of a laser, but it works much more quickly. Then, they sprayed the surface with -50-degree Celsius (-58-degree Fahrenheit) compressed air, which wiped away the rather sloppy pattern in a flash. There were much more attractive prints on hand as well, including an iPhone cover and a sneaker with the SIGGRAPH logo, along with a similar plastic doll with clearly defined eyes. We also had a chance to peek at the custom laser rig, which currently takes about 10 minutes to apply a small design, but could be much quicker in the future with a higher-powered laser on board. The hair dryer / canned air combo offers a much more efficient way of demoing the tech, however, as you'll see in our hands-on video after the break.
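
    The bi-stable behavior at the heart of the demo amounts to a hysteresis loop: heating past roughly 50 degrees Celsius latches the coating into its visible state, cooling past roughly -10 degrees Celsius latches it back to blank, and any temperature in between leaves the last state alone. A minimal sketch of that state machine, using the thresholds from the article (the class and its names are purely our illustration, not the researchers' code):

```python
class BistableInk:
    """Toy model of a bi-stable thermochromic coating: it latches
    'visible' above set_temp_c, 'erased' below reset_temp_c, and
    holds its last state at any temperature in between."""

    def __init__(self, set_temp_c=50.0, reset_temp_c=-10.0):
        self.set_temp_c = set_temp_c
        self.reset_temp_c = reset_temp_c
        self.visible = False  # starts blank

    def expose(self, temp_c):
        if temp_c >= self.set_temp_c:
            self.visible = True       # laser / hair dryer pass
        elif temp_c <= self.reset_temp_c:
            self.visible = False      # overnight in the freezer
        return self.visible

ink = BistableInk()
ink.expose(55)     # heated: design appears
ink.expose(20)     # room temperature: design persists
ink.expose(-20)    # freezer: design erased
print(ink.visible)  # → False
```

The middle branch is what makes the trick work: ordinary ambient temperatures never flip the state, so a design survives until you deliberately push past either threshold.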

  • Disney Research's Botanicus Interacticus adds capacitive touch to ordinary plants, we go hands-on

    by Zach Honig
    08.08.2012

    Sure, you spend plenty of time talking to your plants, but have you ever made them sing? In partnership with Berlin-based Studio NAND, Walt Disney's experience development arm, dubbed Disney Research, has found a way to take human-plant interaction to an almost freakish level. The project's called Botanicus Interacticus, and it centers on a custom-built capacitive sensor module, which pipes a very low current through an otherwise ordinary plant, then senses when and where you touch. Assuming your body is grounded, the device uses more than 200 frequencies to determine exactly where you've grabbed hold of a stem. Then, depending on how it's programmed, the sensor can trigger any combination of feedback, ranging from a notification that your child is attempting to climb that massive oak in the yard again, to an interactive melody that varies based on where your hand falls along the plant. Because this is Disney Research, the company would most likely use the new tech in an interactive theme park attraction, though there's currently no plan to do much more than demo Botanicus Interacticus for SIGGRAPH attendees. This week's demonstration is giving the creators an opportunity to gather feedback as they try out their project on the general public. There are four different stations on hand, ranging from a stick of bamboo that offers the full gamut of sensitivity, including the exact location of touch, to an orchid that can sense an electric field disruption even as you approach for contact. While interactive plants may not have a role in everyday life, Botanicus Interacticus is certainly a clever implementation of capacitive touch. You can see it in action just past the break.
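
    Multi-frequency sensing of this sort typically works by recording a response profile across the excitation frequencies and matching it against profiles captured during a training pass. A toy sketch of that matching step, assuming a simple nearest-neighbor classifier (the profiles, readings and function names below are entirely our own illustration, not Disney Research's code):

```python
import math

def nearest_profile(measured, labeled_profiles):
    """Classify a capacitive frequency-sweep response by finding
    the stored profile with the smallest Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(labeled_profiles, key=lambda item: dist(measured, item[1]))[0]

# Hypothetical normalized responses at four of the sweep frequencies.
profiles = [
    ("no touch",   [0.9, 0.8, 0.7, 0.6]),
    ("grasp stem", [0.5, 0.3, 0.4, 0.6]),
    ("touch leaf", [0.8, 0.6, 0.2, 0.5]),
]
reading = [0.52, 0.33, 0.41, 0.58]
print(nearest_profile(reading, profiles))  # → grasp stem
```

A real sweep would span the 200-plus frequencies the article mentions and a sturdier classifier, but the shape of the problem (profile in, touch label out) is the same.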

  • AMD launches its next-gen FirePro graphics card lineup, we go hands-on at SIGGRAPH (video)

    by Zach Honig
    08.07.2012

    Just as you've cozied up with "Tahiti" and "Cape Verde," AMD has returned to grow its "Southern Islands" family of graphics cards with four fresh FirePros, offering up to four teraflops of graphics computing power. That spec can be found in the company's new W9000, which is capable of four TFLOPs single precision and one TFLOP double precision with a price tag just shy of $4,000. That behemoth of a card offers 6GB of GDDR5 RAM and requires 274 watts of power. More humble members of the fam include the W8000, which has the same form-factor as the higher-end W9000, but eases back on the specs, consuming 189 watts of power and carrying a $1,599 price tag. We had a chance to take a closer look at both cards at SIGGRAPH, and while they packed a significant amount of heft, you'll likely never take a second look once they're buried away in your tower rig. Fans of smaller housings (and price tags) may take notice of the W7000 and W5000, which are both considerably more compact and require less power to boot, with pricing set at $899 and $599, respectively. Those cards were also on hand for our demo, and can be seen along with the top two configs in our gallery below. You can also sneak a closer peek in the hands-on video after the break, and glance at the full specs over at our news post from earlier today.

  • ARM's Mali-T604 makes official debut, we get a first look at the next-gen GPU (hands-on video) (update: it's the Exynos 5)

    by Zach Honig
    08.07.2012

    Think those are some pretty slick graphics in your Galaxy S III? Samsung's latest smartphone packs some mighty graphics prowess of its own, thanks to the Mali-400 MP GPU, but once you spend a few minutes with the Mali-T604, the company's next-generation chipset, the improvements become quite clear. After seeing the Mali-T604 in action, as we did at SIGGRAPH today, the capabilities leave us hopeful for the future, and perhaps feeling a bit self-conscious about the silicon currently in our pockets. The reference device on hand was operating in sync with a variety of unnamed hardware, protected from view in a relatively large sealed box. We weren't able to squeeze many details out of ARM reps, who remained mum about the demo components, including clock speed, manufacturer and even fabrication size. What we do know is that we were looking at a quad-core Mali-T604 and dual-core ARM Cortex-A15 processor, with a fabrication size in the range of "28 to 40 nanometers" (confirming the exact size would reveal the manufacturer). Clock speed is also TBD, and the early silicon on demo at the show wasn't operating anywhere close to its top end. In order to experience the T604, we took a look at three demos, including Timbuktu 2, which demonstrates elements like self shadowing and depth of field with OpenGL ES 3.0, Hauntheim, which gives us an early look at physics simulation and HDR lighting with OpenCL, and Enlighten, which rendered silky smooth real-time illumination. You can see all of the demos in action after the break, and you can expect T604-equipped devices to make their debut beginning later this year -- ARM says it's working with eight manufacturers to get the licensed tech to market as early as Q3. Update: ARM has just confirmed to us that this reference device is running off an Exynos 5 Dual chip (up to 1.7GHz), which means the following video is also a heads-up on what Sammy has in store for us in its forthcoming devices.

  • NVIDIA announces second generation Maximus, now with Kepler power

    by James Trew
    08.07.2012

    It's been almost exactly a year since we first heard about NVIDIA's Maximus technology, and today the firm's just announced an update. The second generation of the platform is now supported by Kepler-based GPUs. This time around computational tasks get ferried off to the SMX-streaming K20 GPU ($3,199 MSRP), leaving the 3,840 x 2,160 resolution-supporting Quadro K5000 GPU ($2,249) to tackle the graphical functions. Want to know when you can get your hands on the goods? Well, NVIDIA says starting in December, with the Quadro K5000 available as a standalone in October. Head down to the PR for the full spin and forthcoming workstation / OEM details.

  • We're live from SIGGRAPH 2012 in Los Angeles!

    by Zach Honig
    08.07.2012

    Most of us experience the Los Angeles Convention Center during one of its most chaotic weeks of the year, when tens of thousands of gaming industry manufacturers, video game designers and consumers descend upon downtown LA for the annual E3 expo, booth-babe radar tweaked to 11. There's a hint of graphics prowess amid the halls this week, too, albeit on a vastly smaller scale, and with a heavy heap of civility. SIGGRAPH is a trade event through and through, with attendees demonstrating their latest tech, taking in a handful of seminars or hunting for networking opportunities, in search of employment and partnerships. It's often also a venue for product launches, which is what's brought us out, along with the usual bounty of kooky creations that serve to entertain and lighten the mood. As always, we'll be bringing you a little bit of everything over the next few days, letting you sample the best of SIGGRAPH from the comfort of your own device -- head over to our SIGGRAPH 2012 tag to follow along.

  • OpenGL ES 3.0 and OpenGL 4.3 squeeze textures to the limit, bring OpenVL along for the ride

    by Jon Fingas
    08.07.2012

    Mobile graphics are clearly setting the agenda at SIGGRAPH this year -- ARM's Mali T600-series parts have just been chased up by a new Khronos Group standard that will likely keep those future video cores well-fed. OpenGL ES 3.0 represents a big leap in textures, introducing "guaranteed support" for more advanced texture effects as well as a new version of ASTC compression that further shrinks texture footprints without a conspicuous visual hit. OpenVL is also coming to give augmented reality apps their own standard. Don't worry, desktop users still get some love through OpenGL 4.3: it adds the new ASTC tricks, new visual effects (think blur) and support for compute shaders without always needing to use OpenCL. All of the new standards promise a bright future in graphics for those living outside of Microsoft's Direct3D universe, although we'd advise being patient: there won't be a full OpenGL ES 3.0 testing suite for as long as six months, and any next-generation phones or tablets will still need the graphics hardware to match.
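
    One reason ASTC can shrink texture footprints so flexibly is that every compressed block occupies a fixed 128 bits while the block's pixel footprint varies, so the bitrate is simply 128 divided by the block area. A quick illustration (the helper function is our own):

```python
def astc_bits_per_pixel(block_w, block_h):
    """ASTC encodes every block in a fixed 128 bits, so larger
    block footprints directly mean fewer bits per pixel."""
    return 128 / (block_w * block_h)

# A few of the square block sizes the format supports.
for w, h in [(4, 4), (6, 6), (8, 8), (12, 12)]:
    print(f"{w}x{h}: {astc_bits_per_pixel(w, h):.2f} bpp")
# 4x4: 8.00 bpp ... 12x12: 0.89 bpp
```

That range, from 8 bits per pixel down to under 1, is what lets a developer trade quality against footprint per-texture instead of being locked to one ratio.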

  • AMD announces $4k FirePro W9000 GPU, entry-level FirePro A300 APU for CAD and graphics pros

    by Zach Honig
    08.07.2012

    After a brief tease earlier this summer, AMD just announced pricing and availability for its new market-leading FirePro W9000 graphics processing unit -- the $3,999 GPU is available now through AMD resellers, and is compatible with Supermicro SuperWorkstations. Joining that "world's most powerful" rig are the W8000, W7000 and W5000, which sell for $1,599, $899 and $599, respectively, and can each power six 30-inch 4K displays. Power-hungry pros will want to opt for the top-of-the-line model in order to take advantage of four TFLOPs single precision or one TFLOP double precision, along with 6 gigs of high-speed GDDR5 RAM. The W8000, on the other hand, offers 3.23 TFLOPs single precision and 806 GFLOPs double precision, followed by the W7000 with 2.4 TFLOPs / 152 GFLOPs, both with 4 gigs of RAM, along with the W5000, which packs 1.27 TFLOPs single and 80 GFLOPs double, with 2 GB of GDDR5 RAM. Design pros with slightly more modest demands may find the FirePro A300 APU more in line with their budgets -- we don't have precise pricing to share, since third parties will ship their own configs, but terms like "entry-level" and "mainstream" make it clear that you won't be drawing in more than a couple zeros in the checkbook to make your purchase. The integrated solution utilizes AMD's Turbo Core tech, supports Eyefinity and Discrete Compute Offload, and can power horizontal display arrays of up to 10,240 x 1,600 pixels. You'll find all the nitty-gritty in the pair of press releases after the break. Update: Our pals over at HotHardware have just pushed out a review of the W8000 and W9000, but found the results to be a bit of a letdown. Hit up their post for the full skinny.
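
    Using only the prices and single-precision figures quoted above, a back-of-the-envelope sketch shows where the value sits in the lineup (the GFLOPS-per-dollar framing is purely our illustrative arithmetic, not an AMD metric):

```python
# Price (USD) and single-precision TFLOPS, as quoted in the article.
cards = {
    "W9000": (3999, 4.0),
    "W8000": (1599, 3.23),
    "W7000": (899, 2.4),
    "W5000": (599, 1.27),
}

for name, (price, tflops) in cards.items():
    print(f"{name}: {tflops * 1000 / price:.2f} GFLOPS per dollar")

best = max(cards, key=lambda n: cards[n][1] * 1000 / cards[n][0])
print("best raw value:", best)  # → best raw value: W7000
```

On these numbers the flagship W9000 delivers roughly 1 GFLOPS per dollar while the mid-range W7000 manages more than double that, which is the usual shape of a workstation lineup: you pay a premium at the top for the absolute numbers, not the ratio.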

  • University of Tokyo builds a soap bubble 3D screen, guarantees your display stays squeaky clean (video)

    by Jon Fingas
    06.29.2012

    There are waterfall screens, but what if you'd like your display to be a little more... pristine? Researchers at the University of Tokyo have developed a display that hits soap bubbles with ultrasonic sound to change the surface. At a minimum, it can change how light glances off the soap film to produce the image. It gets truly creative when taking advantage of the soap's properties: a single screen is enough to alter the texture of a 2D image, and multiple screens in tandem can create what amounts to a slightly sticky hologram. As the soap is made out of sturdy colloids rather than the easily-burst mixture we all knew as kids, users won't have to worry about an overly touch-happy colleague popping a business presentation. There's a video preview of the technology after the jump; we're promised a closer look at the technology during the SIGGRAPH expo in August, but we don't yet know how many years it will take to find sudsy screens in the wild.

  • Cyclone Display exemplifies 'multi-colored expression,' totally heading to a nightclub near you (video)

    by Darren Murph
    08.12.2011

    Ever heard of Yoichi Ochiai? You have now. Hailing from Japan's University of Tsukuba, this whizkid was on hand here at SIGGRAPH to showcase one of his latest creations -- and it just so happened to be one of the trippiest yet. The Cyclone Display was a demonstration focused on visual stimulation; a projector mounted above interacted with a plate of spinning disks. Underneath, a cadre of motors were controlled by a connected computer, and as the rotation and velocity changed, so did the perceived pixels and colors. The next step, according to Ochiai, would be to blow this up and shrink it down, mixing textures in with different lighting situations. With a little help, a drab nightclub could douse its walls in leopard print one night, or zebra fur another. Interactive clubbing never sounded so fun, eh? You know the drill -- gallery's below, video's a click beneath.

  • HAPMAP navigational system keeps your eyes on the prize, your hands on the route (video)

    by Darren Murph
    08.12.2011

    Alternative navigational systems aren't exactly new, but the concept shown here just might have wings. HAPMAP was one of a handful of projects selected for demonstration at SIGGRAPH's E-tech event, aiming to keep a human's eye away from the map (and in turn, on whatever's in front of them) by developing a system that guides via haptics. With a handheld device capable of both navigating and vibrating, the interface indicates complex navigation cues that follow the curvature of a road or path -- it's far more detailed than the typical "go straight," and there's also opportunity here to provide handicapped individuals with a method for getting to previously inaccessible locales. By mimicking the operation and interface of sliding handrails (as well as using motion capture cameras), it's particularly useful for the visually impaired, who need these subtle cues to successfully navigate a winding path. Hop on past the break for a couple of demonstration vids.
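
    Guidance of this flavor boils down to turning a signed heading error into a sided vibration cue that tracks the curve of the path. A toy mapping, with the function, threshold and scaling entirely our own assumptions rather than the HAPMAP team's algorithm:

```python
def haptic_cue(heading_error_deg, max_error_deg=45.0):
    """Map a signed heading error (negative = drifting left of the
    path) to a (side, intensity) vibration cue, staying quiet when
    the user is essentially on course."""
    side = "left" if heading_error_deg < 0 else "right"
    intensity = min(abs(heading_error_deg) / max_error_deg, 1.0)
    if intensity < 0.05:
        return ("none", 0.0)  # on course: no buzz
    return (side, round(intensity, 2))

print(haptic_cue(-30))  # → ('left', 0.67)
print(haptic_cue(1))    # → ('none', 0.0)
```

Because the intensity is continuous rather than a bare "turn left" beep, a gently curving road produces a gently swelling cue, which is the property that makes the handrail metaphor work.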

  • Vection Field controls traffic at SIGGRAPH, fictional cities from the future (video)

    by Darren Murph
    08.12.2011

    So, let's set the stage. You're walking down a semi-busy street in a semi-foreign city. You're curiously hanging close to the middle of the sidewalk. You bust out your smartphone and figure out that your so-called engagement just got "Complicated." Your gait has an irregularity. You look up and spot what appears to be a local, eerily perturbed and somewhat flummoxed by your current position. You dodge left. So does he. You dodge right, knowing full well that it'll only complicate matters when he follows suit. Before long, you're tiptoeing around a stranger while a full-on traffic jam builds up behind you. You've just ruined the universe, and that's not doing anyone any good. The solution? The University of Electro-Communications' Vection Field, which uses large moving visual cues that "induce a sense of self-movement." Funny enough, the lenticular lenses pathway here at SIGGRAPH actually worked -- we never expected an optical illusion to solve such a monumental issue, but we'll take it. Vid's past the break, per usual.

  • Wrist sensor turns the back of your hand into a meaty haptic interface (video)

    by Amar Toor
    08.12.2011

    We're all intimately familiar with the backs of our hands, so why not use them as a haptic interface to control our gadgets? That's the idea behind the device pictured above -- a nifty little wrist sensor that turns your paw into a flesh-toned trackpad. Designed by Kei Nakatsuma, a PhD student at the University of Tokyo, this contraption employs infrared sensors to track a user's finger as it moves across the back of a hand. These movements are mirrored on a wristwatch-like display, thanks to seven IR detectors and a set of piezoelectric sensors, effectively turning any digit into an organic stylus or mouse. Nakatsuma, who unveiled his work at this week's SIGGRAPH, says his creation can't handle the more complicated pinching or rotating gestures you'd use on most smartphone touchscreens, and acknowledges that the screen can be difficult to read in direct sunlight. But the underlying technology could pave the way for similarly handy designs, while allowing users to interact with their gadgets without having to constantly glance at their screens, or go fishing in their pockets. Feel your way past the break to see a video of the device in action.
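
    One simple way an array of IR detectors could be turned into a cursor position is an intensity-weighted centroid over the sensor locations. The sketch below is our own rough guess at that step, not Nakatsuma's published method:

```python
def finger_position(ir_readings, sensor_positions):
    """Estimate a fingertip's position along the back of the hand
    as the intensity-weighted centroid of the IR detector readings.
    Returns None when nothing reflects enough light to register."""
    total = sum(ir_readings)
    if total == 0:
        return None  # no finger in range
    return sum(r * p for r, p in zip(ir_readings, sensor_positions)) / total

# Seven detectors spaced 1 cm apart; the finger sits near sensor 4.
positions = [0, 1, 2, 3, 4, 5, 6]
readings  = [0, 0, 1, 4, 8, 4, 1]
print(finger_position(readings, positions))  # → 4.0
```

A centroid like this can resolve positions between sensors (sub-sensor precision), which is why a handful of detectors can plausibly stand in for a full trackpad.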

  • Surround Haptics could bring force feedback to vests, coasters and gaming (video)

    by Darren Murph
    08.11.2011

    Haptics and gaming have gone hand in hand for centuries it seems -- well before the Rumble Pak made itself an N64 staple, we vividly recall snapping up a vibration jumpsuit for our Sega Genesis. 'Course, it was on clearance for a reason. Ali Israr et al. were on hand here at SIGGRAPH's E-tech conference to demonstrate the next big leap in haptics, joining hands with Disney Research in order to showcase a buzzing game chair for use with Split/Second. The seat shown in the gallery (and video) below cost around $5,000 to concoct, with well over a dozen high-end coils tucked neatly into what looked to be a snazzy padding set for an otherwise uneventful seating apparatus. We sat down with members of the research team here in Vancouver, and while the gaming demo was certainly interesting, it's really just the tip of the proverbial iceberg. The outgoing engineers from Black Rock Studios helped the team wire stereoscopic audio triggers to the sensors, with a left crash, right scrape and a head-on collision causing the internal coils to react accordingly. Admittedly, the demo worked well, but it didn't exactly feel comfortable. In other words -- we can't exactly say we'd be first in line to pick one of these up for our living room.
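
    The audio-to-haptics wiring described above can be caricatured as routing stereo trigger levels to sides of the coil array. A hedged sketch, loosely modeled on the left crash / right scrape / head-on behavior the demo showed (the threshold, labels and logic are our invention, not the actual rig's firmware):

```python
def route_collision(left_level, right_level, threshold=0.6):
    """Decide which side of a haptic coil array fires based on
    per-channel audio trigger levels (0.0 to 1.0)."""
    left_hit = left_level >= threshold
    right_hit = right_level >= threshold
    if left_hit and right_hit:
        return "head-on collision: all coils"
    if left_hit:
        return "left crash: left coils"
    if right_hit:
        return "right scrape: right coils"
    return "no event"

print(route_collision(0.9, 0.2))  # → left crash: left coils
print(route_collision(0.7, 0.8))  # → head-on collision: all coils
```

Driving haptics from the game's existing stereo mix is a neat shortcut: the chair needs no hooks into the game engine at all, just its audio output.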

  • Sony's Face-to-Avatar blimp soars through SIGGRAPH, melts the heart of Big Brother (video)

    by Darren Murph
    08.11.2011

    Telepresence, say hello to your future. Humans, say hello to the next generation of Chancellor Sutler. All jesting aside, there's no question that Big Brother came to mind when eyeing Sony Computer Science Laboratories' Face-to-Avatar concept at SIGGRAPH. For all intents and purposes, it's a motorized blimp with a front-facing camera, microphone, a built-in projector and a WiFi module. It's capable of hovering above crowds in order to showcase an image of what's below, or displaying an image of whatever's being streamed to its wireless apparatus. The folks we spoke to seemed to think that it was still a few years out from being in a marketable state, but we can think of a few governments who'd probably be down to buy in right now. Kidding. Ominous video (and static male figurehead) await you after the break.

  • MoleBot interactive gaming table hooks up with Kinect, puts Milton Bradley on watch (video)

    by Darren Murph
    08.11.2011

    Looking to spruce up that nondescript living room table? So are a smattering of folks from the Korea Advanced Institute of Science and Technology. At this week's SIGGRAPH E-tech event, a team from the institute dropped by to showcase the deadly cute MoleBot table. At its simplest, it's a clever tabletop game designed to entertain folks aged 3 to 103; at the other extreme, it's a radically new way of using Microsoft's Kinect to interact with something that could double as a place to set your supper. Improving on similar projects in the past, this shape-display method uses a two-dimensional translating cam (mole cam), 15,000 closely packed hexagonal pins equivalent to cam followers, and a layer of spandex between the mole cam and the pins to reduce friction. When we dropped by, the Kinect mode was disabled in favor of using an actual joystick to move the ground below. In theory, one could hover above the table and use hand gestures to move the "mole," shifting to and fro in order to pick up magnetic balls and eventually affix the "tail" onto the kitty. The folks we spoke with seemed to think that there's consumer promise here, as well as potential for daycares, arcades and other locales where entertaining young ones is a priority. Have a peek at a brief demonstration vid just after the break, and yes, you can bet we'll keep you abreast of the whole "on sale" situation.

  • InteractiveTop brings tabletop gaming to SIGGRAPH, doubles as Inception token (video)

    by Darren Murph
    08.11.2011

    MoleBot a little too passive for you? Fret not, as a team from The University of Electro-Communications popped by this year's installment of SIGGRAPH in order to showcase something entirely more vicious. It's air hockey meets bumper cars, and the InteractiveTop demo was certainly one of the stranger ones we came across here in Vancouver. Put simply, it's a virtual game of spinning tops, in which players use magnet-loaded controllers to shuffle tops across a board and into an opponent's top. There's an aural and haptic feedback mechanism to let you know when you've struck, and plenty of sensors loaded throughout to keep track of collisions, force and who's hitting who. Pore over the links below for more technobabble, or just head past the break for an in-action video.

  • Visualized: Objet's 3D printer breathes plastic life into Hollywood creatures, layer by layer

    by Darren Murph
    08.11.2011

    It ain't easy being plastic, you know? Objet -- the 3D printing house that aimed to replace your office's all-in-one Epson back in July -- brought a few of its snazziest pieces here to SIGGRAPH, and we popped by to have a gander. Targeting the animation-inspired crowd that showed up here in Vancouver, the company brought along some Hollywood examples of how its multi-material Objet260 Connex helped movie makers craft prototype creatures before they were inserted into the storyline. Thor's Destroyer and Avatar's Na'vi were both on hand, as well as the two critters shown above. The hothead on the right was crafted in around 18 hours (and subsequently painted), while the cool cat on the left was built in three fewer hours. Wildly enough, that fellow required no painting whatsoever; so long as you're cool with shades of grey, you can program your object to be colored from the outset. Oh, and as for his cost? Around $80 for the materials -- slightly more for the printer itself.

  • Researchers demo 3D face scanning breakthroughs at SIGGRAPH, Kinect crowd squarely targeted

    by Darren Murph
    08.10.2011

    Lookin' to get your Grown Nerd on? Look no further. We just sat through 1.5 hours of high-brow technobabble here at SIGGRAPH 2011, where a gaggle of gurus with IQs far, far higher than ours explained in detail what the future of 3D face scanning would hold. Scientists from ETH Zürich, Texas A&M, Technion-Israel Institute of Technology, Carnegie Mellon University as well as a variety of folks from Microsoft Research and Disney Research labs were on hand, with each subset revealing a slightly different technique to solving an all-too-similar problem: painfully accurate 3D face tracking. Haoda Huang et al. revealed a highly technical new method that involved the combination of marker-based motion capture with 3D scanning in an effort to overcome drift, while Thabo Beeler et al. took a drastically different approach. Those folks relied on a markerless system that used a well-lit, multi-camera system to overcome occlusion, with anchor frames acting as staples in the success of its capture abilities. J. Rafael Tena et al. developed "a method that not only translates the motions of actors into a three-dimensional face model, but also subdivides it into facial regions that enable animators to intuitively create the poses they need." Naturally, this one's most useful for animators and designers, but the first system detailed is obviously gunning to work on lower-cost devices -- Microsoft's Kinect was specifically mentioned, and it doesn't take a seasoned imagination to see how in-home facial scanning could lead to far more interactive games and augmented reality sessions. The full shebang can be grokked by diving into the links below, but we'd advise you to set aside a few hours (and rest up beforehand).

  • PocoPoco musical interface box makes solenoids fun, gives Tenori-On pause (video)

    by Darren Murph
    08.10.2011

    Think SIGGRAPH's all about far-out design concepts? Think again. A crew from the Tokyo Metropolitan University IDEEA Lab was on hand here at the show's experimental wing showcasing a new "musical interface," one that's highly tactile and darn near impossible to walk away from. Upon first glance, it reminded us most of Yamaha's Tenori-On, but the "universal input / output box" is actually far deeper and somewhat more interactive in use. A grand total of 16 solenoids are loaded in, and every one of 'em is fitted with sensors. Users can tap any button to create a downbeat (behind the scenes, a sequencer flips to "on"), which will rise in unison with the music until you tap it once more to settle it (and in turn, eliminate said beat). You can grab hold of a peg in order to sustain a given note until you let it loose. There are a few pitch / tone buttons that serve an extra purpose -- one that we're sure you can guess by their names. Those are capable of spinning left and right, with pitch shifting and speeds increasing / decreasing with your movements. The learning curve here is practically nonexistent, and while folks at the booth had no hard information regarding an on-sale date, they confirmed to us that hawking it is most certainly on the roadmap... somewhere. Head on past the break for your daily (video) dose of cacophony.
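
    Behind the scenes, the tap-to-toggle interaction is just a step sequencer whose cells flip on and off. A minimal model of that loop, assuming a 16-step grid to match the 16 solenoids (the class is our illustration; the real device's firmware is unpublished):

```python
class TapSequencer:
    """Toy 16-step sequencer in the spirit of PocoPoco: tapping a
    solenoid button toggles that step between on and off."""

    def __init__(self, steps=16):
        self.steps = [False] * steps

    def tap(self, i):
        """Toggle step i and return its new state."""
        self.steps[i] = not self.steps[i]
        return self.steps[i]

    def pattern(self):
        """Render the current pattern, 'x' for on and '.' for off."""
        return "".join("x" if s else "." for s in self.steps)

seq = TapSequencer()
for i in (0, 4, 8, 12):  # a four-on-the-floor downbeat
    seq.tap(i)
print(seq.pattern())  # → x...x...x...x...
```

The physical solenoids add the clever part -- a raised peg is a visible, grabbable rendering of a "True" in this array -- but the underlying data model really is this small.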