SIGGRAPH 2012

  • SIGGRAPH 2012 wrap-up

    by Zach Honig
    08.10.2012

    Considering that SIGGRAPH focuses on visual content creation and display, there was no shortage of interesting elements to gawk at on the show floor. From motion capture demos to 3D objects printed for Hollywood productions, there was plenty of entertainment at the Los Angeles Convention Center this year. Major product introductions included ARM's Mali-T604 GPU and a handful of high-end graphics cards from AMD, but the highlight of the show was the Emerging Technologies wing, which played host to a variety of concept demonstrations, gathering top researchers from institutions like the University of Electro-Communications in Tokyo and MIT. The exhibition has come to a close for the year, but you can catch up with the show floor action in the gallery below, then click on past the break for links to all of our hands-on coverage, direct from LA.

  • Colloidal Display uses soap bubbles, ultrasonic waves to form a projection screen (hands-on video)

    by Zach Honig
    08.10.2012

    If you've ever been to an amusement park, you may have noticed ride designers using some non-traditional platforms as projection screens -- the most common example being a steady stream of artificial fog. Projecting onto transparent substances is a different story, however, which made this latest technique a bit baffling to say the least. Colloidal Display, developed by Yoichi Ochiai, Alexis Oyama and Keisuke Toyoshima, uses bubbles as an incredibly thin projection "screen," regulating the substance's properties, such as reflectance, using ultrasonic waves from a nearby speaker. The bubble liquid is made from a mixture of sugar, glycerin, soap, surfactant, water and milk, which the designers say is not easily popped. Still, during their SIGGRAPH demo, a motor dunked the wands in the solution and replaced the bubble every few seconds. A standard projector directed at the bubble creates an image, which appears to be floating in the air. And, because the bubbles are transparent, they can be stacked to simulate a 3D image. You can also use the same display to project completely different images that fade in and out of view depending on your angle relative to the bubble. There is a tremendous amount of distortion, however, because the screen is a liquid film that remains in a fluid state. Given the need to constantly refresh the bubbles and the unstable nature of the screen itself, the project remains a proof of concept that wouldn't be practical without significant modification. Ultimately, the designers hope to create a film that offers similar transparent properties but with a more solid, permanent composition. For now, you can sneak a peek at the first iteration in our hands-on video after the break.
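
    To make the timing concrete, here's a minimal sketch of the control loop as we understand it -- the Transducer and Projector classes below are hypothetical stand-ins for the researchers' hardware, not their actual code:

    ```python
    # Hypothetical sketch of the Colloidal Display control loop; the Transducer
    # and Projector classes are stand-ins for the researchers' hardware, not
    # their actual API. The ultrasonic drive level sets how diffuse -- and
    # therefore how visible -- the soap film is for each projected frame.
    import time

    class Transducer:
        def set_amplitude(self, level: float) -> None:
            """Drive the ultrasonic speaker; 0.0 = transparent film, 1.0 = fully diffuse."""
            print(f"transducer amplitude -> {level:.2f}")

    class Projector:
        def show(self, frame: str) -> None:
            print(f"projecting: {frame}")

    def run_display(frames, fps: int = 30) -> None:
        transducer, projector = Transducer(), Projector()
        for frame in frames:
            transducer.set_amplitude(1.0)  # scatter light so the image forms on the film
            projector.show(frame)
            time.sleep(1 / fps)
        transducer.set_amplitude(0.0)      # let the film relax back to transparent

    run_display(["frame-01", "frame-02", "frame-03"])
    ```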

  • Stuffed Toys Alive! replaces mechanical limbs with strings for a much softer feel (hands-on)

    by Zach Honig
    08.10.2012

    It worked just fine for Pinocchio, so why not animatronic stuffed bears? A group of researchers from the Tokyo University of Technology is on hand at SIGGRAPH's Emerging Technologies section this week to demonstrate "Stuffed Toys Alive!," a new type of interactive toy that replaces the rigid plastic infrastructure used today with a seemingly simple string-and-pulley solution. Several strings are installed at different points within each of the cuddly gadget's limbs, then attached to a motor that pulls the strings to move the fuzzy guy's arms while also registering feedback, letting it respond to touch as well. There's not much more to it than that -- the project is ingenious but also quite simple, and it's certain to be a hit amongst youngsters. The obligatory creepy hands-on video is waiting just past the break.
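
    Here's a rough sketch of how string-driven limbs can double as touch sensors; the Motor class and its thresholds are our invention for illustration, not the team's firmware:

    ```python
    # Toy model of the string-driven limb idea: commanding a winding angle moves
    # the limb, and an unexpected position error (someone tugging the arm) doubles
    # as a touch sensor. The Motor class simulates hypothetical hardware.
    import random

    class Motor:
        """Stand-in for one string-winding servo with an encoder."""
        def __init__(self):
            self.commanded = 0.0
        def set_angle(self, deg: float) -> None:
            self.commanded = deg
        def read_angle(self) -> float:
            # Real hardware would read an encoder; we simulate an occasional tug.
            tug = 15.0 if random.random() < 0.2 else 0.0
            return self.commanded + tug

    def update_limb(motor: Motor, target_deg: float, touch_threshold: float = 10.0) -> None:
        motor.set_angle(target_deg)
        error = abs(motor.read_angle() - motor.commanded)
        if error > touch_threshold:
            print("touch detected -- trigger a response (wave, squeeze back, ...)")

    motor = Motor()
    for _ in range(10):
        update_limb(motor, target_deg=30.0)
    ```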

  • Chilly Chair uses static electricity to raise your arm hair, force an 'emotional reaction' (hands-on video)

    by Zach Honig
    08.09.2012

    Hiding in the back of the SIGGRAPH Emerging Technologies demo area -- exactly where such a project might belong -- is a dark wood chair that looks anything but innocent. Created by a team at the University of Electro-Communications in Tokyo, Chilly Chair, as it's called, may be a reference to the chilling feeling the device is tasked with invoking. After signing a liability waiver, attendees are welcome to pop a squat before resting their arms atop a cool, flat metal platform hidden beneath a curved sheath that looks like something you might expect to see in Dr. Frankenstein's lab, not a crowded corridor of the Los Angeles Convention Center. Once powered up, the ominous-looking contraption serves to "enrich" the experience as you consume different forms of media, be it watching a movie or listening to some tunes. It works by using a power source to pump 10 kV of juice to an electrode, which then polarizes a dielectric plate, causing it to attract your body hair. After signing our life away with the requisite waiver, we sat down and strapped in for the ride. Despite several minutes of build-up, the entire experience concluded in what seemed like only a few seconds. A projection screen in front of the chair lit up to present a warning just as we felt the hairs jet directly towards the sheath above. By the time we rose, there was no visual evidence of the previous state, though we have no doubt that the Chilly Chair succeeded in raising hair (note: the experience didn't come close to justifying the exaggerated reaction you may have noticed above). It's difficult to see how this could be implemented in future home theater setups, especially considering all the extra hardware currently required, but it could potentially add another layer of immersion to those novelty 4D attractions we can't seem to avoid during visits to the amusement park. You can witness our Chilly Chair experience in the hands-on video after the break.
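
    For a sense of the field strength involved, a back-of-envelope estimate -- the 1 cm electrode-to-arm gap below is our assumption, not a figure quoted at the demo:

    ```latex
    % Back-of-envelope field estimate; the 1 cm electrode-to-arm gap is an
    % assumption for illustration, not a figure quoted at the demo.
    E = \frac{V}{d} \approx \frac{10\,\mathrm{kV}}{1\,\mathrm{cm}} = 10^{6}\ \mathrm{V/m}
    ```

    Fields on that order are more than enough to make individual hairs stand up via the induced charge on each strand.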

  • MIT Media Lab's Tensor Displays stack LCDs for low-cost glasses-free 3D (hands-on video)

    by Zach Honig
    08.09.2012

    Glasses-free 3D may be the next logical step in TV's evolution, but we have yet to see a convincing device make it to market that doesn't come along with a five-figure price tag. The sets that do come within range of tickling our home theater budgets won't blow you away, and it's not unreasonable to expect that trend to continue through the next few product cycles. A dramatic adjustment in our approach to glasses-free 3D may be just what the industry needs, so you'll want to pay close attention to the MIT Media Lab's latest brew. Tensor Displays combine layered low-cost panels with some clever software that assigns and alternates the image at a rapid pace, creating depth that actually looks fairly realistic. Gordon Wetzstein, one of the project creators, explained that the solution essentially "(takes) the complexity away from the optics and (puts) it in the computation," and since software solutions are far more easily scaled than their hardware equivalents, the Tensor Display concept could result in less expensive, yet superior 3D products. We caught up with the project at SIGGRAPH, where the first demonstration included four fixed images, which employed a concept similar to the LCD version, but with backlit inkjet prints instead of motion-capable panels. Each displaying a slightly different static image, the transparencies were stacked to give the appearance of depth without the typical cost. The version that shows the most potential, however, consists of three stacked LCD panels, each displaying a slightly different pattern that flashes back and forth four times per frame of video, creating a three-dimensional effect that appears smooth and natural. The result was certainly more tolerable than the glasses-free 3D we're used to seeing, though it's surely a long way from being a viable replacement for active-glasses sets -- Wetzstein said that the solution could make its way to consumers within the next five years. Currently, the technology works best in a dark room, where it's able to present a consistent image. Unfortunately, this meant the light levels around the booth were a bit dimmer than what our camera required, resulting in the underexposed, yet very informative hands-on video you'll see after the break.
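
    The "complexity in the computation" line boils down to a light field factorization. The toy sketch below is our simplification, not the Media Lab's code: each ray's brightness is the product of the pixel transmittances it crosses in the stacked layers, and flashing several layer patterns within one video frame lets the eye average them, which adds the degrees of freedom needed for glasses-free 3D:

    ```python
    # Toy two-layer, time-multiplexed factorization in the spirit of Tensor
    # Displays (our simplification). We fit T pairs of layer patterns whose
    # averaged outer products approximate a target light field slice.
    import numpy as np

    rng = np.random.default_rng(0)
    target = rng.random((8, 8))  # toy light field slice: rows/cols index two ray parameters

    T = 4                         # subframes flashed within one video frame
    front = rng.random((T, 8))    # front-panel patterns, one row per subframe
    back = rng.random((T, 8))     # rear-panel patterns

    def perceived(front, back):
        # Each subframe's ray intensity is the product of the two pixels the ray
        # crosses; the eye averages the T subframes into one perceived image.
        return sum(np.outer(f, b) for f, b in zip(front, back)) / T

    lr = 0.1
    for _ in range(500):
        err = target - perceived(front, back)            # 8x8 residual
        front = np.clip(front + lr * (back @ err.T) / T, 0, 1)  # gradient step, clamped
        back = np.clip(back + lr * (front @ err) / T, 0, 1)     # to valid transmittances

    print("mean residual:", np.abs(target - perceived(front, back)).mean())
    ```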

  • Gocen optical music recognition can read a printed score, play notes in real-time (hands-on video)

    by Zach Honig
    08.08.2012

    It's not often that we stumble upon classical music on the floor at SIGGRAPH, so the tune of Bach's Cantata 147 was reason enough to stop by Gocen's small table in the annual graphics trade show's Emerging Technologies hall. At first glance, the four Japanese men at the booth could have been doing anything on their MacBook Pros -- there wasn't a musical instrument in sight -- but upon closer inspection, they each appeared to be holding identical loupe-like devices, connected to each laptop via USB. Below each self-lit handheld reader were small stacks of sheet music, and it soon became clear that each of the men was very slowly moving their devices from side to side, playing a seemingly perfect rendition of "Jesu, Joy of Man's Desiring." The project, called Gocen, is described by its creators as a "handwritten notation interface for musical performance and learning music." Developed at Tokyo Metropolitan University, the device can read a printed (or even handwritten) music score in real-time using optical music recognition (OMR), which is sent through each computer to an audio mixer, and then to a set of speakers. The interface is entirely text and music-based -- musicians, if you can call them that, scan an instrument name on the page before sliding over to the notes, which can be played back at different pitches by moving the reader vertically along the line. It certainly won't replace an orchestra anytime soon -- it takes an incredible amount of care to play in a group without falling out of sync -- but Gocen is designed more as a learning tool than a practical device for coordinated performances. Hearing exactly how each note is meant to sound makes it easier for students to master musical basics during the beginning stages of their educations, providing instant feedback for those who depend on self-teaching. You can take a closer look in our hands-on video after the break, in a real-time performance demo with the Japan-based team.
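
    The playback half of the pipeline boils down to mapping a note head's vertical position to a pitch. Here's a hedged reconstruction -- the diatonic mapping and the middle-C reference are our assumptions, not details confirmed by the team:

    ```python
    # Our reconstruction of the playback half of an OMR reader like Gocen: a
    # note head's vertical offset from a reference staff position is quantized
    # to a scale step, which then maps to a pitch; sliding the reader up or
    # down transposes the note. The C-major mapping is assumed for simplicity.
    C_MAJOR_SEMITONES = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets within one octave

    def diatonic_to_midi(step_from_middle_c: int) -> int:
        """Map a staff step (0 = middle C) to a MIDI note number in C major."""
        octave, degree = divmod(step_from_middle_c, 7)
        return 60 + 12 * octave + C_MAJOR_SEMITONES[degree]

    # Steps read off the page as the loupe slides along one line of the score.
    for step in [4, 2, 3, 4, 5, 6, 7]:
        print(f"staff step {step:+d} -> MIDI note {diatonic_to_midi(step)}")
    ```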

  • Shader Printer uses heat-sensitive 'paint' that can be erased with low temperatures (hands-on video)

    by Zach Honig
    08.08.2012

    Lovin' the bold look of those new Nikes? If you're up to date on the athletic shoe scene, you may notice that sneaker designs can give way long before your soles do. A new decaling technique could enable you to "erase" labels and other artwork overnight without a trace, however, letting you change up your wardrobe without shelling out more cash. A prototype device, called Shader Printer, uses a laser to heat (to 50 degrees Celsius, or 122 degrees Fahrenheit) a surface coated with a bi-stable color-changing material. When the laser reaches the "ink," it creates a visible design that can then be removed by leaving the object in a -10 degree Celsius (14 degree Fahrenheit) freezer overnight. The laser and freezer simply apply standard heat and cold, so you could theoretically add and remove designs using any source. For the purposes of a SIGGRAPH demo, the team, which includes members from the Japan Science and Technology Agency, Keio University, the University of Tokyo and MIT, used a hair dryer to apply heat to a coated plastic doll in only a few seconds -- that source doesn't exactly offer the precision of a laser, but it works much more quickly. Then, they sprayed the surface with -50 degree Celsius (-58 degree Fahrenheit) compressed air, which wiped the rather sloppy pattern away in a flash. There were much more attractive prints on hand as well, including an iPhone cover and a sneaker with the SIGGRAPH logo, along with a similar plastic doll with clearly defined eyes. We also had a chance to peek at the custom laser rig, which currently takes about 10 minutes to apply a small design, but could be much quicker in the future with a higher-powered laser on board. The hair dryer / canned air combo offers a much more efficient way of demoing the tech, however, as you'll see in our hands-on video after the break.
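
    The bi-stable behavior is easy to model; this toy state machine (ours, not the researchers' implementation) captures why prints survive at room temperature:

    ```python
    # Toy model of the bi-stable ink: heating past ~50 C switches a spot to its
    # colored state, cooling below ~-10 C clears it, and anywhere in between the
    # spot holds whatever state it last had -- which is what keeps the print
    # stable at room temperature.
    WRITE_TEMP_C = 50.0
    ERASE_TEMP_C = -10.0

    class InkSpot:
        def __init__(self):
            self.colored = False
        def expose(self, temp_c: float) -> None:
            if temp_c >= WRITE_TEMP_C:
                self.colored = True    # laser / hair dryer writes
            elif temp_c <= ERASE_TEMP_C:
                self.colored = False   # freezer / cold spray erases
            # between the thresholds the state is retained (bi-stability)

    spot = InkSpot()
    for temp in [20, 55, 20, -5, -30, 20]:
        spot.expose(temp)
        print(f"{temp:+5.1f} C -> {'colored' if spot.colored else 'clear'}")
    ```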

  • Disney Research's Botanicus Interacticus adds capacitive touch to ordinary plants, we go hands-on

    by Zach Honig
    08.08.2012

    Sure, you spend plenty of time talking to your plants, but have you ever made them sing? In partnership with Berlin-based Studio NAND, Walt Disney's experience development arm, Disney Research, has found a way to take human-plant interaction to an almost freakish level. The project's called Botanicus Interacticus, and centers on a custom-built capacitive sensor module, which pipes a very low current through an otherwise ordinary plant, then senses when and where you touch. Assuming your body is grounded, the device uses more than 200 frequencies to determine exactly where you've grabbed hold of a stem. Then, depending on how it may be programmed, the sensor can trigger any combination of feedback, ranging from a notification that your child is attempting to climb that massive oak in the yard again, to an interactive melody that varies based on where your hand falls along the plant. Because this is Disney Research, the company would most likely use the new tech in an interactive theme park attraction, though there's currently no plan to do much more than demo Botanicus Interacticus for SIGGRAPH attendees. This week's demonstration is giving the creators an opportunity to gather feedback as they try out their project on the general public. There are four different stations on hand, ranging from a stick of bamboo that offers the full gamut of sensitivity, including the exact location of touch, to an orchid that can sense an electric field disruption even as you approach for contact. While interactive plants may not have a role in everyday life, Botanicus Interacticus is certainly a clever implementation of capacitive touch. You can see it in action just past the break.
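
    The 200-frequency figure points to swept-frequency capacitive sensing: the plant's frequency response changes depending on where it's grasped, so a measured sweep can be matched against calibrated profiles. The sketch below is our reconstruction of that idea, with made-up response curves standing in for real sensor data:

    ```python
    # Our reconstruction of swept-frequency capacitive sensing -- the response
    # profiles here are invented for illustration and stand in for real data.
    import numpy as np

    FREQS = np.linspace(1e3, 3.5e6, 200)  # the sweep: 200 probe frequencies

    def measure_sweep(grasp_location: str) -> np.ndarray:
        """Stand-in for the sensor board; each grasp changes the response curve."""
        profiles = {
            "none": 1.0 / (1 + FREQS / 1e6),
            "stem": 1.0 / (1 + FREQS / 5e5),
            "leaf": 1.0 / (1 + FREQS / 2e6),
        }
        noise = np.random.default_rng().normal(0, 0.005, FREQS.size)
        return profiles[grasp_location] + noise

    # Calibration: record one clean sweep per known grasp location.
    calibration = {loc: measure_sweep(loc) for loc in ("none", "stem", "leaf")}

    def classify(sweep: np.ndarray) -> str:
        # Nearest-neighbor match against the calibrated response curves.
        return min(calibration, key=lambda loc: np.linalg.norm(sweep - calibration[loc]))

    print(classify(measure_sweep("stem")))  # -> "stem"
    ```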

  • AMD launches its next-gen FirePro graphics card lineup, we go hands-on at SIGGRAPH (video)

    by Zach Honig
    08.07.2012

    Just as you've cozied up with "Tahiti" and "Cape Verde," AMD has returned to grow its "Southern Islands" family of graphics cards with four fresh FirePros, offering up to four teraflops of graphics computing power. That spec can be found in the company's new W9000, which is capable of four TFLOPs single precision and one TFLOP double precision with a price tag just shy of $4,000. That behemoth of a card offers 6GB of GDDR5 RAM and requires 274 watts of power. More humble members of the fam include the W8000, which has the same form-factor as the higher-end W9000, but eases back on the specs, consuming 189 watts of power and carrying a $1,599 price tag. We had a chance to take a closer look at both cards at SIGGRAPH, and while they packed a significant amount of heft, you'll likely never take a second look once they're buried away in your tower rig. Fans of smaller housings (and price tags) may take notice of the W7000 and W5000, which are both considerably more compact and require less power to boot, with pricing set at $899 and $599, respectively. Those cards were also on hand for our demo, and can be seen along with the top two configs in our gallery below. You can also sneak a closer peek in the hands-on video after the break, and glance at the full specs over at our news post from earlier today.

  • ARM's Mali-T604 makes official debut, we get a first look at the next-gen GPU (hands-on video) (update: it's the Exynos 5)

    by Zach Honig
    08.07.2012

    Think those are some pretty slick graphics in your Galaxy S III? Samsung's latest smartphone packs some mighty graphics prowess of its own, thanks to the Mali-400 MP GPU, but once you spend a few minutes with the Mali-T604, ARM's next-generation GPU, the improvements become quite clear. After seeing the Mali-T604 in action at SIGGRAPH today, we're left hopeful for the future, and perhaps feeling a bit self-conscious about the silicon currently in our pockets. The reference device on hand was operating in sync with a variety of unnamed hardware, protected from view in a relatively large sealed box. We weren't able to squeeze many details out of ARM reps, who remained mum about the demo components, including clock speed, manufacturer and even fabrication size. What we do know is that we were looking at a quad-core Mali-T604 and dual-core ARM Cortex-A15 processor, with a fabrication size in the range of "28 to 40 nanometers" (confirming the exact size would reveal the manufacturer). Clock speed is also TBD, and the early silicon on demo at the show wasn't operating anywhere close to its top end. In order to experience the T604, we took a look at three demos, including Timbuktu 2, which demonstrates elements like self shadowing and depth of field with OpenGL ES 3.0, Hauntheim, which gives us an early look at physics simulation and HDR lighting with OpenCL, and Enlighten, which renders silky smooth real-time illumination. You can see all of the demos in action after the break, and you can expect T604-equipped devices to make their debut beginning later this year -- ARM says it's working with eight manufacturers to get the licensed tech to market as early as Q3. Update: ARM has just confirmed to us that this reference device is running off an Exynos 5 Dual chip (up to 1.7GHz), which means the following video is also a heads-up on what Sammy has in store for us in its forthcoming devices.

  • NVIDIA announces second generation Maximus, now with Kepler power

    by James Trew
    08.07.2012

    It's been almost exactly a year since we first heard about NVIDIA's Maximus technology, and today the firm announced an update. The second generation of the platform is now supported by Kepler-based GPUs. This time around, computational tasks get ferried off to the SMX-based Tesla K20 GPU ($3,199 MSRP), leaving the 3,840 x 2,160 resolution-supporting Quadro K5000 GPU ($2,249) to tackle the graphical functions. Want to know when you can get your hands on the goods? Well, NVIDIA says starting in December, with the Quadro K5000 available as a standalone card in October. Head down to the PR for the full spin and forthcoming workstation / OEM details.
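
    The division of labor is the interesting bit: compute work gets pinned to one board while the other drives the displays. Here's a hedged sketch using pycuda's device enumeration -- the Tesla name-matching heuristic is ours, not part of NVIDIA's tooling:

    ```python
    # Sketch of the Maximus split: prefer a dedicated compute board (e.g. a
    # Tesla K20) so the Quadro stays free for graphics. Requires pycuda and
    # an NVIDIA GPU; the selection heuristic is our illustration only.
    import pycuda.driver as cuda

    cuda.init()
    devices = [cuda.Device(i) for i in range(cuda.Device.count())]

    # Prefer a Tesla-class board for compute; fall back to whatever is first.
    compute_dev = next((d for d in devices if "Tesla" in d.name()), devices[0])
    ctx = compute_dev.make_context()  # kernels launched now run on this device
    try:
        print(f"compute context on: {compute_dev.name()}")
    finally:
        ctx.pop()
    ```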

  • We're live from SIGGRAPH 2012 in Los Angeles!

    by Zach Honig
    08.07.2012

    Most of us experience the Los Angeles Convention Center during one of its most chaotic weeks of the year, when tens of thousands of gaming industry manufacturers, video game designers and consumers descend upon downtown LA for the annual E3 expo, booth-babe radar tweaked to 11. There's a hint of graphics prowess amid the halls this week, too, albeit on a vastly smaller scale, and with a heavy heap of civility. SIGGRAPH is a trade event through and through, with attendees demonstrating their latest tech, taking in a handful of seminars or hunting for networking opportunities, in search of employment and partnerships. It's often also a venue for product launches, which is what's brought us out, along with the usual bounty of kooky creations that serve to entertain and lighten the mood. As always, we'll be bringing you a little bit of everything over the next few days, letting you sample the best of SIGGRAPH from the comfort of your own device -- head over to our SIGGRAPH 2012 tag to follow along.

  • OpenGL ES 3.0 and OpenGL 4.3 squeeze textures to the limit, bring OpenVL along for the ride

    by Jon Fingas
    08.07.2012

    Mobile graphics are clearly setting the agenda at SIGGRAPH this year -- ARM's Mali T600-series parts have just been chased up by a new Khronos Group standard that will likely keep those future video cores well-fed. OpenGL ES 3.0 represents a big leap in textures, introducing "guaranteed support" for more advanced texture effects as well as a new version of ASTC compression that further shrinks texture footprints without a conspicuous visual hit. OpenVL is also coming to give augmented reality apps their own standard. Don't worry, desktop users still get some love through OpenGL 4.3: it adds the new ASTC tricks, new visual effects (think blur) and support for compute shaders without always needing to use OpenCL. All of the new standards promise a bright future in graphics for those living outside of Microsoft's Direct3D universe, although we'd advise being patient: there won't be a full OpenGL ES 3.0 testing suite for as long as six months, and any next-generation phones or tablets will still need the graphics hardware to match.
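
    For the curious, here's what OpenGL 4.3's compute-shaders-without-OpenCL addition looks like in practice -- a minimal sketch using the third-party moderngl bindings (our choice of wrapper, and it needs GL 4.3-capable hardware), doubling a buffer of floats on the GPU:

    ```python
    # Minimal OpenGL 4.3 compute shader via the moderngl bindings: no OpenCL
    # involved, just a GLSL kernel run against a shader storage buffer.
    import struct
    import moderngl

    ctx = moderngl.create_standalone_context(require=430)

    shader = ctx.compute_shader("""
    #version 430
    layout(local_size_x = 4) in;
    layout(std430, binding = 0) buffer Data { float values[]; };
    void main() {
        uint i = gl_GlobalInvocationID.x;
        values[i] *= 2.0;  // trivial per-element compute kernel
    }
    """)

    buf = ctx.buffer(struct.pack("4f", 1.0, 2.0, 3.0, 4.0))
    buf.bind_to_storage_buffer(0)
    shader.run(group_x=1)                   # one workgroup of 4 invocations
    print(struct.unpack("4f", buf.read()))  # -> (2.0, 4.0, 6.0, 8.0)
    ```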

  • AMD announces $4k FirePro W9000 GPU, entry-level FirePro A300 APU for CAD and graphics pros

    by Zach Honig
    08.07.2012

    After a brief tease earlier this summer, AMD just announced pricing and availability for its new market-leading FirePro W9000 graphics processing unit -- the $3,999 GPU is available now through AMD resellers, and is compatible with Supermicro SuperWorkstations. Joining that "world's most powerful" rig are the W8000, W7000 and W5000, which sell for $1,599, $899 and $599, respectively, and can each power six 30-inch 4K displays. Power-hungry pros will want to opt for the top-of-the-line model in order to take advantage of four TFLOPs single precision or one TFLOP double precision, along with 6 gigs of high-speed GDDR5 RAM. The W8000, on the other hand, offers 3.23 TFLOPs single precision and 806 GFLOPs double precision, followed by the W7000 with 2.4 TFLOPs / 152 GFLOPs, both with 4 gigs of RAM, along with the W5000, which packs 1.27 TFLOPs single and 80 GFLOPs double, with 2 GB of GDDR5 RAM. Design pros with slightly more modest demands may find the FirePro A300 APU more in line with their budgets -- we don't have precise pricing to share, since third parties will ship their own configs, but terms like "entry-level" and "mainstream" make it clear that you won't be drawing in more than a couple zeros in the checkbook to make your purchase. The integrated solution utilizes AMD's Turbo Core tech, supports Eyefinity and Discrete Compute Offload, and can power horizontal display arrays of up to 10,240 x 1,600 pixels. You'll find all the nitty-gritty in the pair of press releases after the break. Update: Our pals over at HotHardware have just pushed out a review of the W8000 and W9000, but found the results to be a bit of a letdown. Hit up their post for the full skinny.
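
    A quick bit of arithmetic on the quoted figures shows how the lineup is segmented by double- to single-precision throughput:

    ```python
    # Ratios computed from the TFLOPs figures quoted above: the compute-oriented
    # boards land near 1/4 DP-to-SP, the graphics-oriented ones near 1/16.
    specs = {  # card: (single-precision TFLOPs, double-precision TFLOPs)
        "W9000": (4.00, 1.000),
        "W8000": (3.23, 0.806),
        "W7000": (2.40, 0.152),
        "W5000": (1.27, 0.080),
    }
    for card, (sp, dp) in specs.items():
        print(f"{card}: DP/SP = 1/{sp / dp:.0f}")
    ```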

  • University of Tokyo builds a soap bubble 3D screen, guarantees your display stays squeaky clean (video)

    by Jon Fingas
    06.29.2012

    There are waterfall screens, but what if you'd like your display to be a little more... pristine? Researchers at the University of Tokyo have developed a display that hits soap bubbles with ultrasonic sound to change the surface. At a minimum, it can change how light glances off the soap film to produce the image. It gets truly creative when taking advantage of the soap's properties: a single screen is enough to alter the texture of a 2D image, and multiple screens in tandem can create what amounts to a slightly sticky hologram. As the soap is made out of sturdy colloids rather than the easily-burst mixture we all knew as kids, users won't have to worry about an overly touch-happy colleague popping a business presentation. There's a video preview of the technology after the jump; we're promised a closer look during the SIGGRAPH expo in August, but we don't yet know how many years it will take to find sudsy screens in the wild.