SIGGRAPH 2011


  • Cyclone Display exemplifies 'multi-colored expression,' totally heading to a nightclub near you (video)

    by Darren Murph
    08.12.2011

    Ever heard of Yoichi Ochiai? You have now. Hailing from Japan's University of Tsukuba, this whiz kid was on hand here at SIGGRAPH to showcase one of his latest creations -- and it just so happened to be one of the trippiest yet. The Cyclone Display was a demonstration focused on visual stimulation; a projector mounted above interacted with a plate of spinning disks. Underneath, a cadre of motors was controlled by a connected computer, and as the rotation and velocity changed, so did the perceived pixels and colors. The next step, according to Ochiai, would be to blow this up and shrink it down, mixing textures in with different lighting situations. With a little help, a drab nightclub could douse its walls in leopard print one night, or zebra fur another. Interactive clubbing never sounded so fun, eh? You know the drill -- gallery's below, video's a click beneath.
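    For the tinkerers wondering how spinning disks become pixels, here's a minimal sketch of the temporal color mixing presumably at play -- the sector colors, strobe rate and spin speeds are all invented for illustration, not pulled from Ochiai's rig:

    ```python
    import math

    # Three colored sectors painted on the spinning disk (invented values).
    SECTORS = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]

    def perceived_color(rpm, strobe_hz=60.0, duration_s=0.5, phase=0.0):
        """Average the sector colors sampled at each projector strobe."""
        revs_per_sec = rpm / 60.0
        n_strobes = int(duration_s * strobe_hz)
        samples = []
        for i in range(n_strobes):
            t = i / strobe_hz
            angle = (phase + 2 * math.pi * revs_per_sec * t) % (2 * math.pi)
            sector = int(angle / (2 * math.pi) * len(SECTORS))
            samples.append(SECTORS[sector])
        return tuple(sum(c[i] for c in samples) // len(samples) for i in range(3))

    # Change the motor speed and the eye integrates a different mix:
    for rpm in (600, 900, 1800):
        print(rpm, perceived_color(rpm))
    ```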

  • HAPMAP navigational system keeps your eyes on the prize, your hands on the route (video)

    by Darren Murph
    08.12.2011

    Alternative navigational systems aren't exactly new, but the concept shown here just might have wings. HAPMAP was one of a handful of projects selected for demonstration at SIGGRAPH's E-tech event, aiming to keep a human's eyes away from the map (and in turn, on whatever's in front of them) by developing a system that guides via haptics. With a handheld device capable of both navigating and vibrating, the interface conveys complex navigation cues that follow the curvature of a road or path -- it's far more detailed than the typical "go straight," and there's also opportunity here to provide handicapped individuals with a method for getting to previously inaccessible locales. By mimicking the operation and interface of sliding handrails (as well as using motion capture cameras), it's particularly useful for the visually impaired, who need these subtle cues to successfully navigate a winding path. Hop on past the break for a couple of demonstration vids.
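    For the curious, here's one plausible (and purely hypothetical) way a device like this could turn a route into left/right buzzes -- compare the user's heading against the bearing to the next waypoint and drive the motors proportionally; the real HAPMAP hardware surely does something more refined:

    ```python
    import math

    def bearing(p, q):
        """Angle from point p to point q, in radians."""
        return math.atan2(q[1] - p[1], q[0] - p[0])

    def haptic_cue(position, heading, waypoint, gain=1.0):
        """Return (left, right) vibration intensities in [0, 1]."""
        error = bearing(position, waypoint) - heading
        error = math.atan2(math.sin(error), math.cos(error))  # wrap to [-pi, pi]
        strength = min(1.0, abs(error) * gain)
        # Positive error means the path bends left, so buzz the left side.
        return (strength, 0.0) if error > 0 else (0.0, strength)

    # A gentle bend in the path yields a proportionally gentle cue:
    print(haptic_cue(position=(0, 0), heading=0.0, waypoint=(10, 2)))
    ```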

  • Vection Field controls traffic at SIGGRAPH, fictional cities from the future (video)

    by Darren Murph
    08.12.2011

    So, let's set the stage. You're walking down a semi-busy street in a semi-foreign city. You're curiously hanging close to the middle of the sidewalk. You bust out your smartphone and figure out that your so-called engagement just got "Complicated." Your gait turns irregular. You look up and spot what appears to be a local, eerily perturbed and somewhat flummoxed by your current position. You dodge left. So does he. You dodge right, knowing full well that it'll only complicate matters when he follows suit. Before long, you're tiptoeing around a stranger while a full-on traffic jam builds up behind you. You've just ruined the universe, and that's not doing anyone any good. The solution? The University of Electro-Communications' Vection Field, which homes in on large moving visual cues that "induce a sense of self-movement." Funny enough, the lenticular-lens pathway here at SIGGRAPH actually worked -- we never expected an optical illusion to solve such a monumental issue, but we'll take it. Vid's past the break, per usual.

  • Wrist sensor turns the back of your hand into a meaty haptic interface (video)

    by Amar Toor
    08.12.2011

    We're all intimately familiar with the backs of our hands, so why not use them as a haptic interface to control our gadgets? That's the idea behind the device pictured above -- a nifty little wrist sensor that turns your paw into a flesh-toned trackpad. Designed by Kei Nakatsuma, a PhD student at the University of Tokyo, this contraption employs infrared sensors to track a user's finger as it moves across the back of a hand. These movements are mirrored on a wristwatch-like display, thanks to seven IR detectors and a set of piezoelectric sensors, effectively turning any digit into an organic stylus or mouse. Nakatsuma, who unveiled his work at this week's SIGGRAPH, says his creation can't handle the more complicated pinching or rotating gestures you could perform on most smartphone touchscreens, and acknowledges that the screen can be difficult to read in direct sunlight. But the underlying technology could pave the way for similarly handy designs, while allowing users to interact with their gadgets without having to constantly glance at their screens, or go fishing in their pockets. Feel your way past the break to see a video of the device in action.
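    Nakatsuma hasn't spelled out the tracking math, but for a sense of how seven IR readings might become a cursor position, here's one purely illustrative approach -- an intensity-weighted centroid over the detector positions:

    ```python
    def finger_position(detector_xy, intensities):
        """Weighted-centroid estimate of the fingertip on the hand's back."""
        total = sum(intensities)
        if total == 0:
            return None  # no reflection means no finger in range
        x = sum(p[0] * w for p, w in zip(detector_xy, intensities)) / total
        y = sum(p[1] * w for p, w in zip(detector_xy, intensities)) / total
        return (x, y)

    # Seven detectors in a row along the wrist band (units invented),
    # with the strongest IR reflections coming back near the middle:
    detectors = [(i, 0.0) for i in range(7)]
    print(finger_position(detectors, [0, 1, 4, 9, 4, 1, 0]))  # -> (3.0, 0.0)
    ```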

  • Surround Haptics could bring force feedback to vests, coasters and gaming (video)

    by Darren Murph
    08.11.2011

    Haptics and gaming have gone hand in hand for centuries, it seems -- well before the Rumble Pak made itself an N64 staple, we vividly recall snapping up a vibration jumpsuit for our Sega Genesis. 'Course, it was on clearance for a reason. Ali Israr et al. were on hand here at SIGGRAPH's E-tech conference to demonstrate the next big leap in haptics, joining hands with Disney Research in order to showcase a buzzing game chair for use with Split/Second. The seat shown in the gallery (and video) below cost around $5,000 to concoct, with well over a dozen high-end coils tucked neatly into what looked to be a snazzy padding set for an otherwise uneventful seating apparatus. We sat down with members of the research team here in Vancouver, and while the gaming demo was certainly interesting, it's really just the tip of the proverbial iceberg. The outgoing engineers from Black Rock Studios helped the team wire stereo audio triggers to the sensors, with a left crash, right scrape and a head-on collision causing the internal coils to react accordingly. Admittedly, the demo worked well, but it didn't exactly feel comfortable. In other words -- we can't say we'd be first in line to pick one of these up for our living room.
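    To make that audio-to-coil wiring a bit more concrete, here's a hypothetical sketch of panning a crash event across columns of seat coils -- the grid size and falloff are our assumptions, not Disney Research's spec:

    ```python
    def coil_intensities(pan, n_cols=4, spread=1.5):
        """Map pan (-1.0 hard left .. 1.0 hard right) to per-column gains."""
        center = (pan + 1) / 2 * (n_cols - 1)  # column the event lands on
        return [max(0.0, 1.0 - abs(col - center) / spread)
                for col in range(n_cols)]

    print(coil_intensities(-1.0))  # left crash: left columns buzz hardest
    print(coil_intensities(0.0))   # head-on collision: centered rumble
    print(coil_intensities(1.0))   # right scrape: right columns take over
    ```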

  • Sony's Face-to-Avatar blimp soars through SIGGRAPH, melts the heart of Big Brother (video)

    by Darren Murph
    08.11.2011

    Telepresence, say hello to your future. Humans, say hello to the next generation of Chancellor Sutler. All jesting aside, there's no question that Big Brother came to mind when eyeing Sony Computer Science Laboratories' Face-to-Avatar concept at SIGGRAPH. For all intents and purposes, it's a motorized blimp with a front-facing camera, microphone, a built-in projector and a WiFi module. It's capable of hovering above crowds in order to showcase an image of what's below, or displaying an image of whatever's being streamed to its wireless apparatus. The folks we spoke to seemed to think that it was still a few years out from being in a marketable state, but we can think of a few governments who'd probably be down to buy in right now. Kidding. Ominous video (and static male figurehead) await you after the break.

  • MoleBot interactive gaming table hooks up with Kinect, puts Milton Bradley on watch (video)

    by Darren Murph
    08.11.2011

    Looking to spruce up that nondescript living room table? So are a smattering of folks from the Korea Advanced Institute of Science and Technology. At this week's SIGGRAPH E-tech event, a team from the institute dropped by to showcase the deadly cute MoleBot table. At its simplest, it's a clever tabletop game designed to entertain folks aged 3 to 103; at the other extreme, it's a radically new way of using Microsoft's Kinect to interact with something that could double as a place to set your supper. Improving on similar projects in the past, this shape-display method uses a two-dimensional translating cam (mole cam), 15,000 closely packed hexagonal pins that act as cam followers, and a layer of spandex between the mole cam and the pins to reduce friction. When we dropped by, the Kinect mode was disabled in favor of using an actual joystick to move the ground below. In theory, one could hover above the table and use hand gestures to move the "mole," shifting to and fro in order to pick up magnetic balls and eventually affix the "tail" onto the kitty. The folks we spoke with seemed to think that there's consumer promise here, as well as potential for daycares, arcades and other locales where entertaining young ones is a priority. Have a peek at a brief demonstration vid just after the break, and yes, you can bet we'll keep you abreast of the whole "on sale" situation.
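    If you're wondering how a translating cam becomes a roaming bump, here's a toy model of the idea -- each pin simply rides the cam's dome profile at its own offset. The dimensions and profile are our own assumptions, not KAIST's:

    ```python
    import math

    def pin_height(pin_xy, cam_xy, cam_radius=3.0, bump=2.0):
        """Height of one hexagonal pin given the mole cam's center."""
        d = math.dist(pin_xy, cam_xy)
        if d >= cam_radius:
            return 0.0  # pin rests flat, off the cam's dome
        # Smooth dome profile so neighboring pins ramp up gradually.
        return bump * math.cos(d / cam_radius * math.pi / 2)

    # Slide the cam (via joystick or Kinect gesture) and the "mole" moves:
    for cam_x in (0.0, 1.0, 2.0):
        print([round(pin_height((x, 0.0), (cam_x, 0.0)), 2) for x in range(-4, 5)])
    ```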

  • InteractiveTop brings tabletop gaming to SIGGRAPH, doubles as Inception token (video)

    by Darren Murph
    08.11.2011

    MoleBot a little too passive for you? Fret not, as a team from The University of Electro-Communications popped by this year's installment of SIGGRAPH in order to showcase something entirely more vicious. It's air hockey meets bumper cars, and the InteractiveTop demo was certainly one of the stranger ones we came across here in Vancouver. Put simply, it's a virtual game of spinning tops, where players use magnet-loaded controllers to shuffle tops across a board and into an opponent's top. There's an aural and haptic feedback mechanism to let you know when you've struck, and plenty of sensors loaded throughout to keep track of collisions, force and who's hitting who. Pore over the links below for more technobabble, or just head past the break for an in-action video.

  • Visualized: Objet's 3D printer breathes plastic life into Hollywood creatures, layer by layer

    by Darren Murph
    08.11.2011

    It ain't easy being plastic, you know? Objet -- the 3D printing house that aimed to replace your office's all-in-one Epson back in July -- brought a few of its snazziest pieces here to SIGGRAPH, and we popped by to have a gander. Targeting the animation-inspired crowd that showed up here in Vancouver, the company brought along some Hollywood examples of how its multi-material Objet260 Connex helped movie makers craft prototype creatures before they were inserted into the storyline. Thor's Destroyer and Avatar's Na'vi were both on hand, as well as the two critters shown above. The hothead on the right was crafted in around 18 hours (and subsequently painted), while the cool cat on the left was built in three fewer. Wildly enough, that fellow required no painting whatsoever; so long as you're cool with shades of grey, you can program your object to be colored from the outset. Oh, and as for his cost? Around $80 for the materials -- slightly more for the printer itself.

  • Researchers demo 3D face scanning breakthroughs at SIGGRAPH, Kinect crowd squarely targeted

    by Darren Murph
    08.10.2011

    Lookin' to get your Grown Nerd on? Look no further. We just sat through 1.5 hours of high-brow technobabble here at SIGGRAPH 2011, where a gaggle of gurus with IQs far, far higher than ours explained in detail what the future of 3D face scanning would hold. Scientists from ETH Zürich, Texas A&M, Technion-Israel Institute of Technology and Carnegie Mellon University, as well as a variety of folks from Microsoft Research and Disney Research labs, were on hand, with each subset revealing a slightly different technique for solving an all-too-similar problem: painfully accurate 3D face tracking. Haoda Huang et al. revealed a highly technical new method that combined marker-based motion capture with 3D scanning in an effort to overcome drift, while Thabo Beeler et al. took a drastically different approach. Those folks relied on a markerless system that used a well-lit, multi-camera setup to overcome occlusion, with anchor frames proving pivotal to the success of its capture abilities. J. Rafael Tena et al. developed "a method that not only translates the motions of actors into a three-dimensional face model, but also subdivides it into facial regions that enable animators to intuitively create the poses they need." Naturally, this one's most useful for animators and designers, but the first system detailed is obviously gunning to work on lower-cost devices -- Microsoft's Kinect was specifically mentioned, and it doesn't take a seasoned imagination to see how in-home facial scanning could lead to far more interactive games and augmented reality sessions. The full shebang can be grokked by diving into the links below, but we'd advise you to set aside a few hours (and rest up beforehand).

  • Visualized: 3D3 Solutions scans our face in two seconds flat

    by Darren Murph
    08.10.2011

    See that bloke? That's Darren Murph. Well, a digital representation of the human version, anyway. That image was captured in two painless seconds at the hands of 3D3 Solutions, which was on hand here at SIGGRAPH to demonstrate its newest FlexScan setups. The rig that snapped our face rings up at around $10,000, and relies on a Canon DSLR (strictly for capturing textures), a projector and a secondary camera. As you've likely picked up on, this is hardly designed for average DIYers, but these solutions are also far more detailed and flexible than using Microsoft's Kinect. We're told that the company recently started to support Nikon cameras as well, and for those who'd prefer to use their existing cameras / PJs, a hobbyist-centric software package will allow you to do just that. The only problem? Figuring out where the $2,700 (for the software) is going to come from. Head on past the break for a demonstration vid, or peruse the gallery below if you're feeling extra creepy.
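    3D3 doesn't detail its algorithm, but a projector-plus-camera rig like this typically leans on structured-light triangulation: the projector tags each scene point with a known pattern column, and depth falls out of the geometry. A heavily simplified, rectified-setup sketch (all numbers invented):

    ```python
    def depth_from_disparity(cam_px, proj_px, baseline_m=0.3, focal_px=1400.0):
        """Triangulate depth for one point in a rectified camera/projector pair.

        cam_px / proj_px: horizontal coordinate of the same scene point as
        seen by the camera and as encoded by the projected pattern.
        """
        disparity = cam_px - proj_px
        if disparity <= 0:
            return float("inf")  # point at (or beyond) infinity
        return baseline_m * focal_px / disparity

    print(depth_from_disparity(850.0, 430.0))  # -> 1.0 (meters)
    print(depth_from_disparity(852.0, 430.0))  # a couple pixels = a few mm
    ```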

  • Organic Motion's OpenStage motion capture system grabs 200FPS, no backdrop required (video)

    by Darren Murph
    08.10.2011

    At just under $40,000 for an eight-camera setup, we're hardly in hobbyist territory here, but Organic Motion's new OpenStage 2.0 motion capture system could certainly make do in the average basement. Unlike a few competing solutions shown here at SIGGRAPH, this one actually has no backdrop mandate, and better still, doesn't require you to latch a single sensor onto your subject. The magic lies within the cameras hung above -- kits are sold that contain between eight and 24 cameras, and even the latter can be handled with a single workstation. Multi-person tracking ain't no thang, and while you aren't capturing HD footage here, the high-speed VGA capability enables up to 200 frames per second to be logged. Not surprisingly, the company's aiming this squarely at the animation and medical realms, and should start shipping bundles as early as next month. Looking to take down Pixar? You'll need a lot more than 40 large, but perhaps the video after the break will give you a bit of inspiration.

  • Robot skin captures super detailed 3D surface images

    by Lydia Leavitt
    08.10.2011

    Remember those awesome pin art toys where you could press your hand (or face) into the pins to leave a lasting impression? Researchers at MIT have taken the idea one (or two) steps further with "GelSight," a hunk of synthetic rubber that creates a detailed computer-visualized image of whatever surface you press it against. It works like so: push the reflective side of the gummy against an object (they chose a chicken feather and a $20 bill) and the camera on the other end will capture a 3D image of the microscopic surface structure. Originally designed as robot "skin," researchers realized the tool could be used in applications from criminal forensics (think bullets and fingerprints) to dermatology. The Coke can-sized machine is so sensitive, it can capture surface subtleties as small as one by two micrometers in size -- finally solving the mystery of who stole the cookies from the cookie jar. (Hint: we know it was you, Velvet Sledgehammer).
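    The published GelSight approach is photometric-stereo-flavored: light the gel's reflective skin from several known directions, then invert the per-pixel intensities for a surface normal. A three-light toy version (NumPy assumed, lighting directions invented):

    ```python
    import numpy as np

    # Rows are unit lighting directions (x, y, z), 120 degrees apart.
    L = np.array([[0.5, 0.0, 0.866],
                  [-0.25, 0.433, 0.866],
                  [-0.25, -0.433, 0.866]])

    def surface_normal(intensities):
        """Solve I = L @ n (Lambertian model) for the unit surface normal."""
        g = np.linalg.solve(L, np.asarray(intensities, dtype=float))
        return g / np.linalg.norm(g)

    # A flat patch reflects all three lights equally: normal points straight up.
    print(surface_normal([0.866, 0.866, 0.866]))  # ~ [0, 0, 1]
    ```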

  • Perceptive Pixel shows world's largest projected capacitive display at SIGGRAPH, we go hands-on (video)

    by Darren Murph
    08.09.2011

    Perceptive Pixel wasn't kidding around when it launched the planet's biggest projected capacitive display here at SIGGRAPH -- all 82 inches of it were on display, and naturally, we stopped by to give it a look. While 82-inch panels aren't anything new, this one's particularly special. You see, the company actually procures the panels from Samsung, and then it rips the guts out while bonding its own network of sensors directly to it; most large-screen touch devices simply pop a touch layer on top of whatever TV shows up in the labs, but this integrated approach takes sensitivity to a whole 'nother level. For those unfamiliar with the term 'projected capacitive,' we're surmising that it's actually far less foreign than you think -- it's a technology used in a handful of smartphones, from Samsung's Moment to Apple's iPhone. 3M was also showing off a projected capacitive tech preview back at CES, and after using it here on the show floor, there's no question that it's the future for larger-screen devices. To quote CEO Jeff Han: "once consumers get a taste of this on the mobile front, they start demanding it elsewhere."
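    For the unfamiliar, projected capacitive sensing measures the mutual capacitance at every row/column crossing of an electrode grid; a fingertip siphons field away from a crossing and registers as a local dip. A toy baseline-subtracted scan (all values invented):

    ```python
    def find_touches(baseline, frame, threshold=5):
        """Return (row, col) of crossings whose capacitance dropped enough."""
        touches = []
        for r, (brow, frow) in enumerate(zip(baseline, frame)):
            for c, (b, f) in enumerate(zip(brow, frow)):
                if b - f >= threshold:  # a finger lowers mutual capacitance
                    touches.append((r, c))
        return touches

    baseline = [[100] * 4 for _ in range(3)]
    frame = [[100, 100, 100, 100],
             [100, 92, 100, 100],   # fingertip parked over crossing (1, 1)
             [100, 100, 100, 100]]
    print(find_touches(baseline, frame))  # -> [(1, 1)]
    ```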

  • NVIDIA, Fusion-io and HP drive a dozen 1080p streams on four displays at SIGGRAPH (video)

    by Darren Murph
    08.09.2011

    A dozen uncompressed 1080p video feeds, simultaneously running off a single workstation. Yep, you're looking at it. NVIDIA's showcase piece here at SIGGRAPH was undoubtedly this wall -- a monster that trumps even Intel's CES wall in terms of underlying horsepower. A relatively stock HP Z800 workstation was loaded with the NVIDIA QuadroPlex 7000 Visual Computing System (that's four GPUs, for those counting) in order to push four HD panels. A pair of Fusion-io's ioDrive Duos were pushing a total of three gigabytes per second, enabling all 12 of the feeds to cycle through with nary a hint of lag. We're still a few years out from this being affordable enough for the common Earthling, but who says you need to wait that long to get a taste? Vid's after the break, hombre.
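    That three-gigabytes-per-second figure holds up on the back of an envelope, assuming 32-bit pixels and 30fps sources (our assumptions, not NVIDIA's published spec):

    ```python
    width, height = 1920, 1080   # one uncompressed 1080p stream
    bytes_per_pixel = 4          # assuming 32-bit RGBA
    fps = 30                     # assuming 30fps source footage
    streams = 12

    total = width * height * bytes_per_pixel * fps * streams
    print(f"{total / 1e9:.2f} GB/s")  # ~2.99 GB/s, right in line with the demo
    ```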

  • Microsoft's KinectFusion research project offers real-time 3D reconstruction, wild AR possibilities

    by Darren Murph
    08.09.2011

    It's a little shocking to think about the impact that Microsoft's Kinect camera has had on the gaming industry at large, let alone the 3D modeling industry. Here at SIGGRAPH 2011, we attended a KinectFusion research talk hosted by Microsoft, where a fascinating new look at real-time 3D reconstruction was detailed. To better appreciate what's happening here, we'd actually encourage you to hop back and have a gander at our hands-on with PrimeSense's raw motion sensing hardware from GDC 2010 -- for those who've forgotten, that very hardware was finally outed as the guts behind what consumers simply know as "Kinect." The breakthrough wasn't in how it allowed gamers to control common software titles sans a joystick -- the breakthrough was the price. The Kinect took 3D sensing to the mainstream, and moreover, allowed researchers to pick up a commodity product and go absolutely nuts. Turns out, that's precisely what a smattering of highly intelligent blokes in the UK have done, and they've built a new method for reconstructing 3D scenes (read: real-life) in real-time by using a simple Xbox 360 peripheral. The actual technobabble ran deep -- not shocking given the academic nature of the conference -- but the demos shown were nothing short of jaw-dropping. There's no question that this methodology could be used to spark the next generation of gaming interaction and augmented reality, taking a user's surroundings and making them a live part of the experience. Moreover, game design could be significantly impacted, with live scenes able to be acted out and stored in real-time rather than having to build something frame by frame within an application. According to the presenter, the tech that's been created here can "extract surface geometry in real-time," right down to the millimeter level. Of course, the Kinect's camera and abilities are relatively limited when it comes to resolution; you won't be building 1080p scenes with a $150 camera, but as CPUs and GPUs become more powerful, there's nothing stopping this from scaling with the future. Have a peek at the links below if you're interested in diving deeper -- don't be shocked if you can't find the exit, though.
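    The heart of the published KinectFusion method is a voxel grid holding a truncated signed distance function (TSDF): every depth frame folds into a running per-voxel average of "how far is the nearest surface along the camera ray." A single-voxel sketch with the geometry drastically simplified:

    ```python
    def update_voxel(tsdf, weight, voxel_depth, measured_depth, trunc=0.03):
        """Fold one depth measurement (meters) into a voxel's TSDF average."""
        sdf = measured_depth - voxel_depth  # positive: voxel sits in front
        if sdf < -trunc:
            return tsdf, weight             # far behind the surface: skip
        d = min(1.0, sdf / trunc)           # truncate and normalize
        new_weight = weight + 1
        return (tsdf * weight + d) / new_weight, new_weight

    # Noisy Kinect readings of a surface ~1.00m away, fused into a voxel
    # placed 0.99m along the ray -- the estimate steadies as frames arrive:
    tsdf, w = 0.0, 0
    for depth in (1.002, 0.998, 1.005, 0.995):
        tsdf, w = update_voxel(tsdf, w, voxel_depth=0.99, measured_depth=depth)
    print(round(tsdf, 3), w)  # ~0.333 after 4 frames: surface just beyond
    ```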

  • Perceptive Pixel unveils an 82-inch multi-touch LCD, TV news anchors overcome by giddy hands

    by Joe Pollicino
    08.09.2011

    Perceptive Pixel has been no stranger to massive multi-touch screens ever since it got over being so Frustrated. At this year's SIGGRAPH the company is showing off a whopping 82-inch projected capacitive LCD -- and you thought MondoPad was huge. Apparently, the "slim" 6-inch-deep, optically bonded display is "the world's largest" of its type, although Perceptive does make an 88-inch DLP model if you need a bit more real estate. On-screen content is displayed in 1080p HD resolution at 120Hz, and with unlimited multi-touch and a response time of less than 1ms, it's ready for all the situations Wolf Blitzer's digits can handle. We'll hopefully be checking it out on the show floor, but for now you'll find more details past the break.

  • NVIDIA's Project Maximus takes multi-GPU mainstream, 'Virtual Graphics' takes it to the cloud

    by Darren Murph
    08.08.2011

    NVIDIA just wrapped up an intimate press briefing here at SIGGRAPH 2011, where -- amongst other things -- it officially took the wraps off two major initiatives. Project Maximus and Virtual Graphics are the two main topics of conversation here, and while both are obviously targeting working professionals at the moment, there's no question that a trickle-down effect is already on the company's mind. With Maximus, the outfit plans to stop recommending bigger GPUs to pros, and start recommending "a light Quadro GPU and as large a Tesla as you can get in the system." The overriding goal here is to make multi-GPU technology entirely more accessible; to date, it hasn't exactly been easy to get a finely tuned multi-GPU setup to the masses, but it sounds like a good deal of future flexibility (it'll be "nearly infinitely scalable") aims to change that. Just imagine: dynamic coupling and decoupling of GPUs depending on user load, at a far more detailed level within the application... Update: Regarding that Tesla bit, NVIDIA clarified with this: "What we're saying is for applications that are light on graphics / don't place a heavy demand on graphics, but more so a heavy demand on computational tasks, users will have an option to choose an entry- or mid-level Quadro card for graphics functions, such as the Quadro 600 or Quadro 2000. For certain applications, better performance is achieved by adding a Tesla companion processor, as opposed to scaling up the primary Quadro graphics. Users still require as much graphics as possible."
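    As a sketch of what that split might look like from an application's side, here's device discovery via NVIDIA's NVML bindings (the real pynvml package); the Quadro-renders / Tesla-computes policy is our own invention, not shipping Maximus code:

    ```python
    import pynvml

    pynvml.nvmlInit()
    render_gpus, compute_gpus = [], []
    for i in range(pynvml.nvmlDeviceGetCount()):
        name = pynvml.nvmlDeviceGetName(pynvml.nvmlDeviceGetHandleByIndex(i))
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        (compute_gpus if "Tesla" in name else render_gpus).append((i, name))

    print("render on:", render_gpus)    # e.g. a Quadro 600 or Quadro 2000
    print("compute on:", compute_gpus)  # e.g. the Tesla companion processor
    pynvml.nvmlShutdown()
    ```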