3D Scanning

Latest

  • The Smithsonian is 3D-scanning its collection for future generations

    by Brian Heater, 05.14.2013

    The Smithsonian has been experimenting with 3D scanning for some time now, using tools like laser arm scanners to map models of whale fossils and other ancient artifacts. Now the museum is utilizing the technology to preserve its collection for posterity. Its "laser cowboys" Vince Rossi and Adam Metallo are working full-time to record items for future generations, as part of an extensive effort to digitize 14 million prioritized objects (a list that also includes artwork and lab specimens). After the break, check out a video of the team working to preserve a digital copy of the Philadelphia gunboat, America's oldest fighting vessel.

  • Disney researchers can now digitally shave your face, clone it for animatronics (video)

    by Sean Buckley, 08.12.2012

    The minds at Disney Research aren't only interested in tracking your face -- they want to map, shave and clone it, too. Through a pair of research projects, Walt's proteges have managed to create systems not only for mapping, digitally reconstructing and removing facial hair, but also for creating lifelike synthetic replicas of human faces for use in animatronics.

    Let's start with the beards, shall we? Facial hair is a big part of a person's physical identity -- a quick shave can render a close friend unrecognizable -- but modern face-capture systems aren't really optimized for the stuff. Disney researchers attempted to address that issue by creating an algorithm that detects facial hair, reconstructs it in 3D and uses the information it gathers to suss out the shape of the skin underneath. This produces a reconstruction not only of the skin surface, but also of the subject's individual hairs, meaning the final product can be viewed with or without a clean shave.

    Another Disney team is also taking a careful look at the human face, but is working on more tangible reconstructions -- specifically for use on audio-animatronic robots. The team behind the Physical Face Cloning project hopes to automate part of the animatronics-building process to speed up the task of replicating a human face for future Disney robots. This complicated process involves capturing a subject's face under a variety of conditions and using that data to optimize a composition of synthetic skin to best match the original. Fully bearded animatronic clones are still a ways off, of course, but isn't it comforting to know that Disney could one day accurately replicate your visage in Walt Disney World for posterity? Dive into the specifics of the research at the source links below, or read on for a video summary of the basics.
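    The core trick in the beard work is that hair strands sit in front of the skin, so the underlying surface can be estimated as the far envelope of the measured geometry. As a loose illustration of that idea (not the published Disney algorithm -- the function name, window size and percentile heuristic here are our own invention), this is how one might approximate the skin surface beneath hair in a per-pixel depth map:

    ```python
    import numpy as np

    def estimate_skin_surface(depth, window=5, percentile=90):
        """Illustrative sketch: approximate the skin surface beneath hair.

        Hair strands sit in front of (closer than) the skin, so within each
        local window we take a far-depth percentile as the skin estimate.
        This is a toy stand-in for the idea, not Disney's actual method.
        """
        h, w = depth.shape
        pad = window // 2
        padded = np.pad(depth, pad, mode="edge")
        out = np.empty_like(depth)
        for y in range(h):
            for x in range(w):
                patch = padded[y:y + window, x:x + window]
                out[y, x] = np.percentile(patch, percentile)
        return out
    ```

    Because hair pixels read as closer to the camera (smaller depth), a high far-depth percentile over each window skips past them and lands on skin; the real research goes much further, reconstructing individual 3D hair fibers and fitting the surface beneath them.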

  • HP TopShot LaserJet Pro M275 scans 3D objects but only prints in 2D (video)

    by Daniel Cooper, 09.11.2011

    For some reason, HP thinks your small business really needs the ability to scan 3D objects -- which is why it is releasing the TopShot LaserJet Pro. "TopShot" is the fancy name for the all-in-one's overhanging arm with a high-resolution camera, which combines six images (three with flashes from different angles, and three in ambient light at different exposure levels) to mimic a studio-like product shot. What's more, thanks to the Biz Card app, the TopShot can scan and import multiple business cards simultaneously. Also included are Google Documents integration and cloud apps, as well as the usual ePrint and AirPrint features, all of which you can run without a computer via the 3.5-inch touchscreen. HP isn't talking about pricing or availability, but you can see a walkthrough of the TopShot after the break.
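    The interesting bit of that camera arm is the six-image combination step. HP hasn't published its pipeline, so the weighting scheme below is a generic exposure-fusion heuristic of our own, not HP's -- but it sketches one way to blend several differently exposed captures so each pixel comes mostly from the shot where it is best exposed:

    ```python
    import numpy as np

    def fuse_exposures(images, sigma=0.2):
        """Hypothetical sketch of multi-exposure blending (not HP's method).

        Each pixel of each capture gets a Gaussian "well-exposedness" weight
        centered on mid-grey (0.5), so blown highlights and crushed shadows
        from any single shot contribute little. Inputs are float arrays
        with values in [0, 1].
        """
        stack = np.stack([np.asarray(im, dtype=float) for im in images])
        weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
        weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
        return (weights * stack).sum(axis=0)
    ```

    A production pipeline would also have to register the flash and ambient shots against each other before blending, since the arm takes them sequentially.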

  • Researchers demo 3D face scanning breakthroughs at SIGGRAPH, Kinect crowd squarely targeted

    by Darren Murph, 08.10.2011

    Lookin' to get your Grown Nerd on? Look no further. We just sat through 1.5 hours of high-brow technobabble here at SIGGRAPH 2011, where a gaggle of gurus with IQs far, far higher than ours explained in detail what the future of 3D face scanning holds. Scientists from ETH Zürich, Texas A&M, Technion-Israel Institute of Technology and Carnegie Mellon University, as well as a variety of folks from Microsoft Research and Disney Research labs, were on hand, with each subset revealing a slightly different technique for solving an all-too-similar problem: painfully accurate 3D face tracking.

    Haoda Huang et al. revealed a highly technical new method that combines marker-based motion capture with 3D scanning in an effort to overcome drift, while Thabo Beeler et al. took a drastically different approach. Those folks relied on a markerless system that used a well-lit, multi-camera rig to overcome occlusion, with anchor frames acting as staples in the success of its capture abilities. J. Rafael Tena et al. developed "a method that not only translates the motions of actors into a three-dimensional face model, but also subdivides it into facial regions that enable animators to intuitively create the poses they need."

    Naturally, that last one's most useful for animators and designers, but the first system detailed is obviously gunning to work on lower-cost devices -- Microsoft's Kinect was specifically mentioned, and it doesn't take a seasoned imagination to see how in-home facial scanning could lead to far more interactive games and augmented reality sessions. The full shebang can be grokked by diving into the links below, but we'd advise you to set aside a few hours (and rest up beforehand).
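    A building block that marker-based trackers like these lean on is rigidly aligning a set of tracked 3D points between frames, so that head motion can be factored out before expression change is measured. Here's a minimal sketch of that alignment step using the standard Kabsch/SVD solution -- illustrative only, not the method from any of the papers above:

    ```python
    import numpy as np

    def rigid_align(src, dst):
        """Kabsch-style rigid alignment: find rotation R and translation t
        minimizing ||R @ src_i + t - dst_i||^2 over corresponding points.

        A generic building block for marker-based tracking; purely
        illustrative, not taken from any of the SIGGRAPH 2011 papers.
        """
        src_c = src - src.mean(axis=0)          # center both point sets
        dst_c = dst - dst.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        # Correct for a possible reflection so R is a proper rotation
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        D = np.diag([1.0] * (src.shape[1] - 1) + [d])
        R = Vt.T @ D @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t
    ```

    With noiseless correspondences this recovers the pose exactly; real marker data is noisy, which is where drift creeps in and why Huang et al. anchor the tracker with 3D scans.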

  • Creator of ProFORMA 3D scanning system talks details, availability

    by Donald Melanson, 11.27.2009

    Still a bit curious how the ProFORMA system developed at Cambridge University can turn any old webcam into a fairly advanced 3D scanner? Then settle in for a few minutes, as the researcher behind the project, Qi Pan, has taken a bit of time to chat with the Shapeways blog about how the system came to fruition and its potential availability to the public. Interestingly, he actually started out trying to model outdoor scenes, but moved to smaller objects after discovering that the processing power required was beyond his reach. That led to about a year and a half of work on the current system, which works in two stages: the first being a tracker that works out the position and orientation of the object relative to the camera, and the second being the reconstruction stage, which seems to be as effortless to use as it is complicated to explain. Perhaps the best news, however, is that Qi says he soon plans to release a Linux-based demo to the general public, and a Windows version shortly thereafter.
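    That first stage hinges on projecting the partial 3D model into the image and comparing the result against what the webcam actually sees, then adjusting the pose estimate until they agree. A minimal pinhole-projection helper in that spirit (parameter names and structure are illustrative, not ProFORMA's actual code):

    ```python
    import numpy as np

    def project_points(points, R, t, f, cx, cy):
        """Illustrative pinhole projection for a model-based pose tracker.

        Transforms 3D model points into camera coordinates via rotation R
        and translation t, then projects them with focal length f and
        principal point (cx, cy). A tracker would compare these projected
        pixels against detected image features to refine R and t.
        """
        cam = points @ R.T + t              # world -> camera coordinates
        x = f * cam[:, 0] / cam[:, 2] + cx  # perspective divide by depth
        y = f * cam[:, 1] / cam[:, 2] + cy
        return np.stack([x, y], axis=1)
    ```

    The reconstruction stage then triangulates new surface geometry from frames whose poses the tracker has pinned down -- which is why the tracker has to run fast and stay locked on.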