3dmodeling

Latest

  • Microsoft

    Microsoft upgrades Paint 3D's drawing and magic select tools

    by 
    Rob LeFebvre
    07.12.2017

    Microsoft's Paint 3D was introduced as part of the Windows 10 Creators Update last October. The company wants to make 3D modeling as easy and accessible as using a 2D drawing program. The free Windows 10 app gives you the power to create, share and print anything you can think of in three dimensions. Now, Microsoft has rolled out two new updates for Paint 3D, available now, that should improve the 3D modeling experience.

  • Chenglei Wu, Derek Bradley et al.

    Disney can digitally recreate your teeth

    by 
    Jon Fingas
    12.05.2016

    Digital models of humans can be uncannily accurate these days, but there's at least one area where they fall short: teeth. Unless you're willing to scan the inside of someone's mouth, you aren't going to get a very faithful representation of someone's pearly whites. Disney Research and ETH Zurich, however, have a far easier solution. They've just developed a technique to digitally recreate teeth beyond the gum line using little more than source data and everyday imagery. The team used 86 3D scans to create a model for an "average" set of teeth, and wrote an algorithm that adapts that model based on what it sees in the contours of teeth in photos and videos.

  • Smithsonian/Autodesk

    Explore a 3D scan of the Apollo 11 capsule

    by 
    Steve Dent
    07.21.2016

    It's been 47 years since NASA first put a man on the moon and you can now get an idea of what astronauts Buzz Aldrin, Neil Armstrong and Michael Collins experienced. The Smithsonian Institution, working with Autodesk, has created a high-resolution 3D scan of "Columbia," the Apollo 11 command module that carried the astronauts to the moon. Using the online viewer (or downloading the virtual reality or 3D print files) you can visit the hidden corners of the module in much more detail than in person at the museum.

  • Smart 3D modeling lets you mess with faces in videos

    by 
    Jon Fingas
    03.21.2016

    Have you ever wanted to mess with a video by making its cast say things they never would on camera? You might get that chance. Researchers have built a face detection system that lets you impose your facial expressions on people in videos. The software uses an off-the-shelf webcam to create a 3D model of your face in real time, and distorts it to fit the facial details in the target footage. The result, as you'll see below, is eerily authentic-looking: you can have a dead-serious Vladimir Putin make funny faces, or Donald Trump blab when he'd otherwise stay silent.

  • Doctors will use 3D modeling ahead of your next sinus surgery

    by 
    Andrew Tarantola
    02.15.2016

    Given the sinuses' proximity to your eyes and brain -- not to mention the area's super-sensitive nature -- a single slip of a surgeon's scalpel can have debilitating and permanent repercussions. That's why researchers at the Ohio State University Wexner Medical Center have developed a 3D-modeling technique that maps out each patient's sinus cavity prior to their surgery. By doing so, doctors will be able to practice the upcoming procedure as well as see exactly what sort of effect it will have on the patient.

  • The new Unreal Engine will bring eerily realistic skin to your games

    by 
    Jon Fingas
    10.06.2014

    It hasn't been hard to produce realistic-looking skin in computer-generated movies, but it's much harder to do that in the context of a game running live on your console or PC. That trip to the uncanny valley is going to be much easier in the near future, though, thanks to the impending arrival of Unreal Engine 4.5. The gaming framework adds subsurface light scattering effects that give digital skin a more natural look. Instead of the harsh visuals you normally get (see the pale, excessively shadowed face at left), you'll see softer, decidedly fleshier surfaces (middle and right). The scattering should also help out with leaves, candle wax and other materials that are rarely drawn well in your favorite action games.
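    "Wrap lighting" is one common, cheap approximation used in real-time rendering to mimic subsurface scattering by letting light bleed past the shadow edge. The sketch below illustrates that general idea only; it is not Unreal Engine's actual shading code.

```python
# Toy "wrap lighting" diffuse term, a common cheap stand-in for
# subsurface light scattering in real-time skin shading.
# Illustrative sketch, not Unreal Engine's actual shader.

def wrap_diffuse(n_dot_l: float, wrap: float) -> float:
    """Diffuse intensity with light 'wrapped' around the terminator.

    n_dot_l: dot product of surface normal and light direction (-1..1)
    wrap:    0.0 gives standard Lambertian shading; larger values let
             light bleed past the shadow edge, softening the look.
    """
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))

# At the terminator (n_dot_l == 0) plain Lambert is pitch black,
# while wrapped shading keeps some light, mimicking scattering.
print(wrap_diffuse(0.0, 0.0))  # 0.0
print(wrap_diffuse(0.0, 0.5))  # ~0.333
```

    The softer falloff near the shadow boundary is what makes skin, leaves and wax read as translucent rather than plastic.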

  • Lynx A 3D point-and-shoot camera/tablet does motion capture and 3D modeling, we go hands-on

    by 
    Michael Gorman
    04.17.2013

    Earlier this year, a group of enterprising students from the University of Texas unveiled the Lynx A 3D camera and asked for money to fund its construction on Kickstarter. Since then, they've soared past their funding goal of $50,000, and are getting ready to ship out their first set of cameras. Today at DEMO Mobile SF, we finally got to see a prototype unit for ourselves and watch it scan someone's head in real-time. For the uninitiated, the Lynx A is billed as a point-and-shoot 3D camera that uses Kinect-esque hardware to obtain depth mapping and imaging info from your surroundings. Using GPU computing power and some custom code, it turns that data into 3D scene and object models or motion capture, and it displays the finished models on its 14-inch screen a minute or two after it's finished recording -- all for $1,799. The Lynx A we witnessed working in person today was a prototype unit, so fit and finish were far from being retail ready, as wide gaps and exposed screws abounded. Lynx assured us that the units going out to its backers will not only have a more polished appearance, but also be six times more accurate and 30 percent smaller due to newer hardware components. Despite the prototype's rough appearance, the modeling process went off without a hitch. It was able to scan 2/3 of a human head in about a minute and within a couple minutes more it was displaying a 3D model ready to be manipulated and printed out by a Replicator or a Form 1. Don't believe us? See for yourself in the video after the break.

  • Qumarion 3D modeling mannequin coming soon for $750, still won't play with your kid (video)

    by 
    Jon Fingas
    05.08.2012

    Trying to get convincing, natural poses out of 3D models can be tricky, so it's a relief that two Japanese universities' joint ventures, the University of Electro-Communications' ViVienne and the University of Tsukuba's SoftEther, are close to wrapping up work on their posable mannequin. Now called Qumarion, the model formerly known as QUMA uses 32 sensors across 16 body joints to translate the humanoid statue's pose to the computer screen simply by bending limbs, much like you would the legion of action figures you had when you were eight. Neither you nor your kids will be using Qumarion to storm Fort Barbie anytime soon, but the 120 frames per second sample rate over USB does mean that poses are mirrored in your modeling tools almost instantly. You also won't have much longer to wait to buy one for your fledgling anime production: the mannequin and custom modeling software from Celsys should be bundled together sometime within the summer for a comparatively frugal $750.

  • Google sells SketchUp to Trimble Navigation for undisclosed sum

    by 
    James Trew
    04.26.2012

    While we're probably more accustomed to Google buying assets than selling them 'round here, every now and again the search giant does shed some skin. El Goog's 3D modeling platform, SketchUp, is to be sold to Trimble Navigation for an undisclosed sum, Reuters reports. Trimble says it's hoping to use the acquisition to enhance its office-to-field platform. The two firms will also work together to develop SketchUp's online repository of 3D models for designers to use, share and contribute to. SketchUp's blog reassures users that the free version won't change under the move. The deal should get the final nod in Q2 this year.

  • Inhabitat's Week in Green: ten earth activities, transnatural stools and wood ash bike frames

    by 
    Inhabitat
    04.22.2012

    Each week our friends at Inhabitat recap the week's most interesting green developments and clean tech news for us -- it's the Week in Green. Happy Earth Day! In honor of Earth Week, this week we took a moment to think about the origins of this now-global event, exploring why we need Earth Day and how our society can possibly tackle the 7 biggest threats to our environment. If you haven't yet made plans for Earth Day, make sure to take a look at our list of 10 Earth Day activities. One of the major themes of Earth Week this year was lighting, as green lighting innovations ranging from the useful to the absurd made it onto Inhabitat's radar screen. On the more practical end of the spectrum, we reviewed the SUNNAN, Ikea's solar-powered desk lamp, and although we found it to be a bit dim, it actually outperformed its expected charge time. On the lighter side, Randy Sarafan, the same guy who designed a chair that tweets his own farts (seriously), unveiled a lamp that shuts off whenever you shut your eyes. The downside: In order for it to work you have to attach electrodes to your face, which are plugged directly into the wall. Thanks, but we'll pass. And for those who prefer regular, old-fashioned lights, Philips launched its much-anticipated L-Prize winning 10-watt LED bulb on Earth Day. At $60 a pop, you might have to take out a second mortgage to replace every bulb in your home, but you'll recoup that money on your energy bill, and Philips also announced some rebates to ease the pain.

  • Hands-on with Arqball Spin, the app that lets you create interactive 3D models

    by 
    Michael Gorman
    04.19.2012

    Sometimes, standard two dimensional photos, even those taken by a 41-megapixel sensor, simply aren't enough to accurately depict a three dimensional object. Enter Arqball Spin, a free app that lets anyone with an iOS device create high-quality 3D models of whatever they like. Using the iPhone's camera, the app takes a series of images and uses some software black magic to create the finished product. The model, or "spin", can be cropped and adjusted (brightness, saturation and contrast) like a regular photograph, plus users can create custom annotations to identify or comment on specific parts of the "spin" as well. Viewers can then rotate the model 360 degrees and zoom in on any part that piques their interest. While it's currently an Apple-centric affair, support for DSLRs and other hi-res cameras (by uploading videos to the company's website for processing) and other mobile platforms is in the pipeline. The app works best if the object is situated on Arqball's stage, which rotates at an optimal three RPM -- the stage isn't available yet, but the company's going the Kickstarter route to get the capital needed to start manufacturing. Those who pitch in now can grab a stage for $60, and it'll cost $20 more if you want to wait until it's on sale. Of course, the app still functions if you want to hold your iPhone or iPad and walk around your subject, but you won't get near the quality result that you can when using the stage. Because the "spins" are hosted on Arqball's servers, they can easily be embedded on any website via HTML. By making photo-realistic 3D modeling so easy and accessible, Arqball sees this technology as a perfect fit for online retailers, educators, and, ahem, even gadget reviewers. While the app holds obvious commercial appeal, the company's not counting out casual users, and hopes to see a future filled with user-created 3D content. 
    We got to see the app in action, and walked away thoroughly impressed with both the speed of the app and the detailed models it produces -- but you don't have to take our word for it, see a sample spin and our hands-on video after the break.

  • Paint3D app promises to let you sketch and print 3D models straight from Android

    by 
    Donald Melanson
    02.29.2012

    3D printing may still have quite a ways to go before it becomes as ubiquitous as traditional printing, but there are plenty of developers out there working to make that happen. One such example comes out of the House 4 Hack group in Johannesburg, who have been working on an Android app called Paint3D that promises to let folks create 3D models and then print them out straight from their mobile device -- imagine saying that even just five years ago. Unfortunately, that's not available to the general public just yet, but you can get a closer look at the app and the results it's able to produce at the source link below, and get an overview from one of the developers in the video after the break.

  • OrcaM sphere constructs detailed, digital 3D models of wares while you wait (video)

    by 
    Billy Steele
    Billy Steele
    01.22.2012

    Ever wanted a 3D digital copy of all those Little League trophies? Well, the NEK has whipped up something to lend a hand that's a bit larger than another recent scanner. Enter the OrcaM, an Orbital Camera System capable of producing an accurate, digital 3D model of objects up to 80cm (about 31.5 inches) wide and weighing up to 100kg (around 220lbs). Making use of seven shooters simultaneously, the system photographs the object while projecting various light and shadow combinations in order to determine the ware's geometry. The OrcaM is able to produce high-quality digital reproductions with a geometric accuracy of less than a millimeter (about 0.04 inches). As if that wasn't enough, it produces complete color, texture and reflectivity maps so that every minute detail is accounted for. Once your to-be-copied object has been loaded, the OrcaM takes over automatically, churning out the completed rendering shortly after the requisite photos are taken. Hit the video up top for a look at the beast in action.
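    Recovering geometry from photos taken under varying, known lighting is the classic photometric-stereo setup. As a rough illustration of the core step -- not NEK's actual pipeline -- here is a minimal sketch that recovers a single surface normal from pixel intensities under three known light directions, assuming a simple Lambertian surface:

```python
# Minimal photometric-stereo sketch: under a Lambertian model,
# I_k = albedo * dot(L_k, n). With three known lights we solve the
# 3x3 system L g = I for g = albedo * n, then normalize g.

def solve3(A, b):
    """Solve a 3x3 linear system A x = b via Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    x = []
    for i in range(3):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][i] = b[r]
        x.append(det(m) / d)
    return x

def surface_normal(lights, intensities):
    """lights: three light-direction rows; intensities: observed pixel values.
    Returns the unit surface normal."""
    g = solve3(lights, intensities)
    norm = sum(c * c for c in g) ** 0.5
    return [c / norm for c in g]

# Three lights and the intensities they'd produce for a surface whose
# true normal points straight up the z axis (albedo 0.8):
lights = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5], [0.0, 0.0, 1.0]]
intensities = [0.4, 0.4, 0.8]
print(surface_normal(lights, intensities))  # ≈ [0.0, 0.0, 1.0]
```

    A real scanner repeats this per pixel and integrates the normal field into a surface, alongside the color and reflectivity capture the article mentions.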

  • Kinect sensor wants to guess astronauts' weight, tell them to hit the space gym

    by 
    Sean Buckley
    12.27.2011

    How do you weigh yourself when there's no gravity keeping you down? Well, you can calculate your mass by sitting on an oscillating spring and comparing its standing frequency to your riding frequency (NASA's current method), or you could rig up a Kinect sensor to tell you when you're getting fat. Carmelo Velardo, a computer scientist at Eurecom in Alpes-Maritimes, France, is developing the latter option. Working with colleagues at the Italian Institute of Technology's Center for Human Space Robotics, Velardo paired the Kinect sensor's 3D modeling digs with a database of the weights and body measurements of 28,000 people -- the resulting system can guess your weight with 97 percent accuracy. NASA scientist John Charles notes that while the rig works well on the ground, it might hit some snags in space. Microgravity can shift water around in an astronaut's body, changing their density and potentially throwing off the Kinect setup's readings. Still, Charles says the technique "appears feasible," and suggests pairing it with the existing weight measurement tools might "provide insights into changes in body density that might be illuminating." Velardo hopes to test the system in parabolic flight soon. If he succeeds, not even outer space will protect us from the shameful judgment of video game peripherals. Now if you'll excuse us, we have some squat-thrusts to get to.
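    The measurement-database idea can be sketched as a toy nearest-neighbor lookup: take the body measurements the Kinect extracts from its 3D model, find the closest profiles in a reference database, and average their weights. The profiles, measurements and `k=2` choice below are invented for illustration; this is not Velardo's actual statistical model.

```python
# Toy sketch of measurement-to-weight estimation via nearest neighbors.
# The reference database here is made up; the real system drew on
# measurements of 28,000 people.

def estimate_weight(measurements, database, k=2):
    """measurements: (height_m, waist_m, chest_m) from the body scan.
    database: list of ((height, waist, chest), weight_kg) entries.
    Returns the mean weight of the k nearest entries (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(database, key=lambda entry: dist(entry[0], measurements))[:k]
    return sum(weight for _, weight in nearest) / k

db = [((1.60, 0.75, 0.90), 55.0),
      ((1.70, 0.85, 0.98), 70.0),
      ((1.80, 0.95, 1.05), 85.0),
      ((1.90, 1.05, 1.15), 100.0)]

print(estimate_weight((1.72, 0.86, 0.99), db))  # 77.5, between the two closest profiles
```

    The microgravity caveat in the article maps directly onto this sketch: if fluid shifts change the relationship between surface measurements and mass, the ground-truth database no longer matches the subject.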

  • SoftEther's sensor-laden QUMA robot demonstrates poses, intimidates your acting coach (video)

    by 
    Darren Murph
    Darren Murph
    07.24.2011

    A solution in search of a problem, or a solution to a problem that you were too proud to cop to? SoftEther has just revealed what might be the final blow to Barbie's distinguished career: the sensor-splashed QUMA. So far as we can tell, the human-shaped puppet contains myriad sensors to pick up precise bends and flexes, and then pipes that information to a screen. Aside from showing your team of ballerinas exactly how their routine should look, we're guessing that the real future here is in far more sophisticated tasks -- things like artificial intelligence, major motion pictures and scientific research. As the saying goes, a video's worth a zillion words, so have a peek for yourself just after the break.

  • Creepy new Air Force camera can identify and track you from far, far away

    by 
    Terrence O'Brien
    Terrence O'Brien
    05.20.2011

    Sure, you can do neat things like unlock your iPhone using facial recognition, but the Air Force has far grander visions for the tech. Specifically, it wants a camera that can identify and track possible insurgents at a significant distance (though it's unclear how far we're talking about here) using only a few seconds of footage. It's turned to Photon-X Inc. to develop a sensor that combines spatial measurements, infrared and visible light to create a "bio-signature" that maps not only static facial features but muscle movements that are unique to each individual. The technology could also be used in targeting systems to identify enemy vehicles and integrated into robots to help them navigate and identify objects... or threatening meatbags. The Air Force even foresees law enforcement, banks, and private security firms using the cams to monitor customers and watch for suspicious activity. Similar tools have been created that use software to analyze video feeds, but they can't match the accuracy or range of this "behaviormetric" system. Normally, this is where we'd make some snide reference to Skynet or Big Brother but, honestly, we're too creeped out for jokes.

  • Pix4D turns your 2D aerial photographs into 3D maps on the fly (video)

    by 
    Christopher Trout
    Christopher Trout
    05.07.2011

    Assuming you own a Sensefly Swinglet CAM or some other high-res camera-equipped UAV, you could be just minutes away from turning your plain old 2D aerial photos into comprehensive 3D maps. Pix4D, a new software program coming out of EPFL -- the same institute that brought us this race of altruistic robots -- takes images shot using an aerial drone to render 3D maps in the cloud in just 30 minutes. Users upload images taken with their flying machines, at which point Pix4D kicks into action, defining high contrast points in the photos and pasting them together based on those points. It then renders a 3D model, overlays the graphics, and spits out a Google Earth-style map. So what's with this 4D business? Well, its developers claim that users can easily see the progression of any model by deploying their Sensefly drone whenever they see fit, throwing the added layer of time into the mix. You can see the fruits of Pix4D's labor in the video after the break.
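    The first step the article describes -- picking out high-contrast points that can later be matched across overlapping photos -- can be illustrated with a toy detector. Real pipelines use feature detectors like SIFT; the grid, threshold, and scoring below are simplified stand-ins for illustration only.

```python
# Toy "high-contrast point" detector: flag pixels whose 3x3 neighborhood
# spans a wide intensity range. Such points are distinctive enough to be
# matched between overlapping aerial photos and used to stitch them.

def high_contrast_points(img, threshold):
    """img: 2D list of grayscale values. Returns (row, col) positions whose
    3x3 neighborhood spans more than `threshold` in intensity."""
    points = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            patch = [img[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            if max(patch) - min(patch) > threshold:
                points.append((r, c))
    return points

# A flat field with one bright feature; every interior pixel whose
# neighborhood touches the bright spot gets flagged.
img = [[10, 10, 10, 10],
       [10, 10, 10, 10],
       [10, 10, 90, 10],
       [10, 10, 10, 10]]
print(high_contrast_points(img, 50))
```

    Matching such points between two photos with known overlap is what lets the software "paste them together" and, with camera geometry, triangulate 3D positions.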

  • Kinect + homemade Power Gloves = 3D modeling in free-space (video)

    by 
    Tim Stevens
    Tim Stevens
    03.15.2011

    The Kinect hacks keep rollin', and we just keep on lovin' every one of 'em -- despite most being decidedly non-practical. This one actually is, created by Sebastian Pirch at 3rD-EYE, a media production company. He's made a free-space 3D modeling tool using a Kinect camera to track his hands, which he uses to create points in space and draft a model. To provide greater control he then made two Arduino-powered gloves that detect finger touches -- basically DIY Peregrines. Using different connections of finger-presses he can move the entire model, move single points, create new points, create new polygons, and basically do everything he needs to do to create a mesh, which can then be imported into 3ds Max for further refinement. He even manages to make it all look fun, thus besting Lockheed Martin's similar system that's powered by zombies.
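    The gloves' "different connections of finger-presses" amount to a chord-to-action lookup. The mapping below is entirely invented (Pirch's actual Arduino wiring and bindings aren't detailed in the post); it just sketches the shape of the idea.

```python
# Hypothetical finger-chord mapping for a modeling glove: each set of
# fingers touching the thumb selects a different editing action.
# Chord assignments here are invented for illustration.

CHORDS = {
    frozenset({"thumb", "index"}): "create_point",
    frozenset({"thumb", "middle"}): "create_polygon",
    frozenset({"thumb", "index", "middle"}): "move_point",
    frozenset({"thumb", "ring"}): "move_model",
}

def action_for(touching):
    """Return the modeling action for the set of fingers currently in contact."""
    return CHORDS.get(frozenset(touching), "idle")

print(action_for(["thumb", "index"]))  # create_point
print(action_for(["index"]))           # idle (no recognized chord)
```

    Combined with the Kinect's hand positions, a recognized chord tells the software both *what* to do and *where* in space to do it.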

  • UNC researchers develop a system for creating 3D models using images pulled from Flickr, off-the-shelf components

    by 
    Joseph L. Flatley
    Joseph L. Flatley
    11.28.2010

    A group of researchers from the University of North Carolina and the Swiss university ETH-Zurich have teamed up to develop a system for creating 3D models of famous landmarks using photos from photo sharing websites like Flickr. Unlike previous projects at Microsoft and the University of Washington, the team at UNC used a home PC (albeit one with four GPUs) to process millions of images pulled from the Internet and construct 3D models of such landmarks as the Colosseum and the Roman baths at Sagalassos (above). And all the models were created in less than a day. According to UNC Chapel Hill's Jan-Michael Frahm, the process improves on current commercial systems by a factor of 1,000 to one. "Our technique would be the equivalent of processing a stack of photos as high as the 828-meter Dubai Towers, using a single PC, versus the next best technique, which is the equivalent of processing a stack of photos 42 meters tall – as high as the ceiling of Notre Dame – using 62 PCs," he said. "This efficiency is essential if one is to fully utilize the billions of user-provided images continuously being uploaded to the Internet." He sees any number of uses for this technology, from AR integration to 3D maps for rescuers in case of a natural disaster.
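    Frahm's stack-of-photos comparison can be checked with quick arithmetic: per PC, it works out to roughly a 1,200-fold throughput advantage, consistent with the quoted "factor of 1,000."

```python
# Sanity-checking the quoted comparison: an 828 m stack of photos on
# 1 PC versus a 42 m stack on 62 PCs. Compare per-PC throughput.

unc_per_pc = 828 / 1    # metres of photo stack per PC (UNC system)
prev_per_pc = 42 / 62   # metres of photo stack per PC (previous best)

ratio = unc_per_pc / prev_per_pc
print(round(ratio))  # 1222 -- roughly the "factor of 1,000" Frahm cites
```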

  • 'Beautiful Modeler' app turns iPad into multitouch 3D sculpting device

    by 
    Paul Miller
    Paul Miller
    11.09.2010

    Sure, it might not make for as good of an R&B album title as Nilay Patel's "Beautiful Handcuffs," but Interactive Fabrication's "Beautiful Modeler" iPad app is probably a bit more useful (though not as useful with the ladies). The concept is to use the iPad's multitouch screen as an input for multi-finger 3D modeling on a computer, while the tablet's tilt sense lets you navigate around the object. Sure, it's not as slick or precise as, say, the Axsotic 3D mouse, but it also looks a whole lot more "tangible." Unfortunately, the app is currently unavailable on the App Store, and we have no idea if it's ever headed for a computer near you -- Interactive Fabrication is all about the high concept stuff, leaving the execution to individuals -- but there's some freely available GPL-licensed source code if you want to take a crack at compiling and making a real product out of this. Check out a video of the sculpting in action after the break. [Thanks, Danil]