depth-sensor

Latest

  • Headband detects obstacles and guides the blind haptically

    by Steve Dent
    11.20.2015

    Until scientists perfect bionic vision, shouldn't there be a better way for the blind to get around than a simple cane? That's the idea behind Sentiri, a proximity-sensing headband that helps steer users with motor-driven haptic feedback. It detects objects in the environment using infrared depth sensors, then varies the strength of the vibrations against the user's head to help them avoid obstacles. If it's connected to a smartphone with an app like Google Maps, the tool can also safely guide you from point 'A' to point 'B.' The company behind it, Chaotic Moon, also created a "haptic language" that transmits extra information to users by changing the frequency, intensity and number of vibrations.
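
    Chaotic Moon hasn't published its actual distance-to-vibration mapping, so here's a minimal sketch of the general idea in Python: the closer the obstacle reported by the depth sensor, the stronger the buzz. The 30cm and 300cm thresholds are assumptions for illustration, not Sentiri's specs.

    ```python
    def distance_to_intensity(distance_cm, min_cm=30.0, max_cm=300.0):
        """Map obstacle distance to a vibration strength in [0, 1]: closer means stronger."""
        if distance_cm >= max_cm:   # nothing nearby -> stay quiet
            return 0.0
        if distance_cm <= min_cm:   # obstacle right in front -> full-strength warning
            return 1.0
        # Linear ramp between the two thresholds.
        return 1.0 - (distance_cm - min_cm) / (max_cm - min_cm)

    # Quick check of the ramp at a few distances.
    for d in (25, 100, 200, 350):
        print(f"{d:>3} cm -> intensity {distance_to_intensity(d):.2f}")
    ```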

  • Google's 3D-sensing Project Tango is no longer an experiment

    by Terrence O'Brien
    01.30.2015

    Yet another project is graduating from experiment to proper part of Google. Only two weeks ago, Glass left the confines of the Skunk Works-like Google X and became its own division headed up by Nest co-founder Tony Fadell. Now Project Tango, the 3D-sensing and -mapping concept, is moving on from the ATAP (Advanced Technology and Projects) labs to become a part of the company proper. Unfortunately, what that means for the technology or what products it might eventually end up in isn't exactly clear. Will the next Nexus sport a depth-sensing IR camera? Maybe. Or perhaps Tango's sensors will be used to build more advanced home automation and home monitoring tools for Nest. All we do know is that Tango will live on, even if the name "Project Tango" eventually fades away.

  • Kinect for Windows SDK gets accelerometer and infrared input, reaches China and Windows 8 desktops

    by Jon Fingas
    10.08.2012

    Microsoft had hinted that there were big things in store for its update to the Kinect for Windows SDK on October 8th. It wasn't bluffing; developers can now tap a much wider range of input than the usual frantic arm-waving. Gadgets that move the Kinect itself can use the accelerometer to register every tilt and jolt, while low-light fans can access the raw infrared sensor stream. The Redmond crew will even let coders go beyond the usual boundaries, giving them access to depth information beyond 13 feet, fine-tuned camera settings and skeletal tracking from multiple sensors inside a single app. Where the SDK can be used has been expanded as well -- in addition to the promised Chinese support, Kinect input is now an option for Windows 8 desktop apps. Programmers who find regular hand control just too limiting can hit the source for the download link and check Microsoft's blog for grittier detail.
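
    To give a rough sense of what accelerometer access buys developers: a single gravity reading lets an app work out how the Kinect is tilted. The sketch below is generic vector math in Python, not the SDK's actual C#/C++ API, and the axis convention is an assumption.

    ```python
    import math

    def tilt_from_accel(ax, ay, az):
        """Estimate pitch and roll (in degrees) from a gravity vector reported
        by the sensor's accelerometer. Illustrative math only, not SDK calls."""
        pitch = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
        roll = math.degrees(math.atan2(-ax, az))
        return pitch, roll

    # A sensor pitched slightly forward on a moving gadget:
    print(tilt_from_accel(0.0, 0.26, 0.97))  # roughly 15 degrees of pitch, no roll
    ```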

  • SoftKinetic's motion sensor tracks your hands and fingers, fits in them too (video)

    by Steve Dent
    06.06.2012

    Coming out of its shell as a possible Kinect foe, SoftKinetic has launched a new range sensor at Computex right on the heels of its last model. Upping the accuracy while shrinking the size, the DepthSense 325 now sees your fingers and hand gestures in crisp HD and from as close as 10cm (4 inches), an improvement on the 15cm (6 inches) of its DS311 predecessor. Two microphones are also tucked in, making the device suitable for video conferencing, gaming and whatever else OEMs and developers might have in mind. We haven't tried it yet, but judging from the video, it seems to track finger and hand movements quite competently. Hit the break to see for yourself.

    Show full PR text

    SoftKinetic Announces World's Smallest HD Gesture Recognition Camera and Releases Far and Close Interaction Middleware

    Professional Kit Available For Developers To Start Building a New Generation of Gesture-Based Experiences

    TAIPEI & BRUSSELS – June 5, 2012 – SoftKinetic, the pioneering provider of 3D vision and gesture recognition technology, today announced a device that will revolutionize the way people interact with their PCs. The DepthSense 325 (DS325), a pocket-sized camera that sees both in 3D (depth) and high-definition 2D (color), delivered as a professional kit, will enable developers to incorporate high-quality finger and hand tracking for PC video games, introduce new video conferencing experiences and many other immersive PC applications. The DS325 can operate from as close as 10cm and includes a high-resolution depth sensor with a wide field of view, combined with HD video and dual microphones.

    In addition, the company announced the general availability of iisu™ 3.5, its acclaimed gesture-recognition middleware compatible with most 3D sensors available on the market. In addition to its robust full-body tracking features, iisu 3.5 now offers the capacity to accurately track users' individual fingers at 60 frames per second, opening up a new world of close-range applications.

    "SoftKinetic is proud to release these revolutionary products to developers and OEMs," said Michel Tombroff, CEO of SoftKinetic. "The availability of iisu 3.5 and the DS325 clearly marks a milestone for the 3D vision and gesture recognition markets. These technologies will enable new generations of video games, edutainment applications, video conference, virtual shopping, media browsing, social media connectivity and more."

    SoftKinetic will demonstrate the new PC and SmartTV experiences at its booth at Computex, June 5-9, 2012, in the NanGang Expo Hall, Upper Level, booth N1214. For business appointments, send a meeting request to events@softkinetic.com.

    The DS325 Professional Kit is available for pre-order now at SoftKinetic's online store (http://www.softkinetic.com/Store.aspx) and is expected to begin shipping in the coming weeks. The iisu 3.5 Software Development Kit is available free for non-commercial use at SoftKinetic's online store (http://www.softkinetic.com/Store.aspx) and at iisu.com.

    About SoftKinetic S.A.

    SoftKinetic's vision is to transform the way people interact with the digital world. SoftKinetic is the leading provider of gesture-based platforms for the consumer electronics and professional markets. The company offers a complete family of 3D imaging and gesture recognition solutions, including patented 3D CMOS time-of-flight sensors and cameras (DepthSense™ family of products, formerly known as the Optrima product family), multi-platform and multi-camera 3D gesture recognition middleware and tools (iisu™ family of products) as well as games and applications from SoftKinetic Studios. With over 8 years of R&D on both hardware and software, SoftKinetic solutions have already been successfully used in the fields of interactive digital entertainment, consumer electronics, health care and other professional markets (such as digital signage and medical systems). SoftKinetic, iisu, DepthSense and The Interface Is You are trade names or registered trademarks of SoftKinetic. For more information on SoftKinetic please visit www.softkinetic.com. For videos of SoftKinetic-related products visit SoftKinetic's YouTube channel: www.youtube.com/SoftKinetic.

  • SoftKinetic brings DepthSense range sensor to GDC, hopes to put it in your next TV

    by Sean Buckley
    03.08.2012

    Microsoft's Kinect may have put depth sensors in the eye of the common consumer, but Microsoft isn't the only outfit in the game -- Belgian startup SoftKinetic has its own twist on the distance-sensing setup. The literally named "DepthSense" range sensor uses infrared time-of-flight technology, which, according to representatives, not only calculates depth accurately in dark, cramped spaces but, more importantly, operates at a shallower distance than its competition. We dropped by SoftKinetic's GDC booth to see exactly how cramped we could get. It turns out the sensor can accurately read individual fingers between four and fourteen feet (1.5 - 4.5 meters), and we had no trouble using it to pinch our way through a few levels of a mouse-emulated session of Angry Birds. The developer hardware we saw on the show floor was admittedly on the bulky side, but if all goes to plan, SoftKinetic says we'll see OEMs stuff the tech into laptops and ARM-powered TVs in the near future. In the meantime, though, gesture-crazy consumers can look forward to a slimmer version of this rig in stores sometime this holiday season. Hit the break for a quick demo of the friendly sensor in action. Dante Cesa contributed to this post.
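
    The post names the technique but not the math, so here's the core of continuous-wave time-of-flight in a few lines of Python: the sensor modulates its infrared light and recovers distance from the phase shift of the returning signal. The 30MHz modulation frequency below is an assumed figure for illustration, not SoftKinetic's published spec, but it shows why these sensors quote working ranges of a few meters.

    ```python
    import math

    C = 299_792_458.0  # speed of light in m/s

    def tof_distance(phase_rad, mod_freq_hz):
        """Continuous-wave time-of-flight: d = c * phi / (4 * pi * f)."""
        return C * phase_rad / (4 * math.pi * mod_freq_hz)

    f = 30e6  # assumed modulation frequency
    print(f"unambiguous range: {C / (2 * f):.2f} m")                      # ~5 m before the phase wraps
    print(f"quarter-turn phase -> {tof_distance(math.pi / 2, f):.2f} m")  # ~1.25 m
    ```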

  • Prototype glasses use video cameras, face recognition to help people with limited vision

    by Dana Wollman
    07.06.2011

    We won't lie: we love us a heartwarming story about scientists using run-of-the-mill tech to help people with disabilities, especially when the results are decidedly bionic. Today's tale centers on a team of Oxford researchers developing sensor-laden glasses capable of displaying key information to people with poor (read: nearly eroded) vision. The frames, on display at the Royal Society Summer Science Exhibition, have cameras mounted on the edges, while the lenses are studded with lights -- a setup that allows people suffering from macular degeneration and other conditions to see a simplified version of their surroundings, up close. And the best part, really, is that the glasses gather that data using garden-variety technology such as face detection, tracking software, position detectors and depth sensors -- precisely the kind of tech you'd expect to find in handsets and gaming systems. Meanwhile, all of the processing required to recognize objects happens in a smartphone-esque computer that could easily fit inside a pocket. And while those frames won't exactly look like normal glasses, they'd still be see-through, allowing for eye contact. Team leader Stephen Hicks admits that vision-impaired people will have to get used to receiving all these flashes of information, but when they do, they might be able to assign different colors to people and objects, and read barcodes and newspaper headlines. It'll be a while before scientists cross that bridge, though -- while the researchers estimate the glasses could one day cost £500 ($800), they're only beginning to build prototypes.
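
    The Oxford team hasn't published its pipeline, but the "simplified version of their surroundings" idea is easy to sketch: collapse a depth image into a coarse grid of LED brightness values, with nearer objects shown brighter. The grid size and the 0.5-3 m range below are assumptions for illustration, not the researchers' figures.

    ```python
    import numpy as np

    def depth_to_led_grid(depth_m, rows=8, cols=12, near=0.5, far=3.0):
        """Reduce a dense depth image (meters) to a coarse LED brightness grid."""
        h, w = depth_m.shape
        grid = np.zeros((rows, cols))
        for r in range(rows):
            for c in range(cols):
                cell = depth_m[r * h // rows:(r + 1) * h // rows,
                               c * w // cols:(c + 1) * w // cols]
                nearest = cell.min()  # closest point seen by this LED
                grid[r, c] = np.clip((far - nearest) / (far - near), 0.0, 1.0)
        return grid

    # Fake 240x320 depth frame with a close object on the left-hand side.
    frame = np.full((240, 320), 2.5)
    frame[:, :80] = 0.8
    print(depth_to_led_grid(frame).round(2))
    ```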

  • Microsoft seeking to quadruple Kinect accuracy?

    by Sean Hollister
    12.19.2010

    Hacked your Kinect recently? Then you probably know something most regular Xbox 360 gamers don't -- namely, that the Kinect's infrared camera is actually capable of higher resolution than the game console itself supports. Though Microsoft originally told us it ran at 320 x 240, you'll find both the color and depth cameras deliver 640 x 480 images if you hook the peripheral up to a PC, and now an anonymous source tells Eurogamer that Microsoft wants to do the very same in the video game space. Reportedly, Redmond artificially limited the Kinect on console in order to leave room for other USB peripherals to run at the same time, but if the company can find a way around the limitation, it could issue a firmware update making the Kinect sensitive enough to detect individual finger motions and, inevitably, enable finer gesture control. One of multiple ways Microsoft intends to make the world of Minority Report a reality, we're sure.
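
    Some back-of-the-envelope arithmetic shows why resolution and USB bandwidth collide. The byte-per-pixel figures below are simplifying assumptions (16 bits each for depth and color), not the Kinect's exact wire format, and the ~35 MB/s number is a commonly cited practical ceiling for USB 2.0 rather than an official spec.

    ```python
    def stream_mb_per_s(width, height, bytes_per_pixel, fps):
        """Raw throughput of one uncompressed video stream, in MB/s."""
        return width * height * bytes_per_pixel * fps / 1e6

    color_full = stream_mb_per_s(640, 480, 2, 30)  # ~18.4 MB/s
    depth_full = stream_mb_per_s(640, 480, 2, 30)  # ~18.4 MB/s
    depth_low  = stream_mb_per_s(320, 240, 2, 30)  # ~4.6 MB/s

    print(f"color + depth at 640x480:      {color_full + depth_full:.1f} MB/s")
    print(f"color 640x480 + depth 320x240: {color_full + depth_low:.1f} MB/s")
    print("practical USB 2.0 budget:      ~35 MB/s, shared with other peripherals")
    ```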

  • Kinect sensor bolted to an iRobot Create, starts looking for trouble

    by Paul Miller
    11.17.2010

    While there have already been a lot of great proofs of concept for the Kinect, what we're really excited for are the actual applications that will come from it. At the top of our list? Robots. The Personal Robots Group at MIT has put a battery-powered Kinect sensor on top of the iRobot Create platform, and is beaming the camera and depth sensor data to a remote computer for processing into a 3D map -- which in turn can be used for navigation by the bot. They're also using the data for human recognition, which allows for controlling the bot using natural gestures. Looking to do something similar with your own robot? Well, the ROS folks have a Kinect driver in the works that will presumably allow you to feed all that great Kinect data into ROS's already impressive libraries for machine vision. Tie in the Kinect's multi-array microphones, accelerometer and tilt motor and you've got a highly aware, semi-anthropomorphic "three-eyed" robot just waiting to happen. We hope it will be friends with us. Video of the ROS experimentation is after the break.
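
    For the curious, a minimal rospy node that listens to a Kinect point cloud looks something like the sketch below. This isn't MIT's code, and the topic name is just the one OpenNI-based Kinect drivers conventionally publish, so treat it as an assumption.

    ```python
    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import PointCloud2

    def on_cloud(msg):
        # An organized cloud is width x height points, one per depth pixel.
        rospy.loginfo("got a cloud with %d points", msg.width * msg.height)

    if __name__ == "__main__":
        rospy.init_node("kinect_cloud_listener")
        rospy.Subscriber("/camera/depth/points", PointCloud2, on_cloud)  # assumed topic name
        rospy.spin()  # hand control to ROS until the node shuts down
    ```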