Face Detection

Latest

  • Facial recognition software helps convict a robber

    by Jon Fingas, 06.09.2014

    Watch Dogs' vision of a super-connected Chicago may be truer than you think. A local judge has convicted Pierre Martin of armed robbery after police used facial recognition software (NEC's NeoFace) to match surveillance camera footage with an existing mugshot. While the cops still used witnesses to confirm their findings and make an arrest, the technology was vital to pinpointing Martin in the first place -- it's doubtful that investigators would have had time to sift through 4.5 million booking photos.
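    At its core, a matcher like this compares a numeric signature extracted from the surveillance still against signatures precomputed for every booking photo, then ranks the closest candidates for a human investigator to review. Here's a minimal sketch of that gallery-ranking step (purely illustrative: NEC's actual features and matcher are proprietary, and every name and vector below is made up):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_candidates(probe, gallery, top_k=3):
    """Return the top_k gallery entries most similar to the probe embedding.

    `gallery` maps a booking-photo ID to its precomputed feature vector.
    """
    scored = sorted(gallery.items(),
                    key=lambda item: cosine_similarity(probe, item[1]),
                    reverse=True)
    return scored[:top_k]

# Toy example: 3-dimensional "embeddings" standing in for real face features.
gallery = {
    "mugshot_001": [0.9, 0.1, 0.0],
    "mugshot_002": [0.1, 0.9, 0.2],
    "mugshot_003": [0.8, 0.2, 0.1],
}
probe = [0.85, 0.15, 0.05]
matches = rank_candidates(probe, gallery, top_k=2)
print([mugshot_id for mugshot_id, _ in matches])
```

    The point of ranking rather than deciding is the same one the police made: the software narrows 4.5 million photos to a shortlist, and humans confirm the match.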

  • The NSA has collected 'millions' of faces from the web

    by Jon Fingas, 06.01.2014

    The NSA isn't just interested in pure communications intelligence like call records; it wants to look for faces, too. Documents leaked by Edward Snowden reveal that the NSA has been using facial recognition software to scan the internet for portraits and match them with investigative data. The agency can determine whether or not a suspect spotted in a photo or video chat has a valid passport, for example, or find out if informants have said anything about that person. It can even spot subtle changes (like beards) and link photos to satellite info to pinpoint someone's whereabouts. As of 2011, the NSA was getting about 55,000 "facial recognition quality" images per day out of "millions" of candidates, according to the leaked material.

  • Emotient's Google Glass app tells you how others are feeling

    by Jon Fingas, 03.06.2014

    It's not always easy to read someone's emotions -- and that's a problem for retailers, which can't easily tell if their products intrigue you or simply confuse you. They may not have to guess for much longer, though, as Emotient has launched the private beta for a Google Glass app that identifies feelings using the device's camera. The software scans faces for emotional cues that reflect an overall sentiment, even if it's subtle; the app can tell if you're mildly pleased, for instance. Privacy shouldn't be an issue, since the app is only saving anonymous data, not images. Emotient is testing its app with just a handful of companies right now, but the finished app should help stores please customers in the future. There's also a chance you'll see the underlying technology in something you can try for yourself. The company tells The Next Web that its emotion detection will reach Intel's RealSense platform, so don't be surprised if your next webcam can tell that you're in a good mood.

  • Insert Coin: the ixi-play robot owl monitors toddlers, helps them learn (video)

    by Steve Dent, 08.13.2013

    In Insert Coin, we look at an exciting new tech project that requires funding before it can hit production. If you'd like to pitch a project, please send us a tip with "Insert Coin" as the subject line. Isn't a baby monitor effectively a waste of technology? With a bit more thought and an operating system, couldn't it do much more with its components than just scope your infant? That's the premise behind Y Combinator-backed ixi-play, an Android-powered robot that just launched on the Crowdhoster crowdfunding platform. On top of Android 4.2, a dual-core ARM Cortex A9 CPU, 1GB RAM and a 720p camera, the owlish 'bot has face, card and object detection, voice recognition, a touch sensor on the head, eye displays for animations, a tweeter/woofer speaker combo and child-proof "high robustness." For motion, the team adopted a design used in flight simulators, giving ixi-play "agile and silent" 3-axis translation and rotation moves. All that tech is in the service of one thing, of course: your precious snowflake. There are currently three apps for ixi-play: a baby monitor, language learning and animal-themed emotion cards. As the video shows (after the break), the latter app lets your toddler flash cards to the bot to make it move or emote via the eye displays, matching the anger or happiness shown on the card. In baby monitor mode, on top of sending a live (encoded) video stream to your tablet, it'll also play soothing music and sing or talk your toddler to sleep. The device will also include an SDK with low-level motion control and vision programming, providing a way for developers to create more apps. As for pricing, you can snap one up starting at $299 for delivery around July 24th, 2014, provided the company meets its $957,000 funding goal (pledges are backed by Crowdtilt). That's exactly the same price we saw recently for a far less amusing-sounding baby monitor, so if you're interested, hit the source.

  • Galaxy S 4, future Samsung devices to use DigitalOptics tech for face tracking (updated)

    by Jon Fingas, 04.23.2013

    When Samsung unveiled the Galaxy S 4 in March, there was a near-inescapable emphasis on face detection features. What we didn't know is just whose technology was making them possible. As it happens, it's not entirely Samsung's -- DigitalOptics has stepped forward to claim some of the responsibility. The California firm recently struck a multi-year licensing deal with Samsung to supply its Face Detection and Face Tracking software, which can detect pupils for interface features (think Smart Stay or Smart Pause) and keep tabs on photo subjects. DigitalOptics hasn't provided the exact details of its involvement in the GS4, let alone a roadmap, but it's safe to presume that Samsung isn't dropping its emphasis on camera-driven software anytime soon. Update: DigitalOptics says the release wasn't clear on just what was involved in the deal: while the face detection and tracking are present, Samsung didn't pick up the pupil component. As such, you're mostly seeing DigitalOptics' influence in regular camera features and other software that doesn't involve eye tracking.

  • Apple applies for patent that scales content to match face distance, save us from squinting

    by Jon Fingas, 11.15.2012

    Most software has to be designed around a presumed viewing distance, whether it's up close for a smartphone or the 10-foot interface of a home theater hub. Apple has been imagining a day when the exact distance could be irrelevant: it's applying for a patent that would automatically resize any content based on viewing distance. By using a camera, infrared or other sensors to detect face proximity through facial recognition or pure range, the technique could dynamically resize a map or website to keep it legible at varying ranges. Although the trick could work with most any device, the company sees that flexibility as most relevant for a tablet, and it's easy to understand why -- iPad owners could read on the couch without needing to manually zoom in as they settle into a more relaxed position. There's no knowing the likelihood that Apple will implement an automatic scaling feature in iOS or OS X, let alone make it the default setting. If the Cupertino team ever goes that far, though, we'll only have our own eyesight to blame if we can't read what's on screen.
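    The scaling idea itself is simple: if you know (or estimate) how far the viewer's face is, grow content in proportion so it subtends roughly the same visual angle at any range. A rough sketch under that assumption (not Apple's patented method; the function name, clamp bounds and distances are all invented for illustration):

```python
def scale_for_distance(base_size_pt, base_distance_cm, current_distance_cm,
                       min_size_pt=8.0, max_size_pt=72.0):
    """Scale a font size linearly with viewing distance, clamped to sane bounds.

    base_size_pt is the size designed for base_distance_cm; doubling the
    distance doubles the rendered size, keeping the apparent size roughly
    constant to the viewer.
    """
    scaled = base_size_pt * (current_distance_cm / base_distance_cm)
    return max(min_size_pt, min(max_size_pt, scaled))

# Viewer leans back from 40 cm to 80 cm: 12 pt text becomes 24 pt.
print(scale_for_distance(12, 40, 80))
```

    A real implementation would smooth the distance estimate over time, since raw face-proximity readings jitter and nobody wants text that breathes.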

  • Sony takes SOEmote live for EverQuest II, lets gamers show their true CG selves (video)

    by Jon Fingas, 08.07.2012

    We had a fun time trying Sony's SOEmote expression capture tech at E3; now everyone can try it. As of today, most EverQuest II players with a webcam can map their facial behavior to their virtual personas while they play, whether it's to catch the nuances of conversation or drive home an exaggerated game face. Voice masking also lets RPG fans stay as much in (or out of) character as they'd like. About the only question left for those willing to brave the uncanny valley is when other games will get the SOEmote treatment. Catch our video look after the break if you need a refresher.

  • Apple patents iOS 5's exposure metering based on face detection, keeps friends in full view

    by Jon Fingas, 07.31.2012

    Many photographers will tell you that their least favorite shooting situation involves a portrait with the sun to the subject's back: there's a good chance the shot ends up an unintentional silhouette study unless the shooter meters just perfectly from that grinning face. Apple has just been granted a patent for a metering technique that takes all the guesswork out of those human-focused shots on an iOS 5 device like the iPhone 4S or new iPad. As it's designed, the invention finds faces in the scene and adjusts the camera exposure to keep them all well-lit, even if they're fidgety enough to move at the last second. Group shots are just as much of a breeze, with the software using head proximity and other factors to pick either a main face as the metering target (such as a person standing in front of a crowd) or an average if enough people are posing for a close-up. You can explore the full details at the source. Camera-toting rivals, however, will have to explore alternative ideas.
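    The core trick (meter for the faces, weighting a main subject more than the crowd) can be sketched in a few lines. This is a toy illustration, not Apple's patented algorithm: the area-based weighting, the target luma value and the function name are all assumptions standing in for the patent's "head proximity and other factors":

```python
def face_weighted_exposure(face_regions, target_luma=118):
    """Pick an exposure adjustment that keeps detected faces well lit.

    face_regions: list of (mean_luma, area_px) tuples, one per detected face.
    Returns an additive luma adjustment; larger (closer) faces carry more
    weight, a stand-in for the 'main subject vs. crowd' heuristic.
    """
    total_area = sum(area for _, area in face_regions)
    if total_area == 0:
        return 0  # no faces found: leave exposure alone
    weighted_luma = sum(luma * area for luma, area in face_regions) / total_area
    return target_luma - weighted_luma

# Backlit portrait: two dim faces, one much closer (larger) than the other.
adjustment = face_weighted_exposure([(60, 9000), (90, 1000)])
print(adjustment)  # positive value -> brighten the shot
```

    The silhouette case above falls out naturally: dim faces pull the weighted average down, so the adjustment comes back strongly positive even if the bright sky would have fooled a whole-frame meter.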

  • Second Story uses Kinect for augmented shopping, tells us how much that doggie is in the window (video)

    by Jon Fingas, 07.26.2012

    Second Story isn't content to leave window shoppers guessing at whether or not they can afford that dress or buy it in mauve. A new project at the creative studio uses the combination of a Kinect for Windows sensor with a Planar LookThru transparent LCD enclosure to provide an augmented reality overlay for whatever passers-by see inside the box. The Microsoft peripheral's face detection keeps the perspective accurate and (hopefully) entrances would-be customers. Coming from an outlet that specializes in bringing this sort of work to corporate clients, the potential for retail use is more than a little obvious, but not exclusive: the creators imagine it also applying to art galleries, museums and anywhere else that some context would come in handy. If it becomes a practical reality, we're looking forward to Second Story's project dissuading us from the occasional impulse luxury purchase.

  • Google patent filing would identify faces in videos, spot the You in YouTube

    by Jon Fingas, 07.03.2012

    Face detection is a common sight in still photography, but it's a rarity in video outside of certain research projects. Google may be keen to take some of the mystery out of those clips through a just-published patent application: its technique uses video frames to generate clusters of face representations that are attached to a given person. By knowing what a subject looks like from various angles, Google could then attach a name to a face whenever it shows up in a clip, even at different angles and in strange lighting conditions. The most obvious purpose would be to give YouTube viewers a Flickr-like option to tag people in videos, but it could also be used to spot people in augmented reality apps and get their details -- imagine never being at a loss for information about a new friend as long as you're wearing Project Glass. As a patent, it's not a definitive roadmap for where Google is going with any of its properties, but it could be a clue as to the search giant's thinking. Don't be surprised if YouTube can eventually prove that a Google+ friend really did streak across the stage at a concert.
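    The "clusters of face representations" idea can be illustrated with a greedy grouping of per-frame face embeddings: each new detection joins the nearest existing cluster or starts its own. This is a deliberately naive sketch (Google's filing doesn't disclose a specific algorithm, and the threshold, distance metric and 2-D "embeddings" here are all invented for demonstration):

```python
import math

def euclidean(a, b):
    """Straight-line distance between two embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster_faces(embeddings, threshold=0.5):
    """Greedily group per-frame face embeddings into identity clusters.

    Each embedding joins the first cluster whose centroid lies within
    `threshold`; otherwise it starts a new cluster. The goal is one
    cluster per person, built from many frames, angles and lightings.
    """
    clusters = []  # each: {"members": [...], "centroid": [...]}
    for emb in embeddings:
        for c in clusters:
            if euclidean(emb, c["centroid"]) < threshold:
                c["members"].append(emb)
                n = len(c["members"])
                c["centroid"] = [sum(dim) / n for dim in zip(*c["members"])]
                break
        else:
            clusters.append({"members": [emb], "centroid": list(emb)})
    return clusters

# Five frames of face detections: two distinct people, seen repeatedly.
frames = [(0.1, 0.1), (0.12, 0.09), (0.9, 0.8), (0.11, 0.1), (0.88, 0.82)]
clusters = cluster_faces(frames, threshold=0.3)
print(len(clusters))
```

    Once a cluster is labeled with a name (say, via a user tag on one frame), every other frame in that cluster inherits it, which is exactly what would make tagging people in videos practical.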

  • Google simulates the human brain with 1000 machines, 16000 cores and a love of cats

    by James Trew, 06.26.2012

    Don't tell Google, but its latest X lab project is something performed by the great internet public every day. For free. Mountain View's secret lab stitched together 1,000 computers totaling 16,000 cores to form a neural network with over 1 billion connections, and sent it to YouTube looking for cats. Unlike the popular human time-sink, this was all in the name of science: specifically, simulating the human brain. The neural machine was presented with 10 million images taken from random videos, and went about teaching itself what our feline friends look like. Unlike similar experiments, where some manual guidance and supervision is involved, Google's pseudo-brain was given no such assistance. It wasn't just about cats, of course -- the broader aim was to see whether computers can learn face detection without labeled images. After studying the large set of image data, the cluster revealed that indeed it could, in addition to being able to develop concepts for human body parts and -- of course -- cats. Overall, the network achieved 15.8 percent accuracy in recognizing 20,000 object categories, which the researchers claim is a 70 percent jump over previous studies. Full details of the hows and whys will be presented at a forthcoming conference in Edinburgh.

  • Samsung's SmartStay replicated by ISeeYou Android app, keeps screens on while you're watching

    by Alexis Santos, 06.21.2012

    If you're not joining the Galaxy S III bandwagon and aren't keen on feeling completely left out, the ISeeYou app can give you a hand. Mimicking Sammy's SmartStay feature, the app prevents your ICS device from slipping into sleep mode when you're staring at its display. Springing $0.99 for the app nets you control over the frequency and length of the peeks taken by your phone's front-facing camera -- helpful for coordinating with a handset's sleep settings and presumably for optimizing battery life. A free version can be taken for a spin, though it doesn't allow for such fine tuning. Yearning to simulate part of the Galaxy S III experience? Mosey over to Google Play for the downloads.

  • Ice Cream Sandwich revamps Android camera and gallery features

    by Myriam Joire, 10.18.2011

    It's been a long time coming, but with the introduction of Ice Cream Sandwich, Android finally takes a significant leap forward in terms of camera and gallery features. The camera interface is completely new -- it's faster and easier to use with instant access from the lock screen. Shots are taken immediately thanks to zero shutter lag and continuous autofocus with automatic face detection. Touch-to-focus with exposure lock is now supported, and the UI adds a proper digital zoom slider. The camera app also includes a new sweep panorama feature. The gallery app is also significantly improved, with Instagram-like "hipster filters" and a built-in photo editor that lets you crop and rotate pictures at arbitrary angles. Any tweaks you make are saved in a separate file, keeping the original shot intact. Images can now be sorted by location (using geotagging), and by person (if manually tagged). Video also receives a serious boost in functionality with 1080p capture, continuous autofocus, and the ability to zoom while recording. Additionally, it's now possible to create time lapse videos right from your phone. It's too early to tell if all these features will trickle down to legacy devices or remain exclusive to the Galaxy Nexus, but we'll find out soon enough.

  • Prototype glasses use video cameras, face recognition to help people with limited vision

    by Dana Wollman, 07.06.2011

    We won't lie: we love us a heartwarming story about scientists using run-of-the-mill tech to help people with disabilities, especially when the results are decidedly bionic. Today's tale centers on a team of Oxford researchers developing sensor-laden glasses capable of displaying key information to people with poor (read: nearly eroded) vision. The frames, on display at the Royal Society Summer Science Exhibition, have cameras mounted on the edges, while the lenses are studded with lights -- a setup that allows people suffering from macular degeneration and other conditions to see a simplified version of their surroundings, up close. And the best part, really, is that the glasses cull that data using garden-variety technology such as face detection, tracking software, position detectors, and depth sensors -- precisely the kind of tech you'd expect to find in handsets and gaming systems. Meanwhile, all of the processing required to recognize objects happens in a smartphone-esque computer that could easily fit inside a pocket. And while those frames won't exactly look like normal glasses, they'd still be see-through, allowing for eye contact. Team leader Stephen Hicks admits that vision-impaired people will have to get used to receiving all these flashes of information, but when they do, they might be able to assign different colors to people and objects, and read barcodes and newspaper headlines. It'll be a while before scientists cross that bridge, though -- while the researchers estimate the glasses could one day cost £500 ($800), they're only beginning to build prototypes.

  • Student thwarts face detection software with 'CV Dazzle' makeup

    by Donald Melanson, 03.15.2011

    Not interested in having yourself automatically identified in photos across the internet? Then you might want to take a cue from Adam Ant (or Blade Runner's Pris, if you prefer), as Adam Harvey, a student in NYU's Interactive Telecommunications Program, has discovered that some over-the-top face makeup applied in just the right way can thwart most facial recognition software. Dubbed CV Dazzle (after the Dazzle camouflage used in World War I), the makeup works simply by enhancing areas of the face that you otherwise wouldn't ordinarily enhance -- so instead of applying the makeup around your eyes, you'd apply some on your cheeks and effectively "invert" that area. According to Harvey, that method is effective at blocking the face recognition used by Facebook, Picasa and Flickr -- and it doesn't simply cause some mild confusion, it actually prevents the software from detecting any face at all. Head on past the break for a quick video.

  • Microsoft's OneVision Video Recognizer can detect, identify, and track your face on video... so smile!

    by Vlad Savov, 03.11.2011

    Here's your classic case of "just because you can, doesn't mean you should." Microsoft's Innovation Labs have just demonstrated a OneVision Video Recognizer algorithm that's powerful enough to perform face detection duties on a running video feed. It can recognize and track humanoid visages even while they're moving, accept tags that allow auto-identification of people as they enter the frame, and can ultimately lead to some highly sophisticated video editing and indexing via its automated information gathering. Of course, it's that very ease with which it can keep a watchful eye on everyone that has us feeling uneasy right now, but what are you gonna do? Watch the video after the break, that's what.

  • RIM shows off BlackBerry 6 multimedia experience, in pictures

    by Sean Hollister, 07.21.2010

    While there's still no (official) word on when we'll get any BlackBerry OS 6 hardware, much less that 9800 Bold, RIM has seen fit to provide us another glimpse at the software front. This time round we're looking at multimedia features, including the photo gallery, a brand-new podcasts app and YouTube, alongside extra camera controls (including a face detection mode) and roundabout confirmation that at least some new BlackBerries will support pinch-to-zoom. Oddly enough, there's no video showing off the new multimedia functionality, just a set of stills, but we suppose RIM realizes it's all been done before and Crackberry addicts will take whatever they can get right now.

  • Sony unveils 3DTV release dates and pricing for Japan

    by Richard Lawler, 03.09.2010

    Kicking off an expected flood of 3DTV info over the next few days (Samsung and Panasonic both have events scheduled), Sony has revealed pricing and shipping information for its new televisions and related accessories in Japan. The new sets share the sweet/ominous monolithic style of the already-available NX800 series (also announced today in Japan, along with the 2D-only HX700 LCD and DVR-packing BX30H televisions), with the edge-lit LED LX900 bringing the entire 3D package. With an IR emitter built in and two pairs of RealD active shutter glasses included, all you'll need to add is a source to the 60-, 51-, 46- and 40-inch models, ranging in price from ¥580,000 ($6,444) down to ¥290,000 ($3,222). Even if the TDG-BR100 / TDG-BR50 3D glasses (also available as an accessory for ¥12,000, or about $133) aren't on your face, this WiFi-connected abyss of entertainment will look back into you, using face tracking to detect if someone is sitting too close and warn them to move back, as well as dimming and eventually turning off the screen if you leave the room or simply look away from the TV for an extended period. Want the full 3D effect with the LED-backlit HX900 and edge-lit HX800? Expect to purchase the glasses and TMR-BR100 IR emitter (¥5,000, or $55) separately, or just live a 2D lifestyle and know the 3D is there if you ever want to upgrade. Feel free to wander through Sony Japan's machine-translated website for more specs and prices of these June- and July-scheduled displays, or wait a little while, enjoy the trailer embedded after the break, and we should find out U.S.-specific details soon that will likely be considerably easier on the wallet.

  • Apple dreaming of object identification, new messaging UI in iPhone OS patent

    by Darren Murph, 07.09.2009

    Seriously, Apple, what's up with the patent application bender? Over the past week, we've seen a whole gaggle of new applications, though the latest few just might be the most intriguing. In essence, Apple engineers have outlined plans to integrate object recognition, face detection / recognition, a text message filter (for the parents, you know) and a new, smarter messaging interface that could remind you of unread messages before allowing you to make a call and spout off unnecessarily. Moreover, we're told of a new voice output selection that could enable Oprah or Cookie Monster to read your turn-by-turn directions, bedtime stories or recipes. Suddenly, iPhone OS 3.0 feels so... antediluvian.

    [Via Unwired View]
    Read - Unread messages application
    Read - New messaging interface application
    Read - Face detection application

  • iPhoto '09 uses face detection package from Omron

    by Robert Palmer, 01.30.2009

    An intrepid tipster emailed us late yesterday, and described an interesting challenge: He figured that if Apple didn't develop iPhoto's face recognition technology themselves, who did? He disassembled the app using OTX, a developer tool based on Apple's otool, and found the areas of the software related to facial recognition. There, the string "OKAO" appeared, including in the "FaceRecognitionManager" object. OKAO Vision is a product from Japanese firm Omron Global that -- hey hey -- recognizes faces and their various features. Does the face have big eyes? Are they in trouble? What is the person looking at? The transliteration "okao" apparently means "face" in Japanese, according to their website. "OMRON is committed to raising the accuracy of face detection so that OKAO Vision can be used in many different lifestyle occasions and social settings," their website reads. iPhoto '09 must fit in with that plan. Omron has other facial recognition products, including software for mobile phones, and a camera-plus-hardware-plus-software console that can accurately tell if a person is smiling or not. The software works reasonably well, according to Gizmodo, but does pick up some false positives in patterns, or, say, Mount Rushmore.