Wha? Even before a HoloLens sequel could grace Microsoft's stage at MWC, the company has revived the Kinect, but in a buttoned-down business sense. Nearly a decade after the Kinect first launched, the Azure Kinect combines a depth sensor, a high-def camera and a spatial microphone array. It's got an "intelligent edge," in that it not only sees and hears in high levels of detail but also interprets those inputs. The new camera module's depth sensor offers wide or narrow views, depending on the use case.
According to Microsoft, early adopters have already put the new Kinect to practical use. AVA Retail, for example, has been using Azure Kinect in combination with Azure AI to enable self-checkout and grab-and-go shopping.
Perhaps most interesting among the health use cases: Ocuvera, a company that uses software to predict patient bed falls in hospitals and care homes. Apparently, after adopting Azure Kinect, it's been able to cut patient falls from 11,000 a year to zero.
Microsoft announced that it planned to repurpose its Kinect know-how for this kind of work at last year's Build conference, teasing tinier sensors.
According to the company's official listing, the Azure Kinect developer kit houses a "best-in-class" 1-megapixel depth camera, a 360-degree seven-microphone circular array, a 12MP RGB camera and an orientation sensor for "building advanced computer vision and speech models." The whole thing is less than 5 inches long and 1.5 inches thick.
It's less about the hardware here, however, and more about the power of Azure services and AI to build even more intelligent systems. The sensor can be used on its own or paired with other Azure Kinect sensors. It will cost $399, and developers can preorder it today.