Siri undoubtedly experienced a number of growing pains upon release, but over time, Apple has done a great job of boosting the service's speech recognition capabilities. In particular, dictation on iOS is smoother and more efficient than ever.
Now comes word via Wired that Siri may soon be getting a whole lot smarter thanks to neural network algorithms Apple may implement, courtesy of an ever-growing team of artificial intelligence experts working on Siri.
For instance, Apple last year hired Alex Acero from Microsoft, where he spent the better part of two decades researching voice recognition technology.
Wired further adds:
Apple has also poached speech researchers from Nuance, including Siri Manager Gunnar Evermann. Another speech research hire: Arnab Ghoshal, a researcher from the University of Edinburgh.
"Apple is not hiring only in the managerial level, but hiring also people on the team-leading level and the researcher level," says Abdel-rahman Mohamed, a postdoctoral researcher at the University of Toronto, who was courted by Apple. "They're building a very strong team for speech recognition research."
Aside from enhanced speech recognition, it's worth pointing out that Siri with iOS 8 will be graced with a number of new features, including Shazam integration, 22 new dictation languages, and streaming voice recognition technology that types what you say as you say it.
On a related note, there are long-standing rumors that Apple is working hard to develop its own speech recognition software instead of relying on Nuance's technology. Over the past few years, Apple has made quite a few notable hires in the space, including a number of reputed speech recognition experts from Nuance itself.
Lastly, it's worth noting a recent Wall Street Journal article which claimed that Samsung might be interested in purchasing Nuance. Nuance currently has a market cap of $5.8 billion, so any potential suitor would need extremely deep pockets.