Right now, whenever a user makes a request on an Alexa-powered device, there is a delay while the virtual assistant contacts the cloud to interpret it. While Echo devices would continue to rely on the cloud for complex queries, adding speech recognition directly to the device would allow Alexa to perform simple tasks, such as checking the time, without that round trip to the cloud.
Amazon acquired chip designer Annapurna Labs back in 2015 and has slowly begun churning out its own processors, so it was only a matter of time before it started designing and producing chips specifically for its own hardware needs. The company has also begun hiring chip engineers for Amazon Web Services, signaling that it may be moving to proprietary chips for its data centers as well.
It should be noted that Google and Apple have both designed their own AI chips, and Google is also using its chips to support services such as Street View, Photos, Search and Translate. Amazon is simply the latest company to go down this route, though the fact that it is reportedly designing these chips does not mean it will achieve the performance it desires from them.