“Today, Apple published a long and informative blog post by its audio software engineering and speech teams about how they use machine learning to make Siri responsive on the HomePod, and it reveals a lot about why Apple has made machine learning such a focus of late,” Samuel Axon writes for Ars Technica.
“The post discusses working in a far-field setting where users are calling on Siri from any number of locations around the room relative to the HomePod’s location. The premise is essentially that making Siri work on the HomePod is harder than on the iPhone for that reason,” Axon writes. “The device must compete with loud music playback from itself.”
“The post covers a wide range of topics, like echo cancellation and suppression, mask-based noise reduction, and deep-learning-based streamer selection, among other things. There is a considerable amount of technical and mathematical detail, as it’s written like an academic paper with detailed citations,” Axon writes. “We won’t recap it all here — it’s a lot — but give it a read if you’re interested in a fairly deep dive on the techniques being used at Apple (and other tech companies, although specific approaches do vary).”
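To give a flavor of one of the topics mentioned, here is a minimal sketch of mask-based noise reduction, one of the techniques the Apple post covers. This is an illustrative toy, not Apple’s actual method: it assumes the first few STFT frames contain only noise, estimates the noise power spectrum from them, and applies a Wiener-style spectral mask. All function names and parameters are invented for this example.

```python
import numpy as np

def stft(x, n_fft=512, hop=256):
    # Frame the signal, apply a Hann window, and FFT each frame.
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def mask_noise_reduction(spec, noise_frames=10, floor=0.1):
    # Estimate noise power from the first frames (assumed noise-only),
    # then attenuate each time-frequency bin with a Wiener-style mask
    # clamped to a small floor to avoid musical-noise artifacts.
    noise_power = np.mean(np.abs(spec[:noise_frames]) ** 2, axis=0)
    signal_power = np.abs(spec) ** 2
    mask = np.maximum(1.0 - noise_power / (signal_power + 1e-12), floor)
    return spec * mask

# Toy input: a 440 Hz tone buried in white noise at a 16 kHz rate.
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
noisy = np.sin(2 * np.pi * 440 * t) + 0.5 * rng.standard_normal(t.size)
cleaned = mask_noise_reduction(stft(noisy))
```

Real systems estimate the mask adaptively (the Apple post describes deep-learning-based mask estimation rather than this fixed noise-frame heuristic), but the apply-a-per-bin-gain structure is the same.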
Read more in the full article here.
MacDailyNews Take: HomePod is already leading in voice recognition. It just works more often than competitors’ inferior-sounding so-called smart speakers. Now, if Apple could only make Siri as smart as her voice recognition capabilities!
Apple’s full blog post is here.