Apple explains how ‘Hey Siri’ works using a deep neural network and machine learning

“Apple has published a fascinating new entry in its Machine Learning Journal this month that explains how the voice-activated ‘Hey Siri’ detector works in detail,” Zac Hall writes for 9to5Mac. “While many of these entries tend to be too in-depth for the average reader (i.e. me), October’s entry from the Siri team includes several interesting (and understandable!) tidbits about what happens behind the scenes when you use ‘Hey Siri’ on your iPhone and Apple Watch.”

“Apple explains that the iPhone and Apple Watch microphone ‘turns your voice into a stream of instantaneous waveform samples, at a rate of 16000 per second’ before the on-device detector decides whether you intended to invoke Siri with your voice,” Hall reports. “As we know, ‘Hey Siri’ relies on the co-processor in iPhones to listen for the trigger phrase without requiring physical interaction or eating up battery life, while the Apple Watch treats ‘Hey Siri’ differently, as it requires the display to be on. Apple explains that ‘Hey Siri’ uses only about 5% of the compute budget with this method.”
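
For readers who want a concrete picture of the pipeline Hall describes, here is a minimal, hypothetical sketch of a streaming trigger-phrase detector: 16 kHz samples are cut into overlapping frames, each frame is scored by a small neural network, and a detection fires when the score crosses a threshold. The frame sizes, network, weights, and threshold below are placeholder assumptions for illustration only, not Apple's actual model.

```python
# Toy trigger-phrase detector sketch (NOT Apple's implementation).
# Assumptions: 16 kHz mono audio, a tiny two-layer network with random
# placeholder weights, and an arbitrary detection threshold.
import numpy as np

SAMPLE_RATE = 16_000   # samples per second, as quoted in the article
FRAME_LEN = 400        # 25 ms analysis window (hypothetical)
HOP_LEN = 160          # 10 ms hop between frames (hypothetical)
THRESHOLD = 0.9        # detection threshold (hypothetical)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((FRAME_LEN, 32)) * 0.01   # input -> hidden weights
b1 = np.zeros(32)
W2 = rng.standard_normal((32, 1)) * 0.01           # hidden -> output weights
b2 = np.zeros(1)

def frame_score(frame: np.ndarray) -> float:
    """Score one audio frame with the toy network; returns a value in (0, 1)."""
    hidden = np.tanh(frame @ W1 + b1)
    logit = float((hidden @ W2 + b2)[0])
    return 1.0 / (1.0 + np.exp(-logit))   # sigmoid squashes to a probability-like score

def detect_trigger(samples: np.ndarray) -> bool:
    """Slide over the sample stream; report whether any frame clears the threshold."""
    for start in range(0, len(samples) - FRAME_LEN + 1, HOP_LEN):
        if frame_score(samples[start:start + FRAME_LEN]) >= THRESHOLD:
            return True
    return False

# One second of silence should not trigger the toy detector.
print(detect_trigger(np.zeros(SAMPLE_RATE)))
```

A real always-on detector would run a loop like this on a low-power co-processor and only wake the main processor when the score clears the threshold, which is the division of labor the article describes.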

Hall writes, “The full entry is a neat read, especially if you’re interested in speech recognition or use ‘Hey Siri’ on your iPhone or Apple Watch.”

Read more in the full article here.

MacDailyNews Take: So, why does “Hey Siri” seem to work better on our Apple Watch units vs. our iPhones? Microphone differences? Psychological?


[Thanks to MacDailyNews Reader “Arline M.” for the heads up.]
