“Starting in iOS 10 and continuing with new features in iOS 11, we base Siri voices on deep learning,” Siri Team writes for Apple’s Machine Learning Journal. “The resulting voices are more natural, smoother, and allow Siri’s personality to shine through.”

“Recently, deep learning has gained momentum in the field of speech technology, largely surpassing conventional techniques, such as hidden Markov models (HMMs). Parametric synthesis has benefited greatly from deep learning technology,” Siri Team writes. “Deep learning has also enabled a completely new approach for speech synthesis called direct waveform modeling (for example using WaveNet), which has the potential to provide both the high quality of unit selection synthesis and flexibility of parametric synthesis. However, given its extremely high computational cost, it is not yet feasible for a production system.”

“In order to provide the best possible quality for Siri’s voices across all platforms,” Siri Team writes, “Apple is now taking a step forward to utilize deep learning in an on-device hybrid unit selection system.”
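For readers curious what “hybrid unit selection” means in practice: a unit selection synthesizer stitches together short recorded speech segments, choosing the sequence that minimizes a target cost (how well a candidate unit matches the acoustics the system wants, which in a hybrid system a deep network helps predict) plus a concatenation cost (how smoothly adjacent units join). The sketch below is purely illustrative, not Apple’s implementation; the cost functions and candidate units are made-up stand-ins.

```python
# Illustrative sketch (NOT Apple's implementation) of unit selection:
# a Viterbi-style search picks one candidate unit per position so that
# the sum of target costs and concatenation costs is minimized.
# In a hybrid system, a deep network would supply the acoustic targets
# behind target_cost; here the costs are arbitrary toy functions.

def select_units(candidates, target_cost, concat_cost):
    """Return (chosen_units, total_cost) minimizing target + join costs."""
    n = len(candidates)
    # best[i][j] = (cumulative cost, backpointer) for candidate j at position i
    best = [{j: (target_cost(0, u), None) for j, u in enumerate(candidates[0])}]
    for i in range(1, n):
        row = {}
        for j, u in enumerate(candidates[i]):
            # Cheapest way to reach unit u from any previous candidate.
            prev_j, prev_cost = min(
                ((pj, pc + concat_cost(candidates[i - 1][pj], u))
                 for pj, (pc, _) in best[i - 1].items()),
                key=lambda t: t[1])
            row[j] = (prev_cost + target_cost(i, u), prev_j)
        best.append(row)
    # Backtrack from the cheapest final candidate.
    j, (cost, _) = min(best[-1].items(), key=lambda t: t[1][0])
    path = [j]
    for i in range(n - 1, 0, -1):
        path.append(best[i][path[-1]][1])
    path.reverse()
    return [candidates[i][path[i]] for i in range(n)], cost


# Toy example: units are plain numbers, targets are the desired values.
targets = [1, 5, 9]
candidates = [[0, 2], [4, 7], [8, 10]]
units, cost = select_units(
    candidates,
    target_cost=lambda i, u: abs(u - targets[i]),   # match the target
    concat_cost=lambda a, b: abs(a - b))            # join smoothly
```

With these toy costs the search balances matching each target against keeping adjacent units close together, which is the same trade-off a real synthesizer makes between acoustic accuracy and audible seams at unit boundaries.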

Read more in the full article here.

MacDailyNews Take: The new US English Siri voice certainly does sound better than ever!