“In our latest 800 question test of AI assistant Siri, she was able to understand 99% of queries and correctly answer 75% of them, earning a C grade,” Gene Munster writes for Loup Ventures. “In April of 2017, Siri earned a D+ on the same test where she understood 94% of queries and correctly answered 66%.”

“These tests are conducted with the same methodology and question set as our smart speaker tests found here,” Munster writes. “It involves asking 800 questions divided into five categories (local, commerce, navigation, information, and command) designed to test the full range of abilities and accuracy of an AI assistant.”

“We’re tough graders in that during our testing of Siri, we only counted correct answers when she was able to deliver a single concise answer herself rather than bringing you to search results that might help you find an answer,” Munster writes. “This means, ‘I found this on the web for…’ is counted as incorrect. Siri improved 9% since our April test but remains far away from the A grade that we expect will drive AI assistant technology to mainstream adoption.”

Read more in the full article here.

MacDailyNews Take: Good to see improvement, but overcoming ingrained impressions may prove tougher than transforming Siri into an A student.

SEE ALSO:
Alexa is killing Siri at CES 2018, and HomePod, if it ever ships, isn’t going to make a difference – January 9, 2018
Believe it or not, Apple’s Siri is actually quite good at lots of things, recent data shows – December 4, 2017