Apple vs. Google: The future of premium smartphones is a machine-learning battle

“The big new thing in smartphones lately is one of those buzz phrases you’ll have heard tossed around: machine learning. Like augmented and virtual reality, machine learning is often thought of as a distant promise. However, in 2017, it has materialized in major ways. Machine learning is at the heart of what makes this year’s iPhone X from Apple and Pixel 2 / XL from Google unique,” Vlad Savov writes for The Verge. “It is the driver of differentiation both today and tomorrow, and the companies that fall behind in it will find themselves desperately out of contention.”

“A machine learning advantage can’t be easily replicated, cloned, or reverse-engineered: to compete with the likes of Apple and Google at this game, you need to have as much computing power and user data as they do (which you probably lack) and as much time as they’ve invested (which you probably don’t have),” Savov writes. “In simple terms, machine learning promises to be the holy grail for giant tech companies that want to scale peaks that smaller rivals can’t reach. It capitalizes on vast resources and user bases, and it keeps getting better with time, so competitors have to keep moving just to stay within reach.”

“Chinese companies may work at ludicrous speeds when iterating on hardware, however the rules change when the thing you’re trying to replicate is months and years of machine learning training,” Savov writes. “The old days of phone makers being able to secure a major hardware advantage for longer than a few months are now gone. At this late stage of the evolution of smartphones, machine learning is the only path toward securing meaningful differentiation.”

Read more in the full article here.

MacDailyNews Take: Owing to the time, money, and expertise required, it will certainly be more difficult, if not downright impossible, for some rambunctious startup to unseat Apple or Google as machine learning begins to power everything from cameras to biometrics and beyond.

Meanwhile, Google has an advantage in the sheer amount of data it collects and, perhaps, in its ability to work with that data, but it also has a severe disadvantage when it comes to custom silicon, in which Apple wisely began to invest heavily under Steve Jobs and which, under Tim Cook, has only grown in importance and capability (Secure Enclave, Neural Engine, etc.).

Apple explains how ‘Hey Siri’ works using a deep neural network and machine learning – October 19, 2017
How Apple’s machine learning beats Google Android’s – August 22, 2017
Apple launches new Machine Learning website – July 19, 2017
Apple’s Artificial Intelligence Director discusses computers that can remember – March 29, 2017
New hire could be critical step toward attracting high-profile AI research talent to Apple – October 18, 2016
Apple hires a big brain in AI to smarten up Siri – October 17, 2016
Apple transforms Turi into dedicated machine learning division to build future product features – August 31, 2016
An exclusive inside look at how artificial intelligence and machine learning work at Apple – August 24, 2016
Apple rumored to be taking big piece of Seattle-area office market in expansion – August 12, 2016
Why Apple will become a leader in artificial intelligence – August 8, 2016
Apple buys machine-learning startup Turi for $200 million – August 6, 2016
Apple touts Artificial Intelligence in iOS and opens ‘crown jewels’ to developers – June 14, 2016
Smartphones to die out within five years, replaced by artificial intelligence – survey – December 9, 2015
Apple’s extreme secrecy retarding its artificial intelligence work – October 30, 2015
Apple hires NVIDIA’s artificial intelligence director – October 24, 2015
Apple acquires advanced artificial intelligence startup Perceptio – October 5, 2015
Apple buys artificial intelligence natural language start-up VocalIQ – October 2, 2015


  1. Wrong!

    The future of the premium smartphone is about security and protection of personal data.

    Guess where Google stands on that… nowhere!!

    Therefore, the winner, hands down, is the iPhone.

  2. And unfortunately (I truly hate to say this!)… Google is way ahead.
    Google's contextual understanding, data analytics, and correlations are way ahead, at least in my experience.

    Apple has no choice but to put massive resources behind this and make it a top priority.

    They can start with something as simple as their spellcheck and contextual understanding, which are in a Jurassic state compared to Google's, and then address the sporadic and inconsistent implementation of their AI throughout the ecosystem. (It's perplexing and mind-boggling that no one at Apple seems to care.)
    Why should I, as an Apple user, have to resort to Google for elementary things like spellcheck or contextual understanding, let alone more complex AI interactions?

    It is do or die for you, Apple… not catching up is not even an option!!!
    Fingers crossed for a speedy catch-up… big time!

    1. I think one has to distinguish between user-specific ML, which is better kept on-device for privacy and latency reasons, and general-purpose ML, where you need (ideally all) data from all users. Apple is better at the former, due to its specific hardware approach, and Google is better at the latter because it uses all data (disregarding privacy).

  3. Agreed that Apple currently has a distinct advantage in the mobile area with their A-series chips. If the future movement is towards machine learning however, Google may be able to use their work on Tensorflow to develop silicon (or whatever substrate it ends up being) for Android devices which they may be able to also sell to OEMs.

  4. Why did Apple let Siri fall so far behind Amazon’s, Alphabet’s, and Microsoft’s counterparts? Apple just sat back in some corporate easy chair and let those other tech companies catch and pass them when there was no reason for it to happen. Siri had an early head start and ran at a snail’s pace. The HomePod will be the last to market out of all the dozens of voice assistants available. Amazon is going absolutely crazy with smart assistant modules. The low cost means one module for every room in the house, including the bathroom. “Alexa, order me some Charmin.”

    I hope when Apple repatriates that overseas cash they can get IBM to allow them to shrink Watson into a single SoC.

    1. The answer is fairly simple: access to customer data. Google “crowdsourced” its machine learning, in that it simply harvested customers’ searches and resulting clicks, and analysed customers’ interactions on their phones, on the web, and in their text messages and phone conversations. Massive amounts of customer online behaviour gave Google what Apple can never make up for with engineering (or processing power).

      Because Google has no concern for the privacy of its users, and it has such large global market share, it got the most priceless trove of data to build its AI.

      1. That’s not exactly the case. Apple does collect (anonymously we’re told) customer inquiries of Siri. They have from the beginning. But clearly, they haven’t done enough with the results to come up with significant (IMHO) elaborations upon Siri’s code.

        Google is used to analyzing the guts out of its data for maximum commercial applicability. Apple is not. Apple clearly needs someone over there with a maniacal interest in Siri and an ability to boot out the laggards and find the future’s AI programming mavens. I’d call it a revolution. We can dream.

    2. Apple did indeed both sit on Siri as well as deliberately hobble it compared to its original abilities. (Skipping over the history…)

      But there are plenty of comparisons of the various personal assistant systems pointing out that they all suck in various ways. None of them is ‘intelligent’. That includes IBM’s Watson. They all rely on Question, Database Lookup, Answer protocols, aka Expert Systems. None of them is adequate at dealing with complex questions. They’re all slaves to the contemporary quality of speech-to-text and text-to-speech systems, most of which are by way of Dragon, licensed from Nuance. And what they use is a minimalist rendition of Dragon. If one were to buy a full version of Dragon, it costs over $200, plus you’d want to buy specific subject libraries and take time to train it to better understand your specific voice. And even then, Dragon has difficulty getting much beyond about 95% accuracy.

      Then there are my usual complaints that Apple’s old PlainTalk speech recognition system (which was always lame compared to Dragon) allowed the use of scripting to expand its abilities beyond what Siri is currently capable of. A simple example was the ability of PlainTalk to perform interactive knock-knock jokes. Siri has not a clue how to do that. Why hasn’t Apple made any effort to combine these two speech technologies? Apple obviously doesn’t get it. It’s become a generation gap.

      Sad: In 2016, Apple terminated Sal Soghoian, Product Manager of Automation Technologies, “for business reasons.” If only Apple had had the insight to apply Sal’s skills at scripting Siri into something far more ‘intelligent’, at least as smart as PlainTalk. Hire him back! And how about begging Dag Kittlaus, the chief creator of Siri, to return to Apple as well!!!

      Lost opportunities. (;_;) 😿

  5. Lol, software is where it’s at now. Has anyone with a Windows machine right-clicked the speaker icon near the clock, clicked Playback Devices, checked Properties on the resulting window, then gone to the Enhancements tab? Try it. I doubt any high-priced headphones can do what Windows does with audio enhancement. Apple has nothing like this.

    Devin Prater, Assistive Technology Instructor in Training; JAWS, Microsoft Outlook, Excel, Word, and PowerPoint certified by World Services for the Blind
