Performance shootout: Apple’s Intel vs. ARM systems

“There have been some comments questioning the meaning of the performance benchmarks which I quoted previously as part of my argument that Apple may need to switch Macs from using Intel processors to its own systems-on-a-chip based on ARM processors,” Howard Oakley writes for Eclectic Light Company. “Here are some additional details.”

“These figures are based on last year’s products and their processors. If Apple delivers similar improvements in their own SoCs delivered in the autumn of this year and next, it isn’t hard to see how it could deliver Macs with significantly improved performance and at lower cost by switching from Intel to its own ARM-based SoCs,” Oakley writes. “That’s without considering GPUs, where the cost and performance differences are even greater, and the iPad Pro 11-inch already outperforms the great majority of current Macs, including iMac Pros.”

Oakley writes, “the iPad Pro delivers a considerably lower cost per K Geekbench than either of the two Macs, and an overall benchmark of almost 18K, which is now very close to those offered by the fastest processor option for two of the most popular current Mac models.”

Read more in the full article here.

MacDailyNews Take: The writing is on the wall for Intel.

SEE ALSO:
Macs may need ARM processors to survive – April 17, 2019
Steve Jobs predicted the Mac’s move from Intel to ARM processors – April 8, 2019
Intel execs believe that Apple’s ARM-based Macs could come as soon as 2020 – February 21, 2019
Apple’s Project Marzipan could mean big things for the future of the Macintosh – February 20, 2019
Apple iPad Pro’s A12X chip has no real rivals; it delivers performance unseen on Android tablets – November 1, 2018
Ming-Chi Kuo: Apple A-series Macs coming in 2020 or 2021, Apple Car in 2023-2025 – October 17, 2018
MacBooks powered by Apple A-series chips are finally going to happen soon – September 18, 2018
Apple A-series-powered Mac idea boosted as ARM claims its chips can out-perform Intel – August 16, 2018
Did Apple just show its hand on future low-end, A-series-powered MacBooks? – July 13, 2018
How Apple might approach an ARM-based Mac – May 30, 2018
Pegatron said to assemble Apple’s upcoming ‘ARM-based MacBook’ codenamed ‘Star’ – May 29, 2018
Intel 10nm Cannon Lake delays push MacBook Pro with potential 32GB RAM into 2019 – April 27, 2018
Why the next Mac processor transition won’t be like the last two – April 4, 2018
Apple’s ‘Kalamata’ project will move Macs from Intel to Apple A-series processors – April 2, 2018
Apple plans on dumping Intel for its own chips in Macs as early as 2020 – April 2, 2018
Apple is working to unite iOS and macOS; will they standardize their chip platform next? – December 21, 2017
Why Apple would want to unify iOS and Mac apps in 2018 – December 20, 2017
Apple to provide tool for developers to build cross-platform apps that run on iOS and macOS in 2018 – December 20, 2017
The once and future OS for Apple – December 8, 2017
Apple ships more microprocessors than Intel – October 2, 2017
Apple embarrasses Intel – June 14, 2017
Apple developing new chip for Macintosh in test of Intel independence – February 1, 2017
Apple’s A10 Fusion chip ‘blows away the competition,’ could easily power MacBook Air – Linley Group – October 21, 2016

47 Comments

  1. There is no way I believe in cross-platform benchmarks.
    On iOS, the chip runs far fewer underlying processes than the full-boat OS X.

    “That’s without considering GPUs, where the cost and performance differences are even greater, and the iPad Pro 11-inch already outperforms the great majority of current Macs, including iMac Pros.”

    Not something to brag about. Are the Macs that lame? Nvidia is laughing its brains out!

    1. It's economics: we have swung far into the realm of mobile computing. Parity for general use has been reached. Like programming languages of the past that reigned and fell, x86 has had its run; it's really no longer relevant. Game publishers want parity and ease of development, web creators want the same, and the future is ARM inside.

  2. I have been predicting the Intel to A-series transition on Macs for the past few years. My predictions were admittedly optimistic – I thought that there was a good possibility that a low-end Mac based on an A-series processor would be released in 2018. That obviously did not happen. But the logic and rationale for this transition still hold true.

    The article did not do a good job of addressing processor cost. Using the end-item cost was lame. I don't have the data needed to do a better job, but I suspect that the economies of scale in iOS devices mean that Apple can easily procure tens of millions of additional A-series chips at low cost relative to buying Intel chips. And A-series processors are also more power efficient, which is great for laptops.

    The article also failed to explore the idea of Macs using multiple A-series chips. IF the A-series architecture allows two, four, or more processors to cooperate in parallel, then an individual A-series processor does not have to beat the higher-end Intel processors to make the transition viable. If not, then Apple should work to enable multi-processor implementations of A-series processors. I want the option to order Macs with two, four, eight, or more A-series processors in parallel. Consider a Mac Pro running 32 or 64 A-series processors.

    It appears entirely possible that A-series processors will evolve to beat Intel processors head-to-head over the next few years. If so, then Apple would have the option to offer entry-level Macs with a single processor and higher-end Macs with 2, 4, 8, or more A-series processors.

    By eliminating Intel processors from Macs, we would lose the ability to dual-boot natively into Windows. But Windows and Windows apps could still be run on A-series based Macs in emulation mode using Parallels, etc. We did this on the old G3, G4, and G5 Macs and it was adequate for most people. I don’t think that Intel inside and native Windows compatibility is as important as it was back in 2007. It was a safety net back then to reduce concern during the transition.

    Long story short – A-series processors are nearly at parity with higher-end Intel processors based on benchmarks. Therefore, single processor Macs based on A-series processors may be released at any time. If the A-series architecture (now or in the future) enables multiple processors to be ganged together, then you could see MBPs with workstation performance and iMacs and Mac Pros with supercomputer performance in a few years.

  3. I still worry about the speed of multiple programs and other processes running in the foreground and background. iOS limits the number of processes by allowing only very limited multitasking. You can't have 200 windows open at once, like I have seen on a Mac. The A-series chips are not being tested under that sort of load, so we don't know how they will handle it.

    The same goes for emulation. That has to be done as an operating system service. It can't be done with something like Parallels, because it won't just be necessary to emulate an x86 for Windows, but for every existing macOS program as well. For compatibility during the transition, programs will have to be compiled into packages that include both x86 and A-series executables. Yes, the better-supported programs will be recompiled into Fat Binaries reasonably quickly, but some won't. Because this is a switch from CISC to RISC, the binaries are really going to be fat!

    It took years for some programs to make the switch to x86 from PowerPC and others never switched at all. It took even longer to make the transition from 680xx to Power; for several years, there were parts of the operating system that were still running in emulation. That was possible because the Power chips were so much faster than 680xx, and Intel so much faster than Power, that the performance hit from emulation was minimized. We don’t know if the A-series are that much faster than x86 in a true multitasking environment.

  4. The A series is cheap enough to put two in one desktop computer if Apple wanted to. There are so many things they could do, but Apple has lost focus on the Mac, routers, and monitors. They want to make TV shows…

    1. Meanwhile, desktop chipsets have upwards of 28 cores, efficient multiprocessing, huge numbers of I/O lanes, flexibility in GPU optimization, etc. A-series chips aren't designed for anything that Macs need.

  5. The A series SoC would be fine for an entry level laptop. Most of these users probably don’t run Windows in either Parallels or Bootcamp. The A series chip would be cheaper and provide better battery life.

    I suspect that for the iMac and Mac Pro we won’t see a sharp transition to the A series chips. Once they are capable enough to run FCP X head to head against an Intel chip then we might see a hybrid device. macOS would run on the A series chip and there would also be an Intel CPU available for rapid execution of that code. This would be a bridge to the all A series world.

    There used to be an option like this long ago. I believe that you could get a board with an Intel CPU to plug into a Mac.

    1. Again, it isn’t just Windows. Every publicly available bit or byte of Mac software in the world is Intel-only. An A-series machine, whether entry level or not, could not run a single existing Mac app without emulation. Performance would be crap except on a very fast computer, almost by definition not entry level.

      Yes, in time, apps could be recompiled to run on both processor families, but why would third-party software companies bother? People who buy cheap computers do not buy expensive software.

      The A-series chips are getting really fast, but are they fast enough to handle emulation in a multi-windowed multitasking environment? The iPad is not a fair test. The day will probably come when slow development at Intel forces a switch, like the slow development of 680xx and PowerPC processors did, but has that day arrived yet?

      1. “could not run a single existing Mac app without emulation”
        Unless the developer recompiled their app. And, if the developer wants to continue to make money in the future, they WILL recompile their app.

        If the app is no longer actively developed, then it won’t run. Which means if you NEED that app, you just don’t upgrade to an A-series system.

        There seems to be an assumption that if Apple goes all-in, all the customers do too. Customers don't all need to go dump their current capable systems and buy new systems that are going to cause them continuity problems.

      2. TxUser, I generally agree with your posts. But not this one.

        The tools used to develop Mac apps, such as Xcode, can compile the same source code for different targets. You can bet that when Apple is ready to make the shift to A-series processors on the Mac, the tools will be ready to cross-compile. You might even have a fat-binary option that supports both Intel and A-series in one app bundle, as sketched below.
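
        A minimal sketch of what that could look like, assuming Apple ships an arm64 macOS target (the file names are hypothetical; the #if arch conditions and the lipo tool are standard today):

            // ArchReport.swift — one Swift source file that behaves correctly
            // in either slice of a fat (universal) binary.
            #if arch(x86_64)
            let slice = "x86_64 (Intel)"
            #elseif arch(arm64)
            let slice = "arm64 (Apple A-series)"
            #else
            let slice = "unknown architecture"
            #endif

            print("Running the \(slice) slice of this binary.")

        Once both slices are built, a command like

            lipo -create -output report report_x86_64 report_arm64

        stitches them into one executable, and "lipo -archs report" lists what's inside.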

        In my opinion, your pessimism is excessive and unwarranted. Apple has a history of handling these transitions quite well. In several years, you may very well be dining on crow.

        1. I think it could well be as soon as next year. While Intel doesn't KNOW Apple's plans, they DO have a view into how many Intel CPUs Apple is buying for the next year… and THEY expect Apple to transition in 2020.

          Apple COULD contract a ton of chips late in the cycle, but I think Apple’s plans for Fall 2020 are already set.

      3. I’m not saying it isn’t going to happen. It probably is, someday. I’m just skeptical of the growing meme that Apple can easily sell cheap low-end computers with A-series chips. Because of the emulation requirement during the transition and the big files for fat binaries, A-series Macs will require considerable resources, and that will not come terribly cheap.

        Porting between processors and operating systems is never just a matter of "flip a compiler flag and recompile." There are always hardware-dependency issues. The optimization tools for ARM-based code have not had decades to mature like the x86 tools have. Developers are probably not going to provide free upgrades to the recompiled fat-binary revisions of their programs, so that's not going to be very cheap, either. If users are hesitant to pay for a whole Applications folder's worth of software, the developers are going to be hesitant to do the extra work.

        All of the work is doable, but the transition cannot possibly be as pain-free as many of the posters here seem to think.

        1. Apple seems incapable of doing anything "easily" these days. Moreover, Cook is so infatuated with subscriptions that he long ago cut off all significant personal computer development budgets. Big Brother wants you to pay by the month; that is the primary purpose of iOS. Apple will never move the Mac to A chips, because to do so would cost money that Cook would rather spend on media production for his rental biz.

    2. We didn’t see “transition” hybrids that included both PowerPC and Intel processors, and I think the same will apply here. Once they decide to go A-Series, it will be all in. And, either a solution will be available for a potential consumer, OR those consumers will stick with their current Intel system. It’s not like the Intel systems are all going to evaporate overnight 🙂

      The boards with Intel CPUs weren't transition options offered by Apple; those were products sold to people who wanted to run Wintel apps.

  6. Should Apple switch Macs to A-series processors (A-14?), they would provide the major developers with APIs and a recompile utility first, probably a year in advance of an ARM-powered Mac launch.

    Yep, to take advantage of performance increases you will have to upgrade your software (I don't see an emulation mode), but new software is a given no matter what happens. It's the nature of product evolution for both hardware and software, so it's not an issue except for the cheapest of complainers.

    If the benefit is high enough, within 3 years of launch 80% of heavy use Macs will have upgraded, and all the handwringing will be like before, a waste of breath.

    1. It isn't a matter of "to see performance increases you will have to upgrade your software." Without an emulation mode, no existing software—whether from Apple, a third party, or internally developed—will work at all. Every single app you now own will have to be replaced before you can get ANY work done, and your new software will not work on any existing Macs. There is no "80% will be upgraded," since nothing less than 100% will do any good. None of the past processor or OS transitions were anywhere near that radical. It will happen, I'm sure, but anybody who thinks it will be painless is dreaming.

      1. “None of the past processor or OS transitions were anywhere near that radical.”
        The iOS transition to a 64-bit-only code base was that radical. Apps that worked one day didn't work the next, EVEN on systems that still had 32-bit-capable parts inside. Every developer that still wanted to sell iOS applications had to recompile and tweak (the same would be true of an Intel to ARM transition). Some apps were never re-released, and users who want to use those apps have to stick to an older OS (also the same as with an Intel to ARM transition).

        The biggest difference is that the number of people affected, who will find themselves locked out of their software if they upgrade either their OS or their hardware, will be far smaller: there were 100 million active Mac users reported as of 2017, versus 1.3 BILLION active iOS devices in 2018. It wasn't painless, it was painful; yet here we are at iOS 12, speculating about what the next OS will include rather than about the tumult that the processor change caused.

  7. This speculation that Apple is going to suddenly do a massive Mac migration to yet another architecture makes no sense. Why would Apple do this?
    – All RISC workstation lines were abandoned by 2009 (this includes Apple, HP, IBM, SGI, and Sun). The only outlier is Raptor Computing Systems.
    – Workstations that rely on Intel CISC designs include not only Apple's Macs and MS Windows machines but also FreeBSD, Linux, and Oracle Solaris boxes, SGI Virtu, Fujitsu Celsius, etc. Do you really believe all these companies would have intentionally chosen an inferior chipset?
    – The ARM architecture is licensed, so Apple would not have the ability to innovate at will even if outsource-to-the-max Cookie suddenly got the notion for Apple to push for in-house innovation.
    – ARM architecture is neither new nor competitive for server or desktop performance, where limiting power draw isn't the top priority.
    – Apple's contracted chip foundries are stressed enough prioritizing iOS chip production; there is no evidence that Samsung or TSMC has spare capacity to build Mac chips as well.
    – The price of the multiple ARM chips necessary to match x86 multithreaded performance makes it a false economy.
    – Without worrying about battery drain, Intel's desktop chips are easily clocked at over 5 GHz today. The latest and most powerful Apple A12X chip runs at a 2.5 GHz clock speed.
    – Intel has some tricks up its sleeve. While competitors keep pushing for miniaturization, Intel has been adding value with other features like 512-bit advanced vector extensions (AVX-512), security enhancements, and 3D manufacturing.

    I could go on, but you get the point. ARM is great for thin-client mobile systems. They are cheaper and more battery-friendly, if you are willing to accept all their limitations. It is ludicrous, however, to move Macs over to a lower-performance chipset architecture. If Apple does this, we will all know that Apple is no longer in the race for top-performance computing products. Fashion is more important to Apple, it seems.

    1. Except none of your rhetoric relates to customer or product benefit; it's all assumption. You're applying 1980s thinking (GHz x core count = performance), you acknowledge AVX-512 whilst ignoring the A-series' greater diversity of custom silicon, and citing corporate politics was the low point for me.
      A-series SoCs significantly outperform their Intel counterparts of similar TDP (although it's a difficult comparison, as Intel fakes this by citing low base clocks which must boost to do anything useful). This is based on common applications like Affinity Photo/Designer, and with Adobe and MS writing ARM-native apps now, the x86 monopoly is practically over.
      Why would they do it? To control their own products rather than being another Intel lapdog. Intel have continually underperformed, with late releases and process-shrink delays. When IBM did this, Apple ditched them too. Perhaps not even AArch64 but Apple's own ISA: their chief ARM hardware designer just left, the A11-to-A12 performance increase was flat (a shift to emulation?), and is Vortex short for "virtual cortex"? Who knows, but Intel spec bumps don't even warrant a press event these days.

        1. BTW, that Japanese ARM supercomputer was 2.8 times faster than the second-place computer. It also won all the other workload rankings for supercomputers, which is the first time that has ever happened in the rankings. What was that you were saying about physics and energy and stuff? Bwah ha haaaaa!

        2. Yes, and? Anyone can type physics equations into the internets (though it clearly helps if you understand them!).

          ∯(𝜕𝜴)E∙dS=4π∭(𝜴)𝜌dV.

          Aren’t I impressive?

          The problem is, the claim that we are anywhere near the fundamental energy limit implied by Newton's equations is not only baseless and absurd, it is empirically nonsense.
          Your "limit" could be arbitrarily assigned at any point in time! How were we ever able to break those laws of "physics" and make a faster processor than the original 4004? Why does it suddenly, magically, create a limit now? Why do companies like Intel continue to try to create faster processors if, as you claim, the fundamental laws of physics make that impossible?!?
          Hint: they don't. We are nowhere near that fundamental information-processing limit.

          I have no need to argue with myself. I prefer to argue with you, lest anyone lurking this thread be misled for even a second and believe there is any merit to your objection here.
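
          For anyone lurking who wants an actual number: the floor usually cited for irreversible computation is the Landauer limit, the minimum energy to erase one bit (my framing, not anything from the article). At room temperature:

              E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693) \approx 2.9 \times 10^{-21}\ \mathrm{J\ per\ bit}

          A chip dissipating 10 W while retiring on the order of 10^{10} instructions per second spends roughly 10^{-9} J per instruction, some eleven or twelve orders of magnitude above that floor. Whatever limits clock speeds today, it is not a fundamental bound on computation.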

        3. applecynic, breaking the laws of physics with every thought. A self hating cynic, he is cynical even about himself. If you want to know what scum sounds like, just read an applecynic post.

        4. You know what's even funnier, a laugh riot, even? The fact that you don't understand even simple physics, but post this nonsense here anyway. NOTHING in the work equation implies in any way, shape, or form that we have reached any sort of limit in terms of processor speed, let alone that the ARM architecture has reached it, when it has not even come close to its theoretical max clock frequency!
          If your logic were applicable, you would also have to explain how Intel is still able to extract incremental performance upgrades, PER CORE, independent of frequency. Or why Intel's CISC architecture is the final, ultimate state of processor technology (despite AMD cleaning their clock).
          And BTW, if the equations for work were applicable here, they are more than satisfied by merely putting in more energy! Using a separate, high-throughput core that turns on as needed, taking over from the low-power cores when workload increases.

          You literally don’t know what you’re talking about.

        5. “NOTHING in the work equation implies in any way, shape, or form that we have reached any sort of limit in terms of processor speed”

          Did I say that? But a chip that only consumes x watts of power can never perform more than x watts of work, often just a tiny fraction.

          But hey, it serves me right for arguing with someone that cut and pastes E&M integrals when the topic is thermodynamics.

        6. First of all, yes, you did say that. It is inherent in your post, if it is claimed to have any relevance whatsoever. It's the subject of this thread and article, after all!
          And the amount of work performed is completely irrelevant without a prior definition of "work" as it relates to this issue (let alone "distance").
          As for your last idiotic claim, please cite where you claim I cut and pasted that integral, and paste the same equation here.
          Good luck with that. That equation was entered by hand.

          Also, good job failing to address a single issue with your post raised by anyone.

          Again, you don't know what you're talking about.

        7. Cynic, think about it for a second. Suppose that you are performing the calculations by hand. The “work” you are interested in is the math. That isn’t measured by the amount of “work” expended in moving the pencil. Similarly, a more efficient algorithm or floating point unit might be able to do more “work” in terms of math results while expending less “work” in terms of power consumption or processor cycles.

        8. @TxUser
          The work is primarily the moving of the electrons in the chip to perform the calculation. That is literally equivalent to moving a pencil. It depends on the instructions and the number of cycles needed to perform them. These electrons are moved by electrical potential, whose first derivative is the force acting on them.

          DeusExMachina is not only offensive, but wrong!

        9. @DeusExMachina,
          Don't be clueless, okay?
          If electrons don't move, there is no calculation. Potential moves them (its first derivative being the force acting on them), work is thus performed, and there is a theoretical minimum to the amount of work required by any calculation. Along the way, heat is dissipated as wasted energy.

          Power, as I've already said, is work per unit time.

          Sadly, your comprehension of physics is not yet at a theoretical minimum.

        10. Clueless?!? You're the one parroting nonsense you read somewhere and have no understanding of! Yes, the work is the moving of electrons against an EMF. NO ONE said ANYTHING about the work equations not being an issue in processor development. But what you have NOT done is make ANY sort of case that ANY current processor technology is up against ANY sort of theoretical maximum here! Yes, there is one, but you have not even attempted to address it. In fact, there is a great deal of headroom. Leaving aside the fact that ARM is not a hardware design but a reference spec, and the hardware is not locked to any particular architecture, let alone fabrication technology, many of the physical parameters that affect total speed, like wire distance and nanoscale capacitance, affect current CMOS chips differently than they might some other fabrication and transistor design type. And those other technologies can have TOTALLY different heat envelopes, capacitance values, wire lengths, etc.
          But again, we don't even get there, because you have yet to respond to a single objection actually made here, preferring to tilt at windmills of your own creation.

          Yes, faster clock speeds generate more heat, and high heat leads to circuit meltdowns, bad bits, and numerous other issues (leaving aside quantum effects). But what you have continuously failed to do is tie them together.
          So AGAIN, what aspect of the work equation, W = Fd, implies that processors are up against a wall here? Why are you avoiding that question?

        11. @DeusExMachina

          Okay, last try…

          McD, to whom I was replying, said:

          "You're applying 1980s thinking; GHz x core count = performance"

          To which I gave my reply:

          "Yes… Physics existed in the '80s."

          Which was a snarky way of saying GHz x core count is still a measure of performance. You would argue that an architectural change would break from a linear trend, and you would be right. But it would be setting up a new one, and under that architecture GHz x core count would still be a measure of performance at a new baseline.

          Then I hit you with the work equation, which is a direct translation of GHz x core count (expressed as power or, as you know, work per unit time).

          Nowhere did I say we were up against a limit on possible architectural performance, just on the theoretical performance of a given chip.

          Now stop being a douchenozzle, because I'm done with this.
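
          For what it's worth, the first-order relation chip architects actually quote for this trade-off is the textbook CMOS dynamic power approximation (a generic formula, nothing specific to Intel or Apple):

              P_{\text{dyn}} \approx \alpha\, C\, V^2 f

          where \alpha is the switching activity, C the switched capacitance, V the supply voltage, and f the clock frequency. Power grows linearly with frequency and core count but quadratically with voltage, and higher clocks generally demand higher voltage, which is why performance per watt rather than raw GHz tends to decide arguments like this one.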

  8. It’s fascinating to read the foregoing discussion from 15 months ago. Most of us got some of it right and some wrong. The demonstrations at WWDC suggest that some Apple silicon Macs are already running about as fast as entry-level Intel Macs, and the systems that will actually be offered for sale near the end of the year will be substantially faster.

    Yes, there will be an emulation mode (Rosetta 2) that allows existing Mac software to run. We don't know how fast yet, but probably usably fast. Unfortunately, the emulation won't have the low-level system hooks that allow Boot Camp and Parallels to run Windows. Those who need to run both Universal macOS software and Intel Windows software will be able to buy new Intel Macs that are still in the pipeline. Those machines will be supported in new macOS versions for at least two or three additional years.
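
    As a sketch of how software might tell which world it is running in, adapted from the sysctl key Apple documents for detecting the Rosetta translation environment (treat the details as provisional):

        import Darwin

        // Returns true under Rosetta 2 translation, false when native,
        // and nil if the answer cannot be determined.
        func isTranslatedUnderRosetta() -> Bool? {
            var flag: Int32 = 0
            var size = MemoryLayout<Int32>.size
            // "sysctl.proc_translated" is the documented key.
            if sysctlbyname("sysctl.proc_translated", &flag, &size, nil, 0) == -1 {
                // ENOENT: the key is absent, so this is a native process
                // (an Intel Mac, or an OS without Rosetta 2).
                return errno == ENOENT ? false : nil
            }
            return flag == 1
        }

        switch isTranslatedUnderRosetta() {
        case true?:  print("Running under Rosetta 2 translation")
        case false?: print("Running natively")
        default:     print("Could not determine translation state")
        }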

    Yes, there will be fat binaries (Universal 2) that will run as native code on both processor families. Apple tells us that recompiling well-behaved Intel Mac software into a universal binary should only take a week. We have yet to see how much existing software (including libraries) is actually that well behaved and how much will require debugging hidden hardware dependencies. We have yet to see how many developers will take the time to update existing Mac software to Universal 2 and how many will instead release new Apple silicon software with the same functionality but a repeat purchase price.

    It will be a fascinating transition.
