Apple silicon-powered Macs could have better graphics performance than discrete GPUs

Apple offered an impressive look at the graphics performance of Apple silicon-powered Macs when it showed a Shadow of the Tomb Raider demo during its WWDC keynote. The demo ran noticeably smoothly, even though it was running under Rosetta 2.

Apple silicon-powered Macs: Developer Transition Kit

New Apple support documentation seems to suggest that the company’s shift to Macs powered by Apple Silicon will drop support for non-Apple GPUs.

Ben Lovejoy for 9to5Mac:

The session introduced something known as Tile Based Deferred Rendering (TBDR).

Meet the Tile Based Deferred Rendering (TBDR) GPU architecture for Apple Silicon Macs — the heart of your Metal app or game’s graphics performance. Learn how you can translate or port your graphics-intensive app over to Apple Silicon, and how to take advantage of TBDR and Metal when building natively for the platform.

We’ll look at how TBDR compares with the Immediate Mode Rendering pipeline of older Macs, go through common issues you may face when bringing an app or game over, and explore how to offer incredible performance when building with the native SDK.

Joanna Nelius for Gizmodo:

TBDR captures the entire scene before it starts to render it, splitting it up into multiple small regions, or tiles, that get processed separately, so it processes information pretty fast and doesn’t require a lot of memory bandwidth. From there, the architecture won’t actually render the scene until it rejects any and all occluded pixels.

On the other hand, IMR does things the opposite way, rendering the entire scene before it decides what pixels need to be thrown out. As you probably guessed, this method is inefficient, yet it’s how modern discrete GPUs operate, and they need a lot of bandwidth to do so.

For Apple Silicon ARM architecture, TBDR is a much better match because its focus is on speed and lower power consumption – not to mention the GPU is on the same chip as the CPU, hence the term SoC. This is probably why Apple wrote, “Don’t assume a discrete GPU means better performance,” in its developer support document. It’s all that dang bandwidth it doesn’t need.
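The distinction the quoted passage draws — IMR shades fragments as they arrive and discards occluded ones afterward, while TBDR bins the scene into tiles, resolves visibility first, and shades only what survives — can be sketched in toy form. This is an illustrative model, not real GPU code; the fragment and tile structures here are hypothetical simplifications.

```python
def immediate_mode(fragments):
    """Shade every fragment as it arrives, then depth-test (IMR-style)."""
    shaded = 0
    depth = {}  # (x, y) -> nearest depth seen so far
    for x, y, z in fragments:
        shaded += 1                      # shading work happens before the depth test
        if z < depth.get((x, y), float("inf")):
            depth[(x, y)] = z
    return shaded

def tile_based_deferred(fragments, tile=2):
    """Bin fragments into tiles, resolve visibility per pixel, shade only survivors."""
    tiles = {}
    for x, y, z in fragments:
        tiles.setdefault((x // tile, y // tile), []).append((x, y, z))
    shaded = 0
    for frags in tiles.values():
        nearest = {}
        for x, y, z in frags:            # hidden-surface removal first...
            if z < nearest.get((x, y), float("inf")):
                nearest[(x, y)] = z
        shaded += len(nearest)           # ...then shade one fragment per pixel
    return shaded

# Three triangles stacked on the same pixel: the immediate-mode path shades
# all three fragments, the tile-based deferred path shades only the visible one.
frags = [(0, 0, 0.9), (0, 0, 0.5), (0, 0, 0.1)]
print(immediate_mode(frags), tile_based_deferred(frags))  # 3 1
```

The saved shading work (and the fact that each tile's depth buffer can live in fast on-chip memory) is why TBDR needs far less memory bandwidth for the same scene.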

MacDailyNews Take: Apple wouldn’t be making the move to Apple silicon-powered Macs if it could not make Macs that are demonstrably better than Macs with Intel chips.


  1. “Apple…will drop support for non-Apple GPUs.”

    I’ll bet it’s the other way around. i.e. GPU manufacturers will not bother to rewrite all their software for Apple’s new niche silicon platform.

    This is another bad omen for the Apple switch away from Intel architecture.

1. Apple uses/used two: AMD, and Imagination Technologies for its own mobile silicon, but it has moved away from the latter in favor of its own graphics.

No one is leaving Apple. The advantages will be clear, and Metal is a key piece of the developer puzzle. Apple designing and integrating its own chipset and a software dev world to match is unparalleled in the industry, and will draw in more developers than ever before.

      Neil pretty much nails it all here:

2. Glad to know you have better business insights than Warren Buffett – who has purchased $90B+ in Apple stock. Is your nickname the “Wizard of Dude?”

3. So is anyone (and I do mean anyone) seriously claiming that any integrated Apple Silicon GPU issued in the next year or so can compete with a pair of Nvidia 2080 Ti cards (or Nvidia’s soon-to-be-shipping Ampere-based units) or a pair of AMD’s 5700 XT Anniversary GPUs (or the Radeon VII)?

Remember, the current Mac Pro can incorporate, directly from Apple as a build-to-order option, four (4!) Radeon Vega GPUs. Apple’s integrated GPUs are not going to be able to deliver that kind of performance in the near future. They just are not.

    For the next couple of years (or maybe even longer) Apple’s GPUs are not going to be capability competitive with the very high end GPUs from either of the two leading GPU vendors. That’s just fact.

Apple’s integrated GPU is going to be much more cost effective (or at least I believe it will be) for most users, but it won’t displace discrete GPUs at the performance-at-any-cost end of the spectrum.

    Could Apple get there and dispense with discrete GPUs in 3-6 years? Yes. However, it will take a very focused effort to do so. AMD’s and Nvidia’s top ends are moving targets. Expect AMD’s and Nvidia’s top end to be 100% faster in six years or less. Apple has a long, long way to go from where it is today with integrated GPUs to beat where AMD and Nvidia will be in six years.

1. “Apple Silicon? It will never happen, Intel is forever” – yet with Apple Silicon arriving by the end of this year, the geeks are in rage mode like Tesla fanboys…

2. You appear to be one of the most serious and better-informed persons in this comments section. And I agree with your timelines, too.

An entry-level Mac is one thing; a real high-end desktop workstation is another. Improving over Intel integrated graphics is something Apple could do by the end of the year. Apple has been ready anyway with its latest Ax chips.

The other end is not going to be easy or come soon. Apple could offer a reasonably “powerful” or robust entry-level Mac workstation in the next 2-4 years. In 2 years it could be done if Apple develops an efficient array of low-power chips acting as one GPU, with a proprietary cooler. Or Apple could develop one big chip, as Nvidia and AMD do. But I doubt the latter option, because that is where it gets very complex and requires a lot of very specific experience and dedication. Now, because Apple owns the entire design of the machine, it could do wonders. I can only hope. Now, let’s say Apple delivers such a machine.

My question is: who will be there with Apple and their ARM Macs on the high-end software side? Adobe, OK, but how much of the entire Adobe suite will be ready? And Autodesk, Maxon, The Foundry, Pixologic… just to name a few in the creative business. I am not an expert in other engineering or scientific areas, so fill in the blanks, please.

It took years for developers to catch up with their Mac versions during the last Rosetta transition. I know, I was there. It was a pain, and it still is. Even today we don’t have the full spectrum of high-end software available on the Mac. But it got to the point where it was possible to find a good high-end Mac application.

      Now, ask me if I am going to be an early adopter… No way.

4. This is clear fantasy. Will the new Apple Silicon Macs be faster than similar Intel-based Macs? HELL YES! Will they be more power efficient? HELL YES! Will they have faster graphics performance than Intel Macs with Intel graphics? HELL YES! Will they beat a Mac with a discrete graphics card? Who are you trying to kid… come on now… think about it.

1. I think the point for Apple in building an entire Pro machine is to go for the real gold: to beat all X86 machines. Or maybe to offer a better price-point option.

The first point is very hard to do. Consider the 32-64-core AMD CPU monsters of today, but in 2 more years, and Apple has to do a better job. It also has to include better GPU options.

The second point could be done, because it won’t be the ultimate machine but a very efficient one.

Don’t expect Apple to produce the ultimate workstation soon; figure 2-4 years.

  5. Bad Apple for trying to build low-wattage computer systems that don’t satisfy the gaming PC market. I’m sure everyone needs an over-clockable X86 processor and a couple of RTX 2080Ti cards to play some games with… NOT! Let those companies keep building power-sucking computer systems for consumers if that’s what they want. I’d say that’s a rather tiny niche of users.

I believe Apple is taking the right course for the majority of consumers, which is just my personal opinion. I honestly think the days of high-wattage X86 iron for consumers are coming to an end, for the most part.
