Luxology President responds to G5 benchmark skeptics

Brad Peebler, President of Luxology, has responded to the G5 skeptics on the Luxology web site. Here is his open letter in full:

The impressive SPEC benchmark results presented at the Apple G5 launch have been greeted with skepticism and a host of questions about their validity and methodology.

Luxology was one of the ‘real world’ applications used to showcase the speed of the new processor. We would like to outline a few key facts about our development practices and how we went about putting the demo together. We hope this will shed some light on the performance users can expect from the G5, and clear up some confusion surrounding the various speed measurements.

Before I get started, let me mention something about the platforms we support. We project that 65 to 70% of our sales in the next 3 years will be to Windows customers. We plan to support Mac, Windows and quite possibly Linux. In our development team, 75% of the engineers work on Windows machines as their primary workstation and 25% use Macs. We like to consider ourselves platform agnostic. As a business we gravitate towards the platform with the fastest CPU and OpenGL, as it makes our applications look better. Also, in visual fx, compute time is money, so we must be acutely aware of which systems will be most economical for our customers. As artists, however, many of us simply prefer OS X for its attention to detail and workflow for everyday use.

Luxology uses a custom-built cross platform toolkit to handle all platform-specific operations such as mousing and windowing. All the good bits in our app, the 3D engines, etc, are made up of identical code that is simply recompiled on the various platforms and linked with the appropriate toolkit. It is for this reason that our code is actually quite perfect for a cross platform performance test. It also allows us to support multiple platforms with a relatively small development team. Huzzah.

The performance demo made with Luxology technology shows our animation playback tools utilizing motion capture data. Typically with 3D animation playback, the application taxes the GPU (graphics processor) using OpenGL or Direct3D to handle the hardcore 3D computations. In the case of our demonstration we actually moved many of those functions to go through the CPU, and stated as much in the presentation. After all, this was a test of raw CPU power, not of the graphics cards in the box (which were identical Radeon 9800s, by the way). We did quite a bit of performance tuning during the preparation for the demo; however, we did absolutely no AltiVec or SSE coding. The performance tuning was done on both Windows and OS X, using Intel's VTune, AMD's CodeAnalyst and Apple's Shark. Again, 75% of our engineers were on Windows boxes and 25% working on the Mac, not to mention the fact that we had only one engineer with access and security clearance for the G5 prototype. That is hardly relevant, however, as any optimization done on one system is implicitly passed on to the other, since these were general optimizations to our raw 3D engines. The demo setup itself was designed to require a large number of computations and to push a large amount of data in and out of the chip, to show both processor speed and bandwidth. I believe the demo accomplished this in an effective and very fair manner.

While we do not currently have commercial software on the market (we are in the R&D phases for several concurrent products) we would be more than happy to host any press persons (contact) and/or some technical experts to come to our Bay Area R&D facility to evaluate these claims.

Brad Peebler
Luxology, LLC

The letter is posted on the Luxology website here.


  1. Hi Brad,

    So what you are saying, in effect, is that you tested floating point performance and bandwidth. It is good that the G5 has excellent floating point performance, but how does the overall performance of the machine stack up against the alternatives?

    The performance of the G5 running more mainstream applications is what is up in the air. Granted, it will be extremely good, but to say that the G5 is the world's fastest personal computer based on contrived application builds targeting its floating point unit is a little much. Maybe Apple is confused in their marketing, but they are marketing this unit as a personal computer.

    Don’t get me wrong: some aspects of the G5 will result in stellar performance. But what I really want to know is how well the machine holds up running more run-of-the-mill applications. Obviously I could wait until they are available on the market, but it does seem like Apple has highlighted applications that take advantage of the best parts of the G5; more balanced info would eliminate some of the concerns people have with the current performance information.


  2. Gee whiz, I haven’t seen this many scared Wintel users in years. What are they going to do when the 980 comes out? I’ve had impressive performance with my Dual 1GHz G4 using LightWave 7.5 but can’t wait to buy the new G5 fully loaded with 8GB RAM.

    I also went to the opening of the Apple Store on the Magnificent Mile this weekend – you wouldn’t believe how many pro users were there talking about 3D. Just think what will happen when Apple opens another 50 or 60 stores that show people how much easier and faster a Mac is to use. Keep the hits coming, Mr. Jobs.

  3. Run-of-the-mill applications? What? AppleWorks, Safari, Internet Explorer (RIP), MS Office, XPress, Illustrator, DiskWarrior et cetera all run quite fine and dandy on current machines. The real world applications shown are the very applications that NEED to take advantage of the raw power of the G5. How much more ‘real’ do you need to get?

    Precious few people out there consider Photoshop, visual fx, FCP, Maya, Logic … as ‘run-of-the-mill’ applications.

    Balance, perspective and proportion, dave (all lower case apparently)

  4. The real bottleneck in 3D apps is not CPU speed, it’s the GPU (video card) and the efficiency of the drivers. Brad mentioned that the benchmarks involved *software* rendering to benchmark the CPU. That’s fine, but it’s not “real world”. Most software is written to take advantage of the video hardware via drivers. Had the PC and Mac been using their stock “out of the box” video drivers and the Luxology app was not “hobbled” by forcing it to use the CPU for rendering, the PC would likely easily beat the Mac. That’s because the drivers on the PC side are much more optimized and have much more of an effect on rendering speed than the CPU’s speed. What Apple needs to do is assist ATi and Nvidia in optimizing their video drivers. This would improve the speed not only of Luxology’s apps, but with all Mac 3D apps. As a cross-platform 3D software engineer for quite some time now, I’ve been waiting for the day when I can finally say “Our Mac version is our fastest” without reservation. This will only happen when the drivers have been optimized to the point that the CPU becomes the rate limiting step in these benchmarks.

  5. It is the world’s fastest simply because a computer can only process data as fast as it gets it. Thus the G5, with its 1GHz FSB, can theoretically process more data faster than any other desktop processor today. The speed at which it handles small datasets is insignificant; for all the top-tier processors it is nearly instantaneous. It is only with large datasets that a processor is truly tested. The large Photoshop real world demo of the Nemo poster bears this out, as do Luxology’s tests.

    The other factor to keep in mind is that we are not even talking SIMD, in which the G5 is clearly the fastest as well, something Luxology would also have benefited from. It is clear from all the testing that the G5 is the current speed champ; let’s move on.

  6. Johnny C, you completely missed the point of the Luxology tests. They were not referring to the power of their 3D software. They were not referring to the power of the G5 graphics card. They were testing the raw G5 CPU power, and the efficiency of the bus etc., using one of the most demanding tasks available. And the G5 won hands down.

    For those who don’t know, there are two types of rendering: CPU and GPU. CPU is software rendering, like Finding Nemo (smooth and accurate), while GPU is often OpenGL quality (like any computer game). CPU rendering takes several minutes per frame and is high quality; GPU rendering produces many frames per second at low quality.

    The reason for using that Luxology test (which showed a real-time OpenGL animation of a motion-captured person) was the nature of the Worldwide Developers Conference: an audience will not be willing to sit through 20 minutes to see a single frame of Finding Nemo render. They need a more instant representation. No 3D rendering program can software-render complex animation scenes in real time; however, they can do OpenGL renders in real time, which is what was shown. So Luxology basically gave a more usable representation of the rendering performance of the G5.

  7. “The real bottleneck in 3D apps is not CPU speed, it’s the GPU (video card) and the efficiency of the drivers.”

    Wrong. The biggest bottleneck in 3D is RENDERING (software based). This occurs SOLELY in the CPU. The GPU is only used during modelling and navigation in the 3D environment, all during the real-time ‘preview’ stage (with OpenGL quality etc). The GPU outputs game-quality ‘renders’ several times a second, while the CPU puts out film-quality renders (usually many minutes per frame). While rendering on a CPU often takes many hours, the collective ‘lag’ on a slower GPU is minimal. For the price you pay for a G5, the GPU is respectable. It is priced and marketed as a desktop, but has workstation CPU power as an added bonus.

    True, some high-end graphics cards are unavailable for the Mac, but the market for these cards is workstation users who pay thousands of dollars more for their computers. In any case, the performance difference between Mac and PC drivers would be, at most, a 20% difference for a particular GPU. Unlike with CPUs, a 20% GPU increase is rarely perceptible. All it means is that the ‘on screen’ frames in a 3D app (like scrubbing through a complex animation in OpenGL) skip 20% less often. A hundred 0.3-second skips from a slower GPU do not significantly affect a 3D user’s workflow. A six-hour time saving from a faster CPU does.

    For practically everything that the G5 is used for, the CPU is far more important. Accordingly, when you get a G5, you get a significantly faster CPU and a less diverse GPU range than some (more expensive) PC workstations offer. The G5’s benefits FAR outweigh the limited GPU options for any serious (production environment) 3D user.
