Benchmarks show Power Mac G5 Quad 2.5GHz a magnificently powerful performance demon

“There are several factors that make the G5 Quad a worthwhile purchase at this time. First, it’s magnificently powerful, as we’ll see in the benchmark results below. If you presently use something along the lines of a dual 2.0 GHz G5 (or slower), you will see performance improvements that could allow this machine to pay for itself in productivity enhancements over the next 18 months or so. That’s the best argument the G5 Quad has going for it. It’s so fast that it forces you to consider it seriously, even if you know for certain that it would be the last PowerPC-based computer you’d ever buy,” Dave Nagel writes for Creative Mac.

“Second, we don’t really know what Apple’s plans are for its Intel hardware rollout. We know that June 2006 is the expected launch date for the consumer-level Intel machines (whatever that means), and June 2007 is the expected launch date for the ‘pro’ machines,” Nagel writes. “But, as professionals, we have work to do now, and it may not be the best idea to wait six months for a hardware bump, especially if it’s only to discover that the Mactel machines have been delayed, that Apple’s changed its mind about the whole thing, that the first generation of Mactel hardware isn’t all that wonderful, etc. (Believe me: If the first Mactel machines are simple dual Xeon configurations without dual cores, you’re looking at a speed dip, not a speed bump.)”

“And, third, when the new Intel-based Macs are released, it doesn’t mean that your PowerPC-based hardware is automatically obsolete that very moment. The G5 Quad will continue to get the job done long after the Intel Macs ship, albeit with a dwindling supply of software and support from third-party developers,” Nagel writes. “But what it’s going to come down to is your need for more power now versus your knowledge that a completely new (and potentially even faster) architecture is looming just ahead of us.”

Full article with benchmarks here.

Related MacDailyNews articles:
Computerworld’s advice on Apple’s new Power Mac Quad G5: Place your orders now! – November 16, 2005
Apple Quad 2.5GHz Power Mac G5 vs. previous generation dual 2.5GHz Power Mac G5 – November 14, 2005
InfoWorld: Nothing can compare to Apple’s new Power Mac G5 Quad – true workstation at desktop price – October 24, 2005
NVIDIA brings workstation graphics to Apple Power Mac G5 – October 24, 2005
Apple’s new Power Mac G5 Quad supercharges rendering – October 22, 2005
AnandTech: Apple new Power Mac G5’s biggest improvement is the move to PCI Express – October 21, 2005
Photos of new dual core Apple Power Mac G5 interior, ports, and more – October 19, 2005
First benchmark tests of Apple’s new Power Mac G5 dual-core machines – October 19, 2005
Apple introduces Power Mac G5 Quad and Power Mac G5 Dual – October 19, 2005

77 Comments

  1. So, I’m not confusing myself. Direct3D is the API for Windows, and the drivers that video card makers develop for that combination tend to be better performing than the drivers they develop for the OpenGL/Mac combination. Also, certain software apps – and even tests – can be better optimized for one OS vs another. And while I don’t have a list of which ones ‘diss’ the Mac and/or favor Windows, the Windows friendly versions are out there in greater numbers (except if Apple writes it, of course).

    Yes, you are confusing yourself….. just as you confused yourself when you tried to imply Maya results on a Mac are slower than on Windows because of the threading issue inherent in OSX. Apparently, you thought Maya was a database server app instead of a rendering workstation app.

    You’re just as confused here.

    There are no operating-system advantages/disadvantages for OpenGL.

    Benchmarks under Linux/Windows show this rather well. Any rendering differences stem from something other than the video card drivers themselves – such as SSE2 not being supported in many Linux games.

  2. Sammy:

    AARRGH! Brevity, man – please!

    Look, I’ve read enough to see some valid points, and some BS.

    My points on this issue, as well as the one you brought up from eons ago (as I remember it anyway), are that apps – indeed, software in general – that are written ‘cross-platform’ are generally better optimized for Windows than for Mac. That advantage runs all the way from the app itself down to the video drivers that usually influence the performance of the app. There are exceptions, but not too many. This is not a debated point by anyone I know, or any testing site I’ve read. Well, anyone but you, apparently.

    Plus your usual tactic of avalanching on a topic (with some useful info but mostly your own opinions), as a way of refuting it, leads me to believe that maybe you just miss the forest for the trees on things generally. Alright – I’ll try to address a few trees here.

    Regarding the issue of the API and its relation to the OS, you’re right – I got that wrong. While writing in a rush I had a brain fade about which graphics subsystem the Mac uses for its GUI (thinking it might be relevant). I shouted out to a friend who said “OpenGL!”, and off I went. After I posted I realized that “Aqua” was the right answer, and that once I bit on OpenGL my brain fought the good fight to figure out how to fit it in this case. It doesn’t, and I humbly ask your forgiveness.

    I’ll also tip my hat on whether or not OpenGL interacts with the OS the way Direct3D does with Windows. My knowledge regarding Direct3D is deeper (I know that it does interact), and I made the assumption that a similar concept extended to OpenGL. I haven’t yet found anything that contradicts that, but I’ll concede the point to you as I can see that it could very well be as you claim.

    Nonetheless, as for the rest, I didn’t neglect the fact that the app/API relationship is very important. And my mentioning that the app’s performance itself is highly dependent on whether it’s been fine tuned for the OS it’s working on is hardly the cop-out you characterize it as. The whole subject, as far as I’m concerned, is whether software coded for Mac is refined to the point that it is for Windows (it isn’t), and whether that has an impact (it does). And, while I didn’t mention it before, app performance is also CPU dependent (is Altivec utilized? SSE3? etc …) and that’s a software optimization issue too. None of this is theory, and software optimizations DO come into play in every case, whether ‘real world’ testing or artificial benchmarks. That’s just the way it is.
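
    To make that CPU-dependency point concrete, here’s a minimal sketch (my own illustration, not code from any shipping app) of how the very same 4-float add takes a different path depending on whether the compiler targets AltiVec, SSE, or neither:

        /* Minimal sketch of per-CPU SIMD dispatch: one 4-float add,
           compiled against AltiVec on PPC, SSE on x86, or plain scalar
           code elsewhere.  Illustration only, not production code. */
        #include <stdio.h>

        #if defined(__ALTIVEC__)
        #include <altivec.h>
        static void add4(const float *a, const float *b, float *out) {
            vector float va = vec_ld(0, a);      /* 16-byte-aligned loads */
            vector float vb = vec_ld(0, b);
            vec_st(vec_add(va, vb), 0, out);     /* one op adds all four  */
        }
        #elif defined(__SSE__)
        #include <xmmintrin.h>
        static void add4(const float *a, const float *b, float *out) {
            _mm_storeu_ps(out, _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)));
        }
        #else
        static void add4(const float *a, const float *b, float *out) {
            int i;
            for (i = 0; i < 4; i++) out[i] = a[i] + b[i];   /* scalar fallback */
        }
        #endif

        int main(void) {
            float a[4]   __attribute__((aligned(16))) = {1, 2, 3, 4};
            float b[4]   __attribute__((aligned(16))) = {10, 20, 30, 40};
            float out[4] __attribute__((aligned(16)));
            add4(a, b, out);
            printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
            return 0;
        }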

    Further, you give me credit at one point (I think. Maybe?) for the driver issue – which is obviously equally important here – and then diss me insofar as basically saying ‘anyone should know that’. And then earlier, where you tell me I’m absolutely wrong, I’m not. I’m describing the situation pre-Win95, so maybe that has something to do with you not recognizing what I’m saying, but since I have a feeling I’ll just never win with you regardless … Oh well, moving on.

    As for Maya, I don’t recall the threading issue you bring up. You’re more on top of my posts than I am, so please – provide a link and I’ll re-read it. And the Cell heat yield thing: I think you have mistaken whatever I said, because I’ve never seen that as one of Cell’s ‘selling points’, at least in its present state. As an FYI (and I don’t know if this helps but …), Cell puts out about 185 deg F at 4+GHz in its current lackluster form. See PDF here for that and other info (like heat signature):

    “Design and Implementation of First Generation Cell”
    www-306.ibm.com/…/techlib.nsf/techdocs/7FB9EC5D5BBF51ED87256FC000742186/$file/ISSCC-10.2-Cell_Design.PDF

    The credit I have given Cell, countless times, is in its modularity, and in the forward-thinking ideas regarding its parallel/distributive processing abilities. With the former, whatever Cell lacks in its present state can be more easily modified, without eliminating its other advantages, than any other design I’ve seen (so maybe I made the case that a lower-power PPC-based PPE could be developed for it). With the latter, I believe that, after multiple-core designs, Cell’s ability to ‘co-process’ distributively with any number of other Cells, over any kind of network or connection, is just a huge lever waiting to be utilized.

    I’ll have to leave that for another time though.

  3. Let me start out by pointing out that the link you provided above simply leads to a Google search of “heat”. I’m sure you meant something else (or maybe you were trying to be sarcastic), but I find it amusing that all of our debates, regardless of what the original thread was, always end with a discussion of Cell. I will get to that later though, but first (I’ll try to keep the avalanche to a minimum):

    “My points on this issue, as well as the one you brought up from eons ago (as I remember it anyway), are that apps – indeed, software in general – that are written ‘cross-platform’ are generally better optimized for Windows than for Mac. That advantage runs all the way from the app itself down to the video drivers that usually influence the performance of the app. There are exceptions, but not too many. This is not a debated point by anyone I know, or any testing site I’ve read. Well, anyone but you, apparently.”

    I’m not disagreeing that software, in general, will be more or less optimized toward one platform in a variety of ways. I am pointing out, however, that the reasoning you tried to apply to all software as a result of video card drivers (especially OpenGL) in relation to operating systems is invalid.


    “Regarding the issue of the API and its relation to the OS, you’re right – I got that wrong. While writing in a rush I had a brain fade about which graphics subsystem the Mac uses for its GUI (thinking it might be relevant). I shouted out to a friend who said “OpenGL!”, and off I went. After I posted I realized that “Aqua” was the right answer, and that once I bit on OpenGL my brain fought the good fight to figure out how to fit it in this case. It doesn’t, and I humbly ask your forgiveness.”

    You are forgiven 😀 I think, if OpenGL was truly hampered in any shape or form under OSX compared to Windows/Linux, you would not be the only person in the world preaching that shortcoming….The Mac would not even be a serious contender in the CAD market if that were the case.

    “I’ll also tip my hat on whether or not OpenGL interacts with the OS the way Direct3D does with Windows. My knowledge regarding Direct3D is deeper (I know that it does interact), and I made the assumption that a similar concept extended to OpenGL. I haven’t yet found anything that contradicts that, but I’ll concede the point to you as I can see that it could very well be as you claim.”

    Direct3D is a subset of DirectX, which is tightly integrated into Windows XP. You cannot have a “naked” Direct3D implementation (Direct3D by itself).

    OpenGL is not bound to any part of Windows. OpenGL drivers traffic data directly from the app to the video hardware. It is not a “claim”; it is simply fact. OpenGL apps will run at the same speed on Linux as on Windows on the same machine. A difference in speed in a cross-platform OpenGL app like Maya isn’t due to video card drivers, but to other aspects of the overall platform (primarily the CPU, memory, and chipset).
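
    If you want to see it for yourself, here’s a minimal sketch – the classic GLUT triangle. It compiles and runs unmodified on Windows, Linux, and OS X, because only the window/context plumbing (WGL, GLX, AGL/CGL) is OS-specific, and GLUT hides that:

        /* Minimal sketch: the classic GLUT triangle.  Every GL call below
           is byte-for-byte identical on Windows, Linux, and OS X; only
           the header path differs (<GLUT/glut.h> on the Mac). */
        #include <GL/glut.h>

        static void display(void) {
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_TRIANGLES);               /* same calls, any OS */
            glColor3f(1, 0, 0); glVertex2f(-0.5f, -0.5f);
            glColor3f(0, 1, 0); glVertex2f( 0.5f, -0.5f);
            glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.5f);
            glEnd();
            glFlush();
        }

        int main(int argc, char **argv) {
            glutInit(&argc, argv);               /* GLUT hides the per-OS   */
            glutCreateWindow("portable GL");     /* window/context plumbing */
            glutDisplayFunc(display);
            glutMainLoop();
            return 0;
        }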

    “Nonetheless, as for the rest, I didn’t neglect the fact that the app/API relationship is very important. And my mentioning that the app’s performance itself is highly dependent on whether it’s been fine tuned for the OS it’s working on is hardly the cop-out you characterize it as. The whole subject, as far as I’m concerned, is whether software coded for Mac is refined to the point that it is for Windows (it isn’t), and whether that has an impact (it does). And, while I didn’t mention it before, app performance is also CPU dependent (is Altivec utilized? SSE3? etc …) and that’s a software optimization issue too. None of this is theory, and software optimizations DO come into play in every case, whether ‘real world’ testing or artificial benchmarks. That’s just the way it is.”

    It didn’t strike me until now, but I could have easily shot down your video card driver theory even earlier by mentioning that Maya offers a software rendering option…. the video card would not come into play at all…. and the results are still within range. Cinebench is a great example…. a quad Opteron 280 scores higher than a quad G5. Your video card driver/operating system theory would not apply to the outcome, as neither is even in the equation (although I will note, 64-bit processors running 64-bit Windows will produce a higher score, as they are running in full 64-bit mode, not 32-bit).

    “Further, you give me credit at one point (I think. Maybe?) for the driver issue – which is obviously equally important here – and then diss me insofar as basically saying ‘anyone should know that’.”

    I think I was trying to state that the reason the major video card vendors invest more in Direct3D/Windows is that it is that segment which drives the market, and which put those GPU companies on their thrones in the first place. OpenGL has unfortunately fallen behind Direct3D in supported features over the last several years (in relation to gaming). As far as professional-grade computer-aided design is concerned, however, there is simply no substitute for OpenGL.

    “And then earlier, where you tell me I’m absolutely wrong, I’m not. I’m describing the situation pre-Win95, so maybe that has something to do with you not recognizing what I’m saying, but since I have a feeling I’ll just never win with you regardless … Oh well, moving on.”

    You are still absolutely wrong even when referring to the pre-Win95 era. Back in those days, it was NOT the computer vendors that had to write their own video drivers; each piece of software ITSELF had to ship its own drivers in the days of CGA, EGA, and VGA. VGA, EGA, and CGA were all the same standard regardless of platform (Tandy, IBM, etc…). If you wanted to draw something in EGA, you always wrote to the same address…. how the software handled the instruction with its own driver is another story.

    However, I think you mean the first variations of OpenGL. Even back then, OpenGL implementations were independent of the operating system. 3DFX’s “Glide” worked in both DOS and Windows…. GL Quake and Hexen 2 both come to mind as apps that used early versions of OpenGL (MiniGL, Glide, etc..). The operating system was not a factor then (although DOS games did generally run faster, due solely to the overhead Windows 3.1 caused), and it is not a factor to this day. OpenGL is OpenGL… nothing more, nothing less.
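
    For anyone who never lived through it, here’s a minimal DOS-era sketch of what “each app ships its own driver” meant in practice – it needs a real-mode DOS compiler of the day (e.g. Turbo C); the ‘far’ keyword doesn’t exist elsewhere:

        /* Minimal sketch: in mode 13h every program, on every PC clone,
           drew by poking bytes straight into video memory at segment
           A000 -- no OS-supplied video driver anywhere in sight. */
        #include <dos.h>
        #include <stdio.h>

        int main(void) {
            unsigned char far *vga = (unsigned char far *)0xA0000000L;
            union REGS r;
            int x;

            r.x.ax = 0x0013;                 /* BIOS int 10h: 320x200x256 */
            int86(0x10, &r, &r);
            for (x = 0; x < 320; x++)
                vga[100 * 320 + x] = 4;      /* red line across row 100   */
            getchar();                       /* look at it, then...       */
            r.x.ax = 0x0003;                 /* ...restore 80x25 text     */
            int86(0x10, &r, &r);
            return 0;
        }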

    ….to be continued (need some sleep).

  5. I’m not sure if someone has covered this – if so sorry.

    facts, maybe wrote:

    “The file size of a universal binary is twice the size of a PPC- or Intel-specific binary. Now this may not be an issue for CD/DVD-based retail software sales. But if you get your apps out there over the internet….. then bandwidth becomes a concern. Freeware/Shareware/Donateware could suffer from the size of the universal binaries.”

    Sorry to burst your point like a soap bubble, but universal binaries are ONLY necessary for CD/DVD distributions.

    If you are downloading software from the net, you would KNOW if you had a PPC or an Intel Mac.

    As a result, you can simply CHOOSE which version you want to download.

    Sorry your BLIP has popped.
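
    And telling the two apart is trivial – a universal binary announces itself in its first four bytes. A minimal sketch (the magic values are the ones Apple defines in <mach-o/fat.h> and <mach-o/loader.h>; hard-coded here so it builds anywhere):

        /* Minimal sketch: peek at the first 4 bytes of a file to see
           whether it's a universal ("fat") binary or a thin Mach-O. */
        #include <stdio.h>

        int main(int argc, char **argv) {
            FILE *f;
            unsigned char b[4];
            unsigned long magic;

            if (argc != 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }
            f = fopen(argv[1], "rb");
            if (!f) { perror(argv[1]); return 1; }
            if (fread(b, 1, 4, f) != 4) { fclose(f); fprintf(stderr, "file too short\n"); return 1; }
            magic = ((unsigned long)b[0] << 24) | ((unsigned long)b[1] << 16)
                  | ((unsigned long)b[2] << 8)  |  (unsigned long)b[3];
            if (magic == 0xCAFEBABEUL)      puts("universal (fat) binary");
            else if (magic == 0xFEEDFACEUL) puts("thin Mach-O, big-endian (PPC)");
            else if (magic == 0xCEFAEDFEUL) puts("thin Mach-O, little-endian (Intel)");
            else                            puts("not a Mach-O file");
            fclose(f);
            return 0;
        }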

    My 2 cents.

    Luke

  6. Odyssey67 asking someone else for brevity!!!???

    Now that is rich!

    And as far as I can see, at least Sammy is posting posts with something new and original to say in each one – not just REHASHING his argument OVER and OVER and OVER and OVER and OVER and OVER… sorry, it must be catching!

    By the way – if roadmaps are so UNRELIABLE, why do you believe IBM’s so completely?

    A press release is NOT a performance test!

    Luke

  7. Sorry one more thing.

    You can tell when someone’s getting desperate to prove a point.

    They pull out the old “from what I have been told” and “everything I have heard” lines.

    PLEASE?!?!?

    Unless you name the sources, Odyssey67 (and they can’t be someone’s 2nd cousin’s wife’s uncle who is “high up at Apple” – you have to have SPOKEN to them personally), these conversations may as well be with the voices in your head!

    What have you heard – your fellow professional half glass full club members bitching about the move to Intel???

    How would you have ANY clue as to the negotiations that went on?

    I believe you could POSSIBLY if you ACTUALLY knew someone that was there. But I doubt you do, because your posts continue to use language suggesting your “interpretation/theories/ideas/beliefs”.

    I like the fact that you are so passionate about your BELIEF in this move being an UTTER mistake (to the point you discount or even IGNORE information that contradicts your stance), but please refrain from portraying your BELIEFS and OPINIONS as FACT.

    Luke

  8. Odyssey, my man –
    you really should stop your argument with sammy, ’cause you really don’t know what you are talking about.

    You barely understand the difference between an API and a graphics subsystem, and you also don’t understand the different components of a graphics application. What is a hardware abstraction layer, and what is the relationship between that and the graphics subsystem?

    Also, your “FACTS” are mostly wrong.

    For example, OpenGL was not made for Windows or OS X first; OpenGL is a Silicon Graphics (SGI) API which was originally made for IRIX.

    It is completely OS-independent and can’t be optimized for a particular OS.

    I know sammy has said this, but you don’t seem to like him, so I think you are missing his points.

    Also, driver optimization is not only OS-specific but, importantly, hardware-platform-specific, so the Mac in general will be helped by being on the Intel platform – though not by the higher-level Windows optimizations.

    I want to state that, like you, I HATE the Intel move, but unlike you I don’t subscribe to the generally ignorant assumption that the chips the first CoreMacs will be mated to will be slower than their replacements.

    Even the present NetBurst P4s powering the current Mac dev boxes are offering surprising performance.

    As far as application optimization goes, I have read more PC users whining about how Apple uses PS and AE as its demo software because they are more optimized for the Mac.

    In many ways I believe this is more true than the other way around.

    AE, PS, and AI were originally Mac-only, and the current versions of those applications were done first on the Mac; and although Intel provides a lot of optimization help with PS and AE, I am pretty sure Apple does similar things with these major applications.

    Cinema 4D, which is what Cinebench is based on, is actually heavily optimized for PPC, because the developers are Mac users first.
    At least this is what I understand about Maxon.

    Apple has also helped Alias optimize Maya on PPC.
    The perception that Mac apps are written better for Windows first, I think, comes from the MS Office 6.0 fiasco, when it was blatantly obvious that it was a rewrite of Office 95.

    I’ve been a Mac user long enough to know that the Majors don’t risk it anymore.

    Video drivers on the Mac, though, have been sucking for a while vs. their PC counterparts, but this is mainly due to the confusion Apple has created with its myriad of different and changing graphics APIs and frameworks.
    What do you optimize for when one week the focus is QuickDraw 3D, then Quartz 2D, then a differently designed Quartz 2D + Quartz 3D with OpenGL, and then Core Graphics, etc.? Man, it’s all over the place; I am not even sure anymore what is an API, a framework, or a subsystem in this mess.
    There’s no DirectX model to work towards. D3D took 13 years to get to the point we are at now, but on OS X we have only had 5 rocky years.

    I really wish we could get the same Catalyst and Reactor drivers as the MS PC, but this will only happen with time and focus.

  9. One more thing, Odyssey, before you rip me a new one…

    I am a PPC person. I read PPCZONE and PPCNERDS, I think the Cell is brilliant, and I mostly think that x86 is old, maligned, and in most ways antiquated.

    BUT the truth is that, performance-wise, Opteron and AMD64 processors are, for the next 3 years, going to be winning every performance competition that happens between PPC, Intel x86, and AMD x86.

    The main thing about x86, sadly, is not its elegance or a better performance-per-watt advantage (it will never come close to PPC in that department), but the Intel ecosystem.
    This is what Stevo was gunning for, and we are going to get it in spades.

    Already people are talking about ViiV and Apple in the same sentence.

    This is what Jobs wanted.
    This is the important marker of real success according to the press.
    So now we have it.
    It’s a model marriage – all about press, and what people think.
    For all we know, we could move to Opteron for the top-end PowerMacs (OptoMacs) by 2008, and all the heavy-duty Intel lifting will have been done; and since AMD surely will have won its anti-Intel lawsuit by then, Intel won’t be able to hold the marketing dollars over Apple’s head indefinitely.

    Ciao

  10. Seabass,

    Going from your last post first: that’s exactly the argument I’ve been making! I’ve never said that PPC currently outguns AMD’s Opteron. Read what I wrote and you’ll see that I’ve always given props where they’re due. And, now that Apple’s dropped PPC, I agree totally that AMD will hold the crown for quite some time (without Apple, I can foresee IBM letting the 970 languish). My contention has ALWAYS been that if moving to x86 was for the performance, then AMD would have been the choice. I’ve also mentioned ViiV ad nauseam, in addition to DRM. We also agree that the Intel ecosystem was the prime motivator here, perhaps diverging with regard to what Apple’s biggest draw to that system was – in that I believe that what Intel was doing in video was all Apple cared about. So, by and large, I don’t see what your dispute is regarding my position.

    My main beef is that if people would stop drinking the Kool-Aid and started looking more at the ramifications, they might start asking questions about whether video is really worth it. The more I see how this is playing out, in corporate strategy and federal legislation (which go hand in hand), the more I predict the day when all this Sturm und Drang on behalf of figuring out how to make ‘boob-tube 2.0’ is going to cost consumers a lot more than it’s worth. And I see Apple’s abandonment of an above-average CPU architecture with better future prospects (PPC) for a clearly inferior one (Intel x86) as the first indicator of that cost. Every computer geek’s alarm should have gone off when it was announced, if being one still means caring about more than iTunes.

    If Apple had gone with AMD, I’d have been largely agnostic about the whole thing, because it would have at least shown that Apple was still interested in putting the best possible computer out there. After that, let everyone download (paid) to their heart’s content.

    As for your first post, I’ve already addressed sammy about the mistake you mention regarding OpenGL and its relation to the OS – my bad. For some, the very fact that I will admit a mistake will induce them to obfuscate everything else I’m saying, or have said, but such is life. On that I fucked the duck.

    However, I didn’t say that OpenGL was made for Windows first; I said that Windows used it first – before Direct3D. If anyone had bothered reading the links I provided above, they would have read the following:

    http://www.vcnet.com/bms/features/3d.html

    “By 1992 it was clear that 3D graphics was poised to become a critical technology … Requests from independent software vendors led a consortium of companies to agree to support a common 3D graphics API… derived from a popular, older graphics library created by Silicon Graphics, … called OpenGL. OpenGL was to be a state-of-the-art API that could be implemented efficiently on a wide variety of computers. Its specification would be controlled by a committee (… the Architecture Review Board, or ARB) rather than by (and for the benefit of) a single vendor… The original members of the ARB were Digital Equipment Corporation, International Business Machines, Intel, MICROSOFT, and Silicon Graphics. … At this time Microsoft was developing the first version of its new high-end operating system, Windows NT. A large part of the computer-aided design market was inaccessible to Microsoft because of … shortcomings in Windows 3.1, and Microsoft sought to remedy those in … NT. One such shortcoming was the lack of good 3D graphics support, and OpenGL offered an expedient solution. … In 1995 and 1996 Microsoft established a new program to support games on PCs running its Windows 95 operating system… Microsoft chose not to use … OpenGL… Instead, Microsoft purchased Rendermorphics, Ltd. and acquired its … API known as RealityLab… reworked the device driver design … and announced … a new 3D graphics API called Direct3D Immediate-Mode (Direct3D).”

    Sorry for the long quote, but I’m just trying to be thorough. Since simply listing links provides too much room for FUD, present company excepted, I have to be a sammy here.

    As for the rest of what you write regarding software optimizations, again we seem to agree more than not. The point of departure is more one of emphasis, I think. I will say that when I see a reasonable disclaimer by the author of the test on the subject, as here, I’ll tend to believe him. And Photoshop has been cited as ‘Windows friendly’ in more places (including stories here) than I can shake a stick at. I know people who work with it on both OSes who say the same (about that and a few other A/V programs). In other words, while I’m sure the situation has improved greatly since even when the G5 first came out (remember all the hullabaloo over those benchmarks, and who had what optimization advantages?), I’m content to stand largely where I am on that subject.

  11. Luke:

    sammy is NOT being original. This guy has been notorious for posting, and posting, and posting, and posting … regardless of the topic and regardless of how well he’s refuted, or by how many. If you want to call me a kettle, so be it. I do try to defend myself. But to the extent I bring up a subject more than once, it’s always in relation to a new story about the subject. Once I’ve made my peace, I’m off. sammy, on the other hand, will just inundate from this post and God knows what else from the days of yore. He’s castigated me for not responding even after I have responded multiple times on the same topic (or point within a topic) and properly signed off. THEN he’ll bring it all up again on a completely different post, and throw in new objections there too! And no matter how well I’ve tried to deal with him (like on Cell), he’ll claim I said things I didn’t, or that I “ran away” from him, or blah blah blah.

    Frankly, I think he’s in love with me, but just can’t utter the words.

    I’ve been lucky these last two days to have the time to spend on some of it, and you’re a good guy, so I don’t mind mixing it up a bit here, but sammy’s go-arounds are patently ridiculous, and nothing I care to match. So listen: while I do my best on these things, I recognize the limits.

    sammy – I CAN quit you!!!

    Oh, and as for the “I believe, I think” nonsense: look, I do have some insights here, but in the end, none of us is sitting in Jobs’ head. I’m pretty clear in saying that my inferences are just that – mine. That’s intellectually honest, not a dodge. Also, I provide links as much as the next guy, but it’s not my fault if they’re not read.

  12. It would be interesting if somebody offered a hack that allows OS X to run on the dual Opteron boxes. I like Apple’s hardware, but if the price points don’t change appreciably on Apple’s end relative to the Opteron hardware, such a hack becomes worth the chance IMHO.

    Question – if the Intel HW is lacking in performance, yet the development boxes “have surprising performance” for admittedly sub-optimal HW from Intel, do we care how good the PPC is if the result is a faster Intel-based PowerMac than what’s currently offered in PPC form?

    If Intel-based Macs with current Intel CPUs have surprising performance relative to the PPC Macs they replace, how does Apple keep PPC-based Macs selling? This would seem to impact the PowerMac audience the most. Does Apple cripple the Intel-based Macs so as not to hurt PowerMac sales (and alienate those “pro” customers)? Or does Apple say “f*ck it” and ship stuff running at full potential, with factories pumping the new HW out as fast as possible, since PowerMac sales are the slimmest anyway?

  13. Sammy:

    While I’m giving you some grief here, recognize that I’m not averse to debating these things out. I like to do that, since I invariably learn something. Basically I just want to deal in digestible chunks, and without all the ‘tude. I can dish it out too (see above), but really I’d rather keep things civil. Also, from watching Dr. Phil I’ve learned that it is NEVER productive to bring up old issues in a current argument. You do that entirely too much, and if our relationship is to survive, well … you get the picture.

  14. zupchuck:

    Since I’m still here, I’ll venture a comment or two. I’d love to see an AMD hack, if for no other reason than comparison’s sake. But I’d also enjoy the hell out of building an AMD/OSX box out of an old G4 PowerMac case!

    As for the developer comments on OSX/x86 performance: I’m not trying to be a hardhead, but I want to see it for myself. If you think about it, the devs have some incentive to talk the good talk regarding this transition, if for no other reason than that they really have no choice. If they start bad-mouthing Macintels, their own sales certainly won’t be aided. They depend on a healthy Mac market, and if that market is going to be based on x86 soon …

    I’m really looking forward to January in that regard. Although I’d love it even more if we had the best G4s to compare the new laptops to, it will still be instructive to see just how well OSX and the apps run regardless.

  15. Odyssey67,

    January ’06 and going forward will be instructive to say the least.

    And, you aren’t being hardheaded in wanting to see OSX/x86 performance for yourself. There’s always an RDF hanging about all things Apple, and self-serving comments to protect one’s business are to be expected.

    If we are surprised (in an exceedingly positive way) by OSX/x86 performance, Apple has an interesting situation in managing the result.

    Here’s to ’06!

  16. Odyssey,

    I know you can downplay my posts all you wish, but as you can see many people here are fully aware of the repetitive arguments you put forth in thread after thread concerning the transition.

    You state that I keep posting even after being “refuted”, but I don’t recall you ever refuting any points that I’ve made. In fact, I’ve been begging you to please back up your reasoning in some way or another with proof…. not just post what you “think” is true. If you think I’ve said something that is factually incorrect, please do correct me. I will not argue when the proof you provide is right in front of me (and your links lately have either been irrelevant or not working).

    “As for Maya, I don’t recall the threading issue you bring up. You’re more on top of my posts than I am, so please – provide a link and I’ll re-read it.”

    These were your own words concerning the Renderman Maya benchmark: “The fact that the G5s are also crippled by OSX’s poor server performance vs Linux (which the Opterons are almost certainly running), also speaks well for them as hardware.”
    http://macdailynews.com/index.php/weblog/comments/7790/

    “And the Cell heat yield thing: I think you have mistaken whatever I said, because I’ve never seen that as one of Cell’s ‘selling points’, at least in its present state. As an FYI (and I don’t know if this helps but …), Cell puts out about 185 deg F at 4+GHz in its current lackluster form. See PDF here for that and other info (like heat signature):”

    I brought it up because I used it as an example of how you sometimes make statements that you think are true when in reality they are not. In that previous link, you posted that the reason Cell dropped one physical SPE was to “alleviate” developers’ concerns by “adding instruction sets” to the PPE.

    I merely provided you facts (from Sony’s own president) that the only reason one SPE was disabled (not physically dropped, as you had thought) and the clock speed reduced was to improve yield due to heat/defect issues.

  17. MDN,

    Can there be a dedicated Sammy vs. Odyssey67 page?

    This is turning into a great religious beliefs debate.

    Although I’d love to see a Sammy vs. Beeblebrox thread. Both are masters of the “more must be more” tactic.

  18. zupchuck’s right, this is getting silly. But I’m a glutton for punishment, so …

    sammy says: “I brought … up [the Cell heat yield thing] because I used it as an example of how you sometimes make statements that you think are true when in reality they are not.”

    That’s a neat trick, since I never made the statement. I’ve been boring down through my posts (which was surprisingly easy, since every time you’ve posted to me you’ve provided a link to a previous – and largely unrelated – post of mine) and there is not one word from me regarding Cell and heat. HOWEVER, in the interest of full disclosure …

    “… you posted that the reason Cell dropped one physical SPE was to ‘alleviate’ developers’ concerns by ‘adding instruction sets’ to the PPE. I merely provided you facts (from Sony’s own president) that the only reason one SPE was disabled (not physically dropped, as you had thought) and the clock speed reduced was to improve yield due to heat/defect issues.”

    Right, and I gave you props for that. I also explained that I got that info from an Ars piece on the subject, which expressed an opinion on the matter prior to Sony’s president’s statement (which I also hadn’t seen). Exactly what more am I supposed to do? Besides, the only issue I was dealing with at the time was Cell’s modular nature. While that specific example may have been wrong, it doesn’t negate the point since countless other places have made the same analysis.

    From Nick Blachford:
    http://www.blachford.info/computer/Cell/Cell4_v2.html
    “[In Cell] … The PPE could be replaced by something like a 970FX but this has a larger core and either the die size would have to grow to around 250mm square or a pair of SPEs removed. The 970 also consumes hefty amounts of power at high clock speeds – more than the entire Cell. In order to fit a 970 and keep it cool enough to run in a PS3 the frequency would probably be reduced to well under 3GHz …”

    From the date of his post, he’s referring to the old iteration of the 970, and also to what would be required to have a Cell/970 combo work within the confines of the PS3 (tight quarters and presumably no fewer than 7 SPEs). But the general concept is established: the design of Cell allows a wide range of possibilities, and incorporating future improvements (like lower-power versions of the 970) is possible.

    From IBM itself:
    http://domino.research.ibm.com/comm/research.nsf/pages/r.arch.innovation.html
    “… Cell is … a scalable system. The number of attached SPUs can be varied, to achieve different power/performance and price/performance points. And, the Cell architecture was conceived as a modular, extendible system where … a Power Architecture™ core and attached SPUs, can form a symmetric multiprocessor system.”

    Even Anand indicates the same:
    “There’s not much that’s impressive about the PPE, other than it’s a small, fast, efficient core. Put up against a Pentium 4 or an Athlon 64, the PPE would lose undoubtedly, but the PPE’s architecture is one answer to a shift in the performance paradigm… Should Cell ever make its way into a PC, the PPE would definitely … be beefed up, or at least paired with multiple other PPEs.”

    All of which validates the point I was making. Even you’ve never refuted it, but you have spent literally tens of thousands of words beating a horse that I long since acknowledged was dead. Again – all trees and very little forest.

    cont…

  19. ugh … falling … in and out of … sleep … must … struggle … to finish …

    sammy says: “These were your own words concerning the Renderman Maya benchmark: “The fact that the G5s are also crippled by OSX’s poor server performance vs Linux (which the Opterons are almost certainly running), also speaks well for them as hardware.”
    http://macdailynews.com/index.php/weblog/comments/7790/

    Alright. To recap: You supplied a link and stated “No surprise here, Quad G5’s still fall behind Quad Opterons.
    http://perso.wanadoo.fr/fabien.corrente/MR.htm“ After I re-read this, I remembered that I had originally read “No surprise here, Quad G5’s still FAR behind Quad Opterons” and then laid into you over the fact that, while the G5 was second on that huge list, it wasn’t a distant second, and it beat out a much larger field of various Intel rigs. My misreading of your statement clearly started the dominoes falling.

    As for Maya, I do realize what it does. But when I threw in that bit about server performance, I can see why it would look as if I don’t – the two are unrelated. Unfortunately, I don’t know what prompted me to do that. I’ve re-read as many of the MDN posts I could find that I was making that day, and found nothing definitive regarding OSX server thread issues. I do remember having that conversation with SOMEBODY (maybe even on another site), so I’m fairly certain that I just confused you with whoever else I was originally making that point with. Unfortunately, once it got involved in our conversation, I assumed we both knew why (when in fact neither of us really did) and later used ‘quick and dirty’ statements just to try to make points and move on.

    For example: “I know what Maya does, but the fact is that OSX Server is still a lower-performing OS than Linux or Windows Server. I don’t like it, but that’s the way it is. OSX’s easier to use, and more stable than Windows (about equal with Linux), but the others are much snappier in getting things done, no matter the task. One other thing I should have pointed out is that video card drivers (which are even more pertinent with Maya) are generally better written/optimized for Windows, and perhaps even Linux (depending on who you talk to), and yet STILL … the G5 comes in a close second in a field of about a hundred rigs.”

    I’m obviously splicing two different concepts into one paragraph here and, even worse, switching back and forth between them, so I’ll have to do major penance with my English profs and editors. But I now see why you were getting so nuts about it, so again I apologize. I definitely had too much going on that day.

    On the other hand, I will reiterate that your style of posting doesn’t help your quest for ‘accuracy’. In my case, when I saw how much you were throwing my way, with all that earlier stuff as well, I frankly stopped reading 70% of it (as I think I said at the time). It’s not reasonable to expect a thorough intake of your usual level of posting (all at once, normally) under any but the most dedicated of circumstances – let alone thoughtful, mistake-free responses – and most people don’t have that level of dedication. At least not me, though as all this should attest, I will give it the old school try. Nevertheless, I’d wager most people wouldn’t fall into that category.

    That said, clearly I made some mistakes. I doubt that means we agree on anything after all, but there you are.

  20. The first link you provided reinforces the limitations I have been pointing out to you concerning Cell. I think this is a case of us concentrating on opposite ends of the same idea, which is basically: it is possible to design Cell with complexity similar to current general-purpose processors; however, there are noticeable practical limitations. You have been concentrating on the possibility; I have been pointing out the limitations. From the manufacturing perspective, the limitation is much more important.

    Odyssey, have you ever held a CPU in the palm of your hand and wondered to yourself: if so much power can be held in such a small space….. why don’t we have a chip the size of a dinner table crammed with as many transistors as possible?

    Well, for one thing, yield would probably be less than 1%…. making a 300-million-transistor chip without defects is already difficult…. making one with 1.5 million million (300m x 5000, the size of a small dinner table) is near impossible. The possibility is there…. but the limitation exceeds practicality by a HUGE amount.

    That is the idea your link pretty much outlines. Cell is already an exceedingly large chip full of manufacturing headaches.

    The author states that one of the things that could be done to put a 970 core on Cell is to increase the die size. What on earth makes you think Sony would want to suffer an even greater yield problem than they have now? Transistors packed into the smallest die size possible, with as low a heat signature as attainable, always produce the best yields; that is simple manufacturing logic. Even throwing away 3-4 SPEs would still leave a heat and power issue, meaning a Cell chip with a 970 core would be too slow (as it would have to be clocked really low to overcome the heat issue) to be useful. The manufacturing limitations are too great, as the author indicates… as Sony’s president stated in the link I provided you… as I’ve been pointing out to you for some time.
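
    If you want to see how brutally die area eats yield, here’s a minimal sketch using the textbook Poisson defect model, Y = e^(-D*A). The defect density is a made-up illustrative number, NOT an IBM figure; the areas are the ones being thrown around in this thread:

        /* Minimal sketch of the textbook Poisson yield model Y = e^(-D*A).
           D (defects/cm^2) is an illustrative assumption, not a real fab
           number; die areas are the ones quoted in this thread. */
        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double d = 0.5;                                /* assumed defects per cm^2 */
            double mm2[] = {66.2, 154.0, 221.0, 250.0};    /* 970FX, 970MP, Cell, "Cell+970" */
            int i;
            for (i = 0; i < 4; i++) {
                double yield = exp(-d * mm2[i] / 100.0);   /* mm^2 -> cm^2 */
                printf("%6.1f mm^2  ->  %2.0f%% yield\n", mm2[i], 100.0 * yield);
            }
            return 0;
        }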

  21. Well, looks like I did a formatting error above. Anyways, to continue..

    Odyssey, I’m not saying it’s futile to look into the possibilities Cell offers….but it is futile to overlook the manufacturing limitations certain designs impose.

    Microsoft is having issues with its Xenos PPC. So major, in fact, that they are affecting its ability to deliver enough Xbox 360s. Now take into consideration that Cell is more complicated than Xenos, and that IBM manufactures both: what do you think this bodes for Sony when the PS3 debuts?

    “Of course, Sony is subject to the same vagaries of IBM chip yields, and the cell processor is more complex than the XBox 360 one. But as is often the case, the issue here is not the technology problem, but how the business is marketing how it deals with problems. Microsoft has chosen to not talk about its chip yield problem until now, telling everyone that it was going to sell millions of XBox 360s in the first 90 days. Sony is taking a different tack. Sony has already been communicating challenging messages to consumers, such as the fact that its Playstation 3 will be “more expensive” (chip yield could be one of the factors that Sony is taking into account in this assessment).”
    http://www.blackfriarsinc.com/blog/2005/12/low-chip-yields-are-causing-xbox-360.html

    The fact is, manufacturing limitations are much more important than “possibilities”… so much so, in fact, that Sony NEEDS to “disable” one SPE (or, basically, include chips with a defective SPE) and reduce clock speed just to get enough useful chips.

    As I stated to you before, there is no “super Cell” chip that’s even TAPED OUT… which occurs about one year before mass production. There just isn’t a manufacturing process available now, or in the immediate future, that would accommodate the Cell variation you are placing your hopes so high upon.

    Feel free to prove otherwise – that such chips exist (sending you on missions like this, which I know you will never succeed in, is a good learning experience for you).

    The IBM link/quote you provided stated exactly the same point Sony’s president stated in the link I provided: “An interesting question is what will be done with the Cell chips that only have six working SPEs,” continued Kutaragi. “We won’t use it for the PS3, of course. Rather, I’m seriously thinking about using two of these chips to create a home server. Home servers have less of a constraint in case size and board dimension when compared to the PS3, and we can make enough space for two Cell chips. That will make it a product with a total of 12 SPEs. This is possible with the Cell since it can use as many SPEs as it needs. And this will bring a use to Cell chips that aren’t fit for the PS3.”

    IBM is stating there is a market for all segments of Cell (as they are not going to simply throw away Cell chips with good working cores, even if there are only 1, 2, or 3, etc., of them out of 8). They are not stating they can slap on any number of SPEs. (I really hope you didn’t interpret it that way).

    “It is possible to design Cell with complexity similar to current general-purpose processors; however, there are noticeable practical limitations. You have been concentrating on the possibility; I have been pointing out the limitations. From the manufacturing perspective, the limitation is much more important.”

    Agreed that we are focusing on different things. I just think you overestimate the limitations. As important as some of them are, they aren’t the deal breakers you make them out to be, and I think the way you present your argument belies that.

    For instance, your example of CPUs the size of dinner tables is a case in point – neither I, nor the authors of the material I linked to, come close to being that kind of ‘wild-eyed’ about the situation (mischaracterization), and it doesn’t provide a realistic picture of what would need to be done (exaggeration). That’s two inescapable strikes against you when assessing your argument. That you see ‘current Cell vs. table-sized CPUs’ as a good either/or analogy (the kind of analogy which is itself artificial in this case) is, I think, also over the top, and thus strike three. Your omissions don’t help either. You say “The author states that one of the things that could be done to put a 970 core on Cell is to increase the die size” and make that your primary argument against it ever happening. Yet what the author actually says is “… either the die size would have to grow to around 250mm square or a pair of SPEs removed.” So by leaving out the second part you artificially pump up your argument.

    Honestly, I’m not trying to be confrontational anymore; I’m just trying to point out that you are no paragon of the virtues you castigate me for not having.

    Anyway, for reference I found some specs on the single-core AMD Athlon 64/FX when they first came out: http://www.hardocp.com/article.html?art=NTI0
    Its die size was 193mm sq at the time; “… the die size might jump out at you as being massive … Certainly these are not inexpensive CPUs to build.” And yet AMD was still able to undercut Intel on price. Even with a die shrink, the current dual-core version has grown to 199mm sq:
    http://www.hardocp.com/article.html?art=NzY2

    Pentium D is at 206mm sq
    http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2379&p=9

    Cell is currently listed as 221mm sq
    http://www.ps3portal.com/?page=ps3_cell
    So that’s hardly a massive difference, and obviously (with all of them on 90nm manufacturing processes and 300mm wafers) it yields reasonably too. BTW – if the manufacturing difficulties are as great as you imagine, somebody had better tell STI, ’cause they’ve built a factory in Japan to go with the one in Fishkill, NY, and are going ahead with Cell-based PS3s, HDTVs, and God knows what else anyway.

    Anyway, back to the point – the G5’s sizes are as follows: PowerPC 970FX (single-core) = 66.2mm sq, PowerPC 970MP (dual) = 154mm sq
    http://www.eweek.com/article2/0,1895,1627892,00.asp

    I couldn’t find specifics, but I’m sure that Cell’s PPE w/cache is smaller than the 970FX, though I doubt by very much. There are nine distinct processing ‘elements’ on Cell (1 PPE and 8 SPEs), and I would estimate the PPE w/cache to be at least 2x – and probably closer to 3x – the size of an SPE. Simple division of the die size by the total ‘unit sizes’ (my term) would be 221mm sq / 11 = 20.1mm sq. Multiply that by 3 and you get 60.3mm sq for the PPE. Obviously that’s a back-o’-the-napkin estimate, but even a smaller size – say 50mm sq – shows that a 970FX-style PPE wouldn’t increase the overall size of a Cell much, and not at all with some number (2, 3, even 4) of SPEs eliminated.
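
    Spelling that napkin math out (same numbers as above; the “PPE counts as ~3 SPE-sized units” ratio is my own rough assumption):

        /* The back-of-the-napkin estimate above, spelled out.  The 3x
           PPE-to-SPE area ratio is a rough assumption, not a spec. */
        #include <stdio.h>

        int main(void) {
            double cell_die = 221.0;                  /* mm^2, whole Cell die */
            double units    = 8.0 + 3.0;              /* 8 SPEs + PPE at ~3x  */
            double unit     = cell_die / units;
            printf("one SPE-sized unit : %.1f mm^2\n", unit);        /* ~20.1 */
            printf("PPE w/cache (~3x)  : %.1f mm^2\n", 3.0 * unit);  /* ~60.3 */
            printf("970FX, for compare : 66.2 mm^2\n");
            return 0;
        }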

    You mention heat issues with a 970 core. The 970FX has a safe operating temp of 50-55 degrees.
    http://www-306.ibm.com/chips/techlib/techlib.nsf/techdocs/9DBF300EB19A60D287256E4B005E43EC
    Its extremely small die size makes dissipation difficult, but not impossible by any stretch, and certainly better than the old 970, which BTW was forced-air cooled at all but its fastest settings. The new dual-core 970 has already improved on that, so the 970FX must be better still. Cell’s numbers are unknown, other than that its maximum 4+GHz requires something like 130 watts (about what current high-end Pentium desktop CPUs draw). But it is extremely scalable – so much so that there are plans to throttle it down to a gig or so and put it in a cell phone:
    http://www.cooltechzone.com/index.php?option=content&task=view&id=1944
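
    As a rough illustration of why throttling buys so much: dynamic power scales roughly with f x V^2. The 130 W / 4 GHz figure is the one cited above; both voltages are purely my assumptions, not published Cell specs:

        /* Rough sketch: dynamic power ~ f * V^2.  130 W at 4 GHz is the
           figure cited above; both voltages are illustrative guesses. */
        #include <stdio.h>

        int main(void) {
            double p0 = 130.0, f0 = 4.0, v0 = 1.3;   /* cited wattage, assumed volts */
            double f1 = 1.0,   v1 = 0.9;             /* assumed throttled point      */
            double p1 = p0 * (f1 / f0) * (v1 * v1) / (v0 * v0);
            printf("~%.0f W at %.0f GHz  ->  ~%.0f W at %.0f GHz\n", p0, f0, p1, f1);
            return 0;
        }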

    cont…
