Virginia Tech ‘Big Mac’ System X achieves 12.25 teraflops

After achieving international honors and accolades for building System X, the fastest supercomputer at any academic institution in the world (November, 2003 TOP500 List), Virginia Tech announced today that its rebuilt System X is now operating at 12.25 teraflops.

“Virginia Tech will learn of its new ranking when the list is unveiled in November of this year at Supercomputing 2004 in Pittsburgh,” said Srinidhi Varadarajan, the lead designer of the system, in the press release. “We expect to do well.”

“This new number is an increase of almost two teraflops over the original System X,” said Hassan Aref, dean of Virginia Tech’s College of Engineering, in the press release. “We are extremely pleased with the performance, using the new Apple machines.”

Virginia Tech revealed plans to migrate its cluster of Power Mac G5 desktop computers to Apple’s new Xserve G5 in January. The Xserve G5, the most powerful Xserve yet, delivers more than 18 gigaflops of peak double-precision processing power per system and features the same revolutionary 64-bit PowerPC G5 processor used in Virginia Tech’s original cluster of 1,100 Power Mac G5s.
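
As a rough back-of-the-envelope check (an illustrative sketch, not an Apple specification), the PowerPC 970 is generally credited with four double-precision floating-point operations per clock cycle via its two fused multiply-add units, so a dual-processor node’s theoretical peak scales directly with clock speed. The sketch below, with a made-up helper name, shows how the “more than 18 gigaflops” figure lines up with the dual 2.3GHz configuration used in the rebuilt cluster.

    # Back-of-the-envelope peak for a dual-processor PowerPC 970 node,
    # assuming 4 double-precision flops per cycle per processor (two FMA units).
    def node_peak_gflops(clock_ghz, processors=2, flops_per_cycle=4):
        return processors * clock_ghz * flops_per_cycle

    print(node_peak_gflops(2.0))  # 16.0 Gflops for a stock 2.0GHz Xserve G5
    print(node_peak_gflops(2.3))  # 18.4 Gflops for the custom 2.3GHz nodes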

With the original Power Mac G5 cluster, Virginia Tech set out to establish that a radically different communications technology could be used to build a large-scale scientific computing platform. Having proven the approach worked, the university moved to the Xserve G5 cluster for its server-optimized architecture, greater computing density, ground-breaking performance, and innovative management tools.

The original System X was officially benchmarked at 10.28 teraflops on Linpack, against a theoretical peak performance of 17.7 teraflops, a Linpack efficiency of roughly 58 percent.

When Virginia Tech renegotiated with Apple to upgrade System X, the computer company arranged for 1,100 custom-built Xserve G5 servers to power the System X supercluster. Apple built these systems specifically for Virginia Tech using dual 2.3GHz G5 processors, and it currently has no plans to offer 2.3GHz processors in the standard Xserve G5 product line.

Varadarajan and Cal Ribbens, both of the computer science department in Virginia Tech’s College of Engineering, confirmed the new benchmark numbers after numerous benchmark runs since August. Kevin Shinpaugh and Jason Lockhart, associate directors of the Terascale Computing Facility, assisted on the project, as did other members of the engineering college and the information technology office.

The supercomputer is part of the university’s Institute for Critical Technology and Applied Science (ICTAS), which fosters large, multidisciplinary research projects. The grand challenge problems in science and engineering that can only be solved with a powerful supercomputer fit squarely within that mission.

“We believed that we could build a very high performance machine for a fifth to a tenth of the cost of what supercomputers now cost, and we did,” Aref, a former chief scientist at the San Diego Supercomputer Center, said in the press release. “And we wanted to have our own supercomputer to use for ICTAS, where we will be conducting multidisciplinary work on such topics as nanoelectronics, aerodynamics, and the molecular modeling of proteins. With this machine, our researchers will be able to build computer modeling in days, not years.”

The additional cost to rebuild System X was about $600,000, and included 50 additional nodes. The original cost of System X was $5.2 million.

In addition to the companies that participated in the first design of System X

36 Comments

  1. It’s not a standard Xserve G5; it was custom-made by Apple. It’s not using OS X, it’s using Unix. It’s really an IBM cluster because it uses IBM chips. Apple has had little to do with it.

    Let the bulls*it begin. It is a win for Apple but the Wintellians will cry foul.

  2. There has been far too much made of the 2.3GHz Xserve G5s used at Va Tech versus the standard 2.0GHz versions. While the extra 15% clock speed is nice, it isn’t as if the “general public” is offered a substandard product or that the 2.0GHz Xserve is not a high performer. To me the articles give that flavor when they stress “custom-made,” as if the effort would not have worked nearly as well with standard Apple hardware. They seem to overlook the fact that the Xserve version of System X only provides about 20% more throughput than the original version that used “off-the-shelf” 2.0GHz Power Macs.

  3. Yeah, I know it’s sarcasm. I did catch the Wintellian quip (another reference to 1984? Caught that one too).

    It was impressive to wrap up a multi-level sarcastic retort that I know 95% of readers missed.

    cheers!

  4. Hmmm … quite good scaling (99%) with the extra hundred processors and the extra GHz …

    Old config:
    Apple G5 dual 2.0 GHz IBM Power PC 970s, Infiniband 4X primary fabric,
    Cisco Gigabit Ethernet secondary fabric
    2200 processors
    Rmax=10280
    Nmax=520000
    N1/2=152000
    Rpeak=17600 (i.e. 2200*2.0GHz*4 ops)
    Rmax/Rpeak=58.4%

    New config (presumably):
    Apple G5 dual 2.3 GHz IBM Power PC 970s,
    Infiniband 4X primary fabric,
    Cisco Gigabit Ethernet secondary fabric
    2300 processors
    Rmax=12250
    Nmax=? (probably similar to former numbers)
    N1/2=? (probably similar to former numbers)
    Rpeak=21160 (i.e. 2300*2.3GHz*4 ops)
    Rmax/Rpeak=57.9%

    Pretty cool! I daresay better numbers are possible if the high speed interconnections can be improved.
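
    A minimal Python sketch of the same arithmetic, assuming (as in the figures above) 4 floating-point operations per cycle per processor:

        # Rpeak and Linpack efficiency from the numbers quoted above.
        def rpeak_gflops(processors, clock_ghz, flops_per_cycle=4):
            return processors * clock_ghz * flops_per_cycle

        old_rpeak = rpeak_gflops(2200, 2.0)   # 17600
        new_rpeak = rpeak_gflops(2300, 2.3)   # 21160
        print(10280 / old_rpeak)              # ~0.584, i.e. 58.4% efficiency
        print(12250 / new_rpeak)              # ~0.579, i.e. 57.9% efficiency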

    How System X/Big Mac/whatever performs on other benchmarks and codes would be interesting. In my experience some supercomputers have better general performance (in a robust sense) than others; e.g., compare a Fujitsu VPP300 vector machine with a shared-memory machine such as an SGI Power Challenge. The latter machine, while much slower at Linpack, was easier to optimize for codes that were not so trivially vector-oriented.

    Well, since there are many kinds of scientific problems worth solving, the more competition and variety in supercomputerland the better! Bring it on! 😎

  5. Trippah:
    “Not a bad result, but nothing spectacular.”

    Hmmm … a lone unjustified statement with no quantifiable meaning or reference point.

    Maybe Trippah would like some all-singing, all-dancing fireworks spectacular spectacular à la Moulin Rouge … or would that overwhelm Trippah’s sensory capabilities … discernment is the first to go! 😎

    Sorry … I know I shouldn’t … but the blind-IBM-PC-compatible/MS crowd make such nice sitting ducks!
    Good name for a Microsoft Supercomputer … Sitting Duck!

    Anyway, Trippah, what part of the good scaling and good performance and good price/performance do you not like? Absence of a NOS like Windows? (NOS = Non-operating system … especially when Windows mal-ware gets out of hand!)

  6. Um, let me get this right: V. Tech built the 3rd-fastest supercomputer on the TOP500 (and the #1 supercluster) for $5.2 million, running at 10.28 teraflops, or nearly 2 teraflops per million dollars.

    The increase beats the original price per teraflop: for an additional $600K they gained 1.97 teraflops, roughly 66% more speed than the same $600K bought at the original rate. That means the current 12.25 teraflops works out to about 2.1 teraflops per million dollars overall.

    Sorry, but that’s a huge increase when you consider the realized gain for such a minimal expense.
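
    A quick sketch of that price/performance arithmetic, taking the article’s cost and teraflop figures at face value:

        # Teraflops per million dollars, before and after the upgrade.
        original_cost, upgrade_cost = 5.2, 0.6   # millions of dollars
        original_tf, new_tf = 10.28, 12.25       # Linpack teraflops

        print(original_tf / original_cost)              # ~1.98 TF per $M originally
        print((new_tf - original_tf) / upgrade_cost)    # ~3.3 TF per $M for the upgrade alone
        print(new_tf / (original_cost + upgrade_cost))  # ~2.1 TF per $M overall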

  7. – Thorpedo: a lone unjustified statement with no quantifiable meaning or reference point.

    My apologies, for I do not talk poo like you.
    Here’s a Kleenex; wipe your brown mouth.

    “Concluding a 15-week effort with NASA and Intel to build and successfully install the world’s most powerful supercomputer Columbia. Columbia achieved sustained performance of 42.7 trillion calculations per second (teraflops)”

    WHAT A MOTHER OF A MACHINE!

    Which OS you ask? You do the research.

    10.28 teraflops, I say again, not a bad effort, but it’s hardly spectacular. How long did it take to set this up from the ground up?

    Thorpedo, not a bad effort from you either.

  8. Oh, my apologies again, 17.7 teraflops (in theory).
    LOL

    And what % of this has “Apple” actually done? Anything useful? Let’s see, it’s not even powered by OS X, and forget about Apple’s almighty geekipod. Uhhh, who makes the PowerPC processors again? Oh, Apple wasn’t 100% responsible for System X development?

    “I have an Apple, it works straight out of the box! That’s good because I have no idea what goes on inside my computer, and this way I am spoon-fed by Steve Jobs himself!” – Many Apple Users

    Seriously.

  9. Trippah: Obviously you haven’t developed any of the cultural niceties one would expect from somebody who can handle a keyboard.

    The achievement of Virginia Tech, with little funding and little time, was quite worthy, and it is probably a little unfair to compare it with a government lab and a serious collaboration with a long-time supercomputer manufacturer, SGI. (Columbia seems to be a poor choice of name, or does this system run rather hot? Memory leaks?)

    System X (presumably) is still the fastest (Linpack) supercomputer at an academic institution, and it took only a few months to bring up; the upgrade took even less time. Price/performance is probably still #1, and it is very likely it uses far less power than the more expensive and faster alternatives. Needless to say, Apple is not currently a supercomputer manufacturer, so from several perspectives it is still a spectacular result, compared with real supercomputers as well as existing Intel/AMD clusters.

    Of course Linpack is only one benchmark and a lightweight one at that! (For those who know!)

    So how much supercomputer time have you been using? Written and ported any supercomputer applications lately? Know anything real about supercomputing?

  10. Trippah:

    Oh, by the way, I attended an SGI seminar earlier this year where they unveiled the SGI Altix series and showed how it could be scaled up, and it was clear that they were going to be competing with IBM in the supercomputing market.

    This in no way negates the spectacular nature of Virginia Tech and Apple’s achievements. Of course hardware is going to improve and the more competition the better.

    But note how many Intel Itanium 2 processors it took: 8192 (16 of the 20 systems) for 42.7 Tflops, which works out to about 5.2 Gflops per processor.

    The 2300 2.3GHz PowerPC 970 processors in System X achieved 12.25 Tflops, which works out to about 5.3 Gflops per processor.
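
    The per-processor comparison in a couple of lines, taking both Linpack figures at face value:

        # Linpack Rmax (Gflops) divided by processor count.
        print(42700 / 8192)   # ~5.2 Gflops per Itanium 2 (Columbia)
        print(12250 / 2300)   # ~5.3 Gflops per PowerPC 970 (System X)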

    Yes! The same sort of ballpark. Of course scaling is unlikely to be linear up to 8192 processors, but we aren’t talking about a dedicated supercomputer company either. The Xserve G5s (at 2.0GHz) are off-the-shelf dual-processor components, compared to a tightly coupled 512-processor Altix system, so the latter should do better.

    What happens when IBM uses its latest POWER chips in a similar manner (Blue Gene is only an experimental PowerPC system and is going to be scaled up much more next year!) and scales to similar sizes?

    Apple is obviously not in the full-blown supercomputer market (no need), but there are many more buyers for machines that deliver several teraflops of power and don’t need much power or cooling to run. How hot do 512 Itaniums run, huh?
