Virginia Tech deploys new 29-teraflop Mac supercomputing cluster (324 Apple Mac Pros)

Mellanox Technologies, Ltd. today announced that its 40Gb/s InfiniBand technology interconnects the new 29TFlops computer systems research cluster at the Center for High-End Computing Systems (CHECS) within the Virginia Polytechnic Institute and State University. 40Gb/s InfiniBand was introduced in June 2008, and this compute cluster showcases how quickly the industry has accepted and adopted it compared to other high-speed interconnect technologies. Mellanox 40Gb/s InfiniBand technology provides the scalability and performance the CHECS research activities require and will be the foundation for the development of next-generation, power-aware high-end computing resources.

“Mellanox continues to lead the high-performance computing and enterprise data center industry with the highest performing interconnect technology that delivers unmatched performance for parallel compute clusters,” said Thad Omura, vice president of product marketing at Mellanox Technologies, in the press release. “In 2003 we partnered with Virginia Tech to build the first 10Gb/s InfiniBand large-scale cluster that was ranked number three on the Top500 list at the time. We are excited to partner with Virginia Tech for the first large-scale cluster installation using 40Gb/s InfiniBand, which leverages the mature InfiniBand ecosystem to support the ever increasing demands for efficient and scalable compute systems.”

The new system consists of 324 Apple Mac Pro servers with a total of 2,592 CPU cores, efficiently connected with ConnectX 40Gb/s InfiniBand adapters, InfiniScale IV switches, and 40Gb/s copper cables from Amphenol and W.L. Gore & Associates. The system’s performance ranks among the Top 100 systems of the June 2008 Top500 list of supercomputers. CHECS will leverage the system and the new InfiniBand capabilities of congestion control and adaptive routing to expand its high-end computing research activities, in particular in the areas of power-aware systems, transparent distributed shared memory systems, and high-performance distributed storage systems.

“Our mission is to build computing systems and environments that can efficiently and usably span the scale from department-sized machines to national-scale resources, and will meet the day-to-day needs of computational scientists,” said Srinidhi Varadarajan, Director of CHECS at Virginia Tech, in the press release. “Mellanox’s 40Gb/s InfiniBand technology brings the necessary capabilities to our research activities and will be the focus for the design and deployment of the next generation of high-end systems.”

More info about Mellanox 40Gb/s InfiniBand solutions here.

Info about CHECS activities can be found here.

Source: Mellanox Technologies

[Attribution: MacNN. Thanks to MacDailyNews Reader “Ampar” for the heads up.]


  1. @Synthmeister

    I think they are using Mac Pros; Xserves are the rack-mounted ones. Both can be considered servers just by the fact that they are loaded with OS X Server.

    I would have expected them to use Xserves as well; I would think that rack-mounted hardware would be easier to manage than a ton of towers all over the place.

  2. Lazy European,

    Good one!

    Hey, didn’t one of the universities do this a couple of years ago? I remember one replacing a boatload of PowerMacs and they ended up selling the old ones, I believe through MacMall.

  3. Imagine how fast and how often Windows would crash on that thing. Guess you couldn’t complain about having to reboot Windows after installing an app? Then again, Vista would still be slower on this than a typical Mac…

  4. @ MrScrith and Synthmeister
    While the Xserve would give you a more compact package, you will note that the Pro has a higher top end (3.2 GHz vs. 3.0 GHz) and an extra terabyte of RAID storage. The octo-core processors are probably the exact same parts in both units, but thermal performance at maximum speeds comes into play in the tighter packages. Unless you are willing to overclock and use exotic cooling schemes, it’s better to play it safe and use the larger, more thermally efficient package for enhanced reliability. Plus, this high-speed bus interconnect is a custom copper cable, so you probably need as many expansion slots as you can get hold of to fit that stuff to the motherboard bus. The extra 200 MHz per unit of performance probably allowed them to use fewer machines to reach the achieved performance goal.

  5. @irishspoon
    “I bet Photoshop still crawls.”

    That was funny, but at the same time more perceptive than many may realise. I once built a couple of SMPS clusters for a university and published a paper on them at AIAA, one with Linux and one with Windows. A Windows cluster is really an oxymoron, except that it is coveted in certain industries (aerospace, for me) where much of the software is Windows-native (shocking, eh?).
    Basically, Windows clusters are mostly failover and backup system racks, NOT performance-oriented ones. There are some solutions available from third parties, but all of them that I’ve tested still require the applications (that are to run) to contain certain code in their headers. Otherwise, the cluster will not be able to successfully ‘split’ the load of the application amongst its primary and client nodes (resource handling of CPUs and memory, mostly).
    Which means one would need the source code of Photoshop etc. to manually add a few lines of C++ code in the header file (or the specific headers).

    Now, this is where the next generation of OS X, Snow Leopard, will be a relevant monster. That is the future, and I expect Apple to leave the rest (including Linux) in the dust and become the de facto OS for enterprises and research labs across the globe.
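[Ed. note: the commenter’s point above — that a cluster can only split work the application itself has been written to partition — can be sketched in a few lines. This is a generic, hypothetical Python illustration of explicit work partitioning across workers, not the actual Windows cluster API the commenter tested.]

```python
from multiprocessing import Pool

def work(chunk):
    # Each worker (standing in for a cluster node) processes only its
    # own slice of the data; nothing is split automatically for it.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1000))
    # The application must partition its own workload explicitly --
    # here, four interleaved slices, one per worker.
    chunks = [data[i::4] for i in range(4)]
    with Pool(4) as pool:
        partial = pool.map(work, chunks)
    total = sum(partial)   # recombine the per-worker results
    print(total)           # prints 332833500, same as the serial sum
```

A serial application with no such partitioning in its source (the Photoshop case above) gives the cluster nothing to distribute, which is why source-level changes are needed.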
