SC09 – Interesting Tech – Shared Memory

We are beginning to approach the end of the conference formerly known as Supercomputing, so I thought it was about time that I started to write up some of the copious volumes of notes that have begun to clutter up the hard drive of my netbook.

One of the problems we had when we performed our last procurement was that real shared-memory systems couldn't be fitted into the budget, so we had to make do with a set of 16-core commodity boxes. We have some codes that could do with scaling out a little further than that.

Which brings me nicely to 3Leaf, who are building technology to hook multiple commodity boxes together so that the OS (a normal Linux build plus some proprietary kernel modules) sees them as one machine. All hardware on the individual nodes should be visible to the OS just as it would be on a single machine. So you can do weird things like software RAID across all the individual SATA disks in a bunch of nodes. 3Leaf caution that there may be some funky hardware out there that wouldn't interact well with their setup, but they haven't met it yet. The interconnect is DDR InfiniBand. While it's not stated up-front by 3Leaf, conversations with them indicate that the ASIC implements some kind of virtualisation layer, which makes it sound rather like ScaleMP in hardware.
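
To put the single-system-image claim in concrete terms: nothing 3Leaf-specific should be needed in user code. A completely ordinary OpenMP program such as the sketch below (a generic illustration of my own, not anything taken from 3Leaf's documentation) would simply report the core count of the whole aggregated stack rather than that of a single node.

    /* Generic single-system-image illustration -- nothing 3Leaf-specific.
     * On an aggregated stack the single OS image reports the cores of every
     * node, and a plain OpenMP parallel region can span them all. */
    #include <stdio.h>
    #include <unistd.h>
    #include <omp.h>

    int main(void)
    {
        /* Online cores as seen by the (single) OS image */
        long cores = sysconf(_SC_NPROCESSORS_ONLN);
        printf("online cores: %ld\n", cores);

        /* Ordinary shared-memory parallelism, no message passing required */
        #pragma omp parallel
        {
            #pragma omp single
            printf("OpenMP sees %d threads\n", omp_get_num_threads());
        }
        return 0;
    }

Built with something like gcc -fopenmp, the same binary should run unchanged whether it is on one node or on a whole stack presented as a single machine.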

A stack of 3Leaf nodes is essentially a set of AMD boxes with the 3Leaf ASIC sitting in an extra AMD CPU socket. The on-board IB is then used to carry communications traffic between the separate nodes. The manager node (a separate, lower-spec box) controls the booting and partitioning of the nodes, so that a stack can be brought up as one big box or as several smaller units.

My favourite thing about the 3Leaf solution is that you can add extra IB cards, which behave normally as far as the OS is concerned. This means you can interface the stack to things like Lustre or NFS/RDMA over IB, which many HPC facilities will already have in operation.

While currently AMD-only, 3Leaf claim they will have a product ready for the release of the next version of Intel's QPI.

And in case you think this might be vapourware, apparently Florida State have just bought one.

On a more traditional note, SGI announced the availability of their new UV shared-memory machine: essentially an Altix 4700 with uprated NUMAlink and x86_64 chips rather than Itanium. The SGI folks swear that no proprietary code is necessary to make these machines work and that all the kernel support is in mainline. If so, that is a very positive step for SGI to take. Hardware MPI acceleration is supported by SGI's own MPI stack; it wasn't clear to me whether SGI expect other MPI implementations to be able to take advantage of this capability. Depending on the price point, UV might be a very interesting machine.

Speaking of all things NUMA, I had an interesting chat with the chaps at Numascale. It turns out they are a spin-off from Dolphin. They are making an interconnect card on HTX that will do ccNUMA on commodity AMD kit. The ccNUMA engine is a direct descendant of the one in the Dolphin SCI system (I should note that we still have a Dolphin cluster in operation back home). Like SCI, this interconnect is wired together in a loop/torus/3D-torus topology without a switch.

Numascale have evaluation kit built on FPGAs at the moment and expect to tape out the real ASICs early next year. Like 3Leaf, they claim to be working on a version for the next version of Intel's QPI.

And now we move from shared memory to memory sharing. Portland's own RNA Networks have a software technology for sharing memory over IB. You can take chunks of memory on several nodes and hook them together as a block device to use as a fast cache. If you stack-mount this over another networked filesystem it acts as an extra layer of caching: access goes first to the local page cache, then over IB to the RNA cache, and finally over the network to the original filesystem. I can see a number of use cases where this could add parallel scaling to a single network filesystem, although at roughly $1000 per node plus IB I'm not sure it works out cheaper than some of the less expensive Ethernet-based clustered storage systems.
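
The read path, as I understood it, is just a stack of caches tried in order. The toy sketch below is purely my own illustration of that layering (none of it is RNA's actual API; the lookup functions are invented stubs).

    /* Toy illustration of the layered read path described above:
     * local page cache -> RNA cache over IB -> backing network filesystem.
     * None of this is RNA's API; the lookups are invented stubs. */
    #include <stdio.h>
    #include <stddef.h>
    #include <stdbool.h>

    typedef struct {
        const char *name;
        bool (*lookup)(long block);   /* returns true on a hit */
    } cache_layer;

    static bool in_page_cache(long block)   { (void)block; return false; }
    static bool in_rna_cache(long block)    { (void)block; return true;  }
    static bool read_backing_fs(long block) { (void)block; return true;  }

    int main(void)
    {
        cache_layer layers[] = {
            { "local page cache",           in_page_cache   },
            { "RNA cache (over IB)",        in_rna_cache    },
            { "backing network filesystem", read_backing_fs },
        };

        long block = 42;  /* arbitrary block number for the illustration */
        for (size_t i = 0; i < sizeof layers / sizeof layers[0]; i++) {
            if (layers[i].lookup(block)) {
                printf("block %ld satisfied by: %s\n", block, layers[i].name);
                break;
            }
        }
        return 0;
    }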

You can also use this memory-based block device to run a local parallel filesystem if you want, although I can't quite see the use case.

One thing I forgot to ask was whether the cache can be used as straight physical RAM for those really naive codes that just pull a whole bunch of data into memory and could do with access to extra space.