A Little Piece Of History

I received the item below in the post this week. And it has made me super happy.
CP-1 Graphite Fragment

The reason that the arrival of this unassuming chunk of grey material made me so gleeful is its history. I’m a bit of a nerd about nuclear power, which is an obsession that’s currently slightly less acceptable than fox hunting. The history of nuclear reactors (artificial ones at least) started in a sports stadium in Chicago in 1942, where a team led by Enrico Fermi demonstrated that you could control a fission chain reaction in a safe manner. The reactor they built was called Chicago Pile 1 (CP-1) since it was literally a pile of graphite blocks, some of which also contained pellets of uranium. All modern reactors, including those that provide the UK with a fifth of its electricity, are direct descendants of that first reactor in Chicago.

Recently the CP-1 site was being remediated and a number of original graphite blocks were found. One of these blocks was sliced up into small pieces to be sold for charity. Because I follow a lot of nuclear power folks on the Internet I saw the announcement, and a few weeks later I now have a piece of the world’s first nuclear reactor sitting on my bookshelf.

In a pleasing piece of symmetry, while most modern nuclear reactors are water-moderated, the one closest to me (which probably supplies part of the electricity I use) is Dungeness, which is graphite-moderated just like CP-1.

Being Slightly Smarter With seq

This post comes from my ever-growing file of ‘how the hell did I not know that’ issues.

On the cluster at work we have a bunch of machines with names like mysite12 or mysite237, so quite often I’m writing shell scripts that loop through all these boxes to get info. I usually do something like this:

for number in `seq 12 256`
do
    node=mysite$number
    echo $node
done

Which produces

mysite12
mysite13
.....
mysite256

It occurred to me today that this is such a staggeringly common thing to do that seq probably has a way of doing it already. Sure enough, after reading the man page it turns out that you can hand seq a printf-style format string, so I can create my node names purely in seq:

for node in `seq -f "mysite%g" 12 256`
do
    echo $node
done

Which produces

mysite12
mysite13
....
mysite256

How has it taken me ten years to work that out?

Stepping Through Large Database Tables in Python

In order to report usage on our PBSPro compute cluster at work I wrote a simple set of Python scripts to dump the accounting information into a MySQL database. This has been working fine for the last year, churning out reports every month.
This week I had cause to generate some statistics aggregated across the whole three years of data in the database. I’m using a mixture of Elixir and SQLAlchemy to talk to the database. Normally I would do something like this:

mybigtablequery = MyBigTable.query()

for job in mybigtablequery:
    if job.attribute == "thing":
        dosomething()

Which worked fine when the database was quite small. I was horrified to see that as this loop went on my Python process used more and more memory, because the database connection object never throws away a row once it has been loaded. Fortunately I found an answer on Stack Overflow.
So I ended up doing the following:

def batch_query(query, batch=10000):
    # Fetch rows in chunks of `batch` so the whole table is never
    # held in memory at once.
    offset = 0
    while True:
        any_rows = False
        for elem in query.limit(batch).offset(offset):
            any_rows = True
            yield elem
        if not any_rows:
            break
        offset += batch

mybigtablequery = MyBigTable.query()

for job in batch_query(mybigtablequery, 50000):
    if job.attribute == "thing":
        dosomething()

“batch” is just an integer defining how many rows will be fetched by each query. The larger this is, the more memory the Python interpreter will use, but the more efficiently the code will run.
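
One caveat worth adding: unless the query has an explicit ordering, the database is free to hand back rows in whatever order it likes, so successive LIMIT/OFFSET batches can skip or repeat rows. Ordering on the primary key fixes that. A rough sketch of my own, assuming MyBigTable has an integer primary key column called id:

# Give the query a stable ordering so that successive LIMIT/OFFSET
# windows never skip or repeat rows. Assumes an integer primary key
# column called `id` (hypothetical name).
mybigtablequery = MyBigTable.query().order_by(MyBigTable.id)

for job in batch_query(mybigtablequery, 50000):
    if job.attribute == "thing":
        dosomething()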

Flexlm License Servers and Firewalls

If you are lucky enough to run your flexlm servers on a tightly controlled corporate network then you probably just turn the firewall off on those servers and get on with your life. Everyone else goes through a certain amount of hair-pulling before they work out how to make flexlm play nicely with firewalls. So I’m writing this post to document the process as much for me as anyone else.

So let us say that you have bought five copies of Bob’s Magical Pony Viewer, an awesome graphical client that you can run to show you ponies. In order for Bob to be sure that you only run five copies he has used flexlm to secure his software. You have received a network license for BobSoft that looks like this:

SERVER license1 0000eeeeeeee 2020
VENDOR bobs_lm
FEATURE PonyL bobs_lm 1.0 06-jan-2011 5 \
SIGN="EEEE EEEE EEEE EEEE EEEE EEEE EEEE EEEE \
EEEE EEEE EEEE EEEE EEEE EEEE EEEE EEEE"

So you think: great, we can set that up with only port 2020 open on the license server and everything will be excellent. Ponies for five concurrent users, hurray!

Except of course when you try that, Pony Viewer adamantly claims that it can’t contact the server, even though you can netcat/telnet to port 2020 on that server and the flexlm logs tell you that the server is running just fine.

It’s helpful at this point to have a copy of lmutil around to debug the problem. I don’t know where to get lmutil from as it came bundled with the license server software from one of our vendors. But it’s very useful when trying to work out what is going on.

So let’s try some things.

#>lmutil lmstat -c 2020@license1
lmutil - Copyright (c) 1989-2004 by Macrovision Corporation. All rights reserved.
Flexible License Manager status on Thu 1/21/2010 19:56

Error getting status: Server node is down or not responding (-96,7)

This is the point at which one normally starts with the hair-tearing. The thing to realise about a flexlm server is that it’s actually two daemons working together: lmgrd, which is running on port 2020, and the vendor daemon (in this case bobs_lm), which will start up on a RANDOM port. What is even better is that the vendor daemon will choose a different random port every time you restart the license server.

While discussing this with some fellow sysadmins it turned out that there is another option you can add to flexlm license files which ends this misery: you can tell the vendor daemon to start on a specific port, like so:

SERVER license1 0000eeeeeeee 2020
VENDOR bobs_lm port=2021
FEATURE PonyL bobs_lm 1.0 06-jan-2011 5 \
SIGN="EEEE EEEE EEEE EEEE EEEE EEEE EEEE EEEE \
EEEE EEEE EEEE EEEE EEEE EEEE EEEE EEEE"

And now when we try lmutil:

#>lmutil lmstat -c 2020@license1
Flexible License Manager status on Thu 1/21/2010 20:06

License server status: 2020@license1
License file(s) on license1: /opt/BobSoft/license.dat:

license1: license server UP (MASTER) v11.6

Vendor daemon status (on license1):

bobs_lm: UP v11.6

Hurray Ponies!
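
If you want a quick sanity check from a client machine that both daemons really are reachable through the firewall, something like this rough Python sketch will do (my own, not part of flexlm; it just assumes the example hostname license1 and the ports 2020 and 2021 from the license file above):

import socket

# Both lmgrd (2020) and the vendor daemon (2021) must be reachable from
# the client, otherwise checkouts fail even though lmstat looks healthy.
for port in (2020, 2021):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(5)
    try:
        s.connect(("license1", port))
        print("port %d open" % port)
    except socket.error as err:
        print("port %d blocked: %s" % (port, err))
    finally:
        s.close()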

One last thing to note: make sure the hostname you specify in the license file matches the hostname of the license server, and also the hostname you use when connecting to the server. This is because flexlm sends the hostname it was asked for as part of the license request, and if the two don’t match you won’t get any ponies.

In short, flexlm is a dreadful license server; it’s just that all the others are even worse.

SC09 – Interesting Tech – Filesystems/Storage

All the usual suspects were visible in Portland this year, including Panasas, Data Direct, Isilon, Lustre and IBM/GPFS. But we’ve all seen those before. Two storage-related technologies caught my eye at SC09 because I’d never seen them before.

I caught a technical session from a Korean company called Pspace. They developed a parallel filesystem called Infinistor for a couple of big Telco/ISPs in Korea. It’s a pretty straightforward parallel filesystem with metadata and object data handled by separate servers. Object servers are always at least N+1, so you can lose a whole object server without losing access to your data.

The neat things about Infinistor are that it keeps track of how often data is accessed and it understands that some storage is faster than others. So you could have some smaller servers based on SSDs and Infinistor will replicate frequently accessed content to the fast disks. It can even handle multiple speeds of storage within one object server.

As you might expect from a project born in ISP-land it has a lot of support for replication across multiple sites, since it’s always good to serve your clients’ data from a node close to them on the network. Infinistor can replicate synchronously or asynchronously, with the latter prioritised for frequently accessed content.

File access is via a POSIX filesystem (it will do NFS or CIFS) or a REST API.

As ever with big conferences not everything you learn comes from the sessions or the Exhibition hall. I got chatting to an engineer from Pittsburgh Supercomputing Centre about the parallel filesystem they wrote called ZEST. The best thing about this filesystem is that you can’t read from it.

So I should back up for a second here and describe the problem ZEST is trying to solve, since most of you are probably thinking “what use is a filesystem you can’t read from?”. Here in HPC land we have all these big machines with thousands of very fast cores and big, fast interconnects. All this costs money. Unfortunately the more nodes you are running across, the more likely you are to hit a problem (e.g. dividing the current day in the Mayan Long Count by the least significant digit of your HCA’s firmware revision causes it to turn into a pumpkin, or one of the million other failure modes that are wearyingly familiar to HPC ops people around the world). When this happens you don’t want to lose all the time you’ve spent up until the fault happened. And Lo unto the world did come Checkpointing.

Which is basically to say that a lot of big codes will periodically dump their running state to disk so that in the event of a problem they can pick up from the last checkpoint. Now obviously this can be terabytes of data and it takes a while to write it to disk. While you are doing that all those shiny, shiny CPUs are sitting idle. This makes the Intel salesman happy, but makes your funding agencies cry.

So the approach in ZEST is to remove all the complexity involved in making a filesystem that you can read from, in order to allow clients to write as fast as possible. There are a number of interesting design decisions here. ZEST storage servers don’t use RAID but assign write queues to each individual disk. All the checksumming and parity calculations are done on the client (because these are over-endowed HPC nodes we are talking about here). By stripping away all this complexity ZEST aims to give each write client the full bandwidth of the disk it’s writing to. Because most codes will be doing checkpointing from multiple nodes at once, this is going to add up to significant aggregate bandwidth.

As an offline process the files that have been dumped to disk are re-aggregated and copied onto a Lustre filesystem from where they can be read. So I kind of lied when I said you can’t read from it. More technical detail can be found in the ZEST paper.

SC09 – Interesting Tech – Shared Memory

We are beginning to approach the end of the conference formerly known as SuperComputing, so I thought it was about time that I started to write up some of the copious volumes of notes that have begun to clutter up the hard drive of my netbook.

One of the problems we had when we performed our last procurement was that real shared-memory systems couldn’t be fitted into the budget, so we had to make do with a set of 16-core commodity boxes. We have some codes that could do with scaling out a little bit further than that.

Which brings me nicely to 3Leaf, who are building technology to hook multiple commodity boxes together so that the OS (a normal Linux build plus some proprietary kernel modules) sees them as one machine. All hardware on the individual nodes should be visible to the OS just like it would be on a single machine, so you can do weird things like software RAID across all the single SATA disks in a bunch of nodes. 3Leaf caution that it’s possible there is some funky hardware out there that wouldn’t interact well with their setup, but they haven’t met it yet. The interconnect is InfiniBand DDR. While it’s not stated up-front by 3Leaf, conversations with them indicate that the ASIC is implementing some kind of virtualisation layer, which makes it sound sort of like ScaleMP in hardware.

A stack of 3Leaf nodes is essentially a set of AMD boxes with the 3Leaf ASIC sitting in an extra AMD CPU socket. The on-board IB is then used to carry communications traffic between the separate nodes. The manager node (a separate lower spec box) controls the booting and partitioning of the nodes such that a stack can be brought up as one big box or several smaller units.

My favourite thing about the 3Leaf solution is that you can add extra IB cards which will behave normally for the OS. This means you can interface the stack to things like Lustre or NFS/RDMA over IB, which many HPC facilities will already have in operation.
While currently AMD-only, 3Leaf claim they will have a product ready for the release of the next version of Intel’s QPI.

And in case you think this might be vapourware, apparently Florida State have just bought one.

On a more traditional note, SGI announced the availability of their new UV shared-memory machine. It is essentially an Altix 4700 with uprated NUMAlink and x86_64 chips rather than Itanium. The SGI folks swear that there is no proprietary code necessary to make these machines work and that all the kernel support is in mainline. If so, that is a very positive step for SGI to take. Hardware MPI acceleration is supported by SGI’s own MPI stack, though it wasn’t clear to me whether SGI are expecting other MPIs to be able to take advantage of this capability. Depending on the price-point UV might be a very interesting machine.

Speaking of all things NUMA, I had an interesting chat with the chaps at Numascale. It turns out they are a spin-off from Dolphin. They are making an interconnect card on HTX that will do ccNUMA on commodity AMD kit. The ccNUMA engine is a direct descendant of the one in the Dolphin SCI system (I should note that we still have a Dolphin cluster in operation back home). Like SCI, this interconnect is wired together in a loop/torus/3D torus topology without a switch.

Numascale have evaluation kit built on FPGAs at the moment and expect to tape out the real ASICs early next year. Like 3Leaf, they claim to be working on a version for the next version of Intel’s QPI.

And now we move from shared memory to memory-sharing. Portland’s own RNA Networks have a software technology for sharing memory over IB. You can take chunks of memory on several nodes and hook them together as a block device to use as a fast cache. If you stack-mount this over another networked filesystem it acts as an extra layer of caching, so access will go to the local page cache, then over IB to the RNA cache, and finally over the network to the original filesystem. I can see a number of use cases where this could be used to add parallel scaling to a single network filesystem, although at roughly $1000 per node plus IB I’m not sure it works out cheaper than some of the Ethernet-based clustered storage systems.

You can also use this memory-based block device to run a local parallel filesystem if you want, although I can’t quite see the use case.

One thing I forgot to ask was whether the cache can be used as straight physical RAM for those really naive codes that just read a whole bunch of data into memory and could do with access to extra space.

TAM London Day One

The Amazing Meeting London (TAMLondon) kicked off today. My day started at the back of a queue outside the Mermaid Theatre in Blackfriars. Things started off badly for me as I spent most of my queuing time on my trusty Nokia E71, logged into Merlin removing a recalcitrant compute node from the job scheduling system.

All of TAM takes place in the main auditorium of the Mermaid, which comfortably seats all ~600 delegates. First up on stage was Brian Cox to talk about the Large Hadron Collider and the scientific questions it is designed to answer, although what his talk was really about was politicians’ tin ear for the fundamental goals of basic science. This led to a brief (but thoroughly deserved) shoeing for the shambolic STFC. Whenever I listen to Brian Cox I feel like I understand particle physics. This feeling usually lasts as long as it takes me to forget what a lepton is (i.e. not very long).

Jon Ronson gave an entertaining account of his adventure at Bohemian Grove and some anecdotes about the people in his book The Men Who Stare at Goats. If you have read his books and seen his documentaries you probably know most of this already. This was lots of fun, but could have done with more David Icke reptile anecdotes.

Simon Singh rounded off the morning session with an update on his progress in the libel suit that the British Chiropractic Association brought against him. He outlined the gross unfairness of the English libel system, pointing out that not only is the burden of proof on the defendant, but it is 140 times more expensive to defend a libel suit in England than it is in most of the rest of Europe. The rapturous reception Singh received from the TAMLondon crowd shows that many skeptics share his disgust with English libel law. As an added bonus, superstar legal blogger Jack of Kent spoke from the audience.

Lunch was nice and I took the opportunity to wander around St Paul’s. I lived in London for six and a half years and managed never to visit it.

The glamorous Ariane Sherine led off the afternoon session with a behind-the-scenes look at the Atheist Bus Campaign. This might have been usefully subtitled ‘Accidental Atheist Activism’. This section brought up the usual skeptics vs atheists debate, which passed without rancour.

Ben Goldacre’s barnstorming presentation on the failures of science journalism was the highlight of the day for me. In particular Goldacre’s Law: ‘There is no piece of fuckwittery so stupid that I can’t find at least one Doctor or PhD to defend it to the death’. It almost goes without saying that Goldacre holds the current TAMLondon record for most profanities in a single presentation. This Brigstockian performance was punctuated by vehement applause from the audience on several occasions.

As a special treat James Randi joined us by Skype. While I was disappointed that Randi couldn’t be here in person, I’m glad he is listening to his physicians. Hearing Randi reminisce about the highlights of his career was a pleasure.

Drawing proceedings to a close Phil Plait presented Simon Singh with a JREF award in recognition of his on-going legal battle with the BCA.

Now that day one is at an end I must mention Richard Wiseman’s MC’ing, which has been a delight throughout. He is a genuinely funny stage presence and I was nearly in tears with laughter during his ‘teatowel into chicken’ trick.

TAMLondon day one has been more fun than a barrel full of monkeys. My only wish is that tomorrow there will be a copy of ‘59 Seconds’ left so that I can purchase it and read it on the train back to Cardiff.

Europython – Days 3 to 5 – Roundup

As Europython got more hectic and my 3G connection got more erratic, my daily blogging ceased. So this roundup is mostly the result of notes I wrote on the train journey through the picturesque Welsh Borders back home to Cardiff. These are the talks that made an impact on me.

GIL isn’t Evil: Russell Winder, who is every inch the stereotype of a former theoretical physicist, presented some simple benchmarks of threads vs Parallel Python vs multiprocessing, which showed that you can get good parallel speed-up in Python by using the latter two approaches. We have a number of people who use NumPy and SciPy on the cluster and it would be interesting to see if we could get some quick speedups for them using these approaches.
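
For anyone who hasn’t tried it, the multiprocessing version of this sort of thing really is only a few lines. A minimal sketch of my own (not Russell’s benchmark code) that farms a CPU-bound function out over a pool of worker processes:

from multiprocessing import Pool

def work(n):
    # Deliberately CPU-bound so the parallel speed-up is visible.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    pool = Pool()  # defaults to one worker process per core
    results = pool.map(work, [1000000] * 16)
    pool.close()
    pool.join()
    print(sum(results))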

Twisted, AMQP and Thrift: A quick introduction to AMQP and the fact that lots of big financial companies are ripping out Tibco/IBM MQ to make way for AMQP. These guys wrote Twisted interfaces to AMQP and Thrift so that you can make Thrift RPC calls and everything magically goes over AMQP. It was interesting, but without taking a serious look through some example code I’m not sure that it will be useful for any of my particular projects.

PIPPER: A Python system where you add comments, very much like OpenMP pragmas, allowing you to parallelise for loops. It does this by serialising the function and the data that go to it and sending them over MPI to a C-based engine that runs the function and returns the data over MPI. This is nice because it lets you take advantage of the MPI stack and interconnect on a proper compute cluster. However it can’t handle the full language set of Python and you can’t use C extensions, which means NumPy and SciPy are out of the window. That is a shame, because most of the codes that you could trivially parallelise with this system use NumPy.

Python and CouchDB: An opinionated Mozilla hacker talking about how awesome CouchDB is. I understood it a bit more by the end of the session and kind of wondered what it would be like to dump log files straight into it. The talk in the corridors was that MongoDB looked a bit more production-oriented. However I managed to miss that talk so will have to look it up later.

Keynote by Bruce Eckel: He started by pimping the unconference idea, which looked good to me and got me thinking about whether there might be room for the approach at work. The language archaeology part was entertaining but I can’t remember a single thing from it.

Ctypes: This was a really useful talk. Greg Holliing did a good job of going through some of the pitfalls of ctypes, such as 32/64-bit int mismatches with the underlying C API, so you should always cast to one of the ctypes types to make it explicit what you are passing through. This was probably the talk that is most likely to make an impact on my production code over the next twelve months.
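
To make that concrete, here is the sort of thing he meant; a minimal sketch of my own, calling labs() from the C library on a Linux box, with the signature spelled out so a long never gets squeezed through the default int conversion:

import ctypes

libc = ctypes.CDLL("libc.so.6")  # assumes a Linux system

# Declare the C signature explicitly rather than trusting the default
# int conversion, which is where the 32/64-bit surprises come from.
libc.labs.argtypes = [ctypes.c_long]
libc.labs.restype = ctypes.c_long

print(libc.labs(ctypes.c_long(-12345)))  # prints 12345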

OpenERP: In hindsight I should have also gone to the OpenObject talk, which explained the underlying data model. The best thing about this was that each module can stand alone, i.e. you can install the inventory module or the CRM module without all the others. OpenERP speaks a web services API so it would be very easy to develop against it. There is a chance I may be able to solve some of the organisational challenges at work by throwing this tech at them.

Python for System Admin: A good talk somewhat hampered by John Pinner’s need to support Python 2.2, so some of the code examples looked a bit strange. John is of the opinion that argparse is better than optparse (which I habitually use). One of the other attendees pointed me to a PyCon 2009 talk on argparse which apparently explains the difference.
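
For reference, the argparse version of a typical sysadmin-script interface looks something like this (a minimal sketch of my own; you need the separate argparse package installed if your Python is older than 2.7):

import argparse

parser = argparse.ArgumentParser(description="Poke some cluster nodes")
parser.add_argument("nodes", nargs="+", help="node names to act on")
parser.add_argument("-v", "--verbose", action="store_true",
                    help="print what is happening")
args = parser.parse_args()

for node in args.nodes:
    if args.verbose:
        print("checking %s" % node)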

Software Apprenticeship: An interesting approach to training programmers that made a lot of sense to me. In Britain we have an awful tendency to belittle vocational training in comparison to academic education, when for a great many professions we could do with more of the former and less of the latter. Lots to think about, and Christian Theune provided a wealth of advice based on his practical experience of helping to train apprentices.

This was my first EuroPython and I found it educational and entertaining. I was exposed to lots of interesting technology, some of which may improve my daily work. More important than that was the opportunity to talk to other Python developers about their experiences of using Python to get real work done. EP is in Birmingham again next year so I have no excuse not to attend. Many thanks to all the hard-working people who helped to organise this conference and the wonderful delegates who made it so much fun to attend.

Europython – Day 2 – Tutorials

Today’s notable achievement was that I managed to stay on power and network for most of the day, mostly because I lucked out and got a seat next to a power bar in the lecture theatre holding Luke Leighton’s Pyjamas tutorial. I was interested in Pyjamas for a web project I may have to get up and running quite quickly over the summer. Although there were some rocky patches due to SVN mismatches, I mostly managed to get a handle on how Pyjamas works. As a note to future tutors: if you need your tutees to download the trunk from SVN it’s probably best to specify a revision that works. This avoids everyone turning up with a version of your code that won’t run the examples. Also, I still don’t understand decorators.

Today’s buffet lunch was nice. Props to the conference organisers.

The day was nicely rounded off by dinner at a fine Indian restaurant and a pint of very nice beer in the Wellington. Looking forward to the start of the conference proper.