Category Archives: SysAdmin

Unix Server Reboot Controversy

Paul Venezia has been getting a certain amount of flak for suggesting that rebooting Unix boxes is an inherently bad thing. I mostly agree with him.

People have been pointing out the value of regular reboots as part of maintenance. Paul maintains that any bugs shaken out by a regular reboot are merely evidence of an insufficiently well managed configuration. I see a lack of reboots as evidence of an insufficiently well tested configuration. If you want to be sure that all your services start up and shut down in the right order through a full reboot, the best way to test it is by doing a reboot. At that point you need never reboot your box again, provided you are also in the habit of never changing its configuration. Clearly there is a value judgement to be made here. Not all changes to a system will have an impact on its startup. But modern systems are complex, with a lot of interdependencies, and humans are imperfect creatures. A scheduled reboot is a small amount of managed downtime to double-check that you got things right, in the hope of staving off a longer period of unmanaged downtime at some point in the future.
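
If you do go down the scheduled reboot route, it is worth automating it so the window stays predictable. A minimal sketch using cron; the file path, schedule and warning period below are purely illustrative, so adapt them to whatever maintenance window your service can actually tolerate:

# /etc/cron.d/scheduled-reboot -- illustrative example, adjust to taste
# Reboot at 08:00 on the first day of every month, with a five minute warning.
0 8 1 * *  root  /sbin/shutdown -r +5 "Scheduled maintenance reboot"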

The other thing that often happens at reboot time is hardware failure. This is particularly the case with ECC memory modules exceeding their error thresholds, but to a lesser extent you see the same thing with hard disks and power supplies. It’s much nicer to be able to call up your vendor for parts during working hours than in the middle of the night. This behaviour depends on how much your servers resemble big iron. As a server’s irony is embiggened it is much more likely to tell you things are going wrong, and to let you replace parts while still up and running.

The obvious point that I’ve failed to make so far is that this is only the case if rebooting a box is not the same thing as a service outage. If you have one solitary mail server with no failover then rebooting that box will take out your mail service for however long the reboot takes. In this case I would understand perfectly well if you didn’t want to reboot it once a month, when the chance of it suffering hardware failure in any given three year period is actually pretty small. Although I would still caution that finding out about the flaky PSU when you get a power cut at three AM is less favourable than finding the same thing at eight AM on a Thursday during your scheduled reboot.

I briefly mentioned failover. This is probably the reason most larger systems do scheduled reboots. If you have gone to the time and expense of installing complicated hardware and software failover systems you really have to test them with some regularity. Regular reboots and failover from one side of an HA system to the other are a sensible part of any such testing.

If you have a small number of single machines running lone services then regular reboots of your servers may be positively harmful to your service uptime. If you have a large fleet of machines running clustered services then regular reboots are almost certainly beneficial. Deciding where you are on this spectrum is the tricky bit. As ever, it’s a weighing up of costs and benefits against a background of not entirely quantifiable risks.

All that being said, I believe Paul’s main beef was with the people who reboot as an initial fix for any mishap. Yes, junior MCSE, I’m looking at you. Obviously, if your first response to any service outage is to hit the reset button then there is something wrong with you and you don’t belong anywhere near any IT system.

Being Slightly Smarter With seq

This post comes from my ever-growing file of ‘how the hell did I not know that’ issues.

On the cluster at work we have a bunch of machines with names like mysite12 or mysite237, so quite often I’m writing shell scripts that loop through all these boxes to gather info. I usually do something like this:

for number in `seq 12 256`
do
    node=mysite$number
    echo $node
done

Which produces

mysite12
mysite13
.....
mysite256

It occurred to me today that this is such a staggeringly common thing to do that seq probably has a way of doing it already. Sure enough, after reading the man page it turns out that you can hand seq a printf-style format string. So I can create my node names purely in seq:

for node in `seq -f "mysite%g" 12 255`
do
    echo $node
done


mysite12
mysite13
....
mysite255
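
One extra wrinkle: if your hostnames are zero-padded (mysite012 rather than mysite12) then seq can handle that too, either with -w for equal-width output or with a width in the format string. A quick sketch, with the names obviously made up:

# Zero-padded node names: mysite001 ... mysite012
for node in `seq -f "mysite%03g" 1 12`
do
    echo $node
done

# Or let seq work out the padding width itself
seq -w 1 12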

How has it taken me ten years to work that out?

Flexlm License Servers and Firewalls

If you are lucky enough to run your flexlm servers on a tightly controlled corporate network then you probably just turn the firewall off on those servers and get on with your life. Everyone else goes through a certain amount of hair-pulling before they work out how to make flexlm play nicely with firewalls. So I’m writing this post to document the process as much for me as anyone else.

So let us say that you have bought five copies of Bob’s Magical Pony Viewer, an awesome graphical client that you can run to show you ponies. In order for Bob to be sure that you only run five copies, he has used flexlm to secure his software. You have received a network license for BobSoft that looks like this:

SERVER license1 0000eeeeeeee 2020
VENDOR bobs_lm
FEATURE PonyL bobs_lm 1.0 06-jan-2011 5 \
SIGN="EEEE EEEE EEEE EEEE EEEE EEEE EEEE EEEE \
EEEE EEEE EEEE EEEE EEEE EEEE EEEE EEEE"

So you think: great, we can set that up with only port 2020 open on the license server and everything will be excellent. Ponies for five concurrent users, hurray!

Except of course when you try that, Pony Viewer adamantly claims that it can’t contact the server, even though you can netcat/telnet to port 2020 on that server and the flexlm logs tell you that the server is running just fine.

It’s helpful at this point to have a copy of lmutil around to debug the problem. I don’t know where to get lmutil from as it came bundled with the license server software from one of our vendors. But it’s very useful when trying to work out what is going on.

So let’s try some things.

#>lmutil lmstat -c 2020@license1
lmutil - Copyright (c) 1989-2004 by Macrovision Corporation. All rights reserved.
Flexible License Manager status on Thu 1/21/2010 19:56

Error getting status: Server node is down or not responding (-96,7)

This is the point at which one normally starts with the hair-tearing. The thing to realise about a flexlm server is that it’s actually two daemons working together: lmgrd, which is running on port 2020, and the vendor daemon (in this case bobs_lm), which will start up on a RANDOM port. What is even better is that the vendor daemon will choose a different random port every time you restart the license server.
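
If you want to see this in action, you can ask the OS which port the vendor daemon has grabbed. A rough sketch, assuming a Linux license server and the fictional bobs_lm daemon from above (the lmgrd debug log normally records the chosen port as well, though the exact wording and log location vary by vendor):

# Which port did the vendor daemon pick this time?
netstat -tlnp | grep bobs_lm
# or on newer distributions
ss -tlnp | grep bobs_lm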

While discussing this with some fellow sysadmins, it turned out that there is another option you can add to flexlm license files which ends this misery: you can tell the vendor daemon to start on a specific port, like so:

SERVER license1 0000eeeeeeee 2020
VENDOR bobs_lm port=2021
FEATURE PonyL bobs_lm 1.0 06-jan-2011 5 \
SIGN="EEEE EEEE EEEE EEEE EEEE EEEE EEEE EEEE \
EEEE EEEE EEEE EEEE EEEE EEEE EEEE EEEE"
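
With the vendor daemon pinned to a known port you can finally write firewall rules that cover both daemons. A minimal iptables sketch, assuming a Linux license server with a default-deny INPUT policy (translate to whatever firewall you actually run):

# Allow the lmgrd port and the pinned vendor daemon port
iptables -A INPUT -p tcp --dport 2020 -j ACCEPT
iptables -A INPUT -p tcp --dport 2021 -j ACCEPT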

And now when we try lmutil

#>lmutil lmstat -c 2020@license1
Flexible License Manager status on Thu 1/21/2010 20:06

License server status: 2020@license1
License file(s) on license1: /opt/BobSoft/license.dat:

license1: license server UP (MASTER) v11.6

Vendor daemon status (on license1):

bobs_lm: UP v11.6

Hurray Ponies!

One last thing to note: make sure the hostname you specify in the license file matches the hostname of the license server, and also the hostname you use when connecting to the server. This is because flexlm sends the hostname you asked for as part of the license request, and if the two don’t match you won’t get any ponies.
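
A quick sanity check along those lines, using the paths and names from the fictional example above:

# The name in the SERVER line...
grep ^SERVER /opt/BobSoft/license.dat
# ...should match what the server calls itself...
hostname
# ...and what the clients point at:
export LM_LICENSE_FILE=2020@license1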

In short, flexlm is a dreadful license server; it’s just that all the others are even worse.

SC09 – Interesting Tech – Filesystems/Storage

All the usual suspects were visible in Portland this year, including Panasas, Data Direct, Isilon, Lustre and IBM/GPFS. But we’ve all seen those before. Two storage-related technologies caught my eye at SC09 because I’d never seen them before.

I caught a technical session from a Korean company called Pspace. They developed a parallel filesystem called Infinistor for a couple of big Telco/ISPs in Korea. It’s a pretty straightforward parallel filesystem with metadata and object data handled by separate servers. Object servers are always at least N+1, so you can lose a whole object server without losing access to your data.

The neat things about Infinistor are that it keeps track of how often data is accessed and it understands that some storage is faster than other storage. So you could have some smaller servers based on SSD and Infinistor will replicate frequently accessed content to the fast disks. It can even handle multiple speeds of storage within one object server.

As you might expect from a project born in ISP-land it has a lot of support for replication across multiple sites, since it’s always good to serve your client data from a node close to them on the network. Infinistor can replicate synchronously or asynchronously, with the latter prioritised for frequently accessed content.

File access is via a POSIX filesystem (it will do NFS or CIFS) or a REST API.

As ever with big conferences, not everything you learn comes from the sessions or the exhibition hall. I got chatting to an engineer from Pittsburgh Supercomputing Center about the parallel filesystem they wrote called ZEST. The best thing about this filesystem is that you can’t read from it.

So I should back up for a second here and describe the problem ZEST is trying to solve, since most of you are probably thinking “what use is a filesystem you can’t read from?”. Here in HPC land we have all these big machines with thousands of very fast cores and big, fast interconnects. All this costs money. Unfortunately the more nodes you are running across, the more likely you are to hit a problem (e.g. dividing the current day in the Mayan Long Count by the least significant digit of your HCA’s firmware revision causes it to turn into a pumpkin, or one of the million other failure modes that are wearyingly familiar to HPC ops people around the world). When this happens you don’t want to lose all the time you’ve spent up until the fault happened. And Lo unto the world did come Checkpointing.

Which is basically to say that a lot of big codes will periodically dump their running state to disk so that in the event of a problem they can pick up from the last checkpoint. Now obviously this can be terabytes of data and it takes a while to write it to disk. While you are doing that, all those shiny, shiny CPUs are sitting idle. This makes the Intel salesman happy, but makes your funding agencies cry.

So the approach in ZEST is to remove all the complexity involved in making a filesystem that you can read from, in order to allow clients to write as fast as possible. There are a number of interesting design decisions here. ZEST storage servers don’t use RAID but assign write queues to each individual disk. All the checksumming and parity calculations are done on the client (because these are over-endowed HPC nodes we are talking about). By stripping away all this complexity ZEST aims to give each write client the full bandwidth of the disk it’s writing to. Because most codes will be checkpointing from multiple nodes at once, this adds up to significant aggregate bandwidth.

As an offline process the files that have been dumped to disk are re-aggregated and copied onto a Lustre filesystem, from where they can be read. So I kind of lied when I said you couldn’t read from it. More technical detail can be found in the ZEST paper.

SC09 – Interesting Tech – Shared Memory

We are beginning to approach the end of the conference formerly known as SuperComputing, so I thought it was about time that I started to write up some of the copious volumes of notes that have begun to clutter up the hard drive of my netbook.

One of the problems we had when we performed our last procurement was that real shared-memory systems couldn’t be fitted into the budget, so we had to make do with a set of 16-core commodity boxes. We have some codes that could do with scaling out a little bit further than that.

Which brings me nicely to 3Leaf, who are building technology to hook multiple commodity boxes together so that the OS (a normal Linux build plus some proprietary kernel modules) sees them as one machine. All hardware on the individual nodes should be visible to the OS just as it would be on a single machine, so you can do weird things like software RAID across all the single SATA disks in a bunch of nodes. 3Leaf caution that it’s possible there is some funky hardware out there that wouldn’t interact well with their setup, but they haven’t met it yet. The interconnect is InfiniBand DDR. While it’s not stated up-front by 3Leaf, conversations with them indicate that the ASIC is implementing some kind of virtualisation layer, which makes it sound rather like ScaleMP in hardware.

A stack of 3Leaf nodes is essentially a set of AMD boxes with the 3Leaf ASIC sitting in an extra AMD CPU socket. The on-board IB is then used to carry communications traffic between the separate nodes. The manager node (a separate lower spec box) controls the booting and partitioning of the nodes such that a stack can be brought up as one big box or several smaller units.

My favourite thing about the 3Leaf solution is that you can add extra IB cards which behave normally as far as the OS is concerned. This means you can interface the stack to things like Lustre or NFS/RDMA over IB, which many HPC facilities will already have in operation.
While currently AMD-only, 3Leaf claim they will have a product ready for the release of the next version of Intel’s QPI.

And in case you think this might be vapourware, apparently Florida State have just bought one.

On a more traditional note, SGI announced the availability of their new UV shared-memory machine: essentially an Altix 4700 with uprated NUMAlink and x86_64 chips rather than Itanium. The SGI folks swear that there is no proprietary code necessary to make these machines work and that all the kernel support is in mainline. If so, that is a very positive step for SGI to take. Hardware MPI acceleration is supported by SGI’s own MPI stack; it wasn’t clear to me whether SGI expect other MPIs to be able to take advantage of this capability. Depending on the price point, UV might be a very interesting machine.

Speaking of all things NUMA, I had an interesting chat with the chaps at Numascale. It turns out they are a spin-off from Dolphin. They are making an interconnect card on HTX that will do ccNUMA on commodity AMD kit. The ccNUMA engine is a direct descendant of the one in the Dolphin SCI system (I should note that we still have a Dolphin cluster in operation back home). Like SCI, this interconnect is wired together in a loop/torus/3D-torus topology without a switch.

Numascale have evaluation kit built on FPGAs at the moment and expect to tape out the real ASICs early next year. Like 3Leaf, they claim to be working on a version for the next version of Intel’s QPI.

And now we move from shared memory to memory sharing. Portland’s own RNA Networks have a software technology for sharing memory over IB. You can take chunks of memory on several nodes and hook them together as a block device to use as fast cache. If you stack-mount this over another networked filesystem it acts as an extra layer of caching, so access goes to the local page cache, then over IB to the RNA cache, and finally over the network to the original filesystem. I can see a number of use cases where this could add parallel scaling to a single network filesystem, although at roughly $1000 per node plus IB I’m not sure it works out cheaper than some of the Ethernet-based clustered storage systems.

You can also use this memory-based block device to run a local parallel filesystem if you want, although I can’t quite see the use case.

One thing I forgot to ask was whether the cache can be used as straight physical RAM for those really naive codes that just read a whole bunch of data into memory and could do with access to extra space.

Europython – Days 3 to 5 – Roundup

As Europython got more hectic and my 3G connection got more erratic, my daily blogging ceased. So this roundup is mostly the result of notes I wrote on the train journey through the picturesque Welsh Borders back home to Cardiff. These are the talks that made an impact on me.

GIL isn’t Evil: Russell Winder, who is every inch the stereotype of a former theoretical physicist, showed some simple benchmarks of threads vs Parallel Python vs multiprocessing, demonstrating that you can get good parallel speed-up in Python with the latter two approaches. We have a number of people who use Numpy and Scipy on the cluster and it would be interesting to see if we could get some quick speedups for them using these approaches.

Twisted, AMQP and Thrift: A quick introduction to AMQP and the fact that lots of big financial companies are ripping out Tibco/IBM MQ to make way for AMQP. These guys wrote Twisted interfaces to AMQP and Thrift so that you can make Thrift RPC calls and everything magically goes over AMQP. It was interesting, but without taking a serious look through some example code I’m not sure it will be useful for any of my particular projects.

PIPPER: A Python system where you add comments, very much like OpenMP pragmas, that allow you to parallelise for loops. It does this by serialising the function and the data that go to it and sending them over MPI to a C-based engine that runs the function and returns the data over MPI. This is nice because it lets you take advantage of the MPI stack and interconnect on a proper compute cluster. However, it can’t handle the full Python language and you can’t use C extensions, which means Numpy and Scipy are out of the window. That’s a shame, because most of the codes you could trivially parallelise with this system use Numpy.

Python and CouchDB: An opinionated Mozilla hacker talking about how awesome CouchDB is. I understood it a bit more by the end of the session and kind of wondered what it would be like to dump log files straight into it. The talk in the corridors was that MongoDB looked a bit more production-oriented. However, I managed to miss that talk so will have to look it up later.

Keynote by Bruce Eckel: He started by pimping the unconference idea, which looked good to me and got me thinking about whether there might be room for the approach at work. The language archaeology part was entertaining, but I can’t remember a single thing from it.

Ctypes: This was a really useful talk. Greg Holliing did a good job of going through some of the pitfalls of ctypes, such as 32/64-bit int mismatches with the underlying C API; you should always cast to one of the ctypes types to make it explicit what you are passing through. This was probably the talk most likely to make an impact on my production code over the next twelve months.

OpenERP: In hindsight I should also have gone to the OpenObject talk, which explained the underlying data model. The best thing about this was that each module can stand alone, i.e. you can install the inventory module or the CRM module without all the others. OpenERP speaks a web services API so it would be very easy to develop against. There is a chance I may be able to solve some of the organisational challenges at work by throwing this tech at them.

Python for System Admin: A good talk, somewhat hampered by John Pinner’s need to support Python 2.2, so some of the code examples looked a bit strange. John is of the opinion that argparse is better than optparse (which I habitually use). One of the other attendees pointed me to a PyCon 2009 talk on argparse which apparently explains the difference.

Software Apprenticeship: An interesting approach to training programmers that made a lot of sense to me. In Britain we have an awful tendency to belittle vocational training in comparison to academic education, when for a great many professions we could do with more of the former and less of the latter. Lots to think about, and Christian Theune provided a wealth of advice based on his practical experience of helping to train apprentices.

This was my first EuroPython and I found it educational and entertaining. I was exposed to lots of interesting technology, some of which may improve my daily work. More important than that was the opportunity to talk to other Python developers about their experiences of using Python to get real work done. EP is in Birmingham again next year so I have no excuse not to attend. Many thanks to all the hard-working people who helped to organise this conference and the wonderful delegates who made it so much fun to attend.

Europython – Day 2 – Tutorials

Today’s notable achievements were that I managed to stay on power and network for most of the day, mostly because I lucked out and got a seat next to a power bar in the lecture theatre holding Luke Leighton’s Pyjamas tutorial. I was interested in Pyjamas for a web project I may have to get up and running quite quickly over the summer. Although there were some rocky patches due to SVN mismatches, I mostly managed to get a handle on how Pyjamas works. As a note to future tutors: if you need your tutees to download the trunk from SVN, it’s probably best to specify the revision that works. This avoids everyone turning up with a version of your code that won’t run the examples. Also, I still don’t understand decorators.

Today’s buffet lunch was nice. Props to the conference organisers.

The day was nicely rounded off by dinner at a fine Indian restaurant and a pint of very nice beer in the Wellington. Looking forward to the start of the conference proper.

Europython – Day 1 – Tutorials

I have to admit to a certain amount of trepidation when I signed up for EuroPython 2009. As primarily a sysadmin rather than a developer I was worried that I might not have the requisite knowledge to get the benefit of a week-long developer conference. After today’s experience I’m beginning to relax about that.

Today and tomorrow are the tutorial sessions before the conference proper starts. Having never been to a Python conference before I wasn’t sure what form the tutorials would take. From the outcome of the day I would have to say “much less programming than you might expect”.

The day started off with Michael Sparks giving an introduction to Kamaelia, the simple concurrency system designed by BBC Research. We began by building a brain-dead simple version of Kamaelia to outline the principles by which it operates. This took us on to writing a bulletin-board system by chaining together simple Kamaelia components. This was, needless to say, pretty intense for a Sunday morning.

Having expected to be doing a lot of coding, I dutifully spent Friday evening making sure that I had the suggested software installed and working on my netbook. As it turned out I only wrote about 20 lines of code during the whole tutorial, which left me ever so slightly miffed. This was the first time this tutorial had been given and, in my opinion, it would benefit from being all-day with time for coding exercises between explanations.

Despite these minor problems I felt that the tutorial left me with enough of a grasp of Kamaelia’s basics that I could go away and write something simple in it without too much trouble. One other good point of this session was the handout printed from lulu.com, which was really nice. So nice, in fact, that I think we should spring for these next time we run a training course at work.

After lunch I was in Jonathan Fine’s JavaScript for Python Programmers tutorial, which was in a room that was too small for the audience and much, much too hot. It also appeared to have a grand total of two power outlets. Fine started off with a horrifying list of the ways basic constructs in JS behave in ways that Pythonistas will find completely illogical. After the break he delved into the nitty-gritty of OO and inheritance. As the tutorial progressed and Fine got further from his slides, the session transformed into something more like a seminar than a tutorial. Overall I found this session enjoyable and informative, although my brain was beginning to melt by the end of the day.

I suspect that Wifi and power are what most people will grumble about, but knowing how hard it is to sort these out for events at my home institution I won’t carp too much.

Now for some time with the Django tutorial in preparation for tomorrow’s Pyjamas session.

Resolutions

Or should that be vague plans?

  • Attend FOSDEM. I’ve been incredibly lazy the last few years and haven’t attended.
  • Try to prevent the PET Centre from eating my life. This is probably more of a vain hope than a resolution.
  • Buy a flat or learn to drive. Because unless I make it an either/or I will do neither.
  • Attend the spring UKUUG meeting. Understanding kerberos is probably a worthwhile endeavour even if I never use it.
  • Skate more. Assuming we actually get a summer this year it shouldn’t be too hard to top the dismal amount of skating I did last year.
  • Attend at least one observing night of the Cardiff Astronomical Society to remember what the night sky is supposed to look like.
  • Migrate Peapod to a sensible modern XML library to fix some of its more annoying bugs.
  • Make an effort to visit friends. Which is just code for be less of a social hermit.