Kernel Humour

The unfortunate limerick cascade on the Linux Kernel Mailing List is a horror to behold.

However, I have to share with you Alan Cox's song about memory management, to the tune of the Beatles' Eleanor Rigby.

Ah look at all the laundered pages
Ah look at all the laundered pages

Handling Pages
Pick up the list and the link where kswap has been
A paging scheme
Runs down the I/O
Watching the queues that now keep me a list of the store
Who is it for

All the laundered pages
Where do they all come from
All the laundered pages
Where do they all belong

Meeting bdflush
Writing the pages of a disk file that no one will clear
No task comes near
Look at it working
Sleeping a lot in the night when there’s no pressure there
What does it care

All the laundered pages
Where do they all come from
All the laundered pages
Where do they all belong

Ah look at all the laundered pages
Ah look at all the laundered pages

Oracle DB
Died under load and was freed along with its name
No admin came
Good old bdflush
Wiping the dirt from the pages as it walks down the chain
Nothing was aged

All the laundered pages
(Ah look at all the laundered pages)
Where do they all come from
All the laundered pages
(Ah look at all the laundered pages)
Where do they all belong

Moving ZFS filesystems between pools

When I originally set up ZFS on my development V880 I added the internal disks as a raidz together with two volumes off the external fibre-channel array. As is the way with these things, the development box has gradually become a production box, and I now realise that if the server goes pop I can't just move the fibre-channel to another server, because the ZFS pool contains that set of internal SCSI disks.
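
For illustration, the pool would have been put together with something like the following (the device names are invented; the -f is needed because mixing a raidz with plain disks is a mismatched replication level that zpool add will otherwise refuse):

zpool create devpool raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0

zpool add -f devpool c4t0d0 c4t1d0

That leaves the raidz and the two array volumes as three separate top-level vdevs, and it is the raidz of internal disks that ties the pool to this server.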

To my horror, I now discover that you can't remove a top-level device (a vdev, in ZFS parlance) from a pool. Fortunately I have two spare volumes on the array, so I can create a new pool and transfer the existing ZFS filesystems to it. Here is a quick recipe for transferring ZFS filesystems whilst keeping downtime to a minimum.
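
The new pool lives entirely on the array, built from the two spare volumes (hypothetical device names again):

zpool create newpool c4t2d0 c4t3d0

With the destination in place, take a snapshot of the filesystem to be moved: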

zfs snapshot oldpool/myfilesystem@snapshot1

zfs send oldpool/myfilesystem@snapshot1 | zfs receive newpool/myfilesystem

This will take a while, but the filesystem can stay in use while the send runs. Once it finishes you need to shut down any services that rely on the filesystem and unmount it.

zfs unmount oldpool/myfilesystem

And take a new snapshot.

zfs snapshot oldpool/myfilesystem@snapshot2

You can now do an incremental send of the difference between the two snapshots, which should be very quick.

zfs send -i oldpool/myfilesystem@snapshot1 \
             oldpool/myfilesystem@snapshot2 | zfs receive newpool/myfilesystem
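
One caveat from experience with this sort of recipe: if the incremental receive complains that the destination has been modified, that is because ZFS mounts newpool/myfilesystem after the first receive, so even a stray directory listing can dirty it. Adding -F to the receive rolls the destination back to the last snapshot before applying the increment, which is safe here as long as nothing is deliberately writing to the new copy:

zfs send -i oldpool/myfilesystem@snapshot1 \
             oldpool/myfilesystem@snapshot2 | zfs receive -F newpool/myfilesystem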

Now you can point the services at the new filesystem and repeat the process until all the filesystems on the original pool have been transferred.
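
If the services find their data by path, it may be less fiddly to give the new copy the old mountpoint than to reconfigure them all; the path below stands in for wherever the old filesystem was mounted:

zfs set mountpoint=/export/myfilesystem newpool/myfilesystem

Once every filesystem has been moved and checked, the old pool can be destroyed, freeing up the internal disks:

zpool destroy oldpool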