Reassociating old Time Machine backups

In an attempt to get myself cheap remote backups over the internet, I bought a Raspberry Pi kit and set it up as a hackintosh Time Capsule by attaching my USB backup disk to the Pi. However, I wanted to keep my existing backup history, so instead of using a fresh Linux-formatted partition (like a clever boy) I tried to get the Pi to use my existing HFS+ filesystem. Anyone interested in trying this should first read about Linux’s flaky HFS+ user mapping and lack of journaling support, and then back away very slowly. I blame this for all my subsequent problems.

After some effort I did get my ageing MacBook to write a new backup to the Pi, but I couldn’t get it to see the existing backups on the drive. Apple uses hard links to deduplicate backups, and because remote filesystems can’t be guaranteed to support them, it uses a trick: remote backups are written not directly onto the remote drive, but into a sparse disk image inside it. Thinking it would be a relatively simple matter to move the old backups from the outer filesystem into the sparsebundle, I remounted the USB drive on the Mac (as Linux doesn’t understand sparsebundles, fair enough).
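The hard-link scheme is easy to demonstrate: each snapshot is a full directory tree, but an unchanged file in the new snapshot is just another name for the old snapshot’s inode. A quick illustration (file names made up; `stat -c %h` is the GNU/Linux form, `stat -f %l` on macOS):

```shell
# Two snapshot directories sharing one file via a hard link -- the way
# Time Machine dedupes unchanged files between hourly backups.
mkdir -p backup.0 backup.1
echo "unchanged file contents" > backup.0/photo.raw
ln backup.0/photo.raw backup.1/photo.raw   # second name, same inode, no extra data
stat -c '%h' backup.0/photo.raw            # link count is now 2
```

Deleting either directory entry leaves the data intact until the last link goes, which is why expiring old snapshots is cheap.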

The MacBook first refused the move, saying that the case sensitivity of the target filesystem was not correct for a backup – strange, because it had created the sparsebundle itself moments before. Remembering the journaling hack, I ran “repair disk” on the sparsebundle and then on the physical disk itself. At this point Disk Utility complained that the filesystem was unrecoverable (“invalid key length”) and the physical disk would no longer mount. In an attempt to get better debug information out of the repair, I ran fsck_hfs -drfy on the filesystem in a terminal. This didn’t help much with the source of the error, but I did notice that at the end it said “filesystem modified =1”. Running it again produced slightly different output, but again “filesystem modified =1”. It was doing something, so I kept going.
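In hindsight, that run-it-again cycle could have been scripted. A sketch (my own wrapper – fsck_hfs itself has no retry option as far as I know, and the device name below is hypothetical):

```shell
# retry_until_clean: re-run a repair command until it exits 0, capped at
# $2 attempts so a truly unrecoverable disk doesn't loop forever.
# In my case the command would have been "fsck_hfs -drfy /dev/disk2s2"
# (check 'diskutil list' for the real device node).
retry_until_clean() {
    cmd=$1 max=$2 n=0
    until $cmd; do
        n=$((n + 1))
        [ "$n" -ge "$max" ] && return 1
        echo "pass $n: filesystem still dirty, trying again..."
    done
}
# Usage: retry_until_clean "fsck_hfs -drfy /dev/disk2s2" 10
```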

In the meantime, I had been looking into ways of improving the backup transfer speed over the internet. I originally planned to use a tunnel over openvpn, but this would involve channeling all backup traffic through my rented virtual server, which might not be so good for my bank account. I did some research into NAT traversal, and although the technology exists to allow direct connections between two NATed clients (libnice), I would have to write my own application around it and at this point I was getting nervous about having no backups for an extended period. I had also been working from home and getting frustrated with the bulk transfer speed between home and work, and came to the conclusion that my domestic internet connection couldn’t satisfy Time Machine’s aggressive and inflexible hourly backup schedule.

Six iterations of fsck_hfs -drfy later, the repair finally succeeded and the backup disk mounted cleanly. At this point, I decided a strategic retreat was in order. I went to set up Time Machine on the old disk, but it insisted that there were no existing backups, saying “last backup: none”. Alt-clicking on the Time Machine icon in the menu bar and choosing “Browse Other Backup Disks” showed, however, that the backups were intact. While I could make new backups and browse old ones, they would not deduplicate. As I have a large number of RAW photographs to back up, this was far from ideal. There is a way to get a Mac to recognise another computer’s backups as its own (after upgrading your hardware, for example). However, tmutil threw “unexpectedly found no machine directories” when attempting the first step. It appeared that the Mac not only didn’t recognise its own backup – it didn’t recognise it as a backup at all.

After a lot of googling at 2am, it emerged that local Time Machine backups use extended attributes on the backup folders to store information relating to (amongst other things) the identity of the computer that made the backup. In my earlier orgy of fscking, the extended attributes on my Mac’s top backup folder had been erased. Luckily, I still had the abandoned sparsebundle backup in the trash. Inside a sparsebundle backup, the equivalent metadata is stored not as extended attributes but in a plist file. In my case, this was in /Volumes/Backups3TB/.Trashes/501/galactica.sparsebundle/, and contained amongst other bits and bobs the following nuggets:

<key>com.apple.backupd.HostUUID</key>
<string>XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX</string>
<key>com.apple.backupd.ModelID</key>
<string>MacBookPro5,1</string>

These key names were in a similar format to the extended attributes on the daily subdirectories in the backup, so I applied them directly to the containing folder:

$ sudo xattr -w com.apple.backupd.HostUUID XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX /Volumes/Backups3TB/Backups.backupdb/galactica
$ sudo xattr -w com.apple.backupd.ModelID MacBookPro5,1 /Volumes/Backups3TB/Backups.backupdb/galactica

After that was fixed, I could inherit the old backups and reassociate each of the backed up volumes to their master copies:

$ sudo tmutil inheritbackup /Volumes/Backups3TB/Backups.backupdb/galactica/
$ sudo tmutil associatedisk -a / /Volumes/Backups3TB/Backups.backupdb/galactica/Latest/Macintosh\ HD/
$ sudo tmutil associatedisk -a /Volumes/WD\ 1 /Volumes/Backups3TB/Backups.backupdb/galactica/Latest/WD\ 1/

The only problem arose when I tried to reassociate the volume containing my photographs. Turns out they had never been backed up at all. They bloody well are now.


So what happened to my plan to run offsite backups? I bought a second Time Machine drive and will keep one plugged in at home and one asleep in my drawer in work, swapping them once a week. This is known as the bandwidth of FedEx.

DNS cache locking on Server 2008

So I’ve been informed that there are some bizarre problems resolving a website that has recently changed providers from digiweb to novara (wasn’t my idea). From elsewhere the new site appears reliably, but from inside our network we are getting the following results:

andgal@nbgal185:~$ host -t any
has SOA record 2011080416 10800 3600 604800 14400
name server
name server
name server
mail is handled by 10
has address

andgal@nbgal185:~$ host
has address
mail is handled by 10

The first set of results is the “correct” one, so why is host (and nslookup, and dig, and firefox…) still going to the old address by default? I suspect it is something to do with cache locking on our Server 2008 DNS forwarder. It seems that even after I have forced a fresh lookup by using “-t any”, the stale cached A record is still returned for ordinary queries. This is apparently a security measure to protect against cache poisoning. It would also appear that the TTL on the old A record was unusually long, which meant I had to flush the cache on the primary DNS forwarder (the backup forwarder is fine, presumably because the old record was never in its cache).

Sure enough, running “dnscmd /clearcache” on the offending server fixed the problem.

How to create a DNAME record in Server 2008

I had to do a little searching on the internet to work out how to do this, so here it is in a single post.

DNAME records are supported in Server 2008’s DNS service, but you cannot add or edit them in the graphical tool. You need to use the command line. Right-click “Command Prompt”, run it as administrator, and type the following (zone, alias and target are placeholders for your own names):

dnscmd /RecordAdd <zone> <alias name> DNAME <target domain>

For example, to make old.example.com an alias for the whole new.example.com tree:

dnscmd /RecordAdd example.com old DNAME new.example.com

If you now refresh the graphical DNS tool you will see a new record with a blank type and contents “Unknown – view properties for more info”. If you do this, you will see the raw hex data for the DNAME RR (type 0x27). The only thing you can do with it in the graphical tool is delete it.

Hidden SSIDs = broken

From this report:

Contrary to a common belief that the SSID is a WLAN security feature and its exposure a security risk, the SSID is nothing more than a wireless-space group label. It cannot be successfully hidden. Attempts to hide it will not only fail, but will negatively impact WLAN performance, and may result in additional exposure of the SSID to passive scanning.

I can tell you from bitter experience that this is true. This link was brought to my attention a couple of months ago by Mark Leyden when we were trying to debug some mysterious WLAN problems in work. We had been using SSID hiding, and some machines were continually disconnecting and reconnecting to the WLAN. We turned on broadcasting of the SSID and most of the problems just went away.

People in work still come up to me and say “I can see the SSID of the work WLAN – is that such a good idea?” and I have to keep explaining.

Later, I discovered that IBM T60p laptops wouldn’t connect to my home WLAN, even though my trusty iBook never had a problem (my trusty iBook never has a problem). I turned on SSID broadcast, and it worked.

I thought at the time that maybe this was causing my mysterious Wii network connection failure, but didn’t test it (I haven’t used the Wii in forever). Today I did. It works.

Today’s moral – a network with a hidden SSID is a broken network. End of story.

Network card drivers

I’m trying to reinstall a ThinkPad X41 tablet at the moment, after a bad hard disk crash. Unfortunately, we forgot to make a backup image of this machine’s clean install (like we usually do), so I’m installing Windows by hand. And there’s no network card driver.

It constantly frustrates me that network cards can’t Just Work the same way that keyboards and screens do. No matter what fancy new feature nVidia or ATI put into their latest graphics card, you can be 100% sure it talks to the VGA-compatible driver on your 5-year-old install disk so you can at least see what you’re doing. Why don’t network cards have a basic compatibility mode like this? The marginal costs would surely be cents at the most, and all we need it for is to download the proper driver from Windows Update (and all the other drivers while we’re at it), but it would save a hell of a lot of frustration. How many man-hours are wasted downloading network card drivers by hand?

And something similar would be useful for SATA host controllers too – Acronis 9.1d doesn’t recognise the disk in our new T61s, which has just killed our new laptop rollout plan. Never had that problem with IDE…

Come on, lads.

The need for imap5

After spending a frustrating afternoon migrating users to a new email server, I am more convinced than ever that we need an integrated, open-standard mail client protocol. Let us call it imap5.

imap5 would address the following problems with current client/server mail protocols:

1. The need to configure separate incoming and outgoing connections.

2. The use of the same port for both client and backhaul communication, which prompts heavy firewalling of port 25 on corporate networks, often making it impossible to send mail at all.

3. The need to store long lists of configuration options (server, port, authentication, encryption, all twice) on the client.

Solutions to these problems already exist, but are not widely supported.

1 and 2. Sending email via imap is possible on courier-imapd and mutt through the use of smart outboxes – draft emails uploaded to a given imap folder are automatically forwarded to an MTA process by the imap server. SMTP therefore need not be supported on the client.

3. IMSP allowed config options (amongst other things) to be stored in a directory service, but this was obsoleted by ACAP, which then died a death.

imap5 would include functionality derived from the above. In addition, imap5 should:

1. Encapsulate all communication in HTTPS.

2. Only require the user to input his email address, password and a URL (preferably the URL of his webmail service) into his client. Further settings would be read from HTML metadata.

3. Allow server options to be set, including password, display name, autoreply, forward, and arbitrary settings (e.g. filtering) to be defined in a companion protocol.
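Point 2 above could look something like this in practice: the client knows only a URL, fetches the page, and scrapes the remaining settings out of its metadata. The “imap5-*” meta names below are invented purely for illustration – no such standard exists:

```shell
# Hypothetical imap5 autodiscovery: look up the content= attribute of a
# named <meta> tag in a page's HTML.
discover() {
    # $1 = page HTML, $2 = meta name to look up
    printf '%s\n' "$1" | sed -n "s/.*<meta name=\"$2\" content=\"\([^\"]*\)\".*/\1/p"
}

page='<meta name="imap5-server" content="mail.example.com"><meta name="imap5-port" content="443">'
discover "$page" imap5-server   # -> mail.example.com
discover "$page" imap5-port     # -> 443
```

With the server and port discovered from the webmail URL, the only things left for the user to type are an email address and a password.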

However (unlike RPC over HTTPS) imap5 would not try to support address books, calendaring or other groupware features – open protocols for these (LDAP, iCal) already exist.

The advantages of imap5 over RPC/HTTPS would be threefold:

1. Three-field client configuration form.

2. Scope limited to email service provision.

3. Open, incremental improvements to well-understood protocols.

Greylisting brownouts


gld (the greylisting policy daemon I run alongside postfix) has packed in for the last time on me. Since I now use the university system to read my work email, I failed to notice that my personal mail had quietly stopped working.

Well, quietly from my side. From the public side, it was returning internal configuration errors left, right and centre.

It was only when my brother noticed he had missed some emails that I looked and found that gld had fallen over again. You would think that postfix would handle this sort of failure gracefully – more spam being preferable to no email at all – but no.

So now I am going to try policyd instead, to see if it is any more stable.
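For reference, the postfix side of the switch is a single restriction in main.cf. A sketch – 10031 is policyd’s conventional default listen port, so verify against your own policyd.conf:

```
# /etc/postfix/main.cf (sketch; port is policyd's usual default)
# Note: if the policy daemon dies, postfix defers mail with a temporary
# "server configuration problem" error -- the behaviour described above.
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service inet:127.0.0.1:10031
```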