I have been busy setting up ZFS. I am late to the party, as ever. This has been painful because my time has been at a real premium these days.
Somewhat ironically, the great advance offered by ZFS, in its sphere of storage and filing, is analogous to the great advance offered by ILNP in its sphere of network naming and location. Mathematicians have a great word for this: isomorphism.
What I can say is this: a few of the third party packages available for FreeBSD make it easier to manage ZFS out of the box. However, even with this kind of help, some self-assembly may still be required for systems administrators to make it useful at their own sites.
I had hoped that the tools would have matured enough that someone, without being overly familiar with ZFS, would be able to pick it up and start using it, and its new features, without fuss. I know some open source folk wax lyrical about forcing people to learn the tools, by not providing GUIs or other integration required for daily use; or indeed frown upon people who use them.
This is a bit like forcing your child to drink cod liver oil: we know it is ultimately good for them, but there are easier and more appropriate ways of getting the same results. I consider this proselytizing. Your mileage may vary.
What the tools in ports don’t do is offer out-of-the-box configuration which does your thinking for you. However, I believe that what we need here is quite common. I was surprised that this was not something which I could just order like fried chicken in Hackney on a Friday night.
We have two servers: one master, one warm backup. The master is where all the action happens. If it fails, the warm backup will be switched in to replace it, albeit with some manual fail-over, which we can live with. We aren’t aiming for ‘High Availability’, just good enough availability.
Bzzzt – wrong answer. Over the weekend, after a barrage of email from cron, I came to a stunning epiphany: the name space for ZFS snapshots must be common to all nodes involved for incremental replication to work amongst them.
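To make the epiphany concrete, here is a sketch of why it matters; the pool and snapshot names are made up for illustration. An incremental 'zfs send' needs its base snapshot to exist, under the same name, on both sides:

```shell
# On the master: take the new hourly snapshot, then send the delta
# from the previous hour's snapshot to the backup host.
zfs snapshot tank/data@hourly-13
zfs send -i tank/data@hourly-12 tank/data@hourly-13 \
    | ssh backup zfs receive tank/data

# If the backup's own snapshot rotation has pruned or replaced its copy
# of @hourly-12, the receive side has no matching base snapshot and the
# incremental stream is rejected (roughly, "most recent snapshot does
# not match incremental source").
```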
After disabling zfs-periodic on the backup server, I was able to use zxfer to pull the hourly snapshot without it being clobbered by zfs-periodic running on the backup itself.
All zfs-periodic does is impose a naming scheme on those snapshots; they are taken locally and pruned according to simple rules. It has no notion of network backups. zxfer is a wrapper for ‘zfs send’ and rsync which automates certain repetitive command stanzas, and does so in a way which makes it very useful to have around.
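For flavour, a pull of this kind can be a one-liner from the backup's crontab; the hostnames and dataset names here are hypothetical, and the exact flags worth using will depend on your zxfer version (check its man page):

```shell
# Run on the backup: recursively pull tank/data and its snapshots
# from the master into backuptank, carrying properties across (-P),
# deleting snapshots on the backup that the master has pruned (-d),
# forcing a rollback of the destination if needed (-F), verbosely (-v).
zxfer -dFPv -O root@master -R tank/data backuptank
```

The key point from the text stands regardless of flags: the backup must not mint its own snapshots in the same name space, or the pull will trip over them.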
The name space used by ZFS itself is flat. The way to deal with the split is probably to add a user property to any snapshots copied by zxfer, marking them as ‘common’.
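A sketch of what that marking might look like; the property name is invented (ZFS only requires that user properties contain a colon), and this assumes a ZFS version recent enough to allow user properties on snapshots:

```shell
# Tag a replicated snapshot as part of the common name space.
zfs set local:common=yes tank/data@hourly-13

# Later, list snapshots with the tag, so pruning scripts can
# distinguish 'common' snapshots from purely local ones.
zfs list -H -t snapshot -o name,local:common -r tank/data
```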
I haven’t implemented the necessary changes yet, as I have a zillion other things to do, but at least I am now more familiar with how ZFS snapshots and replication interact.
Again, this should all be taken with a pinch of salt. We know that ZFS is stable because many people all over the world have taken the time, and effort, to become familiar with it and deploy it in their work. Perhaps the hard work involved is part of that, as it were. However, we could perhaps all do better in promoting it.
A change in thinking is certainly required to get the best from it. To be sure, using ZFS in the infrastructure needs changes in practice too, and I know I am going to have fun explaining it to people who aren’t familiar with it.
I am happy to trade off the blind flexibility of rsync against getting the same job done at least forty times quicker, with none of the race conditions.
It is perhaps a case where some of us didn’t get the memo…