by Marek

Using MooseFS for Distributed Replicated Storage

One of the themes we have been busy working away on is the EU General Data Protection Regulation (GDPR). It is a significant evolution that replaces the 1995 Data Protection Directive, which in the United Kingdom was implemented as the Data Protection Act 1998. Over the next few months I’ll be writing several posts about some of the technical things we’ve done to strengthen our position — and what we can offer to our customers — for GDPR compliance.

Our CRM as a service stores a significant number of customer documents. Our previous data store used multiple disks under btrfs, which served us well on top of our virtual server hosting platform. We were originally planning to migrate this filesystem — storing millions of files — to an object store under Ceph… but then we spotted a potential alternative.

MooseFS offered all the features we were looking for, and seemed to have a much simpler deployment process than Ceph (the first two of which are sketched just below the list):

  • redundancy (potentially with erasure coding)
  • snapshots
  • tiered storage
  • one-node-at-a-time upgrades
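
As a quick aside on the first two of those, both replication goals and snapshots are driven from the client side with the standard MooseFS tools. A minimal sketch, where the mount point and snapshot path are hypothetical and the goal value is just an example:

# keep two copies of everything under this directory tree (recursive)
mfssetgoal -r 2 /mnt/mfs/media

# take a near-instant snapshot of a directory tree within the same MooseFS filesystem
mfsmakesnapshot /mnt/mfs/media /mnt/mfs/snapshots/media-before-migration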

We wanted to take advantage of the tiered storage so that our clients would be served their data from as close by as possible; in our case we wanted to ensure we had a full copy of customer data in at least two data-centres, separated by ~20ms RTT. It was quite easy to build our mfsmount.cfg file to accommodate this by tweaking mfsprefslabels based on pillar data in our Salt states.
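
The state that writes the config file is tiny. A rough sketch of the shape of it, where the pillar key is hypothetical and the real option lines (including the preferred-labels tweak) live in pillar per data-centre:

/etc/mfs/mfsmount.cfg:
  file.managed:
    - user: root
    - group: root
    - mode: '0644'
    # each location gets its own set of mfsmount options straight from pillar
    - contents_pillar: moosefs:mfsmount_cfg
    - require:
      - pkg: moosefs-client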

In many ways the hardest part of pre-production testing was making sure that the _netdev option was used in /etc/fstab so that our serving nodes would not fail to mount the filesystem during bootup. We ended up with this:

/srv/media:
  mount.mounted:
    - device: mfsmount
    - fstype: fuse
    - opts: _netdev
    - dump: 0
    - pass_num: 0
    - persist: True
    - mkmnt: True
    - require:
      - pkg: linux-image-amd64
      - file: /etc/hosts
      - pkg: moosefs-client
      - file: /etc/mfs/mfsmount.cfg
      - file: /etc/modules
      - pkg: fuse
      - cmd: modprobe fuse
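
With persist: True, the entry Salt writes to /etc/fstab ends up looking something like the standard FUSE form of an mfsmount mount, matching the device, options and dump/pass values above:

mfsmount  /srv/media  fuse  _netdev  0  0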

Making the transition from our old btrfs-over-NFS system to MooseFS in production was relatively pain-free. We rsynced the bulk of the data, made another pass to pick up changes, and then ran a final check after cutover to make sure there was nothing left to mop up.
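
The copy itself was nothing exotic. A sketch of the shape of it, with hypothetical source and destination paths, and a dry run standing in for the final check:

# first pass: bulk copy while the old store is still live
rsync -aHAX /srv/media-old/ /srv/media/

# second pass: pick up anything that changed during the first pass
rsync -aHAX /srv/media-old/ /srv/media/

# after cutover: a dry run to confirm there is nothing left to mop up
rsync -aHAXn --itemize-changes /srv/media-old/ /srv/media/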

We make heavy use of borgbackup and our MooseFS deployment is no exception. We store encrypted backups at a third, off-net location (also within the “EU data protection zone”) to keep the data not only secure from breaches but also safe from loss.
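
The backup job itself is an ordinary borg invocation. A minimal sketch, with a hypothetical repository host and path (the real repository is initialised with one of borg's encryption modes):

# nightly archive of the MooseFS mount to the off-net repository
borg create --stats --compression lz4 \
    ssh://borg@backup.example.net/srv/borg/media::'{hostname}-{now:%Y-%m-%d}' \
    /srv/media

# keep a rolling window of archives
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    ssh://borg@backup.example.net/srv/borg/media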

For now we are relying on our virtualisation platform, and its underlying replicated storage, to help guarantee the availability of our controlling server, with a metalogger in a separate location for disaster recovery. But in future we may wish to evaluate the MooseFS Pro licence for leader/follower controlling servers for even more durability.
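
Standing up the metalogger is pleasantly boring. A rough sketch of the Salt state, where the master hostname is hypothetical and the package, config and service names are the ones shipped by the upstream MooseFS Debian packages:

moosefs-metalogger:
  pkg.installed: []
  service.running:
    - enable: True
    - watch:
      - file: /etc/mfs/mfsmetalogger.cfg

/etc/mfs/mfsmetalogger.cfg:
  file.managed:
    # point the metalogger at the controlling server; unset options keep their defaults
    - contents: |
        MASTER_HOST = mfsmaster.example.internal
    - require:
      - pkg: moosefs-metalogger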

Update from May 2018

We’ve been using MooseFS in production for a few months now, and are still exceedingly pleased with it. fulcrm’s background job processing systems make constant use of the MooseFS cluster to store reports, and all our customers’ document attachments go into MooseFS. Despite suffering a drive failure — and even a brief reachability issue between two parts of our cluster caused by a third party outage — we’ve had perfect availability.