Building Out

by Marek

[Image: stylised photograph of servers in a data-centre]

The Build

For our build in Geneva, our third data-centre, the focus was on business continuity and resilience. We chose to expand abroad to gain extreme diversity of energy supply, network connectivity, and even geo-politics. We carried those themes of reliability through the infrastructure build itself, which is a little easier for a “boutique” hosting provider like us because we are comfortable being very hands-on with hardware, software, routers, switches, and electronics. Having planned the overall architecture in spring, we began purchasing equipment in July with some slightly unusual requirements:

  • a mix of traditional “spinning rust” and SSDs, allowing us to balance performance and capacity with technologies like bcache (see the sketch after this list)
  • disks sourced from three separate suppliers (two in the UK, one in Germany) to minimise the chance of all drives coming from a single batch, in the hope of reducing common-mode failures
  • disks that are all, somewhat ironically, CCTV models, built specifically to withstand the higher temperatures common in hidden-away CCTV systems (or, in our case, data-centres with no chilling equipment besides what might be fairly warm air from outdoors)
  • servers sourced from more than one location
  • cryptographic key material for the routers’ and VPNs’ SSL certificates generated on an air-gapped system with a hardware RNG (see the key-generation sketch below)
  • install sources downloaded, and their checksums verified, on firewalled hardware running a less ubiquitous architecture, PowerPC G5 (see the verification sketch below)
  • visual, temperature, and humidity monitoring via a Raspberry Pi, Pi Camera, and Pi Sense HAT (see the monitoring sketch below)
  • remotely-managed PDUs connected to diverse power feeds, installed by the data-centre’s electricians by special request
  • connectivity for another RIPE Atlas probe provided to us at UKNOF35
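
To give a flavour of the first point, here is a rough sketch of pairing an SSD cache with a spinning disk using bcache’s tooling and sysfs interface. The device names are illustrative, and it assumes make-bcache from bcache-tools plus root privileges:

```python
# Rough sketch: pair an SSD cache with a spinning disk via bcache.
# Device names are hypothetical; adapt to your hardware.
import subprocess

BACKING = "/dev/sda"  # hypothetical large "spinning rust" drive
CACHE = "/dev/sdb"    # hypothetical SSD to act as the cache

# Format both devices in one invocation; bcache attaches them together.
subprocess.run(["make-bcache", "-B", BACKING, "-C", CACHE], check=True)

# Ensure the kernel has registered both devices (udev usually does this;
# re-registering an already-known device raises an error we can ignore).
for dev in (BACKING, CACHE):
    try:
        with open("/sys/fs/bcache/register", "w") as f:
            f.write(dev)
    except OSError:
        pass

# Favour write performance on the resulting bcache device.
with open("/sys/block/bcache0/bcache/cache_mode", "w") as f:
    f.write("writeback")
```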
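
For the key material, something along these lines, run on the air-gapped machine, does the generation. This is a minimal sketch using Python’s cryptography library; the hostname and output paths are placeholders, and our actual tooling may differ:

```python
# Minimal sketch of offline key and CSR generation with the Python
# "cryptography" library. On the air-gapped box a hardware RNG feeds the
# kernel entropy pool (e.g. via rngd), which OpenSSL ultimately draws from.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "router1.example.net"),
    ]))
    .sign(key, hashes.SHA256())
)

with open("router1.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("router1.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```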
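
Verifying install media boils down to hashing the download and comparing against the published digest. A minimal sketch, with a placeholder file name and digest:

```python
# Sketch of the checksum verification run on the firewalled G5 before
# trusting an install image. The file name and expected digest below are
# placeholders, not real values.
import hashlib

EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256sum(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

digest = sha256sum("installer.iso")
if digest != EXPECTED:
    raise SystemExit(f"checksum mismatch: got {digest}")
print("checksum OK")
```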
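
And the Sense HAT makes the environmental side of the monitoring almost trivial. A sketch using the official sense_hat library, with an illustrative alert threshold:

```python
# Sketch of the environmental monitoring loop on the Raspberry Pi.
import time
from sense_hat import SenseHat

sense = SenseHat()

while True:
    temperature = sense.get_temperature()  # degrees Celsius
    humidity = sense.get_humidity()        # percent relative humidity
    print(f"{temperature:.1f}C {humidity:.1f}%RH")
    if temperature > 35.0:                 # hypothetical alert threshold
        sense.show_message("HOT!")         # flash a warning on the LED matrix
    time.sleep(60)
```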

The routers arrived first, and so we began configuring what would become our core network. This started out effectively as a secure enclave within our office, statefully firewalled off and only able to connect outwards to fetch software updates. Once the routers and switches were ready we began to prepare the servers.

At Faelix we use automation heavily wherever we can, and the plan for our virtual server cluster in Geneva was to deploy and manage it using SaltStack. After making some modifications to the Salt states used on our Manchester cluster, and adding improvements learned from a year of running it, a single “highstate” command (sketched below) brought the new cluster into its desired configuration. Coincidentally, England was then blessed with a heatwave: the perfect opportunity to stress-test the new arrivals! They ran for a couple of days at full tilt, generating lots of heat and exercising their RAM, and no crashes or errors were detected.
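
For the curious, triggering that from the master looks roughly like this via Salt’s Python client API, equivalent to running salt 'geneva*' state.highstate at the command line; the target pattern here is illustrative:

```python
# Sketch of kicking off the deployment from the Salt master.
import salt.client

local = salt.client.LocalClient()
# Apply the full highstate to every minion in the new cluster.
results = local.cmd("geneva*", "state.highstate")
for minion, states in results.items():
    failed = [s for s in states.values()
              if isinstance(s, dict) and not s.get("result")]
    print(minion, "OK" if not failed else f"{len(failed)} state(s) failed")
```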

The Commissioning

The week before we travelled was something of a whirlwind: components were still arriving, final configuration changes had to be tested, we were attending UKNOF35, and we still had to pack everything into boxes for the long drive down to Geneva. Thankfully it all fitted into two cars (only just!) and, continuing our theme of diversity (tongue in cheek), we each took separate routes for the 800-mile drive before converging on the data-centre.

Bringing the equipment online took a few days: racking, cabling, testing, and minor configuration tweaks all required a little time. Finding a suitable mounting for the environmental-monitoring Pi proved a little tricky. There were a few other minor hiccups along the way, but eventually, late on Friday, we were confident enough in the setup to switch on all the servers before retreating to a local eatery for a fantastic dinner.

A few more days of testing followed while upstream providers updated their prefix filters. We wanted to do our own reachability testing (sketched below) and to check that various internal systems were working correctly before we made any announcements.
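
Our reachability checks were along these lines: from an external vantage point, confirm that key services in the new ranges answer TCP connections. The hosts and ports below are placeholders (documentation addresses), not our real endpoints:

```python
# Simple external reachability sketch using plain TCP connects.
import socket

ENDPOINTS = [
    ("192.0.2.1", 22),    # example router (documentation address)
    ("192.0.2.10", 443),  # example web front-end (documentation address)
]

for host, port in ENDPOINTS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{host}:{port} reachable")
    except OSError as exc:
        print(f"{host}:{port} FAILED: {exc}")
```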

Overall we have found working with the Infomaniak team a pleasure. In particular I would like to thank Matteo (colocation specialist) and Rene (network operations) for their work welcoming Faelix to Geneva.

It’s been a year in the making, but finally we’re able to take orders for hosting in a facility powered by 100% renewable energy, with a PUE below 1.1, in a well-connected European country with judicial oversight of Internet surveillance.