Dec 22: Why we use our own hardware at Fastmail


This is the twenty-second post in the Fastmail Advent 2024 series. The previous post was Dec 21: Fastmail In A Box. Check back tomorrow for another post.

Why we use our own hardware

There has recently been talk of cloud repatriation, where companies move from the cloud back to on premises, with some particularly vocal examples.

Fastmail has a long history of running on our own hardware. We have over two decades of experience running and optimising our systems to use our own bare metal servers effectively.

We get far better cost optimisation compared to moving everything to the cloud because:

  1. We know our short, medium and long term usage patterns, requirements and growth very well. This means we can plan our hardware purchases ahead of time and don’t need the rapid dynamic scaling that the cloud provides.
  2. We have in house operations experience installing, configuring and running our own hardware and networking. These are skills we’ve had to retain and develop in house since we’ve been doing this for 25 years.
  3. We are able to use our hardware for long periods. We find our hardware can provide a useful life of anywhere from 5-10 years, depending on what it is and when in the global technology cycle it was bought, meaning we can amortise and depreciate the cost of any hardware over many years.

Yes, that means we have to do more ourselves, including planning, choosing, buying, installing, etc, but the tradeoff for us has been, and we believe continues to be, well worth it.

Hardware over the years

Of course, over the 25 years we’ve been running Fastmail we’ve been through a number of hardware changes. For many years, our IMAP server storage platform was a combination of spinning rust drives and ARECA RAID controllers. We tended to use faster 15k RPM SAS drives in RAID1 for our hot meta data, and 7.2k RPM SATA drives in RAID6 for our main email blob data.

In fact it was slightly more complicated than this. Email blobs were written to the fast RAID1 SAS volumes on delivery, but a separate archiving process would then move them to the SATA volumes at times of low server activity. Support for all of this had been added to cyrus and our tooling over the years in the form of separate “meta”, “data” and “archive” partitions.

Moving to NVMe SSDs

A few years ago, however, we made our biggest hardware upgrade ever. We moved all our email servers to a new 2U AMD platform with pure NVMe SSDs. The density increase (24 x 2.5″ NVMe drives vs 12 x 3.5″ SATA drives per 2U) and performance increase were enormous. We found that these new servers performed even better than our initial expectations.

At the time we upgraded, however, NVMe RAID controllers weren’t widely available, so we had to decide how to manage redundancy. We considered a RAID-less setup using raw SSDs on each machine with synchronous application level replication to other machines, but the software changes required were going to be more complicated than expected.

We looked at using classic Linux mdadm RAID, but the write hole was a concern and the write cache didn’t seem well tested at the time.

We decided to have a look at ZFS and at least test it out.

Despite some of the cyrus on-disk database structures being fairly unfriendly to ZFS copy-on-write semantics, they were still incredibly fast at all the IO we threw at them. And there were some other wins as well.

ZFS compression and tuning

When we rolled out ZFS for our email servers we also enabled transparent Zstandard compression. This has worked very well for us, saving about 40% of the space on all our email data.

We’ve also recently done some additional calculations to see if we could tune some of the parameters better. We sampled 1 million emails at random and calculated how many blocks would be required to store those emails uncompressed, and then with ZFS record sizes of 32k, 128k or 512k and zstd-3 or zstd-9 compression options. Although ZFS RAIDz2 seems conceptually similar to classic RAID6, the way it actually stores blocks of data is quite different, so you have to take into account volblocksize, how files are split into logical recordsize blocks, and the number of drives when doing the calculations.
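
As a rough illustration of the kind of estimate involved, here is a minimal Python sketch, not our actual tooling: it assumes each file is compressed per ZFS record using the python zstandard bindings, that allocations round up to 4K sectors (ashift=12), and it ignores RAIDz2 parity and padding for simplicity.

import math
import zstandard  # python-zstandard bindings

def sectors_needed(email_bytes: bytes, recordsize: int, level: int, sector: int = 4096) -> int:
    """Estimate the 4K sectors needed to store one email at a given recordsize and zstd level."""
    cctx = zstandard.ZstdCompressor(level=level)
    total = 0
    for off in range(0, len(email_bytes), recordsize):
        record = email_bytes[off:off + recordsize]
        # ZFS stores whichever is smaller: the compressed record or the raw record
        stored = min(len(cctx.compress(record)), len(record))
        total += max(1, math.ceil(stored / sector))
    return total

# Sum sectors_needed() over a large random sample of emails for each combination
# of recordsize (32k/128k/512k) and zstd level (3/9) to compare the space savings.

The actual numbers from our sample of just over a million emails were: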

               Emails: 1,026,000
           Raw blocks: 34,140,142
 32k & zstd-3, blocks: 23,004,447 = 32.6% saving
 32k & zstd-9, blocks: 22,721,178 = 33.4% saving
128k & zstd-3, blocks: 20,512,759 = 39.9% saving
128k & zstd-9, blocks: 20,261,445 = 40.7% saving
512k & zstd-3, blocks: 19,917,418 = 41.7% saving
512k & zstd-9, blocks: 19,666,970 = 42.4% saving

This showed that the defaults of 128k record size and zstd-3 were already pretty good. Moving to a record size of 512k improved compression over 128k by a bit over 4%. Given all meta data is cached separately, this seems a worthwhile improvement with no significant downside. Moving to zstd-9 improved compression over zstd-3 by about 2%. Given the CPU cost of compression at zstd-9 is about 4x that of zstd-3, even though emails are immutable and tend to be kept for a long time, we’ve decided not to make this change.

ZFS encryption

We always enable encryption at rest on all of our drives. This was previously done with LUKS, but with ZFS it is built in. Again, this reduces overall system complexity.

Going all in on ZFS

So after the success of our initial testing, we decided to go all in on ZFS for all our large data storage needs. We’ve now been using ZFS on all our email servers for over 3 years and have been very pleased with it. We’ve also moved all our database, log and backup servers to ZFS on NVMe SSDs, with equally good results.

SSD lifetimes

The flash memory in SSDs has a finite life and a finite number of times it can be written to. SSDs use increasingly complex wear levelling algorithms to spread out writes and increase drive lifetime. You’ll normally see the quoted endurance of an enterprise SSD as either an absolute figure of “Lifetime Writes”/“Total Bytes Written”, like 65 PBW (petabytes written), or a relative per-day figure of “Drive Writes Per Day”, like 0.3, which you can convert to a lifetime figure by multiplying by the drive size and the drive’s expected lifetime, which is normally assumed to be 5 years.
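
As a hedged worked example of that conversion (the drive size here is hypothetical, not a specific model we use):

def dwpd_to_lifetime_writes_tb(dwpd: float, drive_size_tb: float, years: float = 5.0) -> float:
    """Convert a 'Drive Writes Per Day' rating into total lifetime writes in TB."""
    return dwpd * drive_size_tb * 365 * years

# e.g. a hypothetical 15.36 TB drive rated at 0.3 DWPD:
#   0.3 * 15.36 * 365 * 5 ≈ 8,410 TB, i.e. roughly 8.4 PB written over 5 years
print(dwpd_to_lifetime_writes_tb(0.3, 15.36))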

Although we could calculate IO rates for the existing HDD systems, we were making a significant number of changes in moving to the new systems. Switching to a COW filesystem like ZFS, removing the special-cased meta/data/archive partitions, and the massive latency reduction and performance improvements mean that things which previously might have taken extra time and ended up batching IO together are now so fast that they actually cause additional separate IO operations.

So one big open question we had was: how fast would the SSDs wear in our actual production environment? After several years, we now have some clear data. This is from one server picked at random, but it is fairly consistent across the fleet of our oldest servers:

# smartctl -a /dev/nvme14
...
Percentage Used:                    4%

At this rate, we’ll replace these drives due to increased drive sizes, or entirely new physical drive formats (such as E3.S, which finally appears to be gaining traction), long before they get close to their rated write endurance.
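
A back-of-the-envelope extrapolation shows why: assuming roughly three years of production use (per the fleet age mentioned here) consumed about 4% of rated endurance, linear wear gives a nominal horizon of decades.

years_in_service = 3   # assumption, based on the 3+ year old fleet described here
percentage_used = 4    # from the smartctl output above
projected_years = years_in_service * 100 / percentage_used
print(f"~{projected_years:.0f} years to reach 100% of rated endurance at this rate")
# ~75 years, so capacity and form-factor changes will retire these drives first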

We’ve also anecdotally found SSDs to be much more reliable than HDDs for us. Although we’ve only ever used datacentre class SSDs and HDDs, failures and replacements every few weeks were a regular occurrence on the old fleet of servers. Over the last 3+ years, we’ve only seen a couple of SSD failures in total across the entire upgraded fleet. This is easily less than one tenth the failure rate we used to have with HDDs.

Storage cost calculation

After converting all our email storage to NVMe SSDs, we recently looked at our data backup solution. At the time it consisted of a number of older 2U servers with 12 x 3.5″ SATA drive bays, and we decided to do some cost calculations on three options:

  1. Move to cloud storage.
  2. Upgrade the HDDs in the existing servers.
  3. Upgrade to new NVMe SSD machines.

1. Cloud storage:

We looked at various providers’ per TB per month prices, and the corresponding yearly price for 1000TB/1PB (prices as at Dec 2024).

Some of these (e.g. Amazon) have potentially significant bandwidth fees as well.

It’s interesting to see the spread of prices here. Some also have a bunch of weird edge cases, e.g. “The S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes require an additional 32 KB of data per object”. Given the long retrieval times and the extra per-object overhead, you’d probably want to store small incremental backups in regular S3, then, once you’ve collected enough, build one biggish object to push down to Glacier. This adds implementation complexity.
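
The batching approach itself is simple enough; a minimal, hypothetical sketch of pushing one consolidated archive object into Glacier Deep Archive with boto3 might look like the following (the bucket and key names are invented, and a multipart upload would be more appropriate for very large archives). The real complexity is in reworking the backup pipeline around it.

import boto3  # assumes AWS credentials are already configured

s3 = boto3.client("s3")

def push_consolidated_backup(archive_path: str, bucket: str = "example-backups") -> None:
    # Small incremental backups would stay in standard S3; only once enough have
    # accumulated would they be bundled into one large object like this, to amortise
    # the per-object overhead and the retrieval constraints mentioned above.
    with open(archive_path, "rb") as f:
        s3.put_object(
            Bucket=bucket,
            Key="archives/" + archive_path.rsplit("/", 1)[-1],
            Body=f,
            StorageClass="DEEP_ARCHIVE",
        )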

  • Pros: No limit to the amount we can store. Assuming we use an S3 compatible API, we can choose between multiple providers.
  • Cons: The implementation cost of converting our existing backup system, which assumes local POSIX files, to an S3 style object API is uncertain and possibly significant. The lowest cost options require extra careful consideration of implementation details and their special limitations. An ongoing monthly cost that will only increase as the amount of data we store grows. Uncertain whether prices will go down, or even go up. Possibly significant bandwidth costs depending on the provider.

2. Upgrade HDDs

Seagate Exos 24 drives are 3.5″ 24T HDDs. These would allow us to triple the storage in the existing servers. Each HDD is about $500, so upgrading one 2U machine would cost about $6,000 and give storage of 220T or so.

  • Pros: Reuses existing hardware we already have. Upgrades can be done a machine at a time. Fairly low price.
  • Cons: Will the existing units handle 24T drives? What does the rebuild time on a drive failure look like? It’s almost a day for 8T drives already, so possibly close to a week for a failed 24T drive (see the rough extrapolation below)? Is there enough IO performance to handle daily backups at capacity?
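
For the rebuild time concern, a rough, hedged extrapolation (this assumes rebuild time scales roughly linearly with capacity; real rebuilds on a loaded array are usually slower still):

hours_for_8t = 24   # "almost a day" for an 8T drive today
new_drive_t = 24
estimated_hours = hours_for_8t * new_drive_t / 8
print(f"~{estimated_hours:.0f} hours (~{estimated_hours / 24:.0f} days) best case for a 24T rebuild")
# ~72 hours best case; under production load, close to a week is plausible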

3. Upgrade to new hardware

As noted above, SSDs are denser (24 x 2.5″ per 2U vs 12 x 3.5″ per 2U), more reliable, and now higher capacity – up to 61T per 2.5″ drive. A single 2U server with 24 x 61T drives arranged as 2 x 12-drive RAIDz2 gives about 1220T. Each drive is about $7k right now (prices change), so all up 24 x $7k = $168k + ~$20k for the server = ~$190k for over 1000T of storage as a one-time cost.

  • Pros: Much higher sequential and random IO than HDDs will ever have. Price is less than 1 year of standard S3 storage. Internal to our WAN, so no bandwidth costs and very low latency. No new development required; the existing backup system will just work. Consolidates onto a single 2U platform for all storage (cyrus, db, backups), and onto SSDs for all storage. Significant space and power savings over the existing HDD based servers.
  • Cons: Greater up front cost. Still need to forecast and buy more servers as backups grow.

One thing you don’t see in this calculation is datacentre space, power, cooling, etc. The reason is that, compared to the amortised yearly cost of a storage server like this, these are actually relatively small these days, on the order of $3000/2U/year. Calculating people time is harder. We have a lot of home built automation systems that mean installing and running one more server has minimal marginal cost.
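
To make the comparison concrete, here is a hedged sketch of the numbers using only the figures quoted above, plus one assumption: a cloud price of roughly $23/TB/month for standard object storage, standing in for the provider price table, with bandwidth excluded. The 5 year amortisation period is the conservative end of the 5-10 year hardware life mentioned earlier.

TB_NEEDED = 1000
FACILITY_PER_2U_YEAR = 3000             # datacentre space/power/cooling estimate from above

# Option 1: cloud object storage (assumed ~$23/TB/month, bandwidth excluded)
cloud_per_year = 23 * TB_NEEDED * 12    # ~$276,000/year, ongoing

# Option 2: refill existing 2U servers with 24T HDDs (~$6,000 per ~220T server)
hdd_servers = -(-TB_NEEDED // 220)      # ceiling division: 5 servers
hdd_capex = hdd_servers * 6000          # ~$30,000 up front, but IO and rebuild questions remain

# Option 3: one new 2U NVMe server (~$190k for ~1220T usable), amortised over 5 years
ssd_per_year = 190_000 / 5 + FACILITY_PER_2U_YEAR   # ~$41,000/year

print(cloud_per_year, hdd_capex, ssd_per_year)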

Result

We ended up going with the new 2U servers option:

  • The 2U AMD NVMe platform with ZFS is a platform we already have experience with
  • SSDs are much more reliable and have much higher IO compared to HDDs
  • No uncertainty around super large HDDs, RAID controllers, rebuild times, shuffling data around, etc.
  • Significant space and power savings over the existing HDD based servers
  • No new development required; we can use our existing backup system and code
  • Long expected hardware lifetime, controlled upfront cost, and the hardware cost can be depreciated

So far this has worked out very well. The machines have bonded 25Gbps network links, and when filling them from scratch we were able to saturate those links, streaming around 5Gbytes/second of data from our IMAP servers and compressing and writing it all down to a RAIDz2 zstd-3 compressed ZFS dataset.

Conclusion

Running your own hardware might not be for everyone and has distinct tradeoffs. But when you have the experience and the knowledge of how you expect to scale, the cost improvements can be significant.
