Is replacing disks to grow a ZFS pool viable on FreeNAS?

Well, the answers to this question will undoubtedly be complex and varied, so I will limit my discussion to home or small business systems. The response I give is based on my own experience in this area.

Let’s start by looking at a breakdown of the key hardware costs of my first FreeNAS server. The build was around the popular HP N40L microserver and included 16GB of ECC RAM and 5 x 3TB disks to create a 12TB RAID-Z1 volume.
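As a rough sanity check, RAID-Z1 usable capacity works out to (number of disks − 1) × disk size, since one disk's worth of space goes to parity. A minimal sketch of the arithmetic, using this build's 5 x 3TB figures:

```shell
#!/bin/sh
# Sketch: approximate RAID-Z1 usable capacity in TB.
# One disk's worth of space is consumed by parity.
raidz1_usable() {
    disks=$1
    size_tb=$2
    echo $(( (disks - 1) * size_tb ))
}

raidz1_usable 5 3   # prints 12 (TB), matching the 12TB volume above
```

The same arithmetic gives the pool sizes of the later builds described in this post.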

Sketch - N40L build costs

The disks represented about 70% of the build cost. Replacing them to grow the pool would have meant writing off most of that investment. No, it didn’t make sense in my situation to replace the disks with larger capacity disks. It did make sense to build a second FreeNAS system with larger disks, but what was I to do with the current system? Before I answer this question, let’s look at the build cost of the second system.

As time marched on, disk capacities increased and microservers became more powerful, but build costs were very similar. My second build consisted of an HP N54L this time, again with 16GB of ECC RAM, but with 5 x 4TB disks for a RAID-Z1 volume size of 16TB. A comparison of the AMD Turion II Neo N54L and N40L can be found here.

Sketch - N54L build cost

Again, the disks represented about 70% of the build cost. I had increased my pool capacity by 4TB, though my total capacity had increased by 16TB when both servers were considered together.

The question about what to do with the replaced server remained. The answer, of course, is to use this system to back up the primary server. However, the primary server has 16TB of available pool space compared with 12TB of available pool space on the backup server. The numbers don’t appear to fit.

It turns out that this is not as problematic as it first appears. It is well documented that ZFS takes a performance hit once more than 80% of available pool capacity is used. In my case, that would be when the primary server reached a pool threshold of 12.8TB. If I didn’t care about taking a performance hit on the backup server, the numbers began to look pretty good. I just had to try to squeeze 12.8TB of data on the primary server into 12TB on the backup server.
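The 80% figure is simple to work out for any pool. A minimal sketch of the arithmetic, using the pool sizes from this post:

```shell
#!/bin/sh
# Sketch: the 80% performance threshold for a given pool size in TB.
# The 0.8 factor is the widely documented ZFS guideline.
threshold() {
    awk -v size="$1" 'BEGIN { printf "%.1f\n", size * 0.8 }'
}

threshold 16   # primary pool:  prints 12.8 (TB)
threshold 12   # backup pool:   prints 9.6 (TB)
```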

Fast forward to a more recent time, a wiser FreeNAS user and a third system build. Several factors influenced the design of this third server:

  1. In retrospect, using the optical drive bay on the HP microservers to accommodate a fifth disk was probably not a good idea. Other FreeNAS enthusiasts have established that the maximum drive size in this bay is 4TB, thereby limiting the overall RAID-Z1 pool capacity to 16TB.
  2. The processors in both the N40L and N54L weren’t powerful enough to handle Plex transcoding.

I still found the HP microservers excellent value for money and decided to look at the offerings in their newer Generation 8 (G8) range, which replaced the NxxL microservers of the G7 range. I settled on an HP G8 with a Xeon E3-1220L v2 processor, again with 16GB ECC RAM, but this time with 4 x 6TB disks to give me an 18TB RAID-Z1 volume. A comparison of the E3-1220L v2 with the AMD Turion II Neo N54L can be found here.

Sketch - HP Costs 2

This time, because of the more expensive processor, the disks represented just 52% of the build cost. I had increased my pool capacity by just 2TB, though my total capacity had increased by 18TB when all three servers were considered together. The big advantage now, though, was that I could back up the full 18TB of the new primary server (14.4TB when the 80% limit is considered) across the two backup servers totalling 28TB, without the backup servers having to take a performance hit.

The major challenge I faced with the new system build was migrating from a five-disk system to a four-disk system. A limitation of ZFS is that you cannot expand a RAID-Z pool by adding a single disk to it. You can only grow the pool by replacing ALL of the disks it is built on with higher-capacity disks. Similarly, it’s not possible to remove a disk from the pool.
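For completeness, this is roughly what the grow-by-replacement procedure looks like on the command line. The pool and device names here are placeholders, not from my systems, and the script defaults to a dry run that only prints each command; on a live pool, each replacement must finish resilvering before the next disk is swapped:

```shell
#!/bin/sh
# Sketch of growing a pool by replacing every disk (hypothetical
# pool and device names). DRY_RUN=1 prints each command instead of
# executing it, so the sequence can be reviewed first.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

POOL=tank
OLD_DISKS="ada0 ada1 ada2 ada3 ada4"
NEW_DISKS="da0 da1 da2 da3 da4"

# With autoexpand on, the pool grows automatically once the last
# old disk has been replaced.
run zpool set autoexpand=on "$POOL"

set -- $NEW_DISKS
for old in $OLD_DISKS; do
    new=$1; shift
    run zpool replace "$POOL" "$old" "$new"
    # On a live pool: check `zpool status` and wait for the resilver
    # to complete before replacing the next disk.
done
```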

In order to move forward, I had to create a new pool on the higher-capacity four-disk server, recreate all the datasets and migrate the data from the old datasets on the lower-capacity five-disk server to the new ones. Once complete, though, it gave me the prospect of continuing to grow the pool up to the disk size limits of the main drive bays in the G8 server. It isn’t clear what these limits are yet. Will they support the new 10TB disks? I haven’t found any evidence of anyone trying these disks in a G8 server. However, I have found evidence of 8TB disks being used.
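The migration itself can be scripted with ZFS replication. A sketch, with placeholder pool, dataset and host names (oldtank, newtank, freenas-g8 are not from my setup), again defaulting to a dry run that prints each step:

```shell
#!/bin/sh
# Sketch of migrating one dataset from the old five-disk pool to the
# new four-disk server. DRY_RUN=1 prints each step instead of
# executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else eval "$@"; fi; }

SRC=oldtank/media
DST=newtank/media
HOST=freenas-g8

# Snapshot the source dataset, then replicate the full stream
# (properties and snapshots included) to the new pool over SSH.
run "zfs snapshot ${SRC}@migrate"
run "zfs send -R ${SRC}@migrate | ssh ${HOST} zfs recv -F ${DST}"
```

Repeating this per dataset recreates the old layout on the new pool; a final incremental send can pick up any changes made during the transfer.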

This post and the previous post provide some clues on two pieces of the puzzle required to edge towards pool redundancy. More to follow in a future post.

References

  1. 80% disk capacity warning
  2. AMD Turion II Neo N40L vs N54L
  3. Maximum disk size on a N54L Proliant server
  4. AMD Turion II Neo N54L vs Intel Xeon E3-1220L v2
  5. RAID-Z1 or RAID-Z2 for FreeNAS?
  6. Review: Seagate Archive 8TB 3.5″ Internal Hard Drive
  7. 80% capacity fill rule – How far past that is safe?
