Developed by its community and owned by iXsystems®, FreeNAS® is the number one storage operating system in the world. It is free, open-source, enterprise-grade network attached storage software. FreeNAS® uses the ZFS file system, which is not exclusive to FreeNAS® but is an extremely powerful file system, volume manager, and software RAID controller in one.
ZFS has its own names for its software RAID implementations. They are functionally equivalent to the traditional RAID levels, with only minor differences that come from the extra resiliency built into ZFS's architecture. This article outlines what each relevant RAID level does and what its equivalent is inside ZFS.
ZFS offers functionally similar RAID levels to traditional hardware RAID, just with different names and implementations. It builds its storage from groups of drives called "VDevs" (virtual devices), each of which can be its own small RAID. Joining multiple VDevs together creates a "zpool", after which the VDevs cannot be removed.
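As a minimal sketch of how VDevs and zpools fit together (the pool name "tank" and the FreeBSD-style device names ada0 through ada3 are placeholders), the command line looks like this:

    # create a pool from one mirrored VDev
    zpool create tank mirror ada0 ada1
    # add a second mirrored VDev; ZFS stripes new data across both
    zpool add tank mirror ada2 ada3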
RAID0 - Also known as striping, RAID0 spreads your data across multiple drives to get the added speed of them all reading and writing together. It is the highest performing, most space efficient RAID level, but gives no data security: a single drive failure can lose the whole storage pool! ZFS calls this a striped VDev, and it has slightly more protection against silent data corruption than a standard RAID0 because of ZFS's checksum-based self-healing. A disk failure will still cause the pool to be lost, though, so this is not a good level for anyone who can't accept a big risk of losing all their data.
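A striped pool is what you get when you list bare disks with no grouping keyword (again, pool and device names are placeholders):

    # each disk becomes its own single-disk VDev; ZFS stripes across all of them
    zpool create tank ada0 ada1 ada2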
RAID1 - This level mirrors data between drives, essentially keeping a local duplicate of all your data. In ZFS this is a mirrored VDev. ZFS lets you mirror across as many drives as you want: each extra copy costs a drive's worth of space, but lets the VDev survive one more drive failure. RAID1 is often used in conjunction with other RAID levels or for boot drives, as its space efficiency is worse than the parity RAID levels.
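For example, a two-way or three-way mirror could be created like this (hypothetical names):

    # two-way mirror: survives one drive failure
    zpool create tank mirror ada0 ada1
    # or a three-way mirror: survives two drive failures at one-third usable capacity
    zpool create tank mirror ada0 ada1 ada2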
RAID5 - In ZFS, RAID5 is called RAIDZ1. RAID5 uses a parity block, which gives it the ability to rebuild lost data if a drive fails. It also stripes data across the drives for the performance benefit of RAID0 (the parity data is striped across the disks too, so you are not limited by the speed of a dedicated parity drive). It is one of the most popular RAID levels because of its combination of performance and data security. RAID5 protects your data against one drive failure, but if another drive goes out before the rebuild finishes, the whole pool can be lost. RAIDZ1 has a benefit over RAID5 in that it avoids the write-hole phenomenon that usually plagues parity-and-striping RAID levels.
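A RAIDZ1 VDev could be created like this (placeholder names; one drive's worth of capacity goes to parity):

    # four-disk RAIDZ1: roughly three disks of usable space, one of parity
    zpool create tank raidz1 ada0 ada1 ada2 ada3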
RAID6 - RAID6 is similar to RAID5 but with two drives' worth of parity instead of one. ZFS's equivalent is RAIDZ2. It is a fairly safe RAID level because it can withstand two drive failures and still rebuild, meaning that if one drive fails you can still lose another before or while rebuilding without losing your pool. Just like RAIDZ1, RAIDZ2 avoids the write-hole phenomenon that RAID6 normally suffers from unless some sort of workaround is implemented.
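A RAIDZ2 VDev follows the same pattern (placeholder names; two drives' worth of capacity go to parity):

    # six-disk RAIDZ2: roughly four disks of usable space, two of parity
    zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5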
ZFS also has RAIDZ3, which is exactly what it sounds like: three parity blocks instead of two, so it can withstand three drive failures and still rebuild. It is rarely used, reserved for cases where data security is the top priority.
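For completeness (placeholder names; three drives' worth of capacity go to parity):

    # eight-disk RAIDZ3: roughly five disks of usable space, three of parity
    zpool create tank raidz3 ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7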
RAID10 - RAID10 is just mirroring and striping used together. Drives are mirrored in pairs, then data is striped across all the mirrored pairs to create a single virtual drive. It has the redundancy benefit of RAID1 and the performance benefit of RAID0. The only area where RAID10 really lacks is efficient use of capacity: all data is stored twice, so you get only half the effective capacity of a RAID0 (just like RAID1!). RAID10 in ZFS is simply striping across mirrored VDevs.
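In ZFS this is just multiple mirror VDevs in one pool (placeholder names):

    # two mirrored VDevs; ZFS stripes across them, giving RAID10 behavior
    zpool create tank mirror ada0 ada1 mirror ada2 ada3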
RAID01 - RAID01 is the same as RAID10 but backwards... and worse. It stripes data across sets of disks, then mirrors those sets. Performance is identical to RAID10, but the chance of losing your pool is much higher, and rebuild performance is far worse, increasing the chance of another drive failure during the rebuild. As you add more disks to a RAID01, the chance of losing your storage pool to two disk failures is never better than 50%; with RAID10, the more drives you add, the closer that chance gets to 0%. For example, with eight drives, a RAID01's first failure takes down an entire four-drive stripe set, and the pool is then lost if any of the four drives on the other side fails (a 4-in-7, roughly 57%, chance), while in an eight-drive RAID10 only the failed drive's single mirror partner is fatal (a 1-in-7, roughly 14%, chance). RAID01 is never used over RAID10 anymore.
RAID50 - RAID50 is essentially several separate RAID5 sets (each with a drive's worth of parity) with a RAID0 across the top of all of them, striping data across the sets. You can withstand one drive failure in each of the RAID5 sets. This gives much greater redundancy than a single RAID5, but less capacity efficiency, since you need a drive's worth of parity per set. Your whole pool can still fail if two drives from the same RAID5 set go down. RAID50 is seen by some as a middle ground in the performance/capacity trade-off between RAID10 and RAID5. In ZFS, this is done by creating multiple RAIDZ1 VDevs and striping across them.
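The ZFS version is just multiple RAIDZ1 VDevs in one pool (placeholder names):

    # two RAIDZ1 VDevs striped together (RAID50): one parity drive per VDev
    zpool create tank raidz1 ada0 ada1 ada2 raidz1 ada3 ada4 ada5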
RAID60 - Similar to RAID50, RAID60 is multiple RAID6 sets striped across each other. It uses up a lot of capacity, as each RAID6 set requires two drives' worth of parity, but it provides a huge amount of redundancy: two drives can fail in any of the RAID6 sets and the pool can still be rebuilt. As long as three drives from the same set don't fail, the data can be rebuilt, giving a large amount of insulation from drive failures. In ZFS, RAID60 is the equivalent of multiple RAIDZ2 VDevs striped across each other. This is a very space inefficient option: you are trading storage space for a large amount of redundancy.
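The ZFS version follows the same pattern (placeholder names; RAIDZ2 VDevs need at least four disks each to be useful):

    # two RAIDZ2 VDevs striped together (RAID60): two parity drives per VDev
    zpool create tank raidz2 ada0 ada1 ada2 ada3 raidz2 ada4 ada5 ada6 ada7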
One of the biggest issues with "stripe and parity" RAID levels (RAID5, RAID6, RAID50, and RAID60) is what's known as the "write hole" phenomenon: if power is lost mid-write, the data and its corresponding parity can be left inconsistent, so the affected stripe cannot be correctly rebuilt. RAIDZ does not have this problem, because its copy-on-write design never overwrites live data in place. The trade-off is read performance: because every block is spread across all of a VDev's disks and verified against its checksum, random reads of small chunks of data are limited to roughly the speed of one drive per VDev.
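Those same checksums let you proactively verify and repair a pool at any time (pool name is a placeholder):

    # read every block in the pool, verify its checksum, and repair any silent corruption
    zpool scrub tank
    # check scrub progress and results
    zpool status tank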
To learn more about ZFS RAID, check out our knowledge base or watch our Tech Tip series on YouTube.
Contact us to discuss your storage needs and to find out why the Storinator is right for your business.
Contact 45Drives