Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition
Document 2203416.1: ZFSSA Storage Expansion Behavior
Solution Type: Technical Instruction (Sure Solution)
In this Document
  Applies to
  Goal
  Solution
  References
Applies to:
Oracle ZFS Storage ZS5-4 - Version All Versions and later
Sun ZFS Storage 7320 - Version All Versions and later
Sun ZFS Storage 7420 - Version All Versions and later
Oracle ZFS Storage ZS3-2 - Version All Versions and later
Oracle ZFS Storage ZS3-4 - Version All Versions and later
7000 Appliance OS (Fishworks)

Goal
The goal of this document is to better understand:
  - the after-effects of adding storage to an existing pool
  - the limitations of the storage expansion algorithm
  - the specific behavior of mirror and N-parity RAID profiles when configuring and growing storage
Terminology:
  - BD: the number of data disks the pool is initially built with (the base configuration).
  - AD: the number of data disks added in an expansion step; AD1, AD2, ... denote successive additions.
  - JBOD: a disk shelf; in the examples below a full JBOD holds 24 disks and a half JBOD holds 12.
Assumptions: When this document refers to disks, that means data disks and not log, cache, or spare devices. Although this information can be applied to data disks of any size, the examples assume data disks are 2TB.
Solution
To add storage to a pool in the BUI, click the ADD button in the storage configuration screen. To add storage to a pool in the CLI, use the command 'add' in the 'configuration storage' context. If there are multiple pools, first specify the pool using the command 'set pool=poolname'.
Do not perform a pool configuration operation while a disk firmware upgrade is in progress. It is also highly recommended that any disks later added to the pool be of the same type, size and rotational speed as the disks used to configure the pool initially.
Disclaimer: Disks added to a pool cannot be removed without destroying the pool.
Caution: Storage can only be added using the same profile that was used to configure the pool initially. If there is insufficient storage to configure the system with the current profile, some attributes can be sacrificed. For example, adding a single disk shelf to a double-parity RAID-Z NSPF (no single point of failure) configuration makes it impossible to preserve the NSPF characteristic. The disk shelf can still be added, but the new RAID-Z stripes are created within that shelf, sacrificing NSPF in the process.
After-effects of adding storage
This topic is discussed to a certain extent in "Effects of Adding Storage Expansion to Overcome a Full Data Pool (Doc ID 1918057.1)". ZFS favors less-full disks for writes, which in this case means the recently added disks. Because ZFS is copy-on-write, the utilization of all disks in the pool evens out over time. For write-intensive I/O, the majority of the I/O is therefore directed to the recently added disks, and IOPS improve as the balancing progresses. The balancing of utilization depends entirely on writes, so for read-mostly data it may take longer to complete.
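To make this rebalancing behavior concrete, here is a small toy model. It is not the actual ZFS allocator, and the disk counts and fill levels are invented for illustration; it only shows why steering new writes to the least-full devices gradually evens out utilization between old and new disks.

    # Toy model only -- not the actual ZFS allocator. Disk counts and fill levels are
    # illustrative. It shows how favoring the least-full disks for new writes causes
    # utilization to even out after an expansion.

    def simulate_writes(fill_tb, write_tb, chunk_tb=0.1):
        """Distribute new writes, always sending each chunk to the least-full disk."""
        fill = list(fill_tb)
        for _ in range(round(write_tb / chunk_tb)):
            i = min(range(len(fill)), key=lambda k: fill[k])   # least-full disk wins
            fill[i] += chunk_tb
        return fill

    # 22 original 2TB data disks at ~80% full, plus 22 newly added empty disks.
    pool = [1.6] * 22 + [0.0] * 22
    after = simulate_writes(pool, write_tb=20.0)
    print([round(x, 2) for x in after])   # the new 20TB of writes lands on the added disks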
Limitations of the expansion algorithm
Hot spares are allocated in each configuration step as a percentage of the total pool size, irrespective of the profile chosen. In other words, the current storage expansion algorithm does not take into account the number of spares and data disks already in the pool when allocating spares and data disks; it follows the same logic as if it were creating a new pool. This can be considered a 'spare-hungry' algorithm.
Because hot spares are allocated in each configuration step, it is much more efficient to configure storage as a whole rather than to add it in small increments. This behavior is concisely captured in the following table, which shows the overhead of growing storage from 1 JBOD to 2 JBODs compared to configuring 2 JBODs as one unit, for the different pool profiles.
Note: x/y in the following table represents 'x' data disks and 'y' spare disks.
There are ways to work around the 'spare-hungry' algorithm when adding storage. For each profile there is a minimum number of disks required before the algorithm will use some of them as spares, so breaking an addition up into small groups avoids allocating spares. The exact group sizes are profile-dependent and are detailed in the following sections; a general sketch of the approach follows below.
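To illustrate the grouping approach (a sketch only, not an appliance feature), the hypothetical helper below plans an addition of N data disks as a sequence of small 'configuration storage add' steps. The per-profile group sizes are the spare-free addition sizes documented in the profile sections that follow; triple-parity RAID is omitted because no spare-free group size is known for it.

    # Illustrative sketch only -- not an appliance tool. The group sizes are the
    # per-profile spare-free addition sizes documented in the sections below.
    SPARE_FREE_GROUPS = {
        "mirror":  [6, 4, 2],   # two-way mirror: N must be even
        "mirror3": [6, 3],      # three-way mirror: N must be a multiple of 3
        "raidz1":  [4],         # single-parity RAID: N must be a multiple of 4
        "raidz2":  [7, 6, 5],   # double-parity RAID
        # (no spare-free group sizes are documented for triple-parity RAID)
    }

    def plan_additions(profile, n_disks):
        """Split n_disks into allowed group sizes, one group per 'add' iteration."""
        groups = sorted(SPARE_FREE_GROUPS[profile], reverse=True)

        def solve(remaining):
            if remaining == 0:
                return []
            for g in groups:                      # try the largest group first
                if g <= remaining:
                    rest = solve(remaining - g)
                    if rest is not None:
                        return [g] + rest
            return None                           # no exact split with these sizes

        plan = solve(n_disks)
        if plan is None:
            raise ValueError(f"{n_disks} disks cannot be split spare-free for {profile}")
        return plan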
Specific behavior of mirror and N-parity RAID when configuring and growing storage
This section contains examples of expanding pools, describing the overhead involved and ways to work around the allocation of spares for the different profiles.
Mirror
Mirror here refers to a 2-way mirror. Data is duplicated within each disk pair, reducing usable capacity to half, and the pool can continue to serve data with up to 1 disk failure per disk pair. Minimum number of disks needed: 2
Example 1: Creating a mirrored pool with a full JBOD (BD = 24)
Example 2: Creating a mirrored pool with half a JBOD and adding the other half later (BD = 12, AD = 12)
BD = 12 results in 2 spares and AD = 12 results in 2 more spares, for 4 spares in total. Note: Building the pool with half a JBOD and then adding the other half therefore results in two extra spares compared to configuring a full JBOD at once (Example 2 vs Example 1).
Example 3: Creating a mirrored pool with two full JBODs (BD = 48)
Example 4: Adding a full JBOD to a mirrored pool built with a full JBOD (BD = 24, AD = 24)
Note: This results in two extra spares in Example 4 compared to Example 3. Workaround: To avoid adding, creating or assigning further spares when adding disks to a mirrored pool, take AD = N (N must be an even number), break N up into a combination of 6, 4 and 2, and add the groups in separate iterations.
Example 5: How to add a full JBOD to a mirrored pool without resulting in further spares
Example 6: How to add 8 disks without resulting in further spares
Option 1: BD + (AD1 = 6) + (AD2 = 2)
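Using the plan_additions sketch from the 'Limitations' section above (the exact splits are illustrative; any combination of 6, 4 and 2 that sums to the addition works):

    # Mirror: possible spare-free splits for the additions in Examples 5 and 6.
    print(plan_additions("mirror", 24))   # e.g. [6, 6, 6, 6] -- a full JBOD in four steps
    print(plan_additions("mirror", 8))    # [6, 2]            -- matches Example 6, Option 1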
Three-way Mirror
Data is stored in 3 copies within a given three-way mirror, yielding very high reliability and performance. Minimum number of disks needed: 3
Example 7: Creating a three-way mirrored pool with a full JBOD (BD = 24)
Example 8: Creating a three-way mirrored pool with two full JBODs (BD = 48)
Example 9: Adding a full JBOD to a three-way mirrored pool built with a full JBOD (BD = 24, AD = 24)
Note: This results in 3 more spares when adding a JBOD as in Example 9, compared to Example 8. Workaround: To avoid adding, creating or assigning any further spares when adding disks to a three-way mirrored pool, take AD = N (N must be a multiple of 3), break N up into a combination of 6 and 3, and add the groups in separate iterations.
Example 10: How to add a full JBOD to a three-way mirrored pool without resulting in further spares
AD = 24 = 8 x 3 -> add in 8 iterations of 3 disks each
Option 1: BD + (AD1 = 3) + (AD2 = 3) + ... + (AD8 = 3)
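The same helper reproduces a valid split; the document's 8 iterations of 3 are equally valid, and a disk count that is not a multiple of 3 cannot be split at all:

    # Three-way mirror: a full JBOD added spare-free, plus the divisibility check.
    print(plan_additions("mirror3", 24))   # e.g. [6, 6, 6, 6]; [3] * 8 as in Example 10 also works
    try:
        plan_additions("mirror3", 25)      # not a multiple of 3
    except ValueError as err:
        print(err)                         # 25 disks cannot be split spare-free for mirror3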
Single-Parity RAID
Minimum number of disks needed: 4
Example 11: Creating a single-parity RAID pool with a full JBOD (BD = 24)
Example 12: Creating a single-parity RAID pool with two full JBODs (BD = 48)
Example 13: Adding a full JBOD to a single-parity RAID pool built with a full JBOD (BD = 24, AD = 24)
Workaround: To avoid adding, creating or assigning further spares while adding disks to a single-parity RAID pool, take AD = N, break N up into sets of 4, and add the sets in separate iterations.
Example 14: How to add a full JBOD without resulting in further spares
In this example, the pool is initially configured with one full JBOD: BD = 24
AD = 24 = 6 x 4 -> add in 6 iterations of 4 disks each
(BD = 24) + (AD1 = 4) + (AD2 = 4) + (AD3 = 4) + (AD4 = 4) + (AD5 = 4) + (AD6 = 4)
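With the plan_additions sketch above, the single-parity case reduces to sets of 4, for a full JBOD or a half JBOD alike:

    # Single-parity RAID: only 4-disk groups avoid spare allocation.
    print(plan_additions("raidz1", 24))   # [4, 4, 4, 4, 4, 4] -- matches Example 14
    print(plan_additions("raidz1", 12))   # [4, 4, 4]          -- half a JBOD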
Double-Parity RAID
N+2 redundancy with distributed parity and logical blocks; the pool can continue to serve data with up to 2 disk failures within a RAID set. Minimum number of disks needed: 5
Example 15: Creating a double-parity RAID (raidz2) pool with one full JBOD (BD = 24)
Example 16: Creating a double-parity RAID pool with two full JBODs (BD = 48)
Example 17: Adding one full JBOD to a double-parity RAID pool built with one full JBOD (BD = 24, AD = 24)
The existing double-parity RAID pool has 2 spares, and 2 more spares are added with AD = 24; note that this does not result in any extra spares in Example 17 compared to Example 16. Workaround: To avoid adding, creating or assigning further spares while adding disks to a double-parity RAID pool, take AD = N, break N up into combinations of 5, 6 and 7, and add the groups in separate iterations. Note: Each iteration adds a new disk group to the pool consisting of the disks added in that iteration.
Example 18: How to add a full JBOD without resulting in further spares
AD = 24 = 4 x 6 -> add in 4 iterations of 6 disks each
Let's say the existing double-parity RAID pool is as follows: BD = 24
Now adding a full JBOD: BD = 24, AD = 24
AD = 24 = 4 x 6 -> add in 4 iterations of 6 disks each
(BD = 24) + (AD1 = 6) + (AD2 = 6) + (AD3 = 6) + (AD4 = 6) -> results in the following:
Similarly for a half JBOD: AD = 12 = 7 + 5 -> add in 2 iterations of 7 and 5 disks
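The helper above produces equivalent spare-free plans for these two cases (different but equally valid combinations of 5, 6 and 7):

    # Double-parity RAID: spare-free splits for a full and a half JBOD.
    print(plan_additions("raidz2", 24))   # e.g. [7, 7, 5, 5]; 4 x 6 as in Example 18 also works
    print(plan_additions("raidz2", 12))   # [7, 5] -- matches the half-JBOD case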
Triple-Parity RAID
N+3 redundancy with distributed parity and logical blocks; the pool can continue to serve data with up to 3 disk failures within a RAID set. Minimum number of disks needed: 9
Example 19: Creating a triple-parity RAID pool with one full JBOD (BD = 24)
Example 20: Creating a triple-parity RAID pool with two full JBODs (BD = 48)
Example 21: Adding a full JBOD to a triple-parity RAID pool built with a full JBOD (BD = 24, AD = 24)
Note: Interestingly, this does not result in any extra spares in Example 21 compared to Example 20, but the pool layout will be different. There is no known workaround to avoid extra spares when growing a triple-parity RAID pool.
References
<BUG:22756644> - TOO MANY SPARES SELECTED BY CONFIGURATION STORAGE ADD
<BUG:26033684> - TOO FEW SPARES SELECTED BY CONFIGURATION STORAGE ADD FOR MIRROR
<BUG:20071285> - ADD ABILITY TO SPECIFY THE NUMBER OF SPARES FOR A POOL