Oracle ZFS Storage Appliance: Effects of Adding Storage Expansion to Overcome a Full Data Pool
Solution Type: Technical Instruction (Sure Solution 1918057.1)
Created from <SR 3-9479292151>

Applies to:
Oracle ZFS Storage ZS3-4 - Version All Versions to All Versions [Release All Releases]
Sun ZFS Storage 7420 - Version All Versions to All Versions [Release All Releases]
Sun ZFS Storage 7320 - Version All Versions to All Versions [Release All Releases]
Sun ZFS Storage 7120 - Version All Versions to All Versions [Release All Releases]
Oracle ZFS Storage ZS3-BA - Version All Versions to All Versions [Release All Releases]
7000 Appliance OS (Fishworks)

Goal
When adding disk shelves to the NAS product, we are often asked what effect the new vdevs will have on the current pool(s) they are added to, which are often nearly full:
Q1: Can you confirm whether we'll get data placement onto these new spindles in the background for our data pool?
Q2: Will it improve our IOPS for that pool?
Q3: Do we need to do anything to get data to re-balance onto these new spindles?
Solution

A1: ZFS will preferentially* use the empty vdevs for data writes (not reads). Over time, the new vdevs will fill up and the "old" vdevs will empty out until the pool is in balance, i.e. every vdev holds the same level of used space. This happens ONLY on writes. If the workload is mostly reads, it is better to back up the data and restore it in order to balance the vdevs with regard to space. (A toy model of this behaviour is sketched below.)
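The rebalancing mechanism can be pictured with a small simulation. The Python sketch below is a toy model, not the appliance's actual allocator: it assumes every copy-on-write rewrite frees the block's old location and places the new copy on the emptiest vdev, whereas the real ZFS metaslab allocator weights free space among several other factors. All capacities are made-up illustrative numbers.

```python
import random

# Toy model: 4 "old" vdevs at 90 of 100 units used, 4 new empty vdevs.
used = [90, 90, 90, 90, 0, 0, 0, 0]          # units used per vdev

# Remember which vdev each live block currently sits on.
blocks = [v for v, u in enumerate(used) for _ in range(u)]

random.seed(1)
for step in range(5000):
    i = random.randrange(len(blocks))        # rewrite a random live block
    used[blocks[i]] -= 1                     # copy-on-write frees the old copy...
    target = used.index(min(used))           # ...and the emptiest vdev
    used[target] += 1                        # receives the new copy
    blocks[i] = target
    if step % 1000 == 0:
        print(step, used)

print("final:", used)                        # converges toward equal usage
```

Running it shows the old vdevs draining and the new ones filling until all sit near the same level of used space. It also shows why a read-only workload never rebalances: with no rewrites, no blocks ever move.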
A2: IOPS will improve as the vdevs in the pool balance out. However, there can be an issue if you add only a small number of vdevs to the pool, because they will be preferred as per Answer 1.

Example 1: The current pool has 40 vdevs and you add disk trays contributing 10 vdevs. Writes will preferentially* use the 10 new vdevs, so write performance can be an issue (reads still go to the 'older' vdevs).

Example 2: The current pool has 40 vdevs and you add disk trays contributing 40 vdevs. Writes will preferentially* use the 40 new vdevs, so write performance should be much better, as many more vdevs are used for the writes. (The arithmetic behind the two examples is sketched below.)

NOTE: Preferentially* does not mean NO writes to the existing vdevs.
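A back-of-the-envelope comparison of the two examples, assuming for simplicity that writes land only on the new vdevs until they catch up, and using a made-up figure of 200 write IOPS per vdev (both assumptions are illustrative, not appliance specifications):

```python
PER_VDEV_IOPS = 200                       # hypothetical per-vdev write IOPS

for old, new in [(40, 10), (40, 40)]:     # Example 1 and Example 2
    # While the new vdevs are much emptier, writes concentrate on them,
    # so short-term write throughput is roughly bounded by the new vdevs.
    short_term = new * PER_VDEV_IOPS
    # Once the pool has balanced out, writes stripe across all vdevs.
    balanced = (old + new) * PER_VDEV_IOPS
    print(f"{old} old + {new} new vdevs: "
          f"~{short_term} write IOPS at first, ~{balanced} once balanced")
```

With 10 new vdevs, writes are briefly limited to about a fifth of the balanced pool's write capability; with 40 new vdevs they start at half, which is why the larger expansion behaves much better.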
A3: ZFS is a copy-on-write filesystem, so with a dataset that sees mixed reads and writes, the balancing takes place in the background. However, if your workload is largely or entirely reads, it is better (and quicker) to back up the data and restore it; the restore will then write across all of the drives. (A rough convergence estimate is sketched below.)

NOTE: Some workloads read primarily the most recently written data, so read performance will also be affected until balance is achieved.
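How long the background route takes depends entirely on how fast existing data is rewritten. A rough estimate, assuming rebalancing progresses only as fast as the workload rewrites live data; the 50 TB and 0.5 TB/day figures are hypothetical placeholders:

```python
# Hypothetical figures -- substitute values observed on the actual pool.
data_to_move_tb = 50.0       # live data that must migrate off the old vdevs
rewrite_rate_tb_day = 0.5    # existing data the workload rewrites per day

days = data_to_move_tb / rewrite_rate_tb_day
print(f"~{days:.0f} days until the vdevs roughly balance")

# A mostly-read workload rewrites almost nothing, so 'days' grows without
# bound -- hence the recommendation to back up and restore instead.
```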
Addendum: When adding extra storage JBODs, you also need to take into account whether the current configuration uses NSPF (No Single Point of Failure). Because an NSPF layout keeps the members of each vdev spread across multiple JBODs, preserving it may require adding more storage than the minimal amount needed to overcome the space-full condition.
Checked for relevancy: 10-May-2018