Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition
Solution Type: Problem Resolution (Sure Solution)

2306637.1 : Oracle ZFS Storage Appliance: Performance Issue during Dedupv1 to Dedupv2 Migration after upgrade to OS 8.7
Applies to:

Sun ZFS Storage 7120 - Version All Versions and later
Oracle ZFS Storage ZS5-2 - Version All Versions and later
Oracle ZFS Storage Appliance Racked System ZS5-2 - Version All Versions and later
Oracle ZFS Storage ZS4-4 - Version All Versions and later
Oracle ZFS Storage Appliance Racked System ZS4-4 - Version All Versions and later
7000 Appliance OS (Fishworks)

Symptoms

Issue seen after upgrade from 2013.06.05.4.2,1-1.1 (2013.1.4.2) to 2013.06.05.7.4,1-1.1 (OS 8.7.4).

NOTE: This issue can occur after deferred updates are applied to a system using dedup (version 1) after an upgrade to OS 8.7.x.
No data access from the Exadata system to the ZFS Storage Appliance. The customer rebooted the head, and the head took approximately 4 hours to rejoin the cluster. Deduplication was enabled on pool 'pool-0'.

Note: The deduplication algorithm was changed in the OS 8.7.x release ("deduplication version 2"); 2013.1.6.x and below used "deduplication version 1".
Pool status (output truncated):

  pool: pool-0
 config:
        NAME        STATE     READ WRITE CKSUM
 errors: No known data errors

 DDT entries 109831612, size 2376 on disk

 bucket    allocated    referenced
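For a sense of scale: in this summary, "size 2376 on disk" is the average on-disk size of one DDT entry in bytes, so the total dedup table that has to be converted can be roughly estimated with 'bc'. This is a back-of-the-envelope calculation added here for illustration; the exact total does not appear in the original output.

$ echo "109831612 * 2376" | bc
260959910112
=> approximately 261 GB (~243 GiB) of on-disk DDT entries to migrate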
Changes

Application of optional deferred updates following upgrade to Oracle ZFS Storage Appliance Release OS 8.7.x.
Cause

Bug 26513223 (Pool export hung during failback due to ddt1 -> ddt2 migration)
Solution

Work towards resolution of this issue is proceeding.
Note: Once the deferred updates are applied, there is no way to stop the migration process. If the system is affected by this issue, disabling dedup on all shares will speed up the DDT migration; dedup can be enabled again afterwards, if appropriate. A cluster takeover simply continues the migration process after re-starting the initial sequences.
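For reference, dedup is disabled per share from the appliance CLI. The sketch below is illustrative only: the project name 'proj1' and share name 'share1' are placeholders, and the exact prompts and property name should be confirmed against the CLI help for your release.

zfssa> shares
zfssa:shares> select proj1
zfssa:shares proj1> select share1
zfssa:shares proj1/share1> set dedup=false
zfssa:shares proj1/share1> commit

Repeat for each share that has dedup enabled.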
A workflow is available to monitor the dedupv1 to dedupv2 conversion processing: monitor-ddt-conversion.akwf
There are several ways to check the progress of the dedup migration.

From the appliance CLI, enter the underlying shell and query the in-kernel migration state with mdb:

zfssa> confirm shell

echo '::spa|::print spa spa_ddm[]' | mdb -k
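To watch just the two relevant counters without scanning the whole structure, the same mdb output can be filtered with egrep. This convenience one-liner is an addition to the original note:

echo '::spa|::print spa spa_ddm[]' | mdb -k | egrep 'ddm_migrating_entries|ddm_migrated_entries'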
If dedup migration is taking place, a non-zero value is seen in 'ddm_migrating_entries'.

Dedup migration happening:

> ::spa|::print spa spa_ddm[]
spa_ddm = {
    spa_ddm->ddm_algorithms = [ 0xfffff602d2629680, 0xfffff60302075248 ]
    spa_ddm->ddm_aholds = [ 0x1, 0x1 ]
    spa_ddm->ddm_max_algorithm = 1 (DDA_TYPE_XT)
    spa_ddm->ddm_active_algorithm = 1 (DDA_TYPE_XT)
    spa_ddm->ddm_md = 0xfffff6069edf55c0
    spa_ddm->ddm_migrating_algorithm = 0 (DDA_TYPE_ZAP)
    spa_ddm->ddm_migrated_algorithm = -0t1 (DDA_TYPE_NONE)
    spa_ddm->ddm_migrating_entries = 0x67cb73d    <<<<<<<<
    spa_ddm->ddm_migrated_entries = 0x4a5de9
    spa_ddm->ddm_spa = 0xfffff60ccf721000
    spa_ddm->ddm_os = 0xfffff602754ee3c0
    spa_ddm->ddm_ddt_stat_object = 0x6b
    spa_ddm->ddm_sync_cost_ns = 0x1669
}
Dedup migration is NOT happening:

> ::spa|::print spa spa_ddm[]
spa_ddm = {
    spa_ddm->ddm_algorithms = [ 0, 0xfffff602abf72608 ]
    spa_ddm->ddm_aholds = [ 0, 0x1 ]
    spa_ddm->ddm_max_algorithm = 1 (DDA_TYPE_XT)
    spa_ddm->ddm_active_algorithm = 1 (DDA_TYPE_XT)
    spa_ddm->ddm_md = 0
    spa_ddm->ddm_migrating_algorithm = 1 (DDA_TYPE_XT)
    spa_ddm->ddm_migrated_algorithm = 0 (DDA_TYPE_ZAP)
    spa_ddm->ddm_migrating_entries = 0    <<<<<<<<
    spa_ddm->ddm_migrated_entries = 0
    spa_ddm->ddm_spa = 0xfffff60340164000
    spa_ddm->ddm_os = 0xfffff60274e913c0
    spa_ddm->ddm_ddt_stat_object = 0x104
    spa_ddm->ddm_sync_cost_ns = 0x30d40
}
NOTE (from RPE): The 'ddm_migrating_entries' value is set once, at the beginning of the migration. The 'ddm_migrated_entries' value is incremented as each entry is processed.
Work out the completion percentage using 'bc':

echo "obase=10;ibase=16;<MIGRATED>*64/<MIGRATING>" | bc

obase - base of the output value (10 = decimal)
ibase - base of the input values (16 = hexadecimal)

Note that the multiplier 64 is read as hexadecimal (0x64 = 100 decimal), so the result is a whole-number percentage. Two worked examples follow, and a scripted version of the same calculation is sketched after them.
spa_ddm = {
    spa_ddm->ddm_algorithms = [ 0xfffff60088818e80, 0xfffff6009a7441c8 ]
    spa_ddm->ddm_aholds = [ 0x1, 0x1 ]
    spa_ddm->ddm_max_algorithm = 1 (DDA_TYPE_XT)
    spa_ddm->ddm_active_algorithm = 1 (DDA_TYPE_XT)
    spa_ddm->ddm_md = 0xfffff6015778c940
    spa_ddm->ddm_migrating_algorithm = 0 (DDA_TYPE_ZAP)
    spa_ddm->ddm_migrated_algorithm = -0t1 (DDA_TYPE_NONE)
    spa_ddm->ddm_migrating_entries = 0x131a5ae    <<<<<<<< Total entries to migrate
    spa_ddm->ddm_migrated_entries = 0x80990e      <<<<<<<< Entries migrated so far ...
    spa_ddm->ddm_spa = 0xfffff6014dc7e000
    spa_ddm->ddm_os = 0xfffff601d88d16c0
    spa_ddm->ddm_ddt_stat_object = 0x2015
    spa_ddm->ddm_sync_cost_ns = 0x30d40
}

$ echo "obase=10;ibase=16;80990E*64/131A5AE" | bc
=> 42% completed ....
spa_ddm = {
    spa_ddm->ddm_algorithms = [ 0xfffff60088818e80, 0xfffff6009a7441c8 ]
    spa_ddm->ddm_aholds = [ 0x1, 0x1 ]
    spa_ddm->ddm_max_algorithm = 1 (DDA_TYPE_XT)
    spa_ddm->ddm_active_algorithm = 1 (DDA_TYPE_XT)
    spa_ddm->ddm_md = 0xfffff6015778c940
    spa_ddm->ddm_migrating_algorithm = 0 (DDA_TYPE_ZAP)
    spa_ddm->ddm_migrated_algorithm = -0t1 (DDA_TYPE_NONE)
    spa_ddm->ddm_migrating_entries = 0x131a5ae    <<<<<<<< Total entries to migrate
    spa_ddm->ddm_migrated_entries = 0x12fd1a1     <<<<<<<< Entries migrated so far ...
    spa_ddm->ddm_spa = 0xfffff6014dc7e000
    spa_ddm->ddm_os = 0xfffff601d88d16c0
    spa_ddm->ddm_ddt_stat_object = 0x2015
    spa_ddm->ddm_sync_cost_ns = 0x30d40
}

$ echo "obase=10;ibase=16;12FD1A1*64/131A5AE" | bc
=> 99% completed ...
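The calculation can be wrapped in a small shell snippet. This is a minimal sketch added here for convenience; it assumes a single pool with a migration in progress (so both counters are reported with a 0x prefix) and the field layout shown in the examples above.

# Take one snapshot of the spa_ddm state
OUT=$(echo '::spa|::print spa spa_ddm[]' | mdb -k)

# Pull out each counter as upper-case hex without the 0x prefix (bc needs upper case)
MIGRATING=$(echo "$OUT" | grep ddm_migrating_entries | awk '{print $3}' | sed 's/^0x//' | tr '[a-f]' '[A-F]')
MIGRATED=$(echo "$OUT" | grep ddm_migrated_entries | awk '{print $3}' | sed 's/^0x//' | tr '[a-f]' '[A-F]')

# 64 hex = 100 decimal, so this prints the percentage of entries migrated so far
echo "obase=10;ibase=16;${MIGRATED}*64/${MIGRATING}" | bc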
Further data collection:

In shared shell, collect the following outputs a few times (about 15-20 minutes apart), then collect a new support bundle. Check for non-zero DDT entries:

echo '::spa|::print spa spa_ddm[] ! grep ddm_migrating_entries' | mdb -k > ddt-entries(1-4).out
echo '::stacks' | mdb -k > stacks(1-4).out
echo '::stacks -m zfs' | mdb -k > zfsstacks(1-4).out

Save these outputs a few times with different names, using suffixes 1, 2, 3, etc.
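The repeated collection can also be scripted. A simple sketch, assuming four passes roughly 15 minutes apart and the numeric-suffix file naming described above:

# Collect four numbered sets of outputs, ~15 minutes apart
for i in 1 2 3 4; do
    echo '::spa|::print spa spa_ddm[] ! grep ddm_migrating_entries' | mdb -k > ddt-entries${i}.out
    echo '::stacks' | mdb -k > stacks${i}.out
    echo '::stacks -m zfs' | mdb -k > zfsstacks${i}.out
    [ $i -lt 4 ] && sleep 900    # wait 15 minutes before the next pass
done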
Attachments

This solution has no attachment