Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition
Solution Type: Problem Resolution Sure
Solution 1990157.1: FS System: Changing the QoS Priority of a LUN on R6 Does Not Trigger a QoS Migration
This document explains the difference between R5 and R6 regarding the change of the Priority Level on an existing LUN and provides a workaround for users who would like to change the number of Drive Groups.
Applies to:
Oracle FS1-2 Flash Storage System - Version All Versions to All Versions [Release All Releases]
Information in this document applies to any platform.

Symptoms
With Oracle FS System Manager on the R6 software, changing the Quality of Service (QoS) priority on a LUN does not trigger a QoS migration (a QoS migration is a restripe of the LUN to use more or fewer Drive Groups). The new priority is only reflected at the Controller level: queue priority, more CPU, and cache. As such, changing the Priority Level of a LUN does not create a CmBackground task (visible by clicking the Tasks button at the bottom right of the GUI); it only results in a ModifyLUN task.

Changes
Previously, with Axiom R5 systems, users may have been advised to change the QoS Priority to get more spindles in order to improve performance. With the FS1-2 and R6, the spindle count can also be determined by the number and type of Drive Groups when a Storage Profile is used. Custom Storage Profiles can also be created, where the user sets the number of Drive Groups through the Stripe Width.
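To make the R6 behaviour concrete (the figures below are hypothetical, chosen only for illustration): with a custom Storage Profile whose Stripe Width is 4, a new LUN is laid out across 4 Drive Groups; if each Drive Group in that Storage Class holds 12 performance HDDs, the LUN is backed by roughly 48 spindles. Selecting a higher or lower Priority Level for that LUN on R6 changes only the Controller-side treatment (queue priority, CPU, cache), not this layout.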
Cause
Whenever a LUN is created, a geomap (the layout of the LUN across the Drive Enclosures) is assigned to it, with a Geomap Priority and a number of Drive Groups. Neither of these values is changed when only the Priority Level of the LUN is changed; only the Queue Priority is updated. This can be seen in the following example, where the internal CodViewer tool is run against the system COD files to dump the VolumeExtents before and after the Priority Level of the LUN "TEST" is raised:
/cores_data/local/tools/pillar/build/builds/axiom/060107-034105/src/share/conman/RHEL-5-x86/CodViewer -f *.cod -V VolumeExtents -p geomap_VolumeExtents.txt -o txt
-bash-4.1$ cat geomap_VolumeExtents.txt

Volume-------------------------------------------------------------------------------------------
  Name: TEST
  SUID: 0xa104a737dfcdbaa9
  Is Clone: false
  Slat: no
  Primary VLUN-------------------------------------------------------------------------------
    SUID: 0xa2020cf92b718383
    Handle: 0xdc
    Owner SUID: 0xa104a737dfcdbaa9
    BS Cache Mode: Write Back (1)
    Queuing Priority: Low (1)
    Type: Normal (0)
    Slat: no
    Geomap---------------------------------------------------------------------------------
      SUID: 0xa20c0cf92b71838b
      Handle: 0xd7
      Storage Class: PERF HDD (2)
      Performance Band: Low (1)
      Num Stripe Children: 2
      Growth Increment: 550502400
      Disk Protection: RAID5S (5)
      Allocated Capacity: 100.5GB (107898470400 bytes)
      Maximum Capacity: 100.5GB (107898470400 bytes)
      Extent(s)----------------------------------------------------------------------------
        Sc   ExtHndl Logical MAUspan 512B Blks  Enclosure ID       LUN Index Physical LBA       Mig ID  S T RaidLevel
        ---- ------- --------------- ---------- ------------------ --- ----- ------------------ ------- - - ---------
        0x00 0x01893 0x00000-0x000c4 0x0647d000 0x5080020001474dc4 1   00012 0x0000000271df7000 0x00000 N N RAID5S (5)
        0x01 0x01894 0x00000-0x000c4 0x0647d000 0x5080020001474dc4 0   00001 0x0000000271df7000 0x00000 N N RAID5S (5)
Volume-------------------------------------------------------------------------------------------
  Name: TEST
  SUID: 0xa104a737dfcdbaa9
  Is Clone: false
  Slat: no
  Primary VLUN-------------------------------------------------------------------------------
    SUID: 0xa2020cf92b718383
    Handle: 0xdc
    Owner SUID: 0xa104a737dfcdbaa9
    BS Cache Mode: Write Back (1)
    Queuing Priority: Premium (4)
    Type: Normal (0)
    Slat: no
    Geomap---------------------------------------------------------------------------------
      SUID: 0xa20c0cf92b71838b
      Handle: 0xd7
      Storage Class: PERF HDD (2)
      Performance Band: Low (1)
      Num Stripe Children: 2
      Growth Increment: 550502400
      Disk Protection: RAID5S (5)
      Allocated Capacity: 100.5GB (107898470400 bytes)
      Maximum Capacity: 100.5GB (107898470400 bytes)
      Extent(s)----------------------------------------------------------------------------
        Sc   ExtHndl Logical MAUspan 512B Blks  Enclosure ID       LUN Index Physical LBA       Mig ID  S T RaidLevel
        ---- ------- --------------- ---------- ------------------ --- ----- ------------------ ------- - - ---------
        0x00 0x01893 0x00000-0x000c4 0x0647d000 0x5080020001474dc4 1   00012 0x0000000271df7000 0x00000 N N RAID5S (5)
        0x01 0x01894 0x00000-0x000c4 0x0647d000 0x5080020001474dc4 0   00001 0x0000000271df7000 0x00000 N N RAID5S (5)
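Comparing the two dumps: after the Priority Level change, the VLUN Queuing Priority has moved from Low (1) to Premium (4), while the Geomap is untouched (same Geomap SUID, Performance Band still Low (1), Num Stripe Children still 2, and identical extents on the same Drive Groups). As a minimal sketch for checking this quickly, assuming the before/after outputs were saved to the hypothetical files geomap_before.txt and geomap_after.txt (one field per line, as above), the QoS-related fields can be pulled out with grep:

-bash-4.1$ grep -E "Queuing Priority|Performance Band|Num Stripe Children" geomap_before.txt geomap_after.txt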
Solution
There is a workaround to increase or reduce the number of Drive Groups:
Note: each Storage Domain has the option Automatic QoS Rebalancing enabled by default. However, the geomap will not use more or fewer Drive Groups than the original Priority set on the LUN called for, unless there were not enough Drive Groups at the time of the LUN creation. (The system uses the concept of a Stripe Child: the allocation manager creates a number of Stripe Children to match the Stripe Width and tries to allocate one Stripe Child per Drive Group, but multiple Stripe Children of the same LUN can end up in the same Drive Group when there are not enough Drive Groups, or not enough free space in them; see the sketch below.)
Adding a new Drive Enclosure (DE) offers an option called Rebalance Volume Data (covering all the LUNs in the Storage Class within the Storage Domain) to prevent contention. This will not increase or reduce the number of Drive Groups used by a LUN either, unless there were not enough Drive Groups at the time of the LUN creation (the explanation about Stripe Children above applies).
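As a minimal sketch of the Stripe Child placement described above (illustrative only, not Oracle code; the Stripe Width, Drive Group names and counts are hypothetical), the loop below places one Stripe Child per Drive Group until the Drive Groups run out and then doubles up, which is the situation in which a later rebalance can spread the LUN over Drive Groups added afterwards:

#!/bin/bash
# Illustrative sketch only: Stripe Child placement across Drive Groups.
stripe_width=4                 # Stripe Children requested (hypothetical)
drive_groups=(DG0 DG1)         # Drive Groups available at LUN creation (hypothetical)

for ((child = 0; child < stripe_width; child++)); do
    idx=$(( child % ${#drive_groups[@]} ))   # reuse Drive Groups when too few exist
    echo "Stripe Child $child -> ${drive_groups[idx]}"
done
# Output: DG0 DG1 DG0 DG1 - the geomap spans only 2 Drive Groups even though
# the Stripe Width was 4. Had 4 Drive Groups been available (or added later and
# the volume data rebalanced), each Stripe Child could sit on its own Drive Group.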
Attachments
This solution has no attachment.