Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-72-1990157.1
Update Date: 2016-12-01
Keywords:

Solution Type  Problem Resolution Sure

Solution 1990157.1: FS System: Changing the QoS Priority of a LUN on R6 Does Not Trigger a QoS Migration


Related Items
  • Oracle FS1-2 Flash Storage System
Related Categories
  • PLA-Support>Sun Systems>DISK>Flash Storage>SN-EStor: FSx


This document explains the difference between R5 and R6 regarding changing the Priority Level on an existing LUN and provides a workaround for users who would like to change the number of Drive Groups.

In this Document
Symptoms
Changes
Cause
Solution


Applies to:

Oracle FS1-2 Flash Storage System - Version All Versions to All Versions [Release All Releases]
Information in this document applies to any platform.

Symptoms

With Oracle FS System Manager and the R6 software, changing the Quality of Service (QoS) priority on a LUN does not trigger a QoS migration (a QoS migration restripes the LUN to use more or fewer Drive Groups). The new priority is only reflected at the Controller level: queue priority, more CPU, and cache.  As such, changing the Priority Level of a LUN does not trigger a CmBackground task (visible by clicking the Tasks button at the bottom right of the GUI); it only results in a ModifyLUN task.

Changes

Previously, with Axiom R5 systems, users may have been advised to change the QoS Priority to get more spindles in order to improve performance.  With the FS1-2 and R6, the spindle count can also be determined by the number and type of Drive Groups when a Storage Profile is used.  Custom Storage Profiles can also be created where the user can set the number of Drive Groups using the Stripe Width.

Conversely, if the Use Storage Profile box is unchecked, the number of Drive Groups depends on the Priority Level set for the LUN:

 

Priority          Number of Drive Groups
Archive / Low     2
Medium            3
High / Premium    4
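
The table above can be expressed as a small lookup. The snippet below is an illustration only (the names are hypothetical and not part of any FS1-2 API or CLI):

# Hypothetical illustration of the table above: number of Drive Groups the
# system picks for a new LUN when no Storage Profile is used.
DRIVE_GROUPS_PER_PRIORITY = {
    "Archive": 2,
    "Low": 2,
    "Medium": 3,
    "High": 4,
    "Premium": 4,
}

def drive_groups_for(priority_level: str) -> int:
    """Return the number of Drive Groups a new LUN would be striped across."""
    return DRIVE_GROUPS_PER_PRIORITY[priority_level]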


Cause

Whenever a LUN is created, a geomap (the layout of the LUN across the Drive Enclosures) is assigned to it with a Geomap Priority and a number of Drive Groups.  Neither of these values changes if only the Priority Level of the LUN is changed; only the Queue Priority is updated.
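
As a rough sketch of that behaviour (a hypothetical Python model, not the actual FS1-2 internals), a Priority Level change only touches the VLUN's Queuing Priority and never the geomap:

from dataclasses import dataclass

@dataclass
class Geomap:
    performance_band: str      # Geomap Priority, e.g. "Low"; set at LUN creation
    num_stripe_children: int   # how many Drive Groups the LUN is spread across

@dataclass
class Lun:
    queuing_priority: str      # Controller-level queue priority
    geomap: Geomap             # layout of the LUN across the Drive Enclosures

def change_priority_level(lun: Lun, new_priority: str) -> None:
    # On R6 only the queue priority is updated; the geomap (Performance Band
    # and number of Drive Groups) stays exactly as it was at creation time.
    lun.queuing_priority = new_priority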


Here is an example of a LUN with a Low Priority (on Drive Groups 1 and 12); the geomap output below was generated with the following command:

/cores_data/local/tools/pillar/build/builds/axiom/060107-034105/src/share/conman/RHEL-5-x86/CodViewer -f *.cod -V VolumeExtents -p geomap_VolumeExtents.txt -o txt

-bash-4.1$ cat geomap_VolumeExtents.txt
Volume-------------------------------------------------------------------------------------------------------------------
                         Name: TEST
                         SUID: 0xa104a737dfcdbaa9
                     Is Clone: false
                         Slat: no
    Primary VLUN---------------------------------------------------------------------------------------------------------
                         SUID: 0xa2020cf92b718383
                       Handle: 0xdc
                   Owner SUID: 0xa104a737dfcdbaa9
                BS Cache Mode: Write Back (1)
             Queuing Priority: Low (1)
                         Type: Normal (0)
                         Slat: no
            Geomap-------------------------------------------------------------------------------------------------------
                         SUID: 0xa20c0cf92b71838b
                       Handle: 0xd7
                Storage Class: PERF HDD (2)
             Performance Band: Low (1)
          Num Stripe Children: 2
             Growth Increment: 550502400
              Disk Protection: RAID5S (5)
           Allocated Capacity: 100.5GB (107898470400 bytes)
             Maximum Capacity: 100.5GB (107898470400 bytes)
            Extent(s)----------------------------------------------------------------------------------------------------
            Sc   ExtHndl Logical MAUspan 512B Blks     Enclosure ID    LUN Index    Physical LBA    Mig ID  S T RaidLevel
            ---- ------- --------------- ---------- ------------------ --- ----- ------------------ ------- - - --------
            0x00 0x01893 0x00000-0x000c4 0x0647d000 0x5080020001474dc4   1 00012 0x0000000271df7000 0x00000 N N RAID5S (5)
            0x01 0x01894 0x00000-0x000c4 0x0647d000 0x5080020001474dc4   0 00001 0x0000000271df7000 0x00000 N N RAID5S (5)


After changing the Priority Level from Low to Premium, we can see that the LUN is still using the same extents on the same Drive Groups (two Drive Groups instead of four).  The Queue Priority now shows as Premium, but the Geomap Priority (Performance Band) is still Low.

Volume-------------------------------------------------------------------------------------------------------------------
                         Name: TEST
                         SUID: 0xa104a737dfcdbaa9
                     Is Clone: false
                         Slat: no
    Primary VLUN---------------------------------------------------------------------------------------------------------
                         SUID: 0xa2020cf92b718383
                       Handle: 0xdc
                   Owner SUID: 0xa104a737dfcdbaa9
                BS Cache Mode: Write Back (1)
             Queuing Priority: Premium (4)
                         Type: Normal (0)
                         Slat: no
            Geomap-------------------------------------------------------------------------------------------------------
                         SUID: 0xa20c0cf92b71838b
                       Handle: 0xd7
                Storage Class: PERF HDD (2)
             Performance Band: Low (1)
          Num Stripe Children: 2
             Growth Increment: 550502400
              Disk Protection: RAID5S (5)
           Allocated Capacity: 100.5GB (107898470400 bytes)
             Maximum Capacity: 100.5GB (107898470400 bytes)
            Extent(s)----------------------------------------------------------------------------------------------------
            Sc   ExtHndl Logical MAUspan 512B Blks     Enclosure ID    LUN Index    Physical LBA    Mig ID  S T RaidLevel
            ---- ------- --------------- ---------- ------------------ --- ----- ------------------ ------- - - --------
            0x00 0x01893 0x00000-0x000c4 0x0647d000 0x5080020001474dc4   1 00012 0x0000000271df7000 0x00000 N N RAID5S (5)
            0x01 0x01894 0x00000-0x000c4 0x0647d000 0x5080020001474dc4   0 00001 0x0000000271df7000 0x00000 N N RAID5S (5)


Solution

There is a workaround to increase/reduce the number of Drive Groups:

  • The user can create a Storage Profile (System -> Storage Profiles) with a number of Drive Groups (Stripe Width) different from that of the original LUN, then apply that Storage Profile to the LUN

    NOTE: Do not use a predefined Storage Profile: predefined profiles use an Auto-Select Stripe Width.  The exception is when you are also changing the Storage Class, as described in the next point.
     
  • Changing the Storage Class or Storage Domain of a LUN will not change the number of Drive Groups unless a Storage Profile is used:
    • The Stripe Width value needs to be Auto-Select or different from the number of Drive Groups the LUN currently uses.
    • If the Stripe Width is set to Auto-Select, the user must also change the Priority Level, avoiding these transitions: Archive -> Low, Low -> Archive, High -> Premium, or Premium -> High, as (per the table above) they use the same number of Drive Groups.  See the sketch below.
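
As a summary of the conditions above, here is a minimal sketch of when a restripe is expected (a hypothetical helper, assuming the Drive Group counts from the table in the Changes section):

DRIVE_GROUPS_PER_PRIORITY = {"Archive": 2, "Low": 2, "Medium": 3,
                             "High": 4, "Premium": 4}

def triggers_qos_migration(current_drive_groups: int,
                           stripe_width: str,
                           new_priority_level: str) -> bool:
    """Hypothetical check: will the change cause a Volume QoS Migration?"""
    if stripe_width == "Auto-Select":
        # Auto-Select falls back on the Priority Level: a restripe only happens
        # when the new priority maps to a different Drive Group count, so
        # Archive <-> Low and High <-> Premium change nothing.
        return DRIVE_GROUPS_PER_PRIORITY[new_priority_level] != current_drive_groups
    # An explicit Stripe Width restripes the LUN when it differs from the
    # number of Drive Groups the LUN currently uses.
    return int(stripe_width) != current_drive_groups

# Example: a Low LUN on 2 Drive Groups moved to Premium with Auto-Select:
# triggers_qos_migration(2, "Auto-Select", "Premium")  -> True (2 -> 4 Drive Groups)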



You should see a QoS migration in the Tasks:

Volume QoS Migration


Note: each Storage Domain has the Automatic QoS Rebalancing option enabled by default.  However, the geomap will not use more or fewer Drive Groups than the original Priority set on the LUN allows, unless there were not enough Drive Groups at the time of LUN creation.  (The system uses the concept of a Stripe Child: the allocation manager creates a number of Stripe Children to match the Stripe Width and tries to allocate one Stripe Child per Drive Group.  Multiple Stripe Children for a LUN can reside within the same Drive Group when there are not enough Drive Groups or not enough free space in the Drive Groups.)
Adding a new Drive Enclosure (DE) offers an option called Rebalance Volume Data (covering all the LUNs in the Storage Class in the Storage Domain) to prevent contention.  This will not increase or reduce the number of Drive Groups for a LUN unless there were not enough Drive Groups at the time of LUN creation (the Stripe Child explanation above applies).
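
The Stripe Child behaviour can be sketched as follows (an illustration only, assuming a simple round-robin placement; the real allocation manager also accounts for free space):

def allocate_stripe_children(stripe_width: int, drive_groups: list[str]) -> dict[str, list[int]]:
    """Place one Stripe Child per Drive Group where possible; when there are
    fewer Drive Groups than the Stripe Width, several Stripe Children end up
    sharing the same Drive Group."""
    placement: dict[str, list[int]] = {dg: [] for dg in drive_groups}
    for child in range(stripe_width):
        placement[drive_groups[child % len(drive_groups)]].append(child)
    return placement

# Example: Stripe Width 4 but only 2 Drive Groups available:
# allocate_stripe_children(4, ["DG1", "DG12"]) -> {'DG1': [0, 2], 'DG12': [1, 3]}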

 


Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.