Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-71-1611403.1
Update Date: 2017-07-03

Solution Type: Technical Instruction

Solution 1611403.1: How to Add the Additional Storage Space Created from Dynamic LUN Expansion to the Solaris Operating System


Related Items
  • Sun Storage Flexline 380 Array
  • Sun Storage 6580 Array
  • Sun Storage 6780 Array
  • Sun Storage 2540-M2 Array
  • Sun Storage 2510 Array
  • Sun Storage 2540 Array
  • Sun Storage 6140 Array
  • Sun Storage 2530-M2 Array
  • Sun Storage 2530 Array
  • Sun Storage 6180 Array
  • Sun Storage 6540 Array
  • Sun Storage 6130 Array
Related Categories
  • PLA-Support>Sun Systems>DISK>Arrays>SN-DK: 6140_6180




In this Document
Goal
Solution
 Scenario 1: Simple Solaris disk device (LUN) with a UFS file system
 Scenario 2: zpool with ZFS file system
 Scenario 3: Solaris Volume Manager (SVM) metadevice with UFS file system
References


Applies to:

Sun Storage 6580 Array - Version All Versions to All Versions [Release All Releases]
Sun Storage Flexline 380 Array - Version Not Applicable to Not Applicable [Release N/A]
Sun Storage 6780 Array - Version All Versions to All Versions [Release All Releases]
Sun Storage 2540-M2 Array - Version Not Applicable to Not Applicable [Release N/A]
Sun Storage 2530-M2 Array - Version Not Applicable to Not Applicable [Release N/A]
Information in this document applies to any platform.

Goal

Oracle 2500, 2500-M2, and 6000 Storage Arrays provide a feature called dynamic LUN expansion, which allows you to grow an existing volume on the fly without affecting existing data or I/O.

Dynamic LUN expansion increases the capacity of the physical storage. You must then make Solaris aware that the device has grown, and if a file system resides on the device, it must also be grown. If the device is used as a raw device (for example, by a database), refer to the application's documentation for how to take advantage of the increased capacity.

This document guides you through dynamic LUN expansion and the steps in Solaris to increase the size of the device and file system for three common scenarios:

  • simple Solaris disk device (LUN) with a UFS file system
  • zpool with ZFS file system
  • Solaris Volume Manager metadevice with UFS file system

 

Solution

Scenario 1: Simple Solaris disk device (LUN) with a UFS file system

View the LUN's properties using Sun Storage Common Array Manager (CAM) command line and the format utility.

# /opt/SUNWstkcam/bin/sscs list -a 2540-fc volume dsk-vol
Volume:        dsk-vol
WWN:           60:0A:0B:80:00:2F:BC:67:00:00:1A:E5:52:B2:9E:43
Virtual Disk:  2
Size:          300.000 GB
State:         Mapped
Status:        Online

# format

< . . . . >

99. c7t600A0B80002FBC6700001AE552B29E43d0 <SUN-LCSM100_F-0670 cyl 38398 alt 2 hd 256 sec 64> dsk-vol
/scsi_vhci/ssd@g600a0b80002fbc6700001ae552b29e43

  format> partition
  partition> print

      Part      Tag    Flag     Cylinders         Size            Blocks
        2     backup    wu       0 - 38397      299.98GB    (38398/0/0) 629112832
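
As a quick sanity check, the 299.98GB shown by format follows directly from the reported SMI geometry: cylinders × heads × sectors per track gives the block count, at 512 bytes per block. A minimal sketch using the values from this example:

```shell
# SMI geometry reported by format for the 300 GB LUN
cyl=38398; hd=256; sec=64

# Total blocks in slice 2 = cylinders * heads * sectors per track
blocks=$(( cyl * hd * sec ))
echo "blocks: $blocks"    # matches the 629112832 blocks shown by format

# Capacity in gigabytes (512-byte blocks, 1 GB = 1024^3 bytes here)
awk -v b="$blocks" 'BEGIN { printf "capacity: %.2f GB\n", b * 512 / (1024 ^ 3) }'
```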

Here is the sequence of commands that originally created the UFS file system on this LUN:

# newfs /dev/rdsk/c7t600A0B80002FBC6700001AE552B29E43d0s2
# mkdir /dsk-vol
# mount /dev/dsk/c7t600A0B80002FBC6700001AE552B29E43d0s2 /dsk-vol
# df -k /dsk-vol
   Filesystem            kbytes    used   avail      capacity  Mounted on
   /dev/dsk/c7t600A0B80002FBC6700001AE552B29E43d0s2
                        309794700  262161 306434592  1%        /dsk-vol

The first step is to increase the capacity of the LUN. In this example, 50GB are added to the existing volume on the array. This command will only change the volume capacity on the array. The device and file system on the server remain unchanged.

Refer to the section entitled "Expanding Volume Capacity" in Chapter 4 of the Sun Storage Common Array Manager Array Administration Guide for information on using the browser user interface.

# /opt/SUNWstkcam/bin/sscs modify  -a 2540-fc --extend 50GB volume dsk-vol

# /opt/SUNWstkcam/bin/sscs list -a 2540-fc volume dsk-vol
  Volume: dsk-vol
  WWN:           60:0A:0B:80:00:2F:BC:67:00:00:1A:E5:52:B2:9E:43
  Virtual Disk:  2
  Size:          350.000 GB
  State:         Mapped
  Status:        Online

You can now allocate the newly added volume capacity to the disk device and file system on the server. Use the format utility to add the space to the disk device, and the growfs command to add it to the file system. Minimal downtime is required: the file system must be temporarily unmounted while the disk device is resized. Note that the file system is mounted again when growfs is run.

  • You must set the starting cylinder or sector (documented earlier) back to its original value if it has changed.
  • You cannot change the label type: SMI must remain SMI, and EFI must remain EFI. Anything larger than 2 TB requires an EFI label.
  • Use the $ feature of format to allocate as much space as possible to the slice.

For further details about the handling of dynamic LUN expansion in Solaris, please see <Document 1382180.1> Solaris Does Not Automatically Handle an Increase in LUN Size. If the expanded LUN is presented to the Solaris 11 Operating System, see <Document 1549604.1> How to Increase the Size of a Vdisk and Filesystem on a LDom Guest Domain.

# umount /dsk-vol
# format c7t600A0B80002FBC6700001AE552B29E43d0
     selecting c7t600A0B80002FBC6700001AE552B29E43d0: dsk-vol

  format> type
     AVAILABLE DRIVE TYPES:
     0. Auto configure
     < . . .  >  

  Specify disk type (enter its number)[19]: 0
     c7t600A0B80002FBC6700001AE552B29E43d0: configured with capacity of 349.98GB
     <SUN-LCSM100_F-0670 cyl 44798 alt 2 hd 256 sec 64> selecting c7t600A0B80002FBC6700001AE552B29E43d0

  format> partition
  partition> 2
    Enter partition id tag[backup]: <Enter>
    Enter partition permission flags[wu]: <Enter>
    Enter new starting cyl[0]: 0  (0 was the value prior to the expansion)
    Enter partition size[733970432b, 44798c, 44797e, 358384.00mb, 349.98gb]: $
  partition> label
    Ready to label disk, continue? yes
  partition> quit
  format> q

# mount /dev/dsk/c7t600A0B80002FBC6700001AE552B29E43d0s2 /dsk-vol
# growfs -M /dsk-vol /dev/rdsk/c7t600A0B80002FBC6700001AE552B29E43d0s2
# df -k  /dsk-vol
   Filesystem            kbytes    used   avail      capacity  Mounted on
   /dev/dsk/c7t600A0B80002FBC6700001AE552B29E43d0s2
                        361429635  262161 358069527     1%     /dsk-vol
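
Comparing the df -k totals before and after shows how much usable space the expansion delivered. The gain is a little under the 50GB added on the array, because UFS consumes some of the new space for file system metadata. A small sketch with the kbytes values from this example:

```shell
# df -k "kbytes" totals for /dsk-vol before and after growfs
before=309794700
after=361429635

# Usable space gained, in gigabytes (df -k reports 1 KB units)
awk -v a="$after" -v b="$before" \
    'BEGIN { printf "gained: %.1f GB\n", (a - b) / (1024 ^ 2) }'
```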

The volume on the array has been expanded. Additional space has been given to the Solaris disk device. The UFS file system has been grown to take advantage of the added space. The operation is complete.

 

Scenario 2: zpool with ZFS file system

View the LUN's properties using Sun Storage Common Array Manager (CAM) command line and the format utility.

# /opt/SUNWstkcam/bin/sscs list -a 2540-fc volume zfs-vol
  Volume:        zfs-vol
  WWN:           60:0A:0B:80:00:2F:BC:67:00:00:1A:E3:52:B2:9D:DB
  Virtual Disk:  1
  Size:          100.000 GB
  State:         Mapped
  Status:        Online

# format

< . . . .>

  98. c7t600A0B80002FBC6700001AE352B29DDBd0 <SUN-LCSM100_F-0670 cyl 51198 alt 2 hd 64 sec 64>  zfs-vol
      /scsi_vhci/ssd@g600a0b80002fbc6700001ae352b29ddb

   format> partition
   partition> print

      Part      Tag    Flag     First Sector    Size       Last Sector
        0       usr    wm           256        99.99GB     209698782
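
With an EFI label, format reports sector boundaries instead of cylinders, so the slice size is simply the sector span times 512 bytes. A sketch with the first and last sectors shown above:

```shell
# Slice 0 boundaries reported by format (EFI label, 512-byte sectors)
first=256
last=209698782

sectors=$(( last - first + 1 ))
echo "sectors: $sectors"

# Capacity in gigabytes
awk -v s="$sectors" 'BEGIN { printf "capacity: %.2f GB\n", s * 512 / (1024 ^ 3) }'
```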

Here is the sequence of commands that originally created the zpool on this LUN: 

# zpool create zfsvol c7t600A0B80002FBC6700001AE352B29DDBd0
# zpool list zfsvol
    NAME     SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
    zfsvol  99.5G  79.5K  99.5G     0%  ONLINE  -
# df -k /zfsvol
  Filesystem            kbytes    used   avail capacity  Mounted on
  zfsvol               102703104      21 102703029     1%    /zfsvol

The first step is to increase the capacity of the LUN. In this example, 50GB are added to the existing volume on the array. This command will only change the volume capacity on the array. The device and file system on the server remain unchanged.

Refer to the section entitled "Expanding Volume Capacity" in Chapter 4 of the Sun Storage Common Array Manager Array Administration Guide for information on using the browser user interface.

# /opt/SUNWstkcam/bin/sscs modify  -a 2540-fc --extend 50GB volume zfs-vol

# /opt/SUNWstkcam/bin/sscs list -a 2540-fc volume zfs-vol
  Volume:        zfs-vol
  WWN:           60:0A:0B:80:00:2F:BC:67:00:00:1A:E3:52:B2:9D:DB
  Virtual Disk:  1
  Size:          150.000 GB
  State:         Mapped
  Status:        Online

Two procedures are available for expanding a zpool. The simple approach is to set the zpool autoexpand property to on. The alternate method, using format to repartition the disk device, should be used only with an older version of ZFS that does not support autoexpand.

For further details about the handling of dynamic LUN expansion in Solaris, please see <Document 1382180.1> Solaris Does Not Automatically Handle an Increase in LUN Size. If the expanded LUN is presented to the Solaris 11 Operating System, see <Document 1549604.1> How to Increase the Size of a Vdisk and Filesystem on a LDom Guest Domain.

# zpool get  autoexpand zfsvol
    NAME    PROPERTY    VALUE   SOURCE
    zfsvol  autoexpand  off     default
# zpool set autoexpand=on zfsvol
# zpool online -e zfsvol c7t600A0B80002FBC6700001AE352B29DDBd0
# zpool export zfsvol
# zpool import zfsvol
# zpool list zfsvol
    NAME     SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
    zfsvol   150G  97.5K   149G     0%  ONLINE  -


ALTERNATE METHOD:

# NOINUSE_CHECK=1
# export NOINUSE_CHECK
# format -e c7t600A0B80002FBC6700001AE352B29DDBd0
   selecting c7t600A0B80002FBC6700001AE352B29DDBd0: zfs-vol
  format> type
    AVAILABLE DRIVE TYPES:
    0. Auto configure
    1. other
  Specify disk type (enter its number)[1]: 0
    c7t600A0B80002FBC6700001AE352B29DDBd0: configured with capacity of 150.00GB
    <SUN-LCSM100_F-0670-150.00GB>
  format> partition
  partition> 0
    Enter partition id tag[usr]: <Enter>
    Enter partition permission flags[wm]: <Enter>
    Enter new starting Sector[34]: 256  (256 was the value prior to the expansion)
    Enter partition size[314556349b, 314556604e, 153591mb, 149gb, 0tb]: $
  partition> lab
    [0] SMI Label
    [1] EFI Label
  Specify Label type[1]: 1
  Ready to label disk, continue? yes
  partition> quit
  format> quit
# zpool online -e zfsvol c7t600A0B80002FBC6700001AE352B29DDBd0
# zpool list zfsvol
   NAME     SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
   zfsvol   150G   147K   149G     0%  ONLINE  -
# df -k /zfsvol
   Filesystem            kbytes    used   avail capacity  Mounted on
   zfsvol               154312704      21 154312629     1%    /zfsvol

 The volume on the array has been expanded. Additional space has been given to the zpool. The operation is complete.

 

 

Scenario 3: Solaris Volume Manager (SVM) metadevice with UFS file system

In the example below, the SVM volume is grown by expanding the underlying physical LUN.

With SVM there is an alternative: grow the SVM volume by adding a new LUN to it. For an example, refer to <Document 2018654.1> Solaris Volume Manager (SVM) How to 'growfs' an UFS on Top of a SVM mirror/concat/raid5/soft partition by Adding a New LUN.

View the LUN's properties using Sun Storage Common Array Manager (CAM) command line and the format utility. 

# /opt/SUNWstkcam/bin/sscs list -a 2540-fc volume  svm-vol
  Volume: svm-vol
  WWN:                             60:0A:0B:80:00:2F:BC:5D:00:00:1A:C9:52:B3:29:4C
  Virtual Disk:                    2
  Size:                            200.000 GB
  State:                           Mapped
  Status:                          Online

# format

< . . . . >

  97. c7t600A0B80002FBC5D00001AC952B3294Cd0 <SUN-LCSM100_F-0670 cyl 51198 alt 2 hd 128 sec 64>  svm-vol
      /scsi_vhci/ssd@g600a0b80002fbc5d00001ac952b3294c

   format> partition
   partition> print
      Part      Tag    Flag     Cylinders         Size            Blocks
        2     backup    wu       0 - 51197      199.99GB    (51198/0/0) 419414016

 Here is the sequence of commands that originally created the metadevice and UFS file system on this LUN:

# metainit d100 1 1 c7t600A0B80002FBC5D00001AC952B3294Cd0s2
# newfs /dev/md/rdsk/d100
# mkdir /svm-vol
# mount /dev/md/dsk/d100 /svm-vol
# df -k /svm-vol
  Filesystem           kbytes     used   avail     capacity  Mounted on
  /dev/md/dsk/d100     206532277  204809 204262146 1%        /svm-vol

The first step is to increase the capacity of the LUN. In this example, 50GB are added to the existing volume on the array. This command will only change the volume capacity on the array. The device and file system on the server remain unchanged.

Refer to the section entitled "Expanding Volume Capacity" in Chapter 4 of the Sun Storage Common Array Manager Array Administration Guide for information on using the browser user interface.

# /opt/SUNWstkcam/bin/sscs modify  -a 2540-fc --extend 50GB volume svm-vol

# /opt/SUNWstkcam/bin/sscs list -a 2540-fc volume  svm-vol
  Volume:        svm-vol
  WWN:           60:0A:0B:80:00:2F:BC:5D:00:00:1A:C9:52:B3:29:4C
  Virtual Disk:  2
  Size:          250.000 GB
  State:         Mapped
  Status:        Online

You can now allocate the newly added volume capacity to the disk device and file system on the server. Use the format utility to add the space to the disk device, and the growfs command to add it to the file system. Minimal downtime is required: the file system must be temporarily unmounted while the disk device is resized. Note that the file system is mounted again when growfs is run.

Because a Solaris Volume Manager metadevice is involved, you must also delete and recreate the metadevice so that it picks up the new size. Doing so has no effect on the data.

    

Refer to a similar example in <Document 2018655.1> Solaris Volume Manager (SVM) How to 'growfs' an UFS on Top of a SVM mirror/concat/soft partition by Expanding an Existing LUN.
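
The SVM procedure below can be summarized as a dry-run sketch that only prints the commands it would run, using the metadevice, mount point, and device names from this example. Treat it as an outline to review against your own configuration, not a script to execute as-is:

```shell
# Dry-run outline of the SVM expansion steps; nothing here modifies the system.
# d100, /svm-vol, and the c7t... device are the example names from this document.
md=d100
mnt=/svm-vol
disk=c7t600A0B80002FBC5D00001AC952B3294Cd0
slice=/dev/dsk/${disk}s2

plan() {
  echo "metastat -p $md          # record the current layout first"
  echo "umount $mnt"
  echo "metaclear $md            # removes the metadevice, not the data"
  echo "format $disk             # type -> 0 (Auto configure), repartition, label"
  echo "metainit $md 1 1 $slice  # recreate with the same layout"
  echo "mount /dev/md/dsk/$md $mnt"
  echo "growfs -M $mnt /dev/md/rdsk/$md"
}
plan
```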

         

  • You must set the starting cylinder or sector (documented earlier) back to its original value if it has changed.
  • You cannot change the label type: SMI must remain SMI, and EFI must remain EFI. Anything larger than 2 TB requires an EFI label.
  • Use the $ feature of format to allocate as much space as possible to the slice.

 

For further details about the handling of dynamic LUN expansion in Solaris, please see <Document 1382180.1> Solaris Does Not Automatically Handle an Increase in LUN Size. If the expanded LUN is presented to the Solaris 11 Operating System, see <Document 1549604.1> How to Increase the Size of a Vdisk and Filesystem on a LDom Guest Domain.

Document metadevice information and metaclear it:

# metastat -p d100
    d100 1 1 /dev/dsk/c7t600A0B80002FBC5D00001AC952B3294Cd0s2
# umount /svm-vol
# metaclear d100
    d100: Concat/Stripe is cleared

Expand the Solaris device:

# format c7t600A0B80002FBC5D00001AC952B3294Cd0
     selecting c7t600A0B80002FBC5D00001AC952B3294Cd0: svm-vol
   format> type
     AVAILABLE DRIVE TYPES:
     0. Auto configure
     < . . . . >
   Specify disk type (enter its number)[19]: 0
     c7t600A0B80002FBC5D00001AC952B3294Cd0: configured with capacity of 249.99GB
     <SUN-LCSM100_F-0670 cyl 63998 alt 2 hd 128 sec 64>
   format> partition
   partition> 2
       Enter partition id tag[backup]: <Enter>
       Enter partition permission flags[wu]: <Enter>
       Enter new starting cyl[0]:  (0 was the value prior to the expansion)
       Enter partition size[524271616b, 63998c, 63997e, 255992.00mb, 249.99gb]: $
    partition> label
    Ready to label disk, continue? yes
    partition> quit
    format> quit

Recreate the metadevice:

# metainit d100 1 1 /dev/dsk/c7t600A0B80002FBC5D00001AC952B3294Cd0s2
    d100: Concat/Stripe is setup

Mount and grow the file system:

# mount /dev/md/dsk/d100 /svm-vol
# growfs -M /svm-vol /dev/md/rdsk/d100
# metastat d100
    d100: Concat/Stripe
    Size: 524271616 blocks (249 GB)
    Stripe 0:
    Device                                             Start Block  Dbase   Reloc
    /dev/dsk/c7t600A0B80002FBC5D00001AC952B3294Cd0s2          0     No      Yes
# df -k /svm-vol
    Filesystem            kbytes    used   avail     capacity  Mounted on
    /dev/md/dsk/d100     258167212  256009 255845881    1%     /svm-vol
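
Note that metastat truncates the size it prints: the "249 GB" above corresponds to a block count of just under 250 GB, consistent with the 249.99GB capacity configured in format. A quick check with the value from the metastat output:

```shell
# Size reported by metastat for d100, in 512-byte blocks
blocks=524271616

awk -v b="$blocks" 'BEGIN { printf "capacity: %.2f GB\n", b * 512 / (1024 ^ 3) }'
```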

The volume on the array has been expanded. Additional space has been given to the Solaris disk device. The metadevice has been recreated to pick up the new size. The UFS file system has been grown to take advantage of the added space. The operation is complete.

References

<NOTE:1549604.1> - How to Increase the Size of a Vdisk and Filesystem on a LDom Guest Domain
<NOTE:1382180.1> - Solaris Does Not Automatically Handle an Increase in LUN Size
<NOTE:2018655.1> - Solaris Volume Manager (SVM) How to 'growfs' an UFS on Top of a SVM mirror/concat/soft partition by Expanding an Existing LUN
<NOTE:2018654.1> - Solaris Volume Manager (SVM) How to 'growfs' an UFS on Top of a SVM mirror/concat/raid5/soft partition by Adding a New LUN

  Copyright © 2018 Oracle, Inc.  All rights reserved.