Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-71-1669607.1
Update Date: 2014-06-11
Keywords:

Solution Type: Technical Instruction

Solution 1669607.1: How to Use the Common Array Manager Host Supportdata to Locate the Chassis and Slot Position of a Failed J4000 Disk Drive


Related Items
  • Sun Storage J4500 Array
  • Sun Storage J4400 Array
  • Sun Storage J4200 Array
Related Categories
  • PLA-Support>Sun Systems>DISK>Arrays>SN-DK: J4xxx JBOD




In this Document
Goal
Solution
References


Created from <SR 3-8894266351>

Applies to:

Sun Storage J4400 Array - Version All Versions and later
Sun Storage J4500 Array - Version All Versions and later
Sun Storage J4200 Array - Version All Versions and later
Information in this document applies to any platform.

Goal

J4000 arrays are typically managed with Common Array Manager and configured with ZFS. The large storage capacity of the J4000, combined with the redundancy (and file system integrity) of ZFS, makes this a popular configuration. Configurations may combine many J4000 arrays into one (or several) ZFS storage pools. With so many disk drives under ZFS control, it can be difficult to determine the J4000 chassis/slot position of a disk drive that has failed in the ZFS storage pool. This document provides a simple one-step procedure to correlate all J4000 disk locations to their respective device names in ZFS.

If the J4000 disks are not under ZFS control, use <Document 1603886.1> How to find the slot number for a disk in a J4000 Series Array with an Oracle Explorer and an Array Support Data.

Document 1679803.1 How to Use Explorer to Collect Supportdata for the Sun Storage 2500, 2500-M2, 6000 and J4000 arrays explains how to collect all data files in one simple step.

Solution

In the following example, four J4400 arrays are attached to a single server. All of the disk drives are configured into a single zpool striped across raidz2 vdevs. A single disk has failed in the pool. Below is an example of the zpool output.

# zpool list
   NAME     SIZE  ALLOC   FREE  CAP    HEALTH  ALTROOT
   sybase  43.5T  19.5T  24.0T  44%  DEGRADED  -

# zpool status
   pool: sybase
   state: DEGRADED
   status: One or more devices are faulted in response to persistent errors.
          Sufficient replicas exist for the pool to continue functioning in a degraded state.
   action: Replace the faulted device, or use 'zpool clear' to mark the device repaired.
   scan: resilvered 1.10T in 5h57m with 0 errors on Wed Apr 16 17:58:24 2014

        NAME                       STATE     READ WRITE CKSUM
        sybase                     DEGRADED     0     0     0
          raidz2-0                 DEGRADED     0     0     0
            c1t5000CCA21EF50882d0  ONLINE       0     0     0
            c1t5000C5001A2F7AA4d0  FAULTED      4    40     0  too many errors
            c1t5000C5001A3EC886d0  ONLINE       0     0     0
            c1t5000C5001A3F2BA6d0  ONLINE       0     0     0
            c1t5000C5001A3F1B76d0  ONLINE       0     0     0
            c1t5000C5001A3F4D67d0  ONLINE       0     0     0
            c1t5000C5001A3F452Dd0  ONLINE       0     0     0
            c1t5000CCA39CF41059d0  ONLINE       0     0     0
          raidz2-1                 ONLINE       0     0     0
            c1t5000C5001A3E8DEEd0  ONLINE       0     0     0
            c1t5000C5001A3F1778d0  ONLINE       0     0     0
            c1t5000C5001A3E9B14d0  ONLINE       0     0     0
            c1t5000CCA39CD37E18d0  ONLINE       0     0     0
            c1t5000CCA396E3D4E6d0  ONLINE       0     0     0
            c1t5000C5001A537129d0  ONLINE       0     0     0
            c1t5000C5001A5360C9d0  ONLINE       0     0     0
            c1t5000CCA39CF413FDd0  ONLINE       0     0     0
          raidz2-2                 ONLINE       0     0     0
            c1t5000C5001A33DD7Dd0  ONLINE       0     0     0
            .................................................
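
With many disk drives in the pool, it can help to filter the status output down to just the faulted device name before going any further. The following is a minimal sketch, assuming the pool name sybase from the example above; the awk test simply keys on the STATE column of the zpool status listing:

# zpool status sybase | awk '$2 == "FAULTED" || $2 == "UNAVAIL" {print $1}'
   c1t5000C5001A2F7AA4d0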

 

The next step will be to locate the disk drive in the J4400 chassis. The problem is that there are four J4400s to search through. Traditionally, you would collect the supportdata for each J4400 (four in total) and review the dataStore.txt file of each to determine the chassis/slot location. See <Document 1002514.1> Collecting Sun Storage Common Array Manager Support Data for Arrays.

An alternative to this approach is to collect a single host supportdata file. See <Document 1021091.1> Collecting Sun Storage Common Array Manager Host Support Data. After you unzip and untar the supportdata, you will see a file called topology.txt. This file contains all the mapping information needed to locate the chassis/slot position of the failed drive. By searching for c1t5000C5001A2F7AA4d0, you will find the following.

FWR_NODE_CONTROLLER "mpt:0"
    FWR_NODE_CHASSIS "SUN Storage J4400" "0826QCK019"
        FWR_NODE_ENCLOSURE "SUN Storage J4400 rev 3R53 fwv 3R53 wwn 50016360002416bd" (expander 0 SIM0)
        .........................
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A33DD7Dd0s2: ATA SEAGATE ST31000N (s/n "9QJ6DH2A"), bay 4
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3E8333d0s2: ATA SEAGATE ST31000N (s/n "9QJ6ENAR"), bay 7
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3E8DEEd0s2: ATA SEAGATE ST31000N (s/n "9QJ6EN6E"), bay 12
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3E997Cd0s2: ATA SEAGATE ST31000N (s/n "9QJ6EZQD"), bay 2
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3E9B14d0s2: ATA SEAGATE ST31000N (s/n "9QJ6EZZP"), bay 14
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3EC886d0s2: ATA SEAGATE ST31000N (s/n "9QJ6F0A7"), bay 22
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3ED7C0d0s2: ATA SEAGATE ST31000N (s/n "9QJ6EZT1"), bay 6
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3F1778d0s2: ATA SEAGATE ST31000N (s/n "9QJ6EPJ1"), bay 13
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3F1B76d0s2: ATA SEAGATE ST31000N (s/n "9QJ6EN5H"), bay 16
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3F20B2d0s2: ATA SEAGATE ST31000N (s/n "9QJ6F0FJ"), bay 0
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3F2805d0s2: ATA SEAGATE ST31000N (s/n "9QJ6EQ5D"), bay 1
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3F2BA6d0s2: ATA SEAGATE ST31000N (s/n "9QJ6EPVE"), bay 23
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3F452Dd0s2: ATA SEAGATE ST31000N (s/n "9QJ6EPCT"), bay 18
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3F4D67d0s2: ATA SEAGATE ST31000N (s/n "9QJ6DX69"), bay 17
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A3F69E3d0s2: ATA SEAGATE ST31000N (s/n "9QJ6DX27"), bay 3
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A2F7AA4d0s2: ATA SEAGATE ST31000N (s/n "9QJ6FRCE"), bay 9
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A5360C9d0s2: ATA SEAGATE ST31000N (s/n "9QJ6L4DC"), bay 10
        FWR_NODE_DISK  /dev/rdsk/c1t5000CCA216D92D15d0s2: ATA HITACHI HUA7210S (s/n "GTF002PAHTBWXF"), bay 21
        FWR_NODE_DISK  /dev/rdsk/c1t5000CCA21EF50882d0s2: ATA HITACHI HUA7210S (s/n "GTA070PBKSNMPE"), bay 20
        FWR_NODE_DISK  /dev/rdsk/c1t5000CCA396E3D4E6d0s2: ATA HITACHI H7210CA3 (s/n "JPW9K0J82JUKDL"), bay 8
        FWR_NODE_DISK  /dev/rdsk/c1t5000CCA39CCDC961d0s2: ATA HITACHI H7210CA3 (s/n "JPW9K0N10ZA5VL"), bay 5
        FWR_NODE_DISK  /dev/rdsk/c1t5000CCA39CD37E18d0s2: ATA HITACHI H7210CA3 (s/n "JPW9K0N11BW9JL"), bay 15
        FWR_NODE_DISK  /dev/rdsk/c1t5000CCA39CF41059d0s2: ATA HITACHI H7210CA3 (s/n "JPW9K0N13PJJBL"), bay 19
        FWR_NODE_DISK  /dev/rdsk/c1t5000CCA39CF413FDd0s2: ATA HITACHI H7210CA3 (s/n "JPW9K0N13PKHEL"), bay 11
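
Rather than paging through topology.txt by eye, the same lookup can be scripted. Below is a minimal sketch, assuming topology.txt has been extracted into the current working directory; the awk program remembers the most recent FWR_NODE_CHASSIS line and prints it together with the matching FWR_NODE_DISK entry:

# awk '/FWR_NODE_CHASSIS/ {chassis=$0} /c1t5000C5001A2F7AA4/ {print chassis; print}' topology.txt
    FWR_NODE_CHASSIS "SUN Storage J4400" "0826QCK019"
        FWR_NODE_DISK  /dev/rdsk/c1t5000C5001A2F7AA4d0s2: ATA SEAGATE ST31000N (s/n "9QJ6FRCE"), bay 9

In this example the failed disk is therefore in the J4400 chassis with serial number 0826QCK019, bay 9.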

 

  • All four J4400 topologies are contained within this file; only one is shown here for simplicity.
  • This is also a convenient way to check firmware revision levels for SIM boards (see the sketch after this list).
  • Physically locate the J4400 chassis by its serial number (0826QCK019), which is etched into the front left-hand mounting bracket of the chassis.
  • Once the drive has been located, proceed with <Document 1309131.1> How to Replace a J4400 Array HDD:ATR:2609:1
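
The same topology.txt file can also be used for the firmware check mentioned in the list above; a minimal sketch that lists the expander/SIM firmware information (the rev/fwv fields) for every attached array:

# grep FWR_NODE_ENCLOSURE topology.txt
        FWR_NODE_ENCLOSURE "SUN Storage J4400 rev 3R53 fwv 3R53 wwn 50016360002416bd" (expander 0 SIM0)
        .........................

Each expander/SIM in each attached array should contribute its own FWR_NODE_ENCLOSURE line, so the firmware of all four J4400s can be reviewed in one pass.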

Attachments
This solution has no attachment