Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-71-2389345.1
Update Date: 2018-04-20
Keywords:

Solution Type: Technical Instruction

Solution 2389345.1 : ODA : When I Finish With the Replacement of One Particular Disk, How Long Should I Wait Before Replacing the Next One?


Related Items
  • Oracle Database Appliance
Related Categories
  • PLA-Support>Eng Systems>Exadata/ODA/SSC>Oracle Database Appliance>DB: ODA_EST




In this Document
Goal
Solution
 Example about replacing several disks in ODA
 Community Discussions ODA


Created from <SR 3-17204679512>

Applies to:

Oracle Database Appliance - Version All Versions to All Versions [Release All Releases]
Information in this document applies to any platform.

Goal

When I finish with the replacement of one particular disk, how long should I wait before replacing the next one?

Do I have to wait until all rebalance operations finish?

Solution

See the following example of replacing several disks; it shows how long to wait between each disk:

Example about replacing several disks in ODA

1) We need to replace the following disks, in this order:

a) pd_02 /dev/sdn  HDD ONLINE (faulty)
b) pd_03 /dev/sdaj HDD ONLINE (faulty)
c) pd_06 /dev/sdaa HDD ONLINE (faulty)
d) pd_08 /dev/sdan HDD ONLINE (faulty)
e) pd_12 /dev/sdao HDD ONLINE (faulty)
f) pd_18 /dev/sdad HDD ONLINE (faulty)
g) pd_20 /dev/sdaq SSD ONLINE (faulty)
h) pd_23 /dev/sdag SSD ONLINE (faulty)

Note: The above disks are just hypothetical disks and do not represent your real disks.
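To build a list like the one above, the faulty disks can be filtered out of the `oakcli show disk` output. The sketch below runs against a captured sample string so it is self-contained; the sample line layout is hypothetical and may differ by ODA version. On a live system you would pipe the real command instead.

```shell
# Hypothetical 'oakcli show disk' output, captured for illustration only.
sample_output='pd_01 /dev/sdm  HDD ONLINE Good
pd_02 /dev/sdn  HDD ONLINE (faulty)
pd_20 /dev/sdaq SSD ONLINE (faulty)'

# Keep only the lines flagged as faulty. On a real ODA, replace the
# printf with:  oakcli show disk | grep -i 'faulty'
printf '%s\n' "$sample_output" | grep -i 'faulty'
```

This prints only the `pd_02` and `pd_20` lines from the sample, giving the replacement worklist.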

2) The disks can be replaced as follows (one at a time):

A) pd_02 affected disk:

A.1) Identify the affected disk as follows:

# oakcli locate disk pd_02 on ### orange led

A.2) Replace the disk:

# oakcli locate disk pd_02 off ### green led

A.3) Wait a few minutes (1 to 3) and check the new disk status:

# oakcli show disk

A.4) Check that the two new disk partitions (p1 and p2) were created:

# oakcli stordiag pd_02 | grep -i mapper

A.5) Then check that those two partitions were added back to the RECO and DATA disk groups, respectively:

SQL> select path from v$asm_disk where path like '/dev/mapper/<partitions from step A.4>%p1';

SQL> select path from v$asm_disk where path like '/dev/mapper/<partitions from step A.4>%p2';

A.6) Set the rebalance power to 32 to expedite the operation:

SQL> alter diskgroup DATA rebalance power 32;

SQL> alter diskgroup RECO rebalance power 32;

A.7) Wait until the rebalance operation completes before continuing with the next disk:

SQL> select * from gv$asm_operation;
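The wait in step A.7 can be scripted as a polling loop that repeats until `gv$asm_operation` returns no rows. The sketch below simulates the check with a stub function so it runs standalone; on a live ODA you would replace the stub body with a real query, for example `sqlplus -s / as sysasm` fed the `select` above (the exact sqlplus invocation is an assumption; adjust it for your environment).

```shell
attempts=0

# Stub: simulates the rebalance finishing after the third poll.
# On a real system, return success only when gv$asm_operation is empty.
check_rebalance() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

until check_rebalance; do
  echo "Rebalance still running, waiting..."
  sleep 1   # use a longer interval (e.g. 60s) on a real system
done
echo "Rebalance complete; safe to replace the next disk."
```

The same loop applies unchanged to steps B.7 through H.7.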

 

B) pd_03 affected disk:

B.1) Identify the affected disk as follows:

# oakcli locate disk pd_03 on ### orange led

B.2) Replace the disk:

# oakcli locate disk pd_03 off ### green led

B.3) Wait a few minutes (1 to 3) and check the new disk status:

# oakcli show disk

B.4) Check that the two new disk partitions (p1 and p2) were created:

# oakcli stordiag pd_03 | grep -i mapper

B.5) Then check that those two partitions were added back to the RECO and DATA disk groups, respectively:

SQL> select path from v$asm_disk where path like '/dev/mapper/<partitions from step B.4>%p1';

SQL> select path from v$asm_disk where path like '/dev/mapper/<partitions from step B.4>%p2';

B.6) Set the rebalance power to 32 to expedite the operation:

SQL> alter diskgroup DATA rebalance power 32;

SQL> alter diskgroup RECO rebalance power 32;

B.7) Wait until the rebalance operation completes before continuing with the next disk:

SQL> select * from gv$asm_operation;

 

C) pd_06 affected disk:

C.1) Identify the affected disk as follows:

# oakcli locate disk pd_06 on ### orange led

C.2) Replace the disk:

# oakcli locate disk pd_06 off ### green led

C.3) Wait a few minutes (1 to 3) and check the new disk status:

# oakcli show disk

C.4) Check that the two new disk partitions (p1 and p2) were created:

# oakcli stordiag pd_06 | grep -i mapper

C.5) Then check that those two partitions were added back to the RECO and DATA disk groups, respectively:

SQL> select path from v$asm_disk where path like '/dev/mapper/<partitions from step C.4>%p1';

SQL> select path from v$asm_disk where path like '/dev/mapper/<partitions from step C.4>%p2';

C.6) Set the rebalance power to 32 to expedite the operation:

SQL> alter diskgroup DATA rebalance power 32;

SQL> alter diskgroup RECO rebalance power 32;

C.7) Wait until the rebalance operation completes before continuing with the next disk:

SQL> select * from gv$asm_operation;

 

D) pd_08 affected disk:

D.1) Identify the affected disk as follows:

# oakcli locate disk pd_08 on ### orange led

D.2) Replace the disk:

# oakcli locate disk pd_08 off ### green led

D.3) Wait a few minutes (1 to 3) and check the new disk status:

# oakcli show disk

D.4) Check that the two new disk partitions (p1 and p2) were created:

# oakcli stordiag pd_08 | grep -i mapper

D.5) Then check that those two partitions were added back to the RECO and DATA disk groups, respectively:

SQL> select path from v$asm_disk where path like '/dev/mapper/<partitions from step D.4>%p1';

SQL> select path from v$asm_disk where path like '/dev/mapper/<partitions from step D.4>%p2';

D.6) Set the rebalance power to 32 to expedite the operation:

SQL> alter diskgroup DATA rebalance power 32;

SQL> alter diskgroup RECO rebalance power 32;

D.7) Wait until the rebalance operation completes before continuing with the next disk:

SQL> select * from gv$asm_operation;

 

E) pd_12 affected disk:

E.1) Identify the affected disk as follows:

# oakcli locate disk pd_12 on ### orange led

E.2) Replace the disk:

# oakcli locate disk pd_12 off ### green led

E.3) Wait a few minutes (1 to 3) and check the new disk status:

# oakcli show disk

E.4) Check that the two new disk partitions (p1 and p2) were created:

# oakcli stordiag pd_12 | grep -i mapper

E.5) Then check that those two partitions were added back to the RECO and DATA disk groups, respectively:

SQL> select path from v$asm_disk where path like '/dev/mapper/<partitions from step E.4>%p1';

SQL> select path from v$asm_disk where path like '/dev/mapper/<partitions from step E.4>%p2';

E.6) Set the rebalance power to 32 to expedite the operation:

SQL> alter diskgroup DATA rebalance power 32;

SQL> alter diskgroup RECO rebalance power 32;

E.7) Wait until the rebalance operation completes before continuing with the next disk:

SQL> select * from gv$asm_operation;

 

F) pd_18 affected disk:

F.1) Identify the affected disk as follows:

# oakcli locate disk pd_18 on ### orange led

F.2) Replace the disk:

# oakcli locate disk pd_18 off ### green led

F.3) Wait a few minutes (1 to 3) and check the new disk status:

# oakcli show disk

F.4) Check that the two new disk partitions (p1 and p2) were created:

# oakcli stordiag pd_18 | grep -i mapper

F.5) Then check that those two partitions were added back to the RECO and DATA disk groups, respectively:

SQL> select path from v$asm_disk where path like '/dev/mapper/<partitions from step F.4>%p1';

SQL> select path from v$asm_disk where path like '/dev/mapper/<partitions from step F.4>%p2';

F.6) Set the rebalance power to 32 to expedite the operation:

SQL> alter diskgroup DATA rebalance power 32;

SQL> alter diskgroup RECO rebalance power 32;

F.7) Wait until the rebalance operation completes before continuing with the next disk:

SQL> select * from gv$asm_operation;

 

G) pd_20 affected disk:

G.1) Identify the affected disk as follows:

# oakcli locate disk pd_20 on ### orange led

G.2) Replace the disk:

# oakcli locate disk pd_20 off ### green led

G.3) Wait a few minutes (1 to 3) and check the new disk status:

# oakcli show disk

G.4) Check that the new disk partition (p1) was created:

# oakcli stordiag pd_20 | grep -i mapper

G.5) Then check that the partition was added back to the REDO disk group:

SQL> select path from v$asm_disk where path like '/dev/mapper/<partition from step G.4>%p1';

G.6) Set the rebalance power to 32 to expedite the operation:

SQL> alter diskgroup REDO rebalance power 32;

G.7) Wait until the rebalance operation completes before continuing with the next disk:

SQL> select * from gv$asm_operation;

 

H) pd_23 affected disk:

H.1) Identify the affected disk as follows:

# oakcli locate disk pd_23 on ### orange led

H.2) Replace the disk:

# oakcli locate disk pd_23 off ### green led

H.3) Wait a few minutes (1 to 3) and check the new disk status:

# oakcli show disk

H.4) Check that the new disk partition (p1) was created:

# oakcli stordiag pd_23 | grep -i mapper

H.5) Then check that the partition was added back to the REDO disk group:

SQL> select path from v$asm_disk where path like '/dev/mapper/<partition from step H.4>%p1';

H.6) Set the rebalance power to 32 to expedite the operation:

SQL> alter diskgroup REDO rebalance power 32;

H.7) Wait until the rebalance operation completes:

SQL> select * from gv$asm_operation; 
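Because steps A through H repeat the same sequence, a small driver can print the per-disk checklist in replacement order, so nothing is skipped or done out of order. The sketch below only prints the checklist; the disk names are the hypothetical ones from step 1, and the ASM verification in lines 6 and 7 still happens in SQL*Plus as shown in the steps above.

```shell
# Print the replacement checklist for each disk, in order.
# Substitute the pd_* names with your real faulty disks.
for disk in pd_02 pd_03 pd_06 pd_08 pd_12 pd_18 pd_20 pd_23; do
  echo "== $disk =="
  echo "1. oakcli locate disk $disk on    # orange LED"
  echo "2. physically replace the disk"
  echo "3. oakcli locate disk $disk off   # green LED"
  echo "4. oakcli show disk               # wait 1-3 minutes first"
  echo "5. oakcli stordiag $disk | grep -i mapper"
  echo "6. verify the new partitions in v\$asm_disk, set rebalance power to 32"
  echo "7. wait until gv\$asm_operation returns no rows"
done
```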


Community Discussions ODA

Still have questions? Search the Oracle Database Appliance community for similar discussions, or start a new discussion on this subject.


Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.