A device within a ZFS pool experienced too many I/O errors. Run 'zpool status -x' to determine exactly which device failed and why:
# zpool status -x
  pool: test
 state: DEGRADED
status: The number of I/O errors associated with a ZFS device exceeded
        acceptable levels.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-FD
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        test        DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            c0t0d0  ONLINE       0     0     0
            c0t0d1  FAULTED    103     0     0  too many errors

errors: No known data errors
If the pool has an available hot spare, ZFS will already have substituted it for the faulted device automatically.
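For illustration, the config section of 'zpool status' for a pool with a spare in use looks similar to the following; the spare device c0t0d3 here is hypothetical:

# zpool status test
        NAME          STATE     READ WRITE CKSUM
        test          DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c0t0d0    ONLINE       0     0     0
            spare     DEGRADED     0     0     0
              c0t0d1  FAULTED    103     0     0  too many errors
              c0t0d3  ONLINE       0     0     0
        spares
          c0t0d3      INUSE     currently in use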
Before taking further action, make sure the device that ZFS has marked as "FAULTED" is indeed no longer visible to Solaris (for example, format and cfgadm cannot probe the device).
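For example, the following checks (using the faulted device from the output above) should return no entry if the device is truly inaccessible:

# echo | format | grep c0t0d1
# cfgadm -al | grep c0t0d1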
If the device turns out to be visible to Solaris, run 'zpool clear' to clear the errors and the associated status. Solaris 11 and later also allow 'zpool clear -f' to clear the associated FMA faults. Refer to the zpool(1M) man page for more details.
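For example, to clear the errors on the pool and device shown above:

# zpool clear test c0t0d1
# zpool clear -f test c0t0d1    (Solaris 11 and later; also clears FMA faults)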
If the errors persist even after running 'zpool clear', the device may be diagnosed as faulty. In that case, proceed to replace the device as described below, or contact your service provider.
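Any outstanding FMA diagnosis against the device can be reviewed with 'fmadm faulty'; the exact output varies by release:

# fmadm faulty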
Note that in virtualized environments (LDOMs, or Solaris running under Oracle VM), the faulted device may not map to a physical disk, and additional diagnostics may be needed to determine whether the issues with the device are temporary.
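For example, for an LDOM guest the virtual disk back ends can be inspected from the control domain; the domain name ldg1 below is illustrative:

# ldm list -o disk ldg1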
If the device does map to a physical device, then to repair the pool, replace the physical device in the system and issue a 'zpool replace' command:
# zpool replace test c0t0d1
To replace the device with a different device, specify the replacement device as the second argument to 'zpool replace':
# zpool replace test c0t0d1 c0t0d2
This begins resilvering data to the new device. Use 'zpool status' to monitor resilvering progress. When resilvering completes, any in-use hot spares are detached and the pool returns to a healthy state.
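While the resilver runs, 'zpool status' reports progress similar to the following (the figures shown are illustrative):

# zpool status test
 scrub: resilver in progress for 0h2m, 35.21% done, 0h3m to go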