Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-72-2028786.1
Update Date: 2018-05-30
Keywords:

Solution Type: Problem Resolution Sure

Solution  2028786.1 :   Oracle ZFS Storage Appliance: Appliance nodes synchronized to NTP do not maintain the correct time  


Related Items
  • Sun ZFS Storage 7320
  • Oracle ZFS Storage ZS3-2
  • Oracle ZFS Storage ZS3-4
  • Sun ZFS Storage 7420
  • Oracle ZFS Storage ZS4-4
  • Sun ZFS Storage 7120
Related Categories
  • PLA-Support>Sun Systems>DISK>ZFS Storage>SN-DK: 7xxx NAS




In this Document
Symptoms
Cause
Solution
References


Created from <SR 3-10523215530>

Applies to:

Sun ZFS Storage 7120 - Version All Versions to All Versions [Release All Releases]
Sun ZFS Storage 7320 - Version All Versions to All Versions [Release All Releases]
Oracle ZFS Storage ZS3-2 - Version All Versions to All Versions [Release All Releases]
Oracle ZFS Storage ZS4-4 - Version All Versions to All Versions [Release All Releases]
Oracle ZFS Storage ZS3-4 - Version All Versions to All Versions [Release All Releases]
7000 Appliance OS (Fishworks)

Symptoms

The customer reports that the appliance does not maintain the correct time.

The offset column in the ntpq output below is given in milliseconds; an offset of roughly -183113 ms means the time is off by just over 3 minutes from all four NTP servers.

Normally the reach value would be 377, the octal representation of binary 11111111: an eight-bit shift register in which a bit is set each time the appliance node successfully reaches the NTP server. A reach of 1, as seen below, means only the most recent poll succeeded.
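As a rough illustration (not part of the original note), the octal reach value can be decoded into the last eight poll results like this:

```python
# Decode an ntpq "reach" value (printed in octal) into the last eight
# poll results. ntpd shifts the register left on each poll and ORs in 1
# on success, so the least significant bit is the most recent poll.

def decode_reach(reach_octal: str) -> list:
    """Return the last eight poll results, oldest first (True = reached)."""
    value = int(reach_octal, 8)
    return [bool((value >> i) & 1) for i in range(7, -1, -1)]

print(decode_reach("377"))  # healthy: all eight recent polls succeeded
print(decode_reach("1"))    # only the most recent poll succeeded
```

A healthy, long-synchronized association shows 377; lower values indicate recent poll failures or a recently restarted association.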

Note that the ntpq command can be run from any Unix/Linux host and pointed at the appliance, so the customer can run it themselves.


# ntpq -p 10.145.229.130
    remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================

time-server1.qu .GPS.            1 u   60   64    1    0.412  -183113   0.000
time-server2.qu .GPS.            1 u   58   64    1    0.368  -183113   0.000
time-server3.qu .GPS.            1 u   58   64    1    0.451  -183113   0.000
time-server4.qu .GPS.            1 u   60   64    1   13.172  -183113   0.000


# ntpq 10.145.229.130
ntpq> associations

ind assid status  conf reach auth condition  last_event cnt
===========================================================
 1 51374  9014   yes   yes  none    reject   reachable  1
 2 51375  9014   yes   yes  none    reject   reachable  1
 3 51376  9014   yes   yes  none    reject   reachable  1
 4 51377  9014   yes   yes  none    reject   reachable  1


ntpq> rl 51374
associd=51374 status=9014 conf, reach, sel_reject, 1 event, reachable,   # sel_reject set
srcadr=time-server1.xxxxxxxxx.com, srcport=123, dstadr=10.43.153.238,
dstport=123, leap=00, stratum=1, precision=-19, rootdelay=0.000,
rootdisp=0.259, refid=GPS,
reftime=d8ee2883.2750f220  Fri, May  1 2015 16:29:55.153,
rec=d8ee293b.573a10dc  Fri, May  1 2015 16:32:59.340, reach=001,
unreach=78402, hmode=3, pmode=4, hpoll=7, ppoll=6, headway=2269764,
flash=400 peer_dist, keyid=0, offset=-183087.042, delay=0.389,              # reason is flash=400 peer_dist (peer distance)
dispersion=7937.501, jitter=0.000, xleave=0.034,
@ filtdelay=     0.39    0.00    0.00    0.00    0.00    0.00    0.00    0.00,
filtoffset= -183087    0.00    0.00    0.00    0.00    0.00    0.00    0.00,
@ filtdisp=      0.00 16000.0 16000.0 16000.0 16000.0 16000.0 16000.0 16000.0
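For context (an illustrative sketch, not part of the original note): flash=400 is ntpd's TEST11 (peer_dist), which rejects a peer whose synchronization distance exceeds MAXDIST (1.5 seconds by default). A simplified check using the dominant terms from the `rl` output above shows why this peer is rejected:

```python
# Simplified TEST11 (peer_dist) check, for illustration only.
# ntpd's full root-distance formula also includes clock-drift and jitter
# terms; the dominant terms here are delay and dispersion.

MAXDIST_MS = 1500.0  # ntpd default MAXDIST is 1.5 seconds

def synch_distance_ms(delay_ms, dispersion_ms, rootdelay_ms=0.0, rootdisp_ms=0.0):
    """Approximate synchronization distance in milliseconds."""
    return (delay_ms + rootdelay_ms) / 2.0 + dispersion_ms + rootdisp_ms

# Values taken from the ntpq "rl 51374" output above.
dist = synch_distance_ms(delay_ms=0.389, dispersion_ms=7937.501,
                         rootdelay_ms=0.000, rootdisp_ms=0.259)
print(f"distance = {dist:.1f} ms, exceeds MAXDIST: {dist > MAXDIST_MS}")
```

The huge dispersion (7937.5 ms) alone puts the peer far beyond the 1500 ms limit, so ntpd flags peer_dist and keeps the association in the reject state.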

 

Check the routing. Are there multiple default routes?

Run netstat -rn on the appliance, or check netstat-rn.out in the support bundle:

Routing Table: IPv4
   Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              10.43.212.1          UG       51     695157 vnic1
default              10.43.216.1          UG       33    1508804 igb0

 


Cause

The cause is documented in BUG 20683411 - ntpd and multiple default route constantly resets state and never sets sys.peer

 

Solution

The following is a workaround until the bug is fixed in the appliance code.

 

Set a static /32 (host) route to the NTP server, preferably through the BUI.
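The route can also be added from the appliance CLI. A hedged sketch under the `configuration net routing` context, using hypothetical values (an NTP server at 10.50.0.10 reached via gateway 10.43.216.1 on igb0 - substitute the customer's actual addresses and interface):

```
hostname:> configuration net routing
hostname:configuration net routing> create
hostname:configuration net routing route (uncommitted)> set family=IPv4
hostname:configuration net routing route (uncommitted)> set destination=10.50.0.10
hostname:configuration net routing route (uncommitted)> set mask=32
hostname:configuration net routing route (uncommitted)> set gateway=10.43.216.1
hostname:configuration net routing route (uncommitted)> set interface=igb0
hostname:configuration net routing route (uncommitted)> commit
```

With a host route in place, NTP traffic always leaves through one deterministic interface, avoiding the state resets caused by the two competing default routes.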

 

 

***Checked for relevance on 30-MAY-2018***

References

<NOTE:1961049.1> - Oracle ZFS Storage Appliance: The Time on the ZFS Storage Appliance Fluctuates from the NTP Server
<BUG:20683411> - NTPD AND MULTIPLE DEFAULT ROUTE CONSTANTLY RESETS STATE AND NEVER SETS SYS.PEER

Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.