Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-71-2334716.1
Update Date:2017-12-08
Keywords:

Solution Type  Technical Instruction

Solution  2334716.1 :   DSR SDS Provisioning Only Reaching DP of One DSR and Not DP of the Other DSR


Related Items
  • Oracle Communications Diameter Signaling Router (DSR)
Related Categories
  • PLA-Support>Sun Systems>CommsGBU>Global Signaling Solutions>SN-SND: Tekelec DSR




In this Document
Goal
Solution
 Workaround:
References


Created from <SR 3-16188738301>

Applies to:

Oracle Communications Diameter Signaling Router (DSR) - Version DSR 5.0 to DSR 7.4.0 [Release DSR 5.0 to DSR 7.0]
Tekelec

Goal

IMSIs provisioned in SDS are propagating to the DP of only one DSR. Calls for a provisioned IMSI fail when the other DSR is used, because the FABR lookup fails.

Querying the IMSI confirms that it is present in the DP of only one DSR.

Solution

The replication link between the NOAM and both SOAMs is down:

***************************************************************************
* cm.timelimit 60 soapstat -w
***************************************************************************
  nodeId          Soap State      dir   dSeq  dTime  updTime  info
  sds-a           Active          To       0   0.00  18:10:42
  sds-a           Standby         From     0   0.00  18:10:33
  sds-qs          Active          To       0   0.00  18:10:40
  sdsSO-dsr3-b    Active          To       0   0.00  18:10:45
  sdsSO-a         Active          To       0   0.00  18:10:37
  so-a            DownConnecting  To       0   0.00  18:10:45
  so-b            DownConnecting  To       0   0.00  18:10:45

***************************************************************************
* cm.timelimit 60 irepstat -w
***************************************************************************
-- Policy 0 ActStb [DbReplication]
AA To   sds-a     Active              0   0.00  1%R  3%S  0.04%cpu  33B/s
AA To   sds-qs    Active              0   0.00  1%R  3%S  0.05%cpu  33B/s
AB To   sdsSO-a   Active              0   0.00  1%R  3%S  0.06%cpu  49B/s
-- To   so-a      DownConnecting      0   0.00
-- To   so-b      DownConnecting      0   0.00

SO-a was rebooted on 2017-10-13 12:11:40. After the reboot, prod.dbup aborted with the following error, and SO-a has been down since then:

==== 2017-10-13 12:19:27 ====
               ...prod.dbup  (RUNID=00)...
               ...getting current state...
Current state:  DbDown  (database on disk but not loaded)

************** !!!!!!!!!!!!!!!!!!! *******************
***
*** prod.dbup ABORTING: bad system date.
       Last started: 2014-09-15 06:41:48 UTC
       Current:      2017-10-13 19:19:27 UTC
***    NOTE: manual recovery may be required
***   + A potentially problematic date change such as time going backwards
***     or time going far into the future was detected.
***
************** !!!!!!!!!!!!!!!!!!! *******************

SO-b was rebooted on 2017-11-07 22:14:00. After the reboot, prod.dbup also aborted with the same error, and SO-b has been down since then:

==== 2017-11-07 22:48:54 ====
               ...prod.dbup  (RUNID=00)...
               ...getting current state...
Current state:  DbDown  (database on disk but not loaded)

************** !!!!!!!!!!!!!!!!!!! *******************
***
*** prod.dbup ABORTING: bad system date.
       Last started: 2014-09-15 08:23:13 UTC
       Current:      2017-11-08 06:48:54 UTC
***    NOTE: manual recovery may be required
***   + A potentially problematic date change such as time going backwards
***     or time going far into the future was detected.
***
************** !!!!!!!!!!!!!!!!!!! *******************

Every time prod.dbup runs, Comcol saves a timestamp of when it was done. On the next prod.dbup, Comcol performs two checks:

1.       The previously stored timestamp must not be greater than the current timestamp. (If it were, the system time would be invalid.)
2.       The previously stored timestamp must not be more than three years old; otherwise, Comcol aborts prod.dbup with the same error.

In the prod.dbup output above, the "Last started" date is more than three years before the current date, which is why the SOAMs never start.
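The two checks can be sketched with the timestamps from the SO-a log. This is an illustration only: the real check is internal to Comcol, and the exact three-year constant used here is an assumption based on the behavior described in this note.

```shell
# Illustrative sketch of the two startup date checks, using the
# timestamps from the SO-a prod.dbup log above. Not Comcol's actual
# implementation; the three-year constant is an assumption.
last_started=$(date -u -d "2014-09-15 06:41:48" +%s)
current=$(date -u -d "2017-10-13 19:19:27" +%s)
three_years=$((3 * 365 * 24 * 3600))

if [ "$current" -lt "$last_started" ]; then
    # check 1: time went backwards
    echo "ABORTING: bad system date (time went backwards)"
elif [ $((current - last_started)) -gt "$three_years" ]; then
    # check 2: last start is more than three years in the past
    echo "ABORTING: bad system date (last start more than 3 years ago)"
fi
# → "ABORTING: bad system date (last start more than 3 years ago)"
```

For these log timestamps the gap is roughly three years and one month, so the second check fires even though the clock never went backwards.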

This is caused by a bug: if the database has been up for more than three years, the system treats the date as wrong upon reboot.
Bug 27058381 - prod.start returns error when CM system running for more than 3 years is restart
Fixed in DSR version 7.4

When prod.start or prod.dbup is attempted, an error check is performed on the database being loaded. This includes verifying that the database is not more than three years old.

If the system has been running for more than three years, the check returns an error:

************** !!!!!!!!!!!!!!!!!!! *******************
***
*** prod.start ABORTING: bad system date.
Last started: 2014-04-30 01:10:03 UTC
Current:      2017-06-28 09:17:07 UTC
***    NOTE: manual recovery may be required
***   + A potentially problematic date change such as time going backwards
***     or time going far into the future was detected.
***
************** !!!!!!!!!!!!!!!!!!! *******************
Current state:  DbDown  (database on disk but not loaded)

Workaround:

Run the following commands on both SOAMs:

re.timestamp
prod.start
prod.dbup

Afterwards, check that replication is working: so-a and so-b should show Active in the soapstat and irepstat output instead of DownConnecting.
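The recovery check can be scripted by scanning the irepstat output for any rows still in DownConnecting. This is a minimal sketch against a captured output file; the file path and sample lines are illustrative, and on a live system the output of `cm.timelimit 60 irepstat -w` would be piped in instead.

```shell
# Minimal sketch: scan captured irepstat output for replication links
# that are still DownConnecting. File path and sample rows are
# illustrative stand-ins for live `cm.timelimit 60 irepstat -w` output.
cat > /tmp/irepstat.out <<'EOF'
AA To   sds-a     Active              0   0.00  1%R  3%S  0.04%cpu  33B/s
-- To   so-a      Active              0   0.00
-- To   so-b      Active              0   0.00
EOF

if grep -q DownConnecting /tmp/irepstat.out; then
    echo "replication still down"
else
    echo "all replication links up"
fi
# → "all replication links up"
```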

 

References

<BUG:27058381> - PROD.START RETURNS ERROR WHEN CM SYSTEM RUNNING FOR MORE THAN 3 YEARS IS RESTART

  Copyright © 2018 Oracle, Inc.  All rights reserved.