Sun Microsystems, Inc.  Oracle System Handbook - ISO 7.0 May 2018 Internal/Partner Edition

Asset ID: 1-79-1400414.1
Update Date: 2018-05-09

Solution Type: Predictive Self-Healing

Solution  1400414.1 :   JuRoPA-JSC at Juelich Research Centre  


Related Items
  • Sun Datacenter InfiniBand Switch 648
Related Categories
  • PLA-Support>Sun Systems>SAND>Network>SN-SND: Sun Network Infiniband
  • _Old GCS Categories>Sun Microsystems>Operating Systems>Solaris Network




Oracle Confidential (PARTNER). Do not distribute to customers.
Reason: Solaris and Network Domain (SaND) Product Page, internal content

Applies to:

Sun Datacenter InfiniBand Switch 648 - Version Not Applicable and later
Information in this document applies to any platform.

Purpose

This document contains the support details of the JuRoPA-JSC at Juelich Research Centre.

Details

JuRoPA-JSC at Juelich Research Centre


 

 

Background

        Jülich Supercomputing Centre provides supercomputer resources, IT tools, methods and know-how for the Research Centre Jülich and for European users through the John von Neumann Institute for Computing.

http://www.fz-juelich.de/jsc/juropa

 

        Bull and Sun were chosen to build the general-purpose supercomputer together with the JuRoPA-2 consortium partners (Intel, ParTec and FZJ).

        The Sun equipment is housed in 23 SB6048 racks and includes 2208 dual-socket Nehalem EP processor blades, 92 QNEMs, and 6 SDS 648 (M9) InfiniBand switches.

        All the InfiniBand links run at QDR speed.

 

Call Flow

 

        So far, the SRs received on the Network Queue have originated from Bull GmbH, as they are the prime contractor.

 

        e.g.
SR03 71476428 -Sev2- BULL GMBH
SR unassigned for group EM-RSD-SYS-NETWORK
Summary: silver: m9switch.: MTS57_E7_ZE/U1/P30 JuRoPA: Symbol errors (NEW),,,see customer email

 

 

        There may be references to "MTS 3600" or "Mellanox Shark", as some Mellanox Shark MTS3600 InfiniBand switches connect to the Sun SDS 648 (M9) switches.

 

People

 

        Sun Escalation Manager for Jülich: N/A

        Service Delivery Manager:

Rolf Burkhardt (rolf.burkhardt@oracle.com)

        Involved Service Teams:

        - Bull:

service@bull.de

        - ParTec:

support@par-tec.com

        - Sun Onsite:

fzj_onsite@sun.com

 

Equipment

 

        InfiniBand Switches:

 

        There are two racks each containing three M9 (SDS 648) switches, and a further two racks each containing sixteen 36-port MTS 3600 switches. These last two racks are referred to as "virtual" M9s (vM9s) and are not to be confused with the MTS 3600s that act as leaf-switches in front of the sixty Bull compute nodes.

 

        M9-1, M9-3 and M9-5 are in rack A7; M9-2, M9-4 and M9-6 are in rack B7.

        Each M9 line card (LC) provides 24 CXP (12X) connectors, giving a total of 72 4X ports per LC.

 

        M9-1 to M9-4 each have four line cards (0 to 3) with connections to the QNEMs and three line cards (4 to 6) cabled to Mellanox MTS 3600 leaf-switches.

        M9-5 and M9-6 have only four line cards (0 to 3), with connections to the QNEMs.

 

        M9-7 and M9-8 are the vM9s and consist of two racks with sixteen Mellanox MTS 3600 36-port (4X) switches in each.

 

        On each QNEM, ports A12-A14 are looped back to ports B0-B3 via a single 12X cable to provide additional bandwidth between the two switch ASICs inside the QNEM.

 

        The external connections from each QNEM are six 12X ports cabled to the M9s, and two 12X ports cabled to the vM9s using 12X to 4X splitter cables.

 

        Each Sun Blade processor has an internal connection to a QNEM, which provides the switching onto the main InfiniBand network.
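        The port counts above can be sanity-checked with simple arithmetic. The following is a minimal sketch, not an official sizing tool: all constants are taken from the figures quoted in this document (24 CXP connectors per LC, four QNEM-facing LCs on each of the six M9s, 92 QNEMs with six 12X uplinks each), and the one-12X-carries-three-4X conversion is standard InfiniBand link aggregation.

```python
# Back-of-envelope check of the fabric port counts described above.
# Figures come from this document; 12X -> 3 x 4X is standard InfiniBand.

CXP_PER_LC = 24          # 12X CXP connectors per M9 line card
FOURX_PER_12X = 3        # one 12X link carries three 4X links

ports_4x_per_lc = CXP_PER_LC * FOURX_PER_12X
print(ports_4x_per_lc)                 # 72, matching the 72 4X ports per LC

# QNEM-facing capacity: six M9s, four QNEM-facing LCs each
m9_qnem_cxp = 6 * 4 * CXP_PER_LC       # 576 12X connectors on the M9 side

# Demand: 92 QNEMs, each cabling six 12X ports to the M9s
qnem_uplinks_12x = 92 * 6              # 552 12X cables

assert qnem_uplinks_12x <= m9_qnem_cxp
print(m9_qnem_cxp - qnem_uplinks_12x)  # 24 spare 12X connectors
```

        Note the capacity and demand figures are consistent: the 552 QNEM uplinks fit within the 576 QNEM-facing connectors on the six M9s.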

 

Site Details

 


Attachments
This solution has no attachment
  Copyright © 2018 Oracle, Inc.  All rights reserved.
 Feedback