Asset ID: 1-79-1518892.1
Update Date: 2017-06-26
Keywords:
Solution Type: Predictive Self-Healing Sure Solution

1518892.1: Oracle Fabric Interconnect :: VMotion Performance Best Practices
Related Items
- Oracle Fabric Interconnect F1-15
- Oracle Fabric Interconnect F1-4
Related Categories
- PLA-Support>Sun Systems>SAND>Network>SN-SND: Oracle Virtual Networking
In this Document
Applies to:
Oracle Fabric Interconnect F1-15 - Version All Versions to All Versions [Release All Releases]
Oracle Fabric Interconnect F1-4 - Version All Versions to All Versions [Release All Releases]
Information in this document applies to any platform.
Checked for relevance on 06/26/2014
Purpose
To ensure the best VMotion performance on ESX/ESXi servers when using Xsigo vnics or PVI vnics, use the following guidelines.
Details
1) Note whether VMotions are between virtual-to-virtual hosts or virtual-to-physical (non-Xsigo-connected) hosts. If all hosts' VMotions are virtual-to-virtual, make sure to utilize vnic-to-vnic switching where possible.
If VMotions go out over the network, as in the virtual-to-physical case, make sure VMotion traffic passes through as few hops as possible. The more hops VMotion traffic traverses, the higher the latency.
2) When possible, use Jumbo Frames - MTU 9000 on upstream switch ports, Xsigo Ethernet ports, Xsigo vnics, the ESX Server vSwitch that VMotion resides on, and the VMotion vmkernel interface (vmk#).
In ESX / ESXi versions below 4.1 you must set the MTU from the ESX / ESXi CLI. In ESXi 5.0 you can set it directly in the vSphere Client on the vSwitch/dvSwitch and the VMotion vmkernel portgroup.
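For reference, one way to apply MTU 9000 from the ESX/ESXi shell is sketched below. The vSwitch name (vSwitch1), portgroup name (VMotion), IP address, and vmk1 interface are example values only, and on pre-5.0 hosts the vmkernel interface generally has to be removed and re-created to change its MTU:

  # Classic ESX/ESXi (pre-5.0) -- example names and addresses only
  esxcfg-vswitch -m 9000 vSwitch1        # set MTU 9000 on the standard vSwitch
  esxcfg-vmknic -l                       # list vmkernel NICs and their current MTU
  esxcfg-vmknic -d VMotion               # remove the existing VMotion vmkernel NIC
  esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 -m 9000 VMotion   # re-create it with MTU 9000

  # ESXi 5.0 equivalents via esxcli (can also be done in the vSphere Client)
  esxcli network vswitch standard set -v vSwitch1 -m 9000
  esxcli network ip interface set -i vmk1 -m 9000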
3) VMware customers have reported issues when more than one vmkernel interface resides on the same standard vSwitch. For example, having an iSCSI or NFS vmkernel interface plus a VMotion vmkernel interface on the same standard vSwitch can lead to performance issues, connectivity drops, and similar problems. Please see this VMware end-user document for more information:
http://vmtoday.com/2012/02/vsphere-5-networking-bug-2-affects-management-network-connectivity/
Please note that the above issue has not been encountered when using dvSwitches.
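To check how vmkernel interfaces are currently distributed across standard vSwitches, the standard listing commands can be used (exact output format varies by ESX/ESXi version):

  esxcfg-vswitch -l     # shows each vSwitch with its portgroups and uplinks
  esxcfg-vmknic -l      # shows each vmk interface with its portgroup, IP, and MTU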
Additionally, for multi-NIC (vnic) VMotion support, which is only available in ESXi 5.0 and above: if you want to use standard vSwitches, you can put each VMotion PVI vnic or standard VMotion vnic on a separate standard vSwitch, each containing a VMotion vmkernel interface with a unique IP on the same subnet. This configuration does work; a sketch of it follows below.
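As an illustration of that layout (vSwitch names, vmnic/vmk numbers, portgroup names, and IP addresses below are examples only), two standard vSwitches, each uplinked by one VMotion vnic and each carrying its own VMotion vmkernel interface on the same subnet, could be built from the ESXi shell roughly as follows; enabling VMotion on the vmk interfaces can also be done in the vSphere Client:

  esxcfg-vswitch -a vSwitch2                    # create the first VMotion vSwitch
  esxcfg-vswitch -L vmnic4 vSwitch2             # uplink the first VMotion vnic (presented to ESXi as a vmnic)
  esxcfg-vswitch -A VMotion-1 vSwitch2          # add a portgroup for its vmkernel interface
  esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 -m 9000 VMotion-1

  esxcfg-vswitch -a vSwitch3                    # repeat for the second VMotion vSwitch
  esxcfg-vswitch -L vmnic5 vSwitch3
  esxcfg-vswitch -A VMotion-2 vSwitch3
  esxcfg-vmknic -a -i 192.168.20.12 -n 255.255.255.0 -m 9000 VMotion-2

  # Tag each new vmkernel interface for VMotion (assumes they came up as vmk1 and vmk2)
  vim-cmd hostsvc/vmotion/vnic_set vmk1
  vim-cmd hostsvc/vmotion/vnic_set vmk2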
Also note that in ESXi 5.0 you can VMotion across VMware vSphere clusters, but not across VMware vSphere Datacenters; in older versions of ESX/vSphere, you could not VMotion across clusters.
Please also reference this VMware VMotion performance whitepaper:
http://www.vmware.com/files/pdf/vmotion-perf-vsphere5.pdf
Specifically see the vMotion Best Practices on page 22.
Attachments
This solution has no attachment