Cisco N1000v Switches

Cisco distributed vSwitches

21 July 2017   15 min read

The Nexus 1000v is a Cisco virtual switch that can be used instead of the default VMware DvS to give an environment similar to that of Cisco physical NX-OS switches. The control and packet communication can be carried either over VLANs in Layer 2 mode or over IP in Layer 3 mode. The default and Cisco-recommended option is L3 mode.


Virtual Supervisor Module (VSM): The ‘brains’ or supervisor engine of the switch. It is distributed as an .ovf and deployed as a VM into your cluster. Ideally you would have two for HA purposes.

Virtual Ethernet Module (VEM): Installed on the ESXi hosts, either manually or through Update Manager if it is installed (it can’t be on the vCSA). Think of it as a line card in a c6500 chassis, with the VSM acting as the supervisor module.

The VSM is linked to the N1Kv (the Nexus DvS) by the vc extension-key; this key, along with the VSM certificate, is held in the VSM plug-in extension (xml). The N1Kv name shown within vCenter is the same as the VSM hostname.

If deployed in Layer 2 mode the following VLANs are also needed:

  • Management VLAN: Used to establish and maintain the connection between the VSM and vCenter, and for admin config (CLI). Needs to be on a routable VLAN that can communicate with the vCenter Server.
  • Control VLAN: The control plane; provides the HA communication between VSMs and serves as the heartbeat/configuration channel to the VEMs (pushes config to VEMs). If the VSM is not detecting an ESXi host’s VEM, it is usually a Layer 1/2 issue blocking communication on this VLAN specifically.
  • Packet VLAN: Used for any data-plane traffic that needs to be processed by the control plane. This typically means CDP, LACP and IGMP traffic transiting a VEM.
  • System VLANs: Must be added on the control, packet and uplink port-profiles. They always remain in a forwarding state, so they forward traffic even before a VEM is programmed by the VSM. This is why certain system profiles require them (Control/Packet in L2 or the VMK in L3): these VLANs need to be forwarding in order for the VEM to talk to the VSM.

To be able to communicate, all VSMs and VEMs need to be on the same Control and Packet VLANs, hence being L2-adjacent. You can use the same VLAN for both, but ideally in production they should be separate VLANs.

Initial VSM and VEM Install

1. Deploy the OVA downloaded from Cisco (I used Nexus1000v.5.2.1.SV3.2.1)
Choose a name, location, ESX host, datastore and the networks for control, packet (irrelevant if in L3 mode) and management.
Define the VSM domain ID (for HA use the same ID on primary and secondary), admin password and the IP, subnet and gateway. All of these properties can be changed after the install.
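
For example, if the management addressing needs changing after deployment it can be done from the VSM CLI. A minimal sketch using standard NX-OS commands (all addresses are placeholders):

interface mgmt0
 ip address 10.10.10.5/24                          New VSM management IP (placeholder)
vrf context management
 ip route 0.0.0.0/0 10.10.10.1                     New default gateway (placeholder)
copy running-config startup-config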

2. Load the VSM plug-in extension (xml) into vCenter
This holds the extension-key and certificate that allow the VSM to talk to vCenter and link it to the N1Kv.

Go to the IP of the VSM in a browser and download the cisco_nexus_1000v_extension.xml. Log into vCenter using the vSphere client; in the top menu go to Plug-ins » Manage Plug-ins, right-click » New Plug-in, choose the downloaded xml file and click Register Plug-in.

The extension-key in the plug-in should be the same as that on the VSM.

show vmware vc extension-key

3. Add the VEM software onto the ESX host
VMware Update Manager (VUM) can automatically select the correct VEM software to be installed when the host is added to the DvS, but if you run the Linux-based vCSA you can’t install VUM, so the VEM has to be installed manually from the CLI.

Download the VEM software from the VSM (http://vsm_ip) and copy it over to the ESX host

scp name_cisco-vem.vib root@esx_host_ip:/tmp

Check that the VEM is not already installed; if it isn’t, install it.

vem show version
esxcli software vib install -v /tmp/name_cisco-vem.vib

If you get the following error, move a copy of the file to /var/log/vmware and run the cmd again.

[VibDownloadError]
('Cisco_bootbank_cisco-vem-v300-esx_5.2.1.3.2.1.0-6.0.1.vib', '', "[Errno 4] IOError: urlopen error [Errno 2] No such file or directory: '/var/log/vmware/Cisco_bootbank_cisco-vem-v300-esx_5.2.1.3.2.1.0-6.0.1.vib'")
url = Cisco_bootbank_cisco-vem-v300-esx_5.2.1.3.2.1.0-6.0.1.vib
Please refer to the log file for more details.
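
A sketch of that workaround, assuming the same file name used above:

cp /tmp/name_cisco-vem.vib /var/log/vmware/        Put a copy where the installer expects it
esxcli software vib install -v /tmp/name_cisco-vem.vib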

If you get this dependency error, add -f to the end of the cmd to force it.

[DependencyError]
VIB Realtek_bootbank_r8152_2.06.0-4 violates extensibility rule checks: [u'(line 30: col 0) Element vib failed to validate content']
Please refer to the log file for more details.
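
For example, forcing the install of the same VIB as above:

esxcli software vib install -v /tmp/name_cisco-vem.vib -f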

Show commands to see the extension-key and to verify the VEM version and status:

show vmware vc extension-key
vemcmd show version
vem status -v

Layer2 mode Installation

This is the legacy model; Cisco recommends that Layer3 mode is used instead (see the next section for Layer3 mode install details).

1.  Create the Control VLAN and Packet VLAN on intermediate devices
These VLANs need to be Layer 2 adjacent across all ESXi hosts. You can use the same VLAN for both, but ideally in production they should be separate VLANs.

vlan 101
name control_packet_vl101
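
The VLAN also needs allowing on every trunk in the path between the ESXi hosts. A minimal sketch for an intermediate NX-OS switch (Ethernet1/1 is an assumed example interface):

interface Ethernet1/1                              Assumed link towards the next switch/ESXi host
 switchport mode trunk
 switchport trunk allowed vlan add 101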

2.  Change the Server Virtualization Switch (SVS) domain to L2 mode
On the 1000v change the mode and set the VLANs to be used by the control and packet interfaces.

svs-domain
 domain id 1
 control vlan 101
 packet vlan 101
 svs mode L2

show svs domain

The hostname is what the DvS is called in vCenter; if you change this it is automatically updated within vCenter.

hostname N1Kv

3. Configure the connection between the VSM and vCenter
Once connected it will create the N1Kv within networking on vCenter with a name that matches the VSM hostname.

svs connection stesworld                                               Local so can be any name
 remote ip address 10.10.10.7                      vCenter IP address
 protocol vmware-vim
 vmware dvs datacenter-name DC1                   Must be exact match of the DC name in vCenter
 connect

show svs connections                                 Show settings and state of the connection

4. Create VLANs and vEthernet port-profiles for the control and packet interface
Must also have the system vlan cmd, or else when you add the ESXi host the VEM module won’t show up on the VSM.

vlan 101
 name control_packet_vl101

port-profile type vethernet control_packet
 vmware port-group
 switchport mode access
 switchport access vlan 101
 no shutdown
 state enabled
 system vlan 101                                   Control VLAN

5. Create the uplink port-profile type Ethernet
This is a trunk containing all of the VLANs you want to trunk to the ESXi hosts (the L2 VLANs must be created first). It links the physical uplinks to the N1Kv.

port-profile type ethernet sys-uplink                                Can be any name, used sys-uplink
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
 vmware port-group name              Optional alternative name for it to show in vCenter
 no shutdown
 state enabled                                     If disabled it is removed from vCenter
 system vlan 101                                   Control/Packet VLAN

show run port-profile sys-uplink
show port-profile status

6. Assign ESXi Hosts and uplinks to the N1Kv
Must do this with the vSphere software client connected to vCenter, due to a bug in the vCenter web client. Right click the N1Kv, Add and manage hosts » Add hosts. Select the ESXi host and physical adapters and choose the relevant port-profile. It will only show port-profiles of the type Ethernet.

For the VEM module to show up on the VSM you must have L2 adjacency between the VEM and VSM. If they are on different ESX hosts, ensure that the control and packet VLANs are configured properly on all devices (switches) between the VEM and VSM.
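
Once the host is added the VEM should register as a module on the VSM. A quick check (module numbering follows the usual N1Kv convention, where slots 1 and 2 are the VSMs and VEMs start at slot 3):

show module                                        The new VEM should appear as module 3 or higher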

Layer3 mode Installation

Rather than having a control and packet interface (VLAN), the VMware ESXi management network (provided by the initial VMkernel port) is used for the VEM-to-VSM communication link. All VEM/VSM communication is essentially encapsulated in IP and carried over the management network.

When migrating an ESXi host to the DvS it is imperative that the management VMK is assigned to a port-profile that has a special capability flag enabled (capability l3control). Without this flag the VEM will not be detected by the VSM and the ESX host’s migration to the DvS will fail.

Layer3 mode is now the default and recommended way when deploying the N1Kv. Some advantages are:

  • Simpler to set up as you don’t need to worry about port-groups and trunking of control / packet VLANs
  • Troubleshooting the control plane is a lot easier as you can use IP-based tools (e.g. ping, traceroute; see the sketch after this list)
  • Can manage VEMs that reside in another VLAN or subnet
  • Better integration with other products using Layer3
  • Need a port-profile with Layer3 capabilities for ERSPAN anyway, so you could re-use the existing one
  • Most of the work to enable support for VXLAN and vPath is already done
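
For instance, reachability between the VSM and a host’s VMK can be proven with standard tools. A minimal sketch, assuming the VSM mgmt IP is 10.10.10.5 and the host’s VMK IP is 10.10.10.6 (both placeholder addresses):

ping 10.10.10.6                                    From the VSM towards the host’s VMK
vmkping 10.10.10.5                                 From the ESXi shell towards the VSM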

System VLANs always remain in a forwarding state, so they forward traffic even before a VEM is programmed by the VSM. This is why certain system profiles require them (Control/Packet in L2 or the VMkernel port in L3): these VLANs need to be forwarding in order for the VEM to talk to the VSM. The system vlan cmd must be added on both the VMK and uplink port-profiles.

Before deploying you must make two decisions:

  • Which L3 interface you want to use on the VSM (control0 or mgmt0).
  • Whether you use the mgmt VMK interface or set up a dedicated VMK interface for L3 control traffic.

1. Set up the Server Virtualization Switch (SVS) domain
Change the VSM to L3 mode and set the L3 interface (these are the default settings). In L3 mode the control and packet VLANs are irrelevant. If the L3 interface is set to control0 the control and packet traffic is handled by the first vmnic, whereas if it is set to mgmt0 it is handled by the second vmnic.

svs-domain
 domain id 1
 no control vlan
 no packet vlan
 svs mode L3 interface mgmt0                       Or control0; whether the 2nd or 1st vmnic carries control & packet traffic

show svs domain

The hostname is what the DvS is called in vCenter; if changed, the name automatically updates within vCenter.

hostname N1Kv

2. Configure the connection between the VSM and vCenter
Once connected it creates the N1Kv within networking in vCenter with a name matching the VSM hostname.

svs connection stesworld                                               Local so can be any name
 remote ip address 10.10.10.7                      vCenter IP address
 protocol vmware-vim
 vmware dvs datacenter-name DC1                   Must be exact match of the DC name in vCenter
 connect

show svs connections                                 Show settings and state of the connection

3. Create the ESXi management vmk vEthernet port-profile (type veth)
Needs to be set as L3-capable and also requires the L2 VLAN to be created (the same VLAN as the VMK IP address).

vlan 10
 name mgmt

port-profile type vEth mgmt-l3
 capability l3control
 vmware port-group
 switchport mode access
 switchport access vlan 10
 no shut
 state enabled
 system vlan 10                                     The VMK VLAN

show run port-profile mgmt-l3

4. Create the uplink port-profile (type Ethernet)
This is a trunk containing all of the VLANs you want to trunk to the ESXi hosts (the L2 VLANs must be created first). It links the physical uplinks to the N1Kv.

port-profile type ethernet sys-uplink                                  Can be any name, used sys-uplink
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
 vmware port-group name              Optional alternative name shown in vCenter
 no shutdown
 state enabled                                     If disabled it is removed from vCenter
 system vlan 10                                    The VMK VLAN

show run port-profile sys-uplink
show port-profile status

5. Assign ESXi Hosts and uplinks to the N1Kv
Must use the vSphere software client due to a bug in the vCenter web client. Right click the N1Kv, Add and manage hosts » Add hosts. Select the ESXi host and physical adapters and choose the relevant port-profile. It will only show port-profiles of the type Ethernet.

If you have an existing VMK to migrate, on the next screen do the same for the VMK; it will only show vEthernet port-profiles (such as the mgmt-l3 one created earlier).
If not, create a new VMK by going to ESX host » networking » select the 1000v switch » Manage Virtual Adapters » Add » New virtual adapter. For the VEM module to show up on the VSM the VMkernel port must have been migrated and the system VLAN set on the port-profiles.

The VSM will be automatically moved over to the ESX host when the host and VMK are added. The initial ports of the N1Kv DvS are assigned as follows:
veth1 - VMkernel port
veth2 - N1Kv Primary
veth3 - N1Kv Secondary
veth4 - VMs are added from here onwards when moved to the N1Kv DvS
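
These assignments can be confirmed from the VSM with the port-mapping command from the show commands section below:

show int virtual port-mapping                      Table of vEth to DVport mappings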

Verification

To verify the state and health on the ESX host.

vem status                                                         Should now see the new N1Kv
vem status -v                              Shows the ports on N1Kv, so uplink, control & packet
vemcmd show port vlans       Shows ports and vlans used (T for trunk, A for Access)
vemcmd show trunk                 See VLANs allowed on the trunk to the VEM

To verify the state and health on the N1K.

show module
show module vem missing
show svs neighbors                      See all VEMs and VSMs and the primary MAC

Troubleshoot

On the ESX host, make sure that it can talk to the VSM; the health check will advise about the connection and configuration.

vem-health check

The check is run against the primary VSM’s MAC address. You can get the MAC by looking in vCenter at the MAC of the first NIC on the VSM, or by running this cmd.

vemcmd show card                                                   It is the Primary VSM MAC
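
Usage then looks like the sketch below (syntax from memory, so treat it as an assumption; the MAC is a placeholder):

vem-health check 00:50:56:aa:bb:cc                 Placeholder MAC of the primary VSM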

If you don’t see the VEM on the VSM, try stopping and starting the VEM; it is non-disruptive.

vem stop
vem start
vem status                            Ensure the VEM is loaded and running

Adding new client VLANs and Port-profiles

For any VLANs that you want to be available on this DvS you need to add the VLAN and a vEthernet port-profile on the N1Kv. As long as they are enabled, any port-profiles created will show up immediately as port-groups in vCenter.

vlan 20
 name data

port-profile type vethernet data
 vmware port-group
 switchport mode access
 switchport access vlan 20
 no shutdown
 state enabled                                       If disabled it is removed from vCenter

Creating a High Availability pair of N1Kv

Change the role of the current N1Kv from standalone to primary; the default is standalone (changing to secondary requires a reboot).

system redundancy role primary                     Options are standalone, primary or secondary

When deploying the secondary N1Kv OVF, under the wizard’s Select configuration choose Nexus 1000v Secondary. Within settings you only need to configure the domain ID and password.
NOTE: You can only log into the secondary from the console and you can’t see any of the configuration.

show redundancy status
show module
reload module 1 or 2                                Reload active or standby supervisor (VSM)
system switchover                                    Force switchover, will drop a few pings
show running-config diff                        Show difference from running to start config

Show commands

Cisco commands

show redundancy status
show module
show module uptime
show module vem counters

show interfaces brief                                 Show all interfaces connected
show interface mac-address                    MACs associated to all UP ports
show port-profile brief                             All port-profiles and number of interfaces used
show port-profile usage                            Lists all vEth per port-profile
show port-profile expand-interface  Lists all vEth per port-profile & the config (vlan)
show port-profile virtual usage           Table of vEth with connected VMs & NICs
show int virtual port-mapping              Table of vEth to DVport mappings
show msp internal info                             Lists lots of info regarding the port-group

ESX commands

vemcmd show card                                                   Info about VEM, including domain, switch name, slot num
vemcmd show port           List of the ports on this VEM & VMs or VMKs connected
vemcmd show port vlans       List of the ports on this VEM and the VLANs associated
vemcmd show stats           General stats (bytes sent/received, etc) for each port
vemcmd show vlan           VLAN info and which port is associated with each one
vemcmd show l2 all           MACs and other info for all bridge domains and VLANs
vemcmd show packets     Traffic stats for broadcast / unicast / multicast
vemcmd show arp all       ARP info (could be useful to show VTEP info)

Deleting a N1Kv DvS

The VSM is linked to the N1Kv by the vc extension-key which is in the xml file along with the VSM certificate details.
In vCenter the extension-key can be seen under the summary of the DvS or through https://vcentre_ip/mob » content » ExtensionManager » more

show vmware vc extension-key                                       To see on the VSM

You should NEVER delete a VSM without disconnecting it from vCenter. If you do, the N1Kv will become orphaned and vCenter will not allow you to remove it.

1. Migrate all VMs off the N1Kv and move any VMkernel ports. From vCenter right click the N1Kv and choose Migrate Virtual Machine Networking, then use the wizard to migrate all VMs on a VLAN to another DvS VLAN. Any VMkernel ports on the N1Kv will also need to be migrated or deleted; to do so go to ESX host » Configuration » vSphere Distributed Switch » Manage Virtual Adapters, select the VMK and choose the option required.

2. Remove the ESX host from the N1Kv. Connect to vCenter using vSphere and go to Networking » N1Kv switch » Hosts tab, right click on the ESXi host you want to remove and choose Remove from vSphere Distributed Switch.

3. Disconnect the VSM from vCenter. This will remove the N1Kv from vCenter and ensure you do not end up with an orphaned switch.

show svs connections                                        Check connection, needs to be connected to proceed
svs connection stesworld                            The SVS connection created earlier
 no vmware dvs
This will remove the DVS from the vCenter Server and any associated port-groups. Do you really want to proceed(yes/no)? yes
 no connect

4. Uninstall the VEM from the ESXi hosts

vem status                                                                     Check VEMs status
vem version -v                                                    Get the vem version
esxcli software vib list | grep cisco             Get the vem version for next cmd
esxcli software vib remove --maintenance-mode -n cisco-vem-v300-esx
vem status

5. Remove the VSM plug-in extension (xml) and the VSM VM. You can get the key from the VSM if you have already deleted the switch:

show vmware vc extension-key                                                    On the N1Kv

Go to https://vcentre_ip/mob » content » ExtensionManager » Unregister Extension, enter the extension-key and click Invoke Method.
Refresh the browser and you should no longer see the extension or the plug-in in the vSphere software client.
You can now safely delete the VSM VM and the VMkernel control interface (L3 only) if one was created.

Recover from an Orphaned Switch

If you don’t follow the above steps and delete the VSM without disconnecting it from vCenter (and don’t have a backup) you are stuck with an orphaned switch. To be able to cleanly delete it you must first add it back in.

1. Create a new VSM VM with a hostname and extension-key that match the old ones. Reboot the VSM for the changes to take effect.

hostname dvs-name
vmware vc extension-key the_key
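
Then save and reboot so the new key takes effect (standard NX-OS commands):

copy running-config startup-config
reload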

2. As the certificate won’t match, unregister the old VSM plug-in extension and register the new one (xml). Go to https://vcentre_ip/mob/?moid=ExtensionManager » Unregister Extension, enter the ext-key and click Invoke Method. Get the new VSM plug-in extension (as the VSM has a new cert) and install it as a plug-in. This can’t be done in the vCenter web client or with the vSphere client connected directly to the host; you MUST use the vSphere software client connected to vCenter.

3. Connect the VSM to vCenter by creating the SVS connection.

svs connection stesworld
 protocol vmware-vim
 remote ip address 10.10.10.7
 vmware dvs datacenter-name DC1
 connect

You can now go ahead and delete the DvS and VSM as per the method above.