OpenStack/UCS Mechanism Driver for ML2 Plugin Liberty


Neutron UCS Manager ML2 Mechanism Driver

The Cisco UCS Manager ML2 plugin in the Liberty release now supports configuration of multiple UCS Managers: a single instance of the plugin can configure UCS servers controlled by multiple UCS Managers within the same OpenStack cloud. Just as in Kilo, the plugin can configure the following features on multiple UCS Managers:

  1. Configure Cisco VM-FEX on SR-IOV capable Cisco NICs.
  2. Configure SR-IOV capable virtual functions on Cisco and Intel NICs.
  3. Configure the UCS Servers to support Neutron virtual ports attached to Nova VMs.


The UCS Manager Plugin talks to the UCS Manager application running on a Fabric Interconnect. The UCS Manager is part of an ecosystem for UCS Servers that consists of Fabric Interconnects and, in some cases, FEXs. For further information, please refer to: http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-infrastructure-ucs-manager-software/116741-troubleshoot-ucsm-00.html

SR-IOV/VM-FEX Support

This feature allows Cisco and Intel (only SR-IOV capable) physical functions (NICs) to be split into several virtual functions. These virtual functions can be configured in either Direct or Macvtap mode. In both modes, the virtual functions can be assigned to Nova VMs, causing traffic from the VMs to bypass the hypervisor and be sent directly to the Fabric Interconnect. This feature increases traffic throughput to the VM and reduces CPU load on the UCS Servers. The benefits are greatest in "Direct" SR-IOV/VM-FEX mode, with the limitation that live migration of the VM to a different server is not supported.
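For example, a Nova VM can be attached to an SR-IOV/VM-FEX port by first creating a Neutron port with its vnic_type set to direct (or macvtap) and then passing that port to Nova at boot time. A minimal sketch using the Liberty-era CLI, where the network, image, and flavor names are placeholders:

# Create a Neutron port that requests an SR-IOV virtual function in Direct mode
neutron port-create tenant-net --binding:vnic_type direct --name sriov-port

# Boot a VM attached to that port, using the port ID from the previous output
nova boot --flavor m1.small --image cirros --nic port-id=<port-id> sriov-vm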

Neutron Virtual Port Support

This feature configures the NIC on the UCS Server to allow traffic to/from the VMs attached to Neutron virtual ports.

The ML2 UCS Manager driver does not support configuration of UCS Servers whose service profiles are attached to Service Profile Templates. This prevents the same VLAN configuration from being pushed to all the service profiles based on that template. The plugin can be used after the Service Profile has been unbound from the template.

Both of the above features also configure the trunk ports from the Fabric Interconnect to the TOR switch to carry traffic from the VMs.

Note: This software is provided "as is," and in no event does Cisco warrant that the software is error free or that customer will be able to operate the software without problems or interruptions.


Prerequisites

UCS Manager Plugin driver support requires the following OS versions and packages:

  • One of two supported OSes:
    • RHEL 6.1 or above
    • Ubuntu 14.04 or above
  • One of the following Cisco NICs with VM-FEX support or Intel NICs with SR-IOV support:
    • Cisco UCS VIC 1240 (vendor_id:product_id is 1137:0071)
    • Intel 82599 10 Gigabit Ethernet Controller (vendor_id:product_id is 8086:10ed)
  • The Cisco UCS Python SDK, version 0.8.2 or earlier

Get the Cisco UCS Python SDK at: https://communities.cisco.com/docs/DOC-37174

Download the tar file and follow the installation instructions.
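For example, assuming the downloaded tar file is named UcsSdk-0.8.2.tar.gz (the exact file name may differ), a typical installation looks like:

tar -xvzf UcsSdk-0.8.2.tar.gz
cd UcsSdk-0.8.2
sudo python setup.py install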

Configuration on UCS Manager for SR-IOV ports

1. To be able to assign Cisco VM-FEX ports to Nova VMs, the SR-IOV capable Cisco VICs should be configured with a Dynamic vNIC connection policy.

[Figure: Dynamic vNIC connection policy (VNICdynamic.PNG)]


2. Then associate the desired Ethernet port on the Service Profile with the Dynamic vNIC policy created in step 1.

[Figure: Service Profile vNIC association (Service.PNG)]


On the compute host that has this SR-IOV capable Cisco VIC:

  1. Add "intel_iommu=on" to "GRUB_CMDLINE_LINUX" in /etc/sysconfig/grub [in RHEL] or /etc/default/grub [in Ubuntu]
  2. Regenerate grub.cfg by running grub2-mkconfig -o /boot/grub2/grub.cfg on BIOS systems, or grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg on UEFI systems.
  3. Reboot the compute host.
  4. After the host is up, make sure that IOMMU is activated by running: dmesg | grep -iE "dmar|iommu" . The output should include:
    [ 0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-3.13.0-24-generic root=/dev/mapper/devstack--38--vg-root ro quiet intel_iommu=on
    [ 0.000000] Intel-IOMMU:enabled
  5. Then run the command "lspci -nn | grep Cisco" on the same compute host to make sure the SR-IOV capable PFs have been split into VFs as specified by the Dynamic vNIC connection policy on the UCS Manager. The output should contain several lines that look like this:
    0a:00.1 Ethernet controller [0200]: Cisco Systems Inc VIC SR-IOV VF [1137:0071] (rev a2)
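If the NIC driver exposes the standard SR-IOV sysfs attributes (an assumption that depends on the driver version), the VF count can also be checked per physical function; a quick sketch, assuming the PF appears as interface eth0:

# Number of VFs currently enabled on the PF
cat /sys/class/net/eth0/device/sriov_numvfs
# Maximum number of VFs the PF supports
cat /sys/class/net/eth0/device/sriov_totalvfs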

Procedure

In this release, with the increase in the number of UCS servers supported by the plugin via multiple UCS Managers, providing the host to service profile mapping for each server would be tedious. To make configuration of the plugin easier, it now learns the hostname to service profile mapping directly from the UCS Manager. The only condition is that the cloud admin must make sure that the "Name" field of the Service Profile on the UCS Manager contains the hostname of the server as the hypervisor sees it. In the Liberty release, all of the UCSM plugin code has been moved to the networking-cisco repository at https://git.openstack.org/cgit/openstack/networking-cisco/. Install scripts can be modified to pull this code using: git clone https://github.com/openstack/networking-cisco
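For example, the plugin code can be pulled and installed on the Neutron server node as follows (a minimal sketch; installing with pip is an assumption, and packaged builds may be preferred):

git clone https://github.com/openstack/networking-cisco
cd networking-cisco
sudo pip install .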

To support UCS Manager Plugin driver configuration on a controller node, add the following settings to the ml2_conf_cisco.ini file, with one ml2_cisco_ucsm_ip section per UCS Manager, as shown in the following example:

Example:

# SR-IOV and VM-FEX vendors supported by this plugin
# xxxx:yyyy represents vendor_id:product_id
# This config is optional.
# supported_pci_devs='2222:3333', '4444:5555'

# UCSM information for multi-UCSM support.
# The following section can be repeated for the number of UCS Managers in
# the cloud.
# UCSM information format:
# [ml2_cisco_ucsm_ip:1.1.1.1]
# ucsm_username = username
# ucsm_password = password


On the node hosting the VM, add the following entry to the nova.conf file:
pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071","address":"*:0a:00.*","physical_network":"physnet1"}

Where:

vendor_id is the vendor number of the VM-FEX or SR-IOV supporting NIC on the compute node (1137 in this example).

product_id is the product number of the VM-FEX or SR-IOV supporting NIC on the compute node (0071 in this example).

address is optional. If address is omitted, the driver will allow the connection at any address.

physical_network is optional, but strongly recommended. If physical_network is omitted, the driver will allow the connection on any network. This is not desirable behavior.

Note: The PCI device (NIC) vendor ID and product ID must be configured to exactly match the configuration on the compute node, otherwise the driver will not attempt to connect to the VF. There will be no warning of this failure.
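For the Intel 82599 NIC listed in the prerequisites, the whitelist entry would use the Intel vendor and product IDs instead; a sketch, with the PCI address and physical network as placeholders:

pci_passthrough_whitelist = {"vendor_id":"8086","product_id":"10ed","address":"*:05:00.*","physical_network":"physnet1"}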

  • Ensure that the physical network is configured in ml2_conf.ini

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:800:900
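In a hand-edited ml2_conf.ini, the equivalent VLAN range is set under the [ml2_type_vlan] section; a sketch assuming the same physical network and range:

[ml2_type_vlan]
network_vlan_ranges = physnet1:800:900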

UCS Manager Plugin Driver Configuration for DevStack

To use the UCS Manager Plugin driver with the DevStack configuration, do the following additional configuration steps:

  • Set the following variable in the localrc file:
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,cisco_ucsm
  • In local.conf, set the PCI passthrough whitelist and define the physical network:
pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071","physical_network":"physnet1"}
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:800:900
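Putting these together, a minimal local.conf fragment might look like the following sketch; the [[local|localrc]] and [[post-config|$NOVA_CONF]] meta-section names are standard DevStack conventions, and the values are the examples used above:

[[local|localrc]]
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,cisco_ucsm
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:800:900

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071","physical_network":"physnet1"}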

UCS Manager TripleO Configuration

To use the UCS Manager Plugin driver with TripleO, add the following to tripleo-heat-templates/environments/neutron-ml2-cisco-nexus-ucsm.yaml:


resource_registry:
  OS::TripleO::AllNodesExtraConfig: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/all_nodes/neutron-ml2-cisco-nexus-ucsm.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /usr/share/openstack-tripleo-heat-templates/net-config-bridge.yaml
 
parameter_defaults:
  NeutronMechanismDrivers: 'openvswitch,cisco_ucsm'
  NetworkUCSMIp: '1.1.1.1'
  NetworkUCSMUsername: 'user'
  NetworkUCSMPassword: 'password'
  NetworkUCSMHostList: '54:A2:74:CC:73:51:Serviceprofile1, 84:B8:02:5C:00:7A:Serviceprofile2'
 
parameters:
  controllerExtraConfig:
    neutron::server::api_workers: 0
    neutron::agents::metadata::metadata_workers: 0
    neutron::server::rpc_workers: 0
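Once the environment file is in place, it can be passed to the overcloud deployment in the usual way; for example:

openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-cisco-nexus-ucsm.yaml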


Limitations

1. Service Profiles cannot be bound to a template. (If the Service Profiles are created from a template they need to be "unbound" from the template.)

2. If the UCS Manager domain contains two Fabrics, both Fabrics are configured identically.

3. If the plugin fails with an exception from UCS Manager indicating that the maximum number of sessions allowed on the UCSM has been reached, change the Maximum Sessions Per User under Web Session Limits to 256 to match the Maximum Sessions limit on the UCSM.

[Figure: UCSM Web Session Limits (Maxsessions.png)]
