OpenStack/sandbox




Neutron ML2 Driver For Cisco Nexus Devices


Overview

The Cisco Nexus ML2 mechanism driver implements the ML2 Plugin Mechanism Driver API and manages multiple types of Cisco Nexus switches.

Prerequisites

Nexus switch support requires the following OS versions and packages:

  • Cisco NX-OS 5.2.1 (Delhi) Build 69 or later.
  • Paramiko library, the SSHv2 protocol library for Python
  • One of two supported OSes:
    • RHEL 6.1 or above
    • Ubuntu 11.10 or above
  • Package: python-configobj-4.6.0-3.el6.noarch (or later)
  • Package: python-routes-1.12.3-2.el6.noarch (or later)
  • The mysql-python package (for example, installed with pip install mysql-python)
  • The ncclient v0.4.2 Python library for NETCONF clients. See the following for instructions on how to download the modified library. For more information on ncclient, see http://ncclient.grnet.gr/.
  • TripleO with Nexus and UCSM is supported in RHEL OSP 7


Get the ncclient library by using the pip package manager at your shell prompt:

pip install ncclient==0.4.2

Your Nexus switch must be configured as described in the next section, Nexus Switch Setup.

Nexus Switch Setup

  • Your Nexus switch must be connected to a management network separate from the OpenStack data network. The plugin communicates with the switch over this network to set up your data flows.
  • The switch must have SSH login enabled.
  • Each compute host on the cloud must be connected to the switch using an interface dedicated solely to OpenStack data traffic.
  • The switch must be a known host on the controller node before the ML2 Nexus mechanism driver tries to configure the switch. To ensure the switch is a known host, manually log in to the switch from the controller node (using SSH) before creating instances, as shown in the example after this list.
  • All other switch configuration not listed in this section (for example, configuring interfaces with no shutdown and switchport mode trunk) must be performed by the switch administrator.
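To make the switch a known host, for example, a one-time SSH login from the controller node records the switch's host key (the management IP address and username below are placeholders):

ssh admin@192.168.1.1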

Directory Structure

The Cisco Nexus mechanism driver code is located in the following directory:

<neutron_install_dir>/neutron/neutron/plugins/ml2/drivers/cisco/nexus

The Cisco Nexus mechanism configuration template is located at:

<neutron_install_dir>/neutron/etc/neutron/plugins/ml2/ml2_conf_cisco.ini

In both cases, <neutron_install_dir> is the directory where the Neutron project is installed. This is often the home directory of the username assigned to Neutron.

Configuration

VLAN Configuration

To configure the Cisco Nexus ML2 mechanism driver, do the following:

Create a configuration file using the syntax template neutron/etc/neutron/plugins/ml2/ml2_conf_cisco.ini.

Add the Nexus switch information to a configuration file. Include the following information (see the example below):

  • The IP address of the switch
  • The hostname of each compute node that is connected to the switch
  • The switch port that each host is connected to
  • The Nexus switch credentials (username and password)


Include the configuration file on the command line when the neutron-server is started (see the example command after the configuration sample below). You can configure multiple switches as well as multiple hosts per switch.

# Use section header 'ml2_mech_cisco_nexus:' followed by the IP address of the Nexus switch.
[ml2_mech_cisco_nexus:1.1.1.1]
# Hostname and port used on the switch for this compute host.
# Where 1/2 indicates the "interface ethernet 1/2" port on the switch.
compute-1=1/2
# Port number where SSH is running on the Nexus switch. The default is 22, so this variable
# only needs to be configured if the port is different.
# ssh_port=22
# Provide the Nexus login information.
username=admin
password=mySecretPasswordForNexus
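For reference, the extra configuration file can be passed to neutron-server as an additional --config-file argument when the service is started; the paths below are illustrative and depend on your installation:

neutron-server --config-file /etc/neutron/neutron.conf \
               --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
               --config-file /etc/neutron/plugins/ml2/ml2_conf_cisco.ini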

TripleO Configuration

The Cisco-specific implementation is deployed by modifying the TripleO environment file environments/neutron-ml2-cisco-nexus-ucsm.yaml and updating its contents with deployment-specific values. Note that with a TripleO deployment the server names are not known before deployment, so the MAC address of each server must be used in place of the server name.

Descriptions of the parameters can be found at https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/extraconfig/all_nodes/neutron-ml2-cisco-nexus-ucsm.yaml


resource_registry:
  OS::TripleO::AllNodesExtraConfig: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/all_nodes/neutron-ml2-cisco-nexus-ucsm.yaml
 
parameter_defaults:
  NeutronMechanismDrivers: 'openvswitch,cisco_nexus'
  NetworkNexusConfig: {
    "N9K-9372PX-1": {
        "ip_address": "1.1.1.1", 
        "nve_src_intf": 0, 
        "password": "mySecretPasswordForNexus", 
        "physnet": "datacentre", 
        "servers": {
            "54:A2:74:CC:73:51": {
                "ports": "1/2"
            }
        }, 
        "ssh_port": 22, 
        "username": "admin"
    }
  }
  NetworkNexusManagedPhysicalNetwork: datacentre
  NetworkNexusVlanNamePrefix: 'q-'
  NetworkNexusSviRoundRobin: 'false'
  NetworkNexusProviderVlanNamePrefix: 'p-'
  NetworkNexusPersistentSwitchConfig: 'false'
  NetworkNexusSwitchHeartbeatTime: 0
  NetworkNexusSwitchReplayCount: 3
  NetworkNexusProviderVlanAutoCreate: 'true'
  NetworkNexusProviderVlanAutoTrunk: 'true'
  NetworkNexusVxlanGlobalConfig: 'false'
  NetworkNexusHostKeyChecks: 'false'
  NeutronNetworkVLANRanges: 'datacentre:2000:2500'
  NetworkNexusVxlanVniRanges: '0:0'
  NetworkNexusVxlanMcastRanges: '0.0.0.0:0.0.0.0'
 
parameters:
  controllerExtraConfig:
    neutron::server::api_workers: 0
    neutron::agents::metadata::metadata_workers: 0
    neutron::server::rpc_workers: 0
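After the environment file has been updated, include it when deploying the overcloud. The following is a sketch of the deploy command, assuming the environment file sits under the default template location:

openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-cisco-nexus-ucsm.yaml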

Virtual Port Channel (vPC) Configuration

The Cisco mechanism plugin supports multi-homed hosts in a vPC setup. A typical vPC setup is illustrated in the following diagram:

(Diagram: multi-homed vPC hardware configuration)

Prerequisites

  • The vPC interconnect must be set up as described in this document: NXOS vPC configuration. The Cisco plugin will not set up vPC interconnect channels between switches.
  • The data interfaces on the host must be bonded. This bonded interface must be attached to the external bridge (see the sketch below).
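A minimal host-side sketch, assuming hypothetical interface names (eth1 and eth2 bonded as bond0) and an external OVS bridge named br-eth1; adapt the names and bonding mode to your deployment:

ip link add bond0 type bond mode 802.3ad
ip link set eth1 down && ip link set eth1 master bond0
ip link set eth2 down && ip link set eth2 master bond0
ip link set eth1 up && ip link set eth2 up && ip link set bond0 up
# Attach the bonded interface to the external bridge that carries OpenStack data traffic.
ovs-vsctl add-port br-eth1 bond0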


Plugin Configuration

Configure vPC in the plugin with multiple connections per host. For example, if host 1 is connected to two Nexus switches 1.1.1.1 and 2.2.2.2 over port-channel 2:


[ml2_mech_cisco_nexus:1.1.1.1]
# Hostname of the node and the port channel used on the switch
host1=port-channel:2
# Port number where SSH is running on the Nexus switch, e.g. 22 (default)
ssh_port=22
# Provide the Nexus credentials. If you are not using Nexus switches, these settings are ignored.
username=admin
password=mySecretPasswordForNexus

[ml2_mech_cisco_nexus:2.2.2.2]
# Hostname of the node and the port channel used on the switch
host1=port-channel:2
# Port number where SSH is running on the Nexus switch, e.g. 22 (default)
ssh_port=22
# Provide the Nexus credentials. If you are not using Nexus switches, these settings are ignored.
username=admin
password=mySecretPasswordForNexus

Specify the interface type (port-channel, ethernet, and so on) as a prefix on the port value for a vPC setup, as shown above.

Note: If you do not specify an interface type, the plugin assumes an Ethernet interface.
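For example, with this syntax the following two entries would be treated identically, because an unprefixed port defaults to an Ethernet interface (a sketch based on the prefix syntax shown above):

host1=ethernet:1/2
host1=1/2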

No configuration change is required for non-vPC setups; they are not affected by this feature.

TripleO Configuration

The Cisco-specific implementation is deployed by modifying the TripleO environment file environments/neutron-ml2-cisco-nexus-ucsm.yaml and updating its contents with deployment-specific values. Note that with a TripleO deployment the server names are not known before deployment, so the MAC address of each server must be used in place of the server name.

Descriptions of the parameters can be found at https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/extraconfig/all_nodes/neutron-ml2-cisco-nexus-ucsm.yaml

resource_registry:
  OS::TripleO::AllNodesExtraConfig: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/all_nodes/neutron-ml2-cisco-nexus-ucsm.yaml
 
parameter_defaults:
  NeutronMechanismDrivers: 'openvswitch,cisco_nexus'
  NetworkNexusConfig: {
    "N9K-9372PX-1": {
        "ip_address": "1.1.1.1", 
        "nve_src_intf": 0, 
        "password": "mySecretPasswordForNexus", 
        "physnet": "datacentre", 
        "servers": {
            "54:A2:74:CC:73:51": {
                "ports": "port-channel:2"
            }
        }, 
        "ssh_port": 22, 
        "username": "admin"
    },
    "N9K-9372PX-2": {
        "ip_address": "2.2.2.2", 
        "nve_src_intf": 0, 
        "password": "mySecretPasswordForNexus", 
        "physnet": "datacentre", 
        "servers": {
            "54:A2:74:CC:73:AB": {
                "ports": "port-channel:2"
            }
        }, 
        "ssh_port": 22, 
        "username": "admin"
    }
  }

  NetworkNexusManagedPhysicalNetwork: datacentre
  NetworkNexusVlanNamePrefix: 'q-'
  NetworkNexusSviRoundRobin: 'false'
  NetworkNexusProviderVlanNamePrefix: 'p-'
  NetworkNexusPersistentSwitchConfig: 'false'
  NetworkNexusSwitchHeartbeatTime: 0
  NetworkNexusSwitchReplayCount: 3
  NetworkNexusProviderVlanAutoCreate: 'true'
  NetworkNexusProviderVlanAutoTrunk: 'true'
  NetworkNexusVxlanGlobalConfig: 'false'
  NetworkNexusHostKeyChecks: 'false'
  NeutronNetworkVLANRanges: 'datacentre:2000:2500'
  NetworkNexusVxlanVniRanges: '0:0'
  NetworkNexusVxlanMcastRanges: '0.0.0.0:0.0.0.0'
 
parameters:
  controllerExtraConfig:
    neutron::server::api_workers: 0
    neutron::agents::metadata::metadata_workers: 0
    neutron::server::rpc_workers: 0

VXLAN Overlay Configuration

Prerequisites

The Cisco Nexus ML2 driver will not configure those features described in the “Considerations for the Transport Network” section of http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-x/vxlan/configuration/guide/b_Cisco_Nexus_9000_Series_NX-OS_VXLAN_Configuration_Guide.pdf. You must perform such configuration yourself before configuring the plugin for VXLAN. Do all of the following that are relevant to your installation:

  • Configure a loopback IP address
  • Configure IP multicast, PIM, and rendezvous point (RP) in the core
  • Configure the default gateway for VXLAN VLANs on external routing devices
  • Configure VXLAN related feature commands: "feature nv overlay" and "feature vn-segment-vlan-based"
  • Configure the NVE interface and assign the loopback address (see the sketch after this list)
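A minimal NX-OS sketch of these steps, assuming loopback0 as the NVE source interface and a placeholder loopback address; verify the details against the VXLAN configuration guide linked above:

feature nv overlay
feature vn-segment-vlan-based

interface loopback0
  ip address 10.1.1.1/32

interface nve1
  no shutdown
  source-interface loopback0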

Procedure

To support VXLAN configuration on a top-of-rack Nexus switch, add the following configuration settings:

  • Configure an additional setting named physnet under the ml2_mech_cisco_nexus section header, as shown in the following example.

Example:

[ml2_mech_cisco_nexus:192.168.1.1]
# Where physnet1 is a physical network name listed in the ML2 VLAN section header [ml2_type_vlan].
physnet=physnet1
  • Configure the VLAN range in the ml2_type_vlan section as shown in the following example. The ml2_type_vlan section header format is defined in the neutron/etc/neutron/plugins/ml2/ml2_conf.ini file.

Example:

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:109
  • Configure the network VNI ranges and multicast ranges in the ml2_type_nexus_vxlan section, as shown in the following example.

The section header [ml2_type_nexus_vxlan] is defined in the neutron/etc/neutron/plugins/ml2/ml2_conf.ini file to provide VXLAN information required by the Nexus switch.

Example:

[ml2_type_nexus_vxlan]
# Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
vni_ranges=50000:55000

# Multicast groups for the VXLAN interface. When configured, all broadcast
# traffic is sent to the assigned multicast group. Comma-separated
# list of <min>:<max> ranges of multicast IP addresses.
# NOTE: Addresses must be valid multicast IPs; invalid addresses are discarded.
mcast_ranges=225.1.1.1:225.1.1.2

VXLAN Overlay Configuration in DevStack

The instructions at https://wiki.openstack.org/wiki/Sandbox/CML2MP#Configuring_Devstack_for_the_Cisco_Nexus_Mechanism_Driver describe how to configure DevStack with the Cisco Nexus mechanism driver. To use VXLAN with the DevStack configuration, do the following additional configuration step:

In addition to the standard local.conf settings, use the following local.conf file example to configure the Nexus switch for VXLAN Tunnel End Point (VTEP) support.

[[local|localrc]]
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,cisco_nexus
Q_ML2_PLUGIN_TYPE_DRIVERS=nexus_vxlan,vlan
Q_ML2_TENANT_NETWORK_TYPE=nexus_vxlan
ML2_VLAN_RANGES=physnet1:100:109
ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-eth1

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[agent]
minimize_polling=True
tunnel_types=

[ml2_mech_cisco_nexus:192.168.1.1]
ComputeHostA=1/10
username=admin
password=secretPassword
ssh_port=22
physnet=physnet1

[ml2_mech_cisco_nexus:192.168.1.2]
ComputeHostB=1/10
NetworkNode=1/11
username=admin
password=secretPassword
ssh_port=22
physnet=physnet1

[ml2_type_nexus_vxlan]
vni_ranges=50000:55000
mcast_ranges=225.1.1.1:225.1.1.2

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:109

If the DevStack deployment is using Neutron code from the upstream repository, add the following two settings to the local.conf file to download the Cisco mechanism driver code from the stackforge repository:

enable_service net-cisco
enable_plugin networking-cisco https://github.com/stackforge/networking-cisco

Configuration for Non-DHCP Agent Enabled Network Node Topologies

If a DHCP agent is not running on the network node, then the network node's physical connection to the Nexus switch must be added to all compute hosts that require access to the network node. For example, if the network node is physically connected to Nexus switch 192.168.1.1 on port 1/10, the following configuration is required.

[ml2_mech_cisco_nexus:192.168.1.1]
ComputeHostA=1/8,1/10
ComputeHostB=1/9,1/10
username=admin
password=secretPassword
ssh_port=22
physnet=physnet1

[ml2_mech_cisco_nexus:192.168.1.2]
ComputeHostC=1/10
username=admin
password=secretPassword
ssh_port=22
physnet=physnet1

TripleO Configuration

The Cisco-specific implementation is deployed by modifying the TripleO environment file environments/neutron-ml2-cisco-nexus-ucsm.yaml and updating its contents with deployment-specific values. Note that with a TripleO deployment the server names are not known before deployment, so the MAC address of each server must be used in place of the server name.

Descriptions of the parameters can be found at https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/extraconfig/all_nodes/neutron-ml2-cisco-nexus-ucsm.yaml

resource_registry:
  OS::TripleO::AllNodesExtraConfig: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/all_nodes/neutron-ml2-cisco-nexus-ucsm.yaml
 
parameter_defaults:
  NeutronMechanismDrivers: 'openvswitch,cisco_nexus'
  NetworkNexusConfig: {
    "N9K-9372PX-1": {
        "ip_address": "192.168.1.1", 
        "nve_src_intf": 0, 
        "password": "secretPassword", 
        "physnet": "physnet", 
        "servers": {
            "54:A2:74:CC:73:51": {
                "ports": "1/10"
            }
        }, 
        "ssh_port": 22, 
        "username": "admin"
    },
    "N9K-9372PX-2": {
        "ip_address": "192.168.1.2", 
        "nve_src_intf": 0, 
        "password": "secretPassword", 
        "physnet": "physnet", 
        "servers": {
            "54:A2:74:CC:73:AB": {
                "ports": "1/10"
            },
            "54:A2:74:CC:73:CD": {
                "ports": "1/11"
            }
        }, 
        "ssh_port": 22, 
        "username": "admin"
    }
  }

  NetworkNexusManagedPhysicalNetwork: datacentre
  NetworkNexusVlanNamePrefix: 'q-'
  NetworkNexusSviRoundRobin: 'false'
  NetworkNexusProviderVlanNamePrefix: 'p-'
  NetworkNexusPersistentSwitchConfig: 'false'
  NetworkNexusSwitchHeartbeatTime: 0
  NetworkNexusSwitchReplayCount: 3
  NetworkNexusProviderVlanAutoCreate: 'true'
  NetworkNexusProviderVlanAutoTrunk: 'true'
  NetworkNexusVxlanGlobalConfig: 'false'
  NetworkNexusHostKeyChecks: 'false'
  NeutronNetworkVLANRanges: 'physnet1:100:109'
  NetworkNexusVxlanVniRanges: '50000:55000'
  NetworkNexusVxlanMcastRanges: '225.1.1.1:225.1.1.2'
 
parameters:
  controllerExtraConfig:
    neutron::server::api_workers: 0
    neutron::agents::metadata::metadata_workers: 0
    neutron::server::rpc_workers: 0

Configuring Devstack for the Cisco Nexus Mechanism Driver

VLAN Configuration

For general Devstack configuration, see the ML2 main page at https://wiki.openstack.org/wiki/Neutron/ML2#ML2_Configuration.

As described on the ML2 main page, set the DevStack localrc variable Q_ML2_PLUGIN_MECHANISM_DRIVERS to the required mechanism drivers. For the Cisco Nexus mechanism driver, the required drivers are:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,cisco_nexus

Make the Nexus switch configuration accessible by adding the following to the devstack localrc file:

# CONF_PATH can be any valid directory path on the devstack system.
Q_PLUGIN_EXTRA_CONF_PATH=(/home/openstack)
Q_PLUGIN_EXTRA_CONF_FILES=(ml2_conf_cisco.ini)

Create the file /home/openstack/ml2_conf_cisco.ini and add the Nexus switch information. The configuration file syntax is described in the #Configuration section above.


Neutron UCS Manager ML2 Mechanism Driver

The Cisco UCS Manager ML2 plugin in the Liberty release now supports configuration of multiple UCS Managers: one instance of the plugin can configure multiple UCS servers controlled by multiple UCS Managers within the same OpenStack cloud. Just as in Kilo, the plugin can configure the following features on multiple UCS Managers:

  1. Configure Cisco VM-FEX on SR-IOV capable Cisco NICs.
  2. Configure SR-IOV capable virtual functions on Cisco and Intel NICs.
  3. Configure the UCS Servers to support Neutron virtual ports attached to Nova VMs.


The UCS Manager Plugin talks to the UCS Manager application running on a Fabric Interconnect. The UCS Manager is part of an ecosystem for UCS servers that consists of Fabric Interconnects and, in some cases, FEXs. For further information, refer to: http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-infrastructure-ucs-manager-software/116741-troubleshoot-ucsm-00.html

SR-IOV/VM-FEX Support

This feature allows Cisco and Intel (SR-IOV support only) physical functions (NICs) to be split into several virtual functions. These virtual functions can be configured in either Direct or Macvtap mode. In both modes, the virtual functions can be assigned to Nova VMs, causing traffic from the VMs to bypass the hypervisor and be sent directly to the Fabric Interconnect. This feature increases traffic throughput to the VM and reduces CPU load on the UCS servers. In "Direct" SR-IOV/VM-FEX mode these benefits are greatest, with the limitation that live migration of the VM to a different server is not supported.

Neutron Virtual Port Support

This feature configures the NIC on the UCS server to allow traffic to and from the VMs attached to Neutron virtual ports.

The ML2 UCS Manager driver does not support configuration of UCS servers whose service profiles are attached to service profile templates. This prevents the same VLAN configuration from being pushed to all service profiles based on that template. The plugin can be used after the Service Profile has been unbound from the template.

Both the above features also configure the trunk ports from the Fabric Interconnect to the TOR switch to carry traffic from the VMs.

Note: This software is provided "as is," and in no event does Cisco warrant that the software is error free or that customer will be able to operate the software without problems or interruptions.


Prerequisites

UCS Manager Plugin driver support requires the following OS versions and packages:

  • One of two supported OSes:
    • RHEL 6.1 or above
    • Ubuntu 14.04 or above
  • One of the following Cisco NICs with VM-FEX support or Intel NICs with SR-IOV support:
    • Cisco UCS VIC 1240 (vendor_id:product_id is 1137:0071)
    • Intel 82599 10 Gigabit Ethernet Controller (vendor_id:product_id is 8086:10ed)
  • The Cisco UCS Python SDK version 0.8.2 or lower

Get the Cisco UCS Python SDK at: https://communities.cisco.com/docs/DOC-37174

Download the tar file and follow the installation instructions.

Limitations

  1. Service Profiles cannot be bound to a template. (If the Service Profiles are created from a template, they need to be "unbound" from the template.)
  2. A maximum of 2 vNICs can carry tenant traffic, and they have to be named eth0 and eth1 on the UCS Manager.

(Image: Veth.png)

  3. If the UCS Manager domain contains 2 Fabrics, both fabrics are configured identically.

Configuration on UCS Manager

1. To be able to assign Cisco VM-FEX ports to Nova VMs, the SR-IOV capable Cisco VICs should be configured with a Dynamic vNIC profile.

(Image: VNICdynamic.PNG)


2. Next, associate the desired Ethernet port on the Service Profile with the Dynamic vNIC policy created in step 1.

(Image: Service.PNG)


On the compute host which has this SR-IOV capable Cisco VIC:

  1. Add "intel_iommu=on" to "GRUB_CMDLINE_LINUX" in /etc/sysconfig/grub [in RHEL] or /etc/default/grub [in Ubuntu]
  2. Regenerate grub.conf by running : grub2-mkconfig -o /boot/grub2/grub.cfg on BIOS systems or grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg on UEFI systems.
  3. Reboot the compute host.
  4. After the host is up make sure that IOMMU is activated by running : dmesg | grep -iE "dmar|iommu" . The output should include:
    [ 0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-3.13.0-24-generic root=/dev/mapper/devstack--38--vg-root ro quiet intel_iommu=on
    [ 0.000000] Intel-IOMMU:enabled
  5. Then run the command : "lspci –nn | grep Cisco" on the same compute host to make sure the SR-IOV capable PF have been split into VFs as specified by the Dynamic vNIC connection policy on the UCS Manager. The output should contain several lines that look like this:
    0a:00.1 Ethernet controller [0200]: Cisco Systems Inc VIC SR-IOV VF [1137:0071] (rev a2)

Procedure

In this release, with the increased number of UCS servers supported by the plugin through multiple UCS Managers, providing a host to Service Profile mapping for each server would be tedious. To make configuration of the plugin easier, it now learns the hostname to Service Profile mapping directly from the UCS Manager. The only condition is that the cloud administrator must make sure that the "Name" field of the Service Profile on the UCS Manager contains the hostname of the server as the hypervisor sees it. In the Liberty release, all of the UCSM plugin code has been moved to the networking-cisco repository at https://git.openstack.org/cgit/openstack/networking-cisco/. The install scripts can be modified to pull this code using:

git clone https://github.com/openstack/networking-cisco

To support UCS Manager Plugin driver configuration on a controller node, add the following configuration settings under the ml2_cisco_ucsm heading in the ml2_conf_cisco.ini file, as shown in the following example:

Example:

# SR-IOV and VM-FEX vendors supported by this plugin
# xxxx:yyyy represents vendor_id:product_id
# This config is optional.
# supported_pci_devs=['2222:3333', '4444:5555']

# UCSM information for multi-UCSM support.
# The following section can be repeated for the number of UCS Managers in
# the cloud.
# UCSM information format:
# [ml2_cisco_ucsm_ip:1.1.1.1]
# ucsm_username = username
# ucsm_password = password
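A filled-in section might look like the following sketch (the IP address and credentials are placeholders):

[ml2_cisco_ucsm_ip:10.10.10.10]
ucsm_username = admin
ucsm_password = mySecretPasswordForUCSM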


On the node hosting the VM, add the following entry to the nova.conf file:
pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071","address":"*:0a:00.*","physical_network":"physnet1"}


Where:

vendor_id is the vendor number of the VM-FEX or SR-IOV supporting NIC on the compute node (1137 in this example).

product_id is the product number of the VM-FEX or SR-IOV supporting NIC on the compute node (0071 in this example).

address is optional. If address is omitted, the driver will allow the connection at any address.

physical_network is optional, but strongly recommended. If physical_network is omitted, the driver will allow the connection on any network. This is not desirable behavior.

Note: The PCI device (NIC) vendor ID and product ID must be configured to exactly match the configuration on the compute node, otherwise the driver will not attempt to connect to the VF. There will be no warning of this failure.

  • Ensure that the physical network is configured in ml2_conf.ini (an equivalent sketch follows the settings below):

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:800:900
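If you are editing ml2_conf.ini directly rather than setting the DevStack variables above, the equivalent VLAN range configuration would be the following sketch, using the [ml2_type_vlan] syntax shown earlier in this document:

[ml2_type_vlan]
network_vlan_ranges = physnet1:800:900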

UCS Manager Plugin Driver Configuration for DevStack

To use the UCS Manager Plugin driver with the DevStack configuration, do the following additional configuration steps:

  • Set the following variable in the localrc file:
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,cisco_ucsm
  • In local.conf, set the PCI passthrough whitelist and define the physical network (see the placement sketch after these settings):
pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071","physical_network":"physnet1"}
PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:800:900
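One possible placement of the whitelist in local.conf, assuming DevStack's post-config syntax shown in the VXLAN example above and nova.conf exposed as $NOVA_CONF:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id":"1137","product_id":"0071","physical_network":"physnet1"}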

UCS Manager TripleO Configuration

To use the UCS Manager Plugin driver with TripleO, add the following to tripleo-heat-templates/environments/neutron-ml2-cisco-nexus-ucsm.yaml:


resource_registry:
  OS::TripleO::AllNodesExtraConfig: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/all_nodes/neutron-ml2-cisco-nexus-ucsm.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /usr/share/openstack-tripleo-heat-templates/net-config-bridge.yaml
 
parameter_defaults:
  NeutronMechanismDrivers: 'openvswitch,cisco_ucsm'
  NetworkUCSMIp: '1.1.1.1'
  NetworkUCSMUsername: 'user'
  NetworkUCSMPassword: 'password'
  NetworkUCSMHostList: '54:A2:74:CC:73:51:Serviceprofile1, 84:B8:02:5C:00:7A:Serviceprofile2'
 
parameters:
  controllerExtraConfig:
    neutron::server::api_workers: 0
    neutron::agents::metadata::metadata_workers: 0
    neutron::server::rpc_workers: 0


Table Example

The following table lists the minimum requirements for the Cisco UCS servers that you use for the nodes in your OpenStack cluster:

Server/Node: Build node

  Recommended hardware:
  • Processor: 64-bit x86
  • Server or VM with:
    • Memory: 4 GB (RAM)
    • Disk space: 20 GB

  Notes: The build node must also have Internet connectivity to be able to download Cisco OSI modules and Puppet manifests.

  To ensure that the build node can build and communicate with the other nodes in your cluster, it must also have a network interface on the same network as the management interfaces of the other OpenStack cluster servers.

  A minimal build node (for example, a VM with 4 GB of RAM and a 20-GB disk) is sufficient for a test install. However, because the build node acts as the puppet master, caches client components, and logs all installation activity, you might need a more powerful machine with more disk space for larger installs.

Server/Node: Control node

  Recommended hardware:
  • Processor: 64-bit x86
  • Memory: 12 GB (RAM)
  • Disk space: 1 TB (SATA or SAS)
  • Network: Two 1-Gbps network interface cards (NICs)

  Notes: A quad-core server with 12 GB of RAM is sufficient for a minimal control node. These are minimum requirements; memory, disk, and interface speed will be greater for larger clusters.

Server/Node: Compute node

  Recommended hardware:
  • Processor: 64-bit x86
  • Memory: 128 GB (RAM)
  • Disk space: 300 GB (SATA)
  • Volume storage: Two disks with 2 TB (SATA) for volumes attached to the compute nodes
  • Network: Two 1-Gbps NICs

  Notes: These are minimum requirements; memory, disk, and interface speed will be greater for larger clusters.

Server/Node: HA proxy (load balance) node

  Recommended hardware:
  • Processor: 64-bit x86
  • Memory: 12 GB (RAM)
  • Disk space: 20 GB (SATA or SAS)
  • Network: One 1-Gbps NIC

  Notes: These are minimum requirements; memory, disk, and interface speed will be greater for larger clusters.

Server/Node: Swift storage proxy node

  Recommended hardware:
  • Processor: 64-bit x86
  • Memory: 12 GB (RAM)
  • Disk space: 300 GB (SATA or SAS)
  • Network: Two 1-Gbps NICs

  Notes: These are minimum requirements; memory, disk, and interface speed will be greater for larger clusters.

Server/Node: Swift storage node

  Recommended hardware:
  • Processor: 64-bit x86
  • Memory: 32 GB (RAM)
  • Disk space: 300 GB (SATA)
  • Volume storage:
    • For rack-mount servers, either 24 disks with 1 TB (SATA) or 2 disks with 3 TB (SATA) depending upon the model
    • For blade servers, two disks with 1 TB (SATA) for combined base OS and storage
  • Network: Two 1-Gbps NICs

  Notes: Three or more storage nodes are needed. These are minimum requirements; memory, disk, and interface speed will be greater for larger clusters.
