OpenStack: Havana: 2-Role Nexus
This article describes how to use the Cisco OpenStack Installer (Cisco OSI) to set up a working OpenStack cluster (using the Havana release) that uses Nexus ToR switches in VLAN mode.
Before you begin
Please read the release notes for the release you are installing. The release notes contain important information about limitations, features, and how to use snapshot repositories if you're installing an older release.
Setting Up the Nexus Switch
The following items need to be considered before proceeding with the Cisco Nexus Plugin deployment on the OpenStack Havana release.
- Your Nexus switch must be connected to a management network separate from the OpenStack data network. By default, we use the eth0 interface for this purpose. The plugin communicates with the switch over this network to set up your data flows.
- The switch must have SSH login and XML API enabled.
- Each compute host on the cloud should be connected to a port on the switch over a dedicated interface just for OpenStack data traffic. By default we use eth1 for this.
- The link from the node running the DHCP agent (by default, this is the OpenStack control node) should be a trunk link trunking all VLANs used for OpenStack traffic.
- Inter-switch links should be trunk links trunking all VLANs.
Figure 1 can be used to visualize the layout of the OpenStack compute nodes and the attached Nexus switches.
Figure 1: Nexus Plugin Diagram
Build Node Configuration
- Follow the build node setup instructions from the Havana installation guide. After install.sh completes you will need to configure the data model to reflect your topology.
- Update /etc/puppet/data/role_mappings.yaml to reflect your short hostname for build, control and compute roles.
build-server: build
control-server01: controller
compute-server01: compute
compute-server02: compute
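The mapping above can be sanity-checked by eye, but a short sketch (illustrative only; it handles just the flat "hostname: role" layout shown above, not arbitrary YAML) shows how each short hostname resolves to exactly one role:

```python
# Illustrative parser for flat "hostname: role" entries like those in
# /etc/puppet/data/role_mappings.yaml. Not part of Cisco OSI itself.

ROLE_MAPPINGS = """\
build-server: build
control-server01: controller
compute-server01: compute
compute-server02: compute
"""

def parse_role_mappings(text):
    """Parse flat 'hostname: role' lines into a dict."""
    mappings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        host, _, role = line.partition(":")
        mappings[host.strip()] = role.strip()
    return mappings

roles = parse_role_mappings(ROLE_MAPPINGS)
print(roles.get("compute-server01"))  # compute
```

If the short hostname reported by `hostname -s` on a node is missing from this mapping, puppet will not know which role to apply.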
Configuration and Customization
At this point the data model has been installed, and any required customizations should be made to it. Listed below are the changes specific to the Nexus plugin; these are in addition to the regular changes made to user.common.yaml in a 2_role setup. Refer to Openstack:Havana-Openstack-Installer for a standard 2_role setup.
# supports gre or vlan
tenant_network_type: vlan
In /etc/puppet/data/hiera_data/tenant_network_type/vlan.yaml, set network_vlan_ranges to the VLAN range you want to use. Before specifying the VLAN ranges in the $network_vlan_ranges parameter, log into the switch to confirm that the specified range is actually free to use.
quantum::plugins::ovs::network_vlan_ranges: physnet1:500:600
quantum::agents::ovs::bridge_mappings:
  - "physnet1:br-ex"
neutron::agents::ovs::bridge_mappings:
  - "physnet1:br-ex"
neutron::plugins::ovs::network_vlan_ranges: physnet1:500:600
quantum::plugins::ovs::tenant_network_type: vlan
neutron::plugins::ovs::tenant_network_type: vlan
quantum::agents::ovs::enable_tunneling: false
neutron::agents::ovs::enable_tunneling: false
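The network_vlan_ranges value follows a "physnet:min:max" pattern. A minimal sketch (not the plugin's actual parser, just an illustration of how the entry decomposes and of the valid 802.1Q range):

```python
# Illustrative only: decompose a "physnet:min:max" VLAN range entry such as
# the physnet1:500:600 value configured above.

def parse_vlan_range(entry):
    """Split 'physnet1:500:600' into (physical_network, vlan_min, vlan_max)."""
    physnet, vlan_min, vlan_max = entry.split(":")
    vlan_min, vlan_max = int(vlan_min), int(vlan_max)
    if not (1 <= vlan_min <= vlan_max <= 4094):
        raise ValueError("VLAN range must fall within 1-4094")
    return physnet, vlan_min, vlan_max

print(parse_vlan_range("physnet1:500:600"))  # ('physnet1', 500, 600)
```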
In /etc/puppet/data/class_groups/network_controller.yaml, uncomment these two lines
- "coe::nexus"
- "neutron::plugins::cisco"
In /etc/puppet/data/data_mappings/common.yaml, uncomment the "Cisco plugin support" section.
# Cisco plugin support
nexus_plugin:
  - neutron::plugins::cisco::nexus_plugin
vswitch_plugin:
  - neutron::plugins::cisco::vswitch_plugin
nexus_credentials:
  - coe::nexus::nexus_credentials
nexus_config:
  - coe::nexus::nexus_config
In /etc/puppet/data/hiera_data/user.common.yaml, set the core_plugin, the sub-plugins, and the switch credentials and configuration:
neutron::core_plugin: 'neutron.plugins.cisco.network_plugin.PluginV2'

# The Nexus sub-plugin to use.
nexus_plugin: 'neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin'

# The vswitch sub-plugin to use with the Cisco plugin.
vswitch_plugin: 'neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2'

# The credentials to use to log into each Nexus switch in your topology.
# This should be an array of strings of the following form (switch IP
# address, username, and password separated by forward slashes):
#   - 'switch1 IP address/username/password'
#   - 'switch2 IP address/username/password'
nexus_credentials:
  - '10.121.XXX.XX/username/password'

# Switch configuration information. This is a hash for each switch in
# your topology. The format for each entry is:
#   'switch_name':                 # The switch's hostname or other unique identifier
#     'ip_address': '192.168.2.81' # The IP of the switch
#     'username': 'admin'          # The username to log in with
#     'password': 'password'       # The password to log in with
#     'ssh_port': 22               # The port NETCONF messages should be sent to via SSH
#     'servers':
#       'server1': '1/1'           # Hostname of a server and the port it is connected to
#       'server2': '1/2'
nexus_config:
  'n3k-1':
    'ip_address': '10.121.XXX.XX'
    'username': 'username'
    'password': 'password'
    'ssh_port': 22
    'servers':
      'compute-server01': '1/8'
      'compute-server02': '1/9'
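Each nexus_credentials entry is a plain "IP/username/password" string. A small illustrative parser (the IP and credentials below are made-up examples, not values from a real deployment):

```python
# Illustrative sketch of the nexus_credentials string format documented
# above: 'switch IP address/username/password', separated by forward
# slashes. Note this format assumes the password contains no '/'.

def parse_credential(entry):
    """Split an 'ip/username/password' string into a dict."""
    ip, username, password = entry.split("/")
    return {"ip": ip, "username": username, "password": password}

creds = [parse_credential(c) for c in ["192.168.2.81/admin/secret"]]
print(creds[0]["ip"])  # 192.168.2.81
```

The IP address in each credential string must match the 'ip_address' of the corresponding switch in nexus_config, or the plugin will not find a login for that switch.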
The sample config files can be found at https://github.com/CiscoSystems/coi-sample-configs/tree/havana/2_role_nexus
Example Nexus ToR Configuration
interface Ethernet1/1
  description to N7k-agg-1
  switchport mode trunk
  switchport trunk allowed vlan 13,500-600
interface Ethernet1/2
  description to N7k-agg-2
  switchport mode trunk
  switchport trunk allowed vlan 13,500-600
interface Ethernet1/5
  description to control-server Management
  switchport mode trunk
  switchport trunk allowed vlan 13
  speed 1000
interface Ethernet1/6
  description to control-server Data
  switchport mode trunk
  switchport trunk allowed vlan 13,500-600
  speed 1000
interface Ethernet1/7
  description to compute-server01 Management
  switchport mode trunk
  switchport trunk allowed vlan 13
  speed 1000
interface Ethernet1/8
  description to compute-server01 Data
  switchport mode trunk
  speed 1000
The Nexus example configuration above has a trunk link configured to each upstream Nexus 7000 aggregation-layer switch that carries the VLAN ranges defined in /etc/puppet/data/hiera_data/tenant_network_type/vlan.yaml (500-600). In this example the control node ("control-server") has two physical interfaces, just like the compute nodes: one for the management interface (VLAN 13) and one for the data interface, a trunk interface preconfigured with the same VLANs used as the provider network ranges (500-600). This is needed if the DHCP agent is deployed on the control server, as it will need connectivity to assign addresses to instances on the compute nodes that are attached to VLANs 500-600. Also notice that the compute nodes have two physical interfaces and that the data interface does not have a switchport trunk allowed command configured. This command is added by the Cisco Nexus plugin when the first instance is launched on each VLAN defined in the /etc/puppet/data/hiera_data/tenant_network_type/vlan.yaml file.
Run puppet apply to get your build node configured.
puppet apply -v /etc/puppet/manifests/site.pp
Once puppet finishes, your build node should be ready to serve as a puppet master and Cobbler server.
Control and Compute Nodes
Now that the build server is up, let's bring up the control and compute nodes. If you're using Cobbler, you should be able to provision them via PXE.
If you prefer a manual setup:
- Install Ubuntu 12.04 on the control and compute nodes.
- Make sure the hostnames match the ones defined in role_mappings.yaml
- Install git
apt-get install git
- Clone Cisco Openstack Installer Repository and run setup
cd /root
git clone -b havana https://github.com/CiscoSystems/puppet_openstack_builder
cd puppet_openstack_builder
git checkout h.2
- Export your build server IP address
cd install_scripts
export build_server=10.121.13.17
- Now run the setup script to get your node ready to run the right version of puppet. This script also makes sure you have the right hostname in /etc/hosts, etc.
- Now your control and compute nodes are ready to run puppet. Begin the control/compute build by running the puppet agent:
puppet agent -td --server=build-server.domain.name --pluginsync
After the puppet runs finish, you should have a successful OpenStack install.
You can verify that all of the OpenStack Nova services were installed and running correctly by checking the Nova service list:
root@control-server:~# nova-manage service list
Binary           Host        Zone      Status   State  Updated_At
nova-consoleauth all-in-one  internal  enabled  :-)    2014-03-11 17:34:17
nova-scheduler   all-in-one  internal  enabled  :-)    2014-03-11 17:34:16
nova-conductor   all-in-one  internal  enabled  :-)    2014-03-11 17:34:13
nova-compute     all-in-one  nova      enabled  :-)    2014-03-11 17:34:13
nova-cert        all-in-one  internal  enabled  :-)    2014-03-11 17:34:17
You can verify that the OpenStack Quantum/Neutron agents are up:
source openrc
neutron agent-list
+--------------------------------------+--------------------+------------------+-------+----------------+
| id                                   | agent_type         | host             | alive | admin_state_up |
+--------------------------------------+--------------------+------------------+-------+----------------+
| 02b41613-b8d1-4cb9-bc6c-3a37ffdb2285 | Loadbalancer agent | control-server   | :-)   | True           |
| 08254493-6f86-436e-b33b-76628ad60ff5 | Open vSwitch agent | control-server   | :-)   | True           |
| 72123130-e958-4517-bdc5-6815cfc45652 | DHCP agent         | control-server   | :-)   | True           |
| ab21842e-95c6-4acf-a1b0-6b70582d05cc | Open vSwitch agent | compute-server01 | :-)   | True           |
| eb39c702-4843-454a-9f0a-01688b8d39de | L3 agent           | control-server   | :-)   | True           |
+--------------------------------------+--------------------+------------------+-------+----------------+
You can connect to the OpenStack Dashboard using username admin and password Cisco123.
Quantum/Neutron Commands for Network/Subnet Creation
By default, the neutron-server service loads only /etc/neutron/plugins/cisco/cisco_plugins.ini when it starts, but the ovs_neutron_plugin.ini config file must also be loaded. So the service needs to be stopped and restarted with the additional config file:
service neutron-server stop
python /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf \
  --log-file /var/log/neutron/server.log \
  --config-file /etc/neutron/plugins/cisco/cisco_plugins.ini \
  --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini &
Create a provider network for VLAN 500. Name the network whatever you want (here we used "vlan500"). The provider network type is "vlan". The provider physical network is physnet1 on eth1. The provider segmentation id of "500" simply refers to VLAN 500 defined in the config file. Finally, we defined that this is an external network with an upstream router.
neutron net-create vlan500 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 500 --shared --router:external=True
Create a subnet for VLAN 500. Name the subnet whatever you want (here we used "subnet500"); you don't actually have to enter a name for the subnet. Since our upstream aggregation-layer switches are using HSRP (VRRP/GLBP are other options) and the addresses used on those switches are .1 = standby address, .2 = N7k-agg-1, and .3 = N7k-agg-2, we need to create an allocation-pool range that begins after those addresses. By default the addresses for OpenStack use would begin at .1, which would cause an IP address conflict. In this example we begin at .10 and end at .254 to account for any other management addresses that may be used. "vlan500" is the network name that we are attaching the subnet to. The subnet range is 192.168.250.0/24, and the DNS server address assigned to instances attached to the network is 10.121.12.10.
neutron subnet-create --name subnet500 --allocation-pool start=192.168.250.10,end=192.168.250.254 vlan500 192.168.250.0/24 --dns_nameservers list=true 10.121.12.10
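The allocation-pool arithmetic above can be double-checked with Python's ipaddress module (a sketch; the reserved .1-.3 addresses come from the HSRP example topology described here):

```python
# Verify that the allocation pool .10-.254 avoids the HSRP/aggregation
# switch addresses .1-.3 in the 192.168.250.0/24 subnet from the example.
import ipaddress

subnet = ipaddress.ip_network("192.168.250.0/24")
reserved = {subnet.network_address + i for i in (1, 2, 3)}  # HSRP + N7k-agg pair
pool_start = subnet.network_address + 10
pool_end = subnet.network_address + 254

# The pool must not contain any of the reserved addresses.
assert not any(pool_start <= addr <= pool_end for addr in reserved)
print(pool_start, pool_end)  # 192.168.250.10 192.168.250.254
```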
NOTE: Make sure you manually log in (via SSH) to the switch from the control and compute nodes before creating instances. This is required because the switch needs to be a known host on the servers before the plugin tries to log into the switch and create VLANs.
The Nexus Plugin in Action
The Nexus Top-of-Rack switch has been configured, and the Quantum/Neutron network and subnet have been defined. After you launch an instance and attach it to the network, we can see that the Cisco Nexus plugin has used the $nexus_config and $nexus_credentials information to log into the Cisco Nexus switch, define VLAN 500, and modify the appropriate interface to trunk that VLAN.
vlan 500
  name p-500
interface Ethernet1/8
  description to compute-server01 Data
  switchport mode trunk
  switchport trunk allowed vlan 500
  speed 1000
The Cisco Nexus switch configuration would be changed each time a new instance is created for a newly defined VLAN as the following example shows for VLAN 600:
vlan 500
  name p-500
vlan 600
  name p-600
interface Ethernet1/8
  description to compute-server01 Data
  switchport mode trunk
  switchport trunk allowed vlan 500,600
  speed 1000