OpenStack: Icehouse: 2-Role Vlan

Before you begin

Please read the release notes for the release you are installing. The release notes contain important information about limitations, features, and how to use snapshot repositories if you're installing an older release.

Testbed Setup

Consider the following before proceeding with the setup. Make sure the control and compute nodes each have two usable network interfaces. The first interface acts as the management and public interface for the server. The second interface carries data traffic; Neutron forwards tenant traffic over this interface on the various VLANs.
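
As an illustration, an Ubuntu /etc/network/interfaces layout along these lines satisfies the requirement. The interface names (eth0/eth1) and the addresses are placeholders, not values from this testbed; adjust them to your environment. The data interface is brought up without an IP address so that Open vSwitch can use it for tenant VLAN traffic.

# /etc/network/interfaces (sketch; interface names and addresses are examples)
auto eth0
iface eth0 inet static
    address 192.168.100.11
    netmask 255.255.255.0
    gateway 192.168.100.1

# Data interface: no IP address, carries the tenant VLAN traffic
auto eth1
iface eth1 inet manual
    up ip link set dev eth1 up
    down ip link set dev eth1 down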

Build Node Configuration

To create the build node, follow the build node setup instructions from the Cisco OpenStack Installer Deployment Guide.

After the install.sh script completes, do the following:

  • Update /etc/puppet/data/role_mappings.yaml to reflect your short hostnames for build, control and compute roles.

Example:

build-server: build
control-server01: controller
compute-server01: compute
compute-server02: compute
  • Configure the data model to reflect your topology, as described next.

Configuration and Customization

At this point the data model has been installed, and any required customizations should be made to it. Listed below are the changes specific to VLAN networking; they are in addition to the regular changes made to user.common.yaml in a 2-role setup. Refer to OpenStack:_Icehouse:_2-Role#Build_Node_Configuration for the regular 2-role setup.

In /etc/puppet/data/global_hiera_params/common.yaml

# supports gre or vlan
tenant_network_type: vlan

In /etc/puppet/data/hiera_data/tenant_network_type/vlan.yaml, set network_vlan_ranges to the VLAN range you want to use. Make sure the VLANs you choose are actually free to use and are forwarded by the physical switch in your testbed. There are multiple network_vlan_ranges and bridge_mappings variables in the file; make sure you set the ones for ML2, as shown below.

neutron::plugins::ml2::network_vlan_ranges:
  - 'physnet1:500:600'
# ML2 Agent
neutron::agents::ml2::ovs::bridge_mappings:
  - "physnet1:br-ex"


In /etc/puppet/data/hiera_data/user.common.yaml, set the core plugin, the service plugins, and the switch credentials and configuration:

# An array of Neutron service plugins to enable.
service_plugins:
# For ML2, uncomment this line
  - 'neutron.services.l3_router.l3_router_plugin.L3RouterPlugin'
  - 'neutron.services.loadbalancer.plugin.LoadBalancerPlugin'
  - 'neutron.services.firewall.fwaas_plugin.FirewallPlugin'
  - 'neutron.services.vpn.plugin.VPNDriverPlugin'

# ml2 hack. Current packages have ml2 enabled, even if you do not
# use the driver. This param corrects the configuration files.
# This is enabled by default. If you are using ml2, change to false.
disableml2: false

# For ML2, uncomment this line
neutron::core_plugin: 'ml2'

Based on the type of deployment you have, make the following changes in the corresponding scenario file under data/scenarios/ (2_role, 3_role, all_in_one, compressed_ha, full_ha, swift). For the HA deployments, the change shown below needs to be made to additional class groups as well; comments in those files point out where to make the additional changes.

  controller:
    classes:
      - coe::base
      - "nova::%{rpc_type}"
    class_groups:
      - glance_all
      - keystone_all
      - cinder_controller
      - nova_controller
      - horizon
      - ceilometer_controller
      - heat_all
      - "%{db_type}_database"
#      - network_controller
# For ML2, Uncomment this and comment above line
      - network_controller_ml2
      - test_file
  compute:
    classes:
      - coe::base
      - cinder::setup_test_volume
    class_groups:
#      - nova_compute
# For ML2, Uncomment this and comment above line
      - nova_compute_ml2
      - cinder_volume
      - ceilometer_compute

The sample config files can be found at https://github.com/CiscoSystems/coi-sample-configs/tree/havana/2_role_nexus


The Nexus example configuration above has a trunk link configured to each upstream Nexus 7000 aggregation layer switch; the trunk carries the VLAN ranges defined in /etc/puppet/data/hiera_data/tenant_network_type/vlan.yaml (500-600). In this example the control node ("control-server") has two physical interfaces, just like the compute nodes: one for the management interface (VLAN 13) and one for the data interface, which is a trunk preconfigured with the same VLANs used as the provider network ranges (500-600). This is needed if the DHCP agent is deployed on the control server, because it needs connectivity to assign addresses to instances on the compute nodes that are attached to VLANs 500-600. Also notice that the compute nodes have two physical interfaces and that their data interfaces do not have a switchport trunk allowed command configured. That command is applied by the Cisco Nexus plugin when the first instance is launched for each VLAN defined in the /etc/puppet/data/hiera_data/tenant_network_type/vlan.yaml file.
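
For reference, the kind of switch configuration described above looks roughly like the following. This is a sketch, not the exact testbed configuration, and the interface names are assumptions. The compute-facing port is left as a bare trunk because the Cisco Nexus plugin adds the allowed-VLAN list per tenant VLAN at instance launch:

! Trunk toward the aggregation layer / control node data interface
interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 500-600

! Compute node data interface: trunk with no allowed-vlan list preconfigured
interface Ethernet1/2
  switchport mode trunk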

Run puppet apply to get your build node configured.

puppet apply -v /etc/puppet/manifests/site.pp

Once puppet finishes, your build node should be ready to serve as a puppet master and cobbler server.
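
As a quick sanity check, assuming the stock cobbler and puppet 3.x command-line tools, you can confirm that both services are in place:

# Systems cobbler will PXE-boot (populated from the data model)
cobbler system list

# Certificates known to the puppet master (agents appear here after they check in)
puppet cert list --all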

Control and Compute Nodes

Now that the build server is up, let's bring up the control and compute nodes. If you're using cobbler, you should be able to provision them via PXE.

If you prefer to do a manual setup:

  • Install Ubuntu 12.04 on the control and compute nodes.
  • Make sure the hostnames match the ones defined in role_mappings.yaml (see the hostname sketch after this list)
  • Install git
 apt-get install git 
  • Clone the Cisco OpenStack Installer repository and check out the release
cd /root && git clone -b icehouse https://github.com/CiscoSystems/puppet_openstack_builder && cd puppet_openstack_builder && git checkout i.1
  • Export your build server IP address
cd install_scripts
export build_server=10.121.13.17
  • Now run the setup script to get your node ready to run the right version of puppet. The script also makes sure the correct hostname entries are present in /etc/hosts, among other prerequisites
bash setup.sh
  • Now your control and compute nodes are ready to run puppet. Begin the control/compute build by running the puppet agent
 puppet agent -td --server=build-server.domain.name --pluginsync 
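
For the hostname step above, a minimal sketch, assuming the node should map to compute-server01 in role_mappings.yaml:

echo compute-server01 > /etc/hostname
hostname compute-server01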

After the puppet runs finish, you should have a successful OpenStack install.

Verification

You can verify that all of the OpenStack Nova services were installed and running correctly by checking the Nova service list:

root@control-server:~# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth all-in-one                           internal         enabled    :-)   2014-03-11 17:34:17
nova-scheduler   all-in-one                           internal         enabled    :-)   2014-03-11 17:34:16
nova-conductor   all-in-one                           internal         enabled    :-)   2014-03-11 17:34:13
nova-compute     all-in-one                           nova             enabled    :-)   2014-03-11 17:34:13
nova-cert        all-in-one                           internal         enabled    :-)   2014-03-11 17:34:17

You can verify that the OpenStack Neutron services are up:

source openrc 

neutron agent-list
+--------------------------------------+--------------------+------------------+-------+----------------+
| id                                   | agent_type         | host             | alive | admin_state_up |
+--------------------------------------+--------------------+------------------+-------+----------------+
| 02b41613-b8d1-4cb9-bc6c-3a37ffdb2285 | Loadbalancer agent | control-server   | :-)   | True           |
| 08254493-6f86-436e-b33b-76628ad60ff5 | Open vSwitch agent | control-server   | :-)   | True           |
| 72123130-e958-4517-bdc5-6815cfc45652 | DHCP agent         | control-server   | :-)   | True           |
| ab21842e-95c6-4acf-a1b0-6b70582d05cc | Open vSwitch agent | compute-server01 | :-)   | True           |
| eb39c702-4843-454a-9f0a-01688b8d39de | L3 agent           | control-server   | :-)   | True           |
+--------------------------------------+--------------------+------------------+-------+----------------+

You can connect to the OpenStack Dashboard by entering:


http://ip-of-your-control-server

using the username admin and the password Cisco123.


Create a provider network for VLAN 500. Name the network whatever you want (here we used "vlan500"). The provider network type is "vlan". The provider physical network is physnet1, which is carried on eth1. The provider segmentation ID of "500" simply refers to VLAN 500 defined in the config file. Finally, the network is marked as shared and as an external network, since there is an upstream router.

neutron net-create vlan500 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 500 --shared --router:external=True

Create a subnet for VLAN 500. Name the subnet whatever you want (here we used "subnet500"); you don't actually have to give the subnet a name. Since our upstream aggregation layer switches are using HSRP (VRRP and GLBP are other options) and the addresses used on those switches are .1 = standby address, .2 = N7k-agg-1, and .3 = N7k-agg-2, we need to create an allocation pool range that begins after those addresses. By default, the addresses allocated by OpenStack would begin at .1, which would cause an IP address conflict. In this example the pool begins at .10, leaving room for the switch addresses and any other management addresses that may be in use, and ends at .254. "vlan500" is the network that the subnet is attached to, the subnet range is 192.168.250.0/24, and the DNS server address assigned to instances on the network is 10.121.12.10.

neutron subnet-create --name subnet500 --allocation-pool start=192.168.250.10,end=192.168.250.254 vlan500 192.168.250.0/24 --dns_nameservers list=true 10.121.12.10
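
A quick way to confirm that the provider network and subnet were created as expected is the standard Neutron CLI:

neutron net-show vlan500
neutron subnet-list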

NOTE: Make sure you manually log in (ssh) to the switch from both the control and the compute nodes before creating instances. This is required because the switch needs to be a known host on those servers before the plugin tries to log in to the switch and create VLANs.
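
For example, from both the control and the compute node, accept the switch host key so it lands in ~/.ssh/known_hosts (the username and address are placeholders for your Nexus switch credentials):

ssh admin@<nexus-switch-ip>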
