OpenStack: Icehouse All-in-One


This document describes how to deploy three OpenStack networking scenarios based on a single server acting as a build, control, and compute node (the all-in-one scenario). The all-in-one scenario is one of several supported by the Cisco OpenStack Installer (Cisco OSI). For more information about Cisco's OpenStack solution and the Cisco OpenStack Installer, see the Cisco DocWiki OpenStack home page.


Overview

This document describes how to deploy three networking scenarios based on the all-in-one scenario:

  • Model 1: On a single server, deploy an all-in-one node using a tenant network model.
  • Model 2: Add an additional compute node to the all-in-one node from Model 1, still using a tenant network model.
  • Model 3: Modify the all-in-one and compute nodes from Model 2 to use a provider network model.

The figures below illustrate the network topology of these three deployments:

Figure 1. Model 1: AIO Per-Tenant Router with Private Networks Diagram.

AIO-H2.jpg
Figure 2. Model 2: AIO & Additional Compute Node using Per-Tenant Router with Private Networks Diagram.

AIO-Compute-H2.jpg
Figure 3. Model 3: AIO & Additional Compute Node using Provider Network Extensions with VLANs Diagram.

AIO-Compute-VLAN-H2.jpg
Model 1

Model 1 comprises an all-in-one (AIO) configuration with a per-tenant router, as shown in Figure 1. For a description of the network model for this scenario, see "Per-tenant Routers with Private Networks" in the Neutron Use Cases chapter of the community Networking in OpenStack training guide (the last scenario described in the chapter).

Creating An All-In-One Node

This section describes how to install and configure an all-in-one node.

Prerequisites

  • The AIO node is running Ubuntu Linux 14.04 LTS, deployed on physical bare-metal hardware (e.g., Cisco UCS) or as a virtual machine (e.g., VMware ESXi). For performance reasons, a bare-metal install is recommended for any non-trivial application. See OpenStack:_Installing_Ubuntu_Linux for instructions on installing Ubuntu from an ISO image onto a standalone server.
  • The AIO node has access to two physically or logically (VLAN) separated IP networks. One network is used to provide connectivity for OpenStack API endpoints, Open vSwitch (OVS) GRE endpoints (especially important if multiple compute nodes are added to the AIO deployment), and OpenStack/UCS management. The second network is used by OVS as the physical bridge interface and by Neutron as the public network.
  • You are using hostnames for the various OpenStack roles that match those in the /root/puppet_openstack_builder/data/role_mappings.yaml file. If you are not using the default hostnames then you must add your custom hostname and role to the /root/puppet_openstack_builder/data/role_mappings.yaml file before running the installation script.
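
For example, if you build the AIO node with a custom hostname such as aio-server01 (a hypothetical name used here for illustration), the mapping entry is a simple hostname-to-role pair. A minimal sketch of the line you would add to role_mappings.yaml:

aio-server01: all_in_one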

Procedure

Step 1: Configure the two network interfaces required for the AIO scenario.

a) Edit the /etc/network/interfaces file on the AIO node to look like the following:
# The primary network interface
auto eth0
iface eth0 inet static
        address 192.168.80.140
        netmask 255.255.255.0
        network 192.168.80.0
        broadcast 192.168.80.255
        gateway 192.168.80.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 8.8.8.8 8.8.4.4
        dns-search example.com

auto eth1
iface eth1 inet manual
        up ifconfig $IFACE 0.0.0.0 up
        up ip link set $IFACE promisc on
        down ifconfig $IFACE 0.0.0.0 down
b) Reboot the AIO node:
reboot
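
After the node reboots, you can sanity-check both interfaces before proceeding; ip and ping are standard tools, and the gateway address below is the one from the example file above:

ip link show eth1        # expect state UP with the PROMISC flag set
ping -c 3 192.168.80.1   # confirm the management network gateway is reachable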

Step 2: Retrieve the installation packages.

a) On the AIO node, become root:
sudo su - 
b) If your environment includes a proxy server, configure the package manager to use the proxy. For apt, add the following to the /etc/apt/apt.conf.d/00proxy file:
Acquire::https::proxy "https://your-proxy.address.com:443/";
c) If necessary, configure git with your proxy:
git config --global https.proxy https://your-proxy.address.com:443
d) Install git:
apt-get install -y git
e) Clone the Cisco OpenStack Installer repository:
cd /root && git clone -b icehouse https://github.com/CiscoSystems/puppet_openstack_builder && cd puppet_openstack_builder && git checkout i.0

Step 3: Configure the AIO installation. A generic AIO installation should require no configuration, but see the notes below if you need to customize your AIO node.

You can install the AIO scenario using the default values supplied with the installer. If you have non-standard interface definitions, hostnames, proxies, or other parameters, you can change the baseline AIO configuration.

If you want to modify the AIO configuration, we recommend that you do so before running the install.sh script. For the AIO install, all changes to the configuration are picked up and installed during the run of the install.sh script. Two example customizations of the AIO Model 1 setup are as follows:

  • Using an interface other than eth0 for SSH/Management. Export the default_interface value to the correct interface definition. For example, to use eth1, enter the following:
    export default_interface=eth1
  • Using an interface other than eth1 on your node for external (public) access. Export the external_interface value to the correct interface definition. For example, to use eth2, enter the following:
    export external_interface=eth2

Note: You can modify configuration parameters after running the install.sh script, but be aware of the following:

  • The install.sh script does a Puppet catalog run and will reflect changes in the YAML files. If you change parameters after running the install.sh script, you must run puppet apply on the all-in-one node to pick up the changes (see the example after this list).
  • If you run the install.sh script before customizing the installation, you might have to clean up some artifacts of the original parameter settings. For example, you might need to reconfigure SSL certificates, delete misconfigured network endpoints, and so on.
  • The installer creates the directory /etc/puppet and installs the configuration files there. Therefore, you must make changes in the /root/puppet_openstack_builder/data directory before you run the install. After you run the install, you make changes in the /etc/puppet/data directory.
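
For example, if you edit a YAML file under /etc/puppet/data after the initial install, a catalog run such as the following (the same command used in Step 4 of Model 3 below) picks up the change:

puppet apply -v /etc/puppet/manifests/site.pp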

Step 4: Run the installation script.

a) On the AIO node command line, export 'cisco' as the vendor:
export vendor=cisco
b) Export the AIO scenario:
export scenario=all_in_one
c) Change the directory to where the install script is located and start the installation (this can take several minutes depending on your Internet connection):
cd ~/puppet_openstack_builder/install-scripts && ./install.sh

If the install script and Puppet run finish successfully, the system returns to the shell prompt with the Notice message "Finished catalog run".

Running the install command as given above generates the /var/log/puppet_openstack_builder_install.log file that contains all the output that was sent to the terminal window. It contains information that can be useful should you need to debug the installation.
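
One quick way to scan that log for problems is a case-insensitive search, for example:

grep -iE 'error|err:' /var/log/puppet_openstack_builder_install.log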

Note: The installation script generates Warning messages. These warnings are mostly harmless. If you see an error message, or a warning message that you suspect is significant, see OpenStack:_Troubleshooting for help.

Post Installation Steps

  • Verify that all of the OpenStack Nova services were installed and running correctly by checking the Nova service list. Enter:
nova-manage service list

The system should list services similar to the following:

Binary            Host        Zone      Status   State  Updated_At
nova-consoleauth  all-in-one  internal  enabled  :-)    2014-03-11 17:34:17
nova-scheduler    all-in-one  internal  enabled  :-)    2014-03-11 17:34:16
nova-conductor    all-in-one  internal  enabled  :-)    2014-03-11 17:34:13
nova-compute      all-in-one  nova      enabled  :-)    2014-03-11 17:34:13
nova-cert         all-in-one  internal  enabled  :-)    2014-03-11 17:34:17
  • Open the OpenStack Dashboard.
    1. In your browser, enter:
      http://<ip-of-your-aio>
    2. In the login interface, enter username admin and password Cisco123

Starting a Tenant Instance

At this point you can configure and install a tenant instance on the all-in-one node. See Booting a Tenant Instance.

Model 2

In the Model 2 example, you add a compute node in addition to the compute capability of the AIO node.

The AIO node will act as the "networking node". All traffic into and out of the instances running on the second (compute) node will traverse a GRE tunnel between the br-tun interfaces on the AIO node and the compute node.
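
Once the compute node has been added, you can confirm the tunnel from either node with the standard OVS CLI, for example:

ovs-vsctl show    # under bridge br-tun, look for a port of type gre whose options include remote_ip=<peer's management IP>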

Adding a Compute Node

To implement Model 2, you create and run a compute node, and add it to the networks you created on the AIO node.

For an illustration of the node and network configuration of the finished setup, see Figure 2.

Procedure

To build the compute node, follow the instructions on this page: Adding A Compute Node.

Model 3: Using a VLAN Network

This section describes how to set up a Provider Network Extensions with VLANs networking model, using the all-in-one and compute nodes that you built for models 1 and 2.

The steps for setting up the Neutron Networking for Model 3 are similar to those for Model 2, with these differences:

  • There is no concept of a 'private' network
  • In this example, a Neutron router is not being used. Instead, the instance will map directly to the network that is logically representing the VLAN networks that were previously trunked into OVS.

Prerequisites

This section assumes that you have created an OpenStack all-in-one node and a separate compute node as described in Models 1 and 2 above. These nodes will have the following features:

  • The AIO node and compute nodes are built on Ubuntu 14.04 LTS.
  • Two physically or logically (VLAN) separated IP networks on both the AIO and the compute node, with /etc/network/interface files as shown below:
# The primary network interface for all-in-one
auto eth0
iface eth0 inet static
        address 192.168.80.140
        netmask 255.255.255.0
        network 192.168.80.0
        broadcast 192.168.80.255
        gateway 192.168.80.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 8.8.8.8 8.8.4.4
        dns-search example.com

auto eth1
iface eth1 inet manual
        up ifconfig $IFACE 0.0.0.0 up
        up ip link set $IFACE promisc on
        down ifconfig $IFACE 0.0.0.0 down

# The primary network interface for compute-server01
auto eth0
iface eth0 inet static
        address 192.168.80.141
        netmask 255.255.255.0
        network 192.168.80.0
        broadcast 192.168.80.255
        gateway 192.168.80.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 8.8.8.8 8.8.4.4
        dns-search example.com

auto eth1
iface eth1 inet manual
        up ifconfig $IFACE 0.0.0.0 up
        up ip link set $IFACE promisc on
        down ifconfig $IFACE 0.0.0.0 down
  • You are using hostnames for the various OpenStack roles that match those in the /root/puppet_openstack_builder/data/role_mappings.yaml file. See the notes on custom interfaces and hostnames in the Model 1 and 2 sections above.
  • You have configured the top of rack (ToR) switch and any upstream network devices for VLAN trunking for the range of VLANs you will use in the setup. The examples below will use VLANs 5 and 6 and they are being trunked from the Data Center Aggregation layer switches into the ToR and then directly into the 'eth1' interface on both the AIO and the compute nodes. See Figure 3 for the topology layout.
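
As a sanity check of the trunking prerequisite in the last bullet, you can watch for tagged frames arriving on the trunk interface with tcpdump, for example:

tcpdump -e -n -i eth1 vlan    # -e prints the 802.1Q header so you can see the VLAN IDs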

Procedure

Step 1: Set the bridge_uplinks, VLAN ranges and bridge mappings in the /etc/puppet/data/hiera_data/user.yaml file.

These settings instruct Puppet to do the following:

  1. Associate the "br-ex" bridge with the physical "eth1" interface on both the AIO node and the compute node and identify the pair as the OVS bridge uplinks to be used for trunks to the ToR.
  2. Define the user-visible OVS network name "physnet1" and its VLAN range (5-6).
  3. Create an OVS mapping between the "physnet1" network name and the external bridge "br-ex".
neutron::agents::ovs::bridge_uplinks: br-ex:eth1
neutron::plugins::ovs::network_vlan_ranges: physnet1:5:6
neutron::agents::ovs::bridge_mappings:
  - "physnet1:br-ex"

Step 2: Set the network_type and tenant_network_type in the /etc/puppet/data/global_hiera_params/common.yaml file.

These settings do the following:

  1. Change the network_type from the default of "per-tenant router" to the "provider-router" type (indicating that there is an external upstream router, namely the Data Center Aggregation layer switches).
  2. Change the tenant_network_type from the "gre" (the default) to "vlan".
network_type: provider-router
tenant_network_type: vlan 

Step 3: Modify the /etc/puppet/data/hiera_data/network_type/provider-router.yaml file as follows:

neutron::agents::dhcp::use_namespaces: false

This setting disables IP namespaces so that you can segregate tenant networks via VLANs and upstream networking policies.

Note: If you are using custom interface mappings on either or both of your nodes, you need to modify the default interface definitions for the OVS settings that were defined above. For example, if the compute-server01 node is using 'eth2' as its external interface (the one the VLANs are trunked to), you can create a custom hostname-based YAML file and modify the OVS settings as follows:

/etc/puppet/data/hiera_data/hostname/compute-server01.yaml
external_interface: eth2
neutron::agents::ovs::bridge_uplinks: br-ex:eth2
neutron::plugins::ovs::network_vlan_ranges: physnet1:5:6
neutron::agents::ovs::bridge_mappings:
  - "physnet1:br-ex"

Step 4: On the AIO node, run Puppet:

puppet apply -v /etc/puppet/manifests/site.pp
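
After the catalog run completes, you can spot-check the bridge uplink configured in Step 1, for example:

ovs-vsctl list-ports br-ex    # eth1 should appear as a port on br-ex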

Step 5: On the AIO node, restart the Neutron and OVS services:

cd /etc/init.d/; for i in $( ls neutron-* ); do sudo service $i restart; done
cd /etc/init.d/; for i in $( ls openvswitch-* ); do sudo service $i restart; done
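
Once you have admin credentials loaded (see Step 8), you can confirm that the agents re-registered after the restarts, for example:

neutron agent-list    # every agent should report alive as :-)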

Step 6: On the compute node, re-run the Puppet agent:

puppet agent -td --server=all-in-one.example.com --pluginsync

Step 7: Log in to the OpenStack Dashboard:

  1. In your browser, enter:
      http://<ip-of-your-aio>
  2. In the login interface, enter username admin and password Cisco123.

Step 8: Source the openrc file:

source openrc

Step 9: Create a Neutron Provider Network as described in Creating a VLAN Network.
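
The linked page covers this in detail. As a minimal sketch consistent with the VLAN range configured above (the network name and subnet match the net-list output shown in the next section):

neutron net-create vlan5 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 5
neutron subnet-create vlan5 192.168.5.0/24 --name subnet-vlan5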

Post Installation Steps

If you have not already done so, complete the two post-installation procedures described for Model 1 above.

Testing the VLANs

You can verify the VLANs as follows.

Procedure

Step 1: Get the list of Neutron networks:

neutron net-list

You should see a table like the following:

+--------------------------------------+-------+-----------------------------------------------------+
| id                                   | name  | subnets                                             |
+--------------------------------------+-------+-----------------------------------------------------+
| 7c79fddf-3e51-473d-96f3-2af822e05dbf | vlan6 | a4fdb5dc-8319-4f47-8991-00186e0d622d 192.168.6.0/24 |
| f7d7ff38-a23a-463c-bdd0-145abc7f82c0 | vlan5 | 6c55a72f-7b8c-4445-91b6-18c7c1c28d5b 192.168.5.0/24 |
+--------------------------------------+-------+-----------------------------------------------------+

Step 2: Boot the test instance using the ID from the previous step:

nova boot --image cirros-x86_64 --flavor m1.tiny --key_name aio-key --nic net-id=f7d7ff38-a23a-463c-bdd0-145abc7f82c0 test-vm3

Step 3: Ping or SSH into the instance.

The instance has an IP address within the subnet associated with the trunked VLAN.
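
For example (the instance address below is hypothetical; use the one that nova list reports):

nova list                  # find the fixed IP assigned to test-vm3
ping -c 3 192.168.5.3
ssh cirros@192.168.5.3     # CirrOS images accept the 'cirros' user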

Known Issues

To troubleshoot common issues, see Troubleshooting. Look for issues associated with "AIO" or "Any" scenarios. In particular, the installer generates several harmless warning messages that can be ignored. These messages are documented on the troubleshooting page.

For a complete list of bugs, visit: [1].

Note: To deploy Neutron's Firewall as a Service, you may need to include "--config-file /etc/neutron/fwaas_driver.ini" when starting the neutron l3 agent.
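
For example, an invocation along these lines (the configuration file paths are the stock Ubuntu locations; verify them on your system):

neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-file /etc/neutron/fwaas_driver.ini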

Authors

Shannon McFarland (@eyepv6) - Principal Engineer

Dave Welsch - Senior Technical Writer - openstack-docs@cisco.com
