OpenStack:Havana:All-in-One

From DocWiki

Revision as of 19:36, 11 March 2014 by Shmcfarl (Talk | contribs)

Overview

The OpenStack Havana Release All-In-One (AIO) deployment builds on the Cisco OpenStack Installer (COI) instructions. The Cisco OpenStack Installer provides support for a variety of deployment scenarios, including:

  • All-in-One
  • All-in-One plus additional Compute nodes
  • 2 Node
  • Full HA
  • Compressed HA

This document will cover the deployment of three networking models based on the All-in-One scenario:

  • Model 1: All-in-One node using the Per-Tenant Router with Private Networks model for tenant network access (FlatDHCP + FloatingIPs using a Neutron Router)
  • Model 2: All-in-One with an additional Compute node using Per-Tenant Router with Private Networks
  • Model 3: All-in-One with an additional Compute node using Provider Network Extensions with VLANs (VLANs trunked into nodes from ToR switch)

Diagrams

Figure 1 illustrates the topology used in Model 1.


Figure 1: AIO Per-Tenant Router with Private Networks Diagram

AIO-H2.jpg

Figure 2 illustrates the topology used in Model 2.


Figure 2: AIO & Additional Compute Node using Per-Tenant Router with Private Networks Diagram

AIO-Compute-H2.jpg

Figure 3 illustrates the topology used in Model 3.


Figure 3: AIO & Additional Compute Node using Provider Network Extensions with VLANs Diagram

AIO-Compute-VLAN-H2.jpg

Model 1

This section describes the process for deploying OpenStack with the Cisco OpenStack Installer in an All-In-One node configuration with Per-Tenant Routers with Private Networks.

Assumptions

  • The Cisco OpenStack Installer requires that you have two physically or logically (VLAN) separated IP networks. One network is used to provide connectivity for OpenStack API endpoints, Open vSwitch (OVS) GRE endpoints (especially important if multiple compute nodes are added to the AIO deployment), and OpenStack/UCS management. The second network is used by OVS as the physical bridge interface and by Neutron as the public network.
  • The AIO node is built on Ubuntu 12.04 LTS, which can be installed via manual ISO/DVD or PXE setup and can be deployed on physical bare-metal hardware (e.g. Cisco UCS) or as virtual machines (e.g. VMware ESXi).
  • You have followed the installation steps in the Cisco OpenStack Installer (COI) instructions. Note: A recap of the AIO-specific instructions is provided below.
  • You are using hostnames for the various OpenStack roles that match those in the /root/puppet_openstack_builder/data/role_mappings.yaml file. If you are not using the default hostnames then you must add your custom hostname and role to the /root/puppet_openstack_builder/data/role_mappings.yaml before running the installation script.
    • The default hostname for the AIO node is:
      all-in-one
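To make the two-network assumption concrete, below is a hypothetical /etc/network/interfaces sketch for the AIO node. The addresses and interface names are assumptions for illustration: eth0 carries OpenStack API/GRE/management traffic (192.168.80.140, the AIO address used later in this guide) and eth1 is left unaddressed because OVS uses it as the physical bridge interface for the public network. The fragment is only printed here, not written to disk; adapt it to your environment.

```shell
# Hypothetical two-NIC layout for the AIO node (addresses are examples):
#   eth0 = management / API / GRE endpoint network
#   eth1 = public network, handed to OVS as the physical bridge interface
ifcfg=$(cat <<'EOF'
auto eth0
iface eth0 inet static
    address 192.168.80.140
    netmask 255.255.255.0
    gateway 192.168.80.1

auto eth1
iface eth1 inet manual
    up ip link set dev eth1 up
EOF
)
echo "$ifcfg"
```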

Building the All-in-One OpenStack Node

The deployment of the AIO node in Model 1 will begin after a fresh install of Ubuntu 12.04 LTS and with the network configuration based on the example shown in Figure 1.

On the node that you just built, become root:

sudo su - 

Install git:

apt-get install -y git

Clone the Cisco OpenStack Installer repository:

cd /root && git clone -b havana https://github.com/CiscoSystems/puppet_openstack_builder && cd puppet_openstack_builder && git checkout h.2

Note: Before running the COI installation script, it is important to make any modifications to the baseline AIO configuration if you have non-standard interface definitions, hostnames (see the /root/puppet_openstack_builder/data/role_mappings.yaml file), proxies, etc. Details on setting some of these custom values can be found in the Cisco OpenStack Installer (COI) instructions.

Here are three examples that include a way to set custom interface definitions and custom hostnames for the AIO Model 1 setup:

  • If you are using an interface other than 'eth0' on your node for SSH/management access, export the default_interface value with the correct interface definition. In the example below, eth1 is used:
    export default_interface=eth1  # This is the interface you logged into via SSH
  • If you are using an interface other than 'eth1' on your node for external instance (public) access, export the external_interface value. In the example below, eth2 is used:
    export external_interface=eth2
  • If you are using a hostname other than "all-in-one" for the AIO node, you must update the /root/puppet_openstack_builder/data/role_mappings.yaml file to include your hostname and its role. For example, if your hostname is "all-in-one-test1", the role_mappings.yaml file should have an entry that looks like this:
    all-in-one-test1: all_in_one
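The role-mapping edit above can be scripted. The sketch below demonstrates the idea against a scratch copy of the file (the real file lives at /root/puppet_openstack_builder/data/role_mappings.yaml); the hostname "all-in-one-test1" is the example from the text, and the idempotency guard is an assumption, not part of the COI instructions.

```shell
# Sketch: ensure a custom hostname has a role entry before running install.sh.
# Demonstrated on a scratch copy of role_mappings.yaml.
role_map="$(mktemp)"
cat > "$role_map" <<'EOF'
all-in-one: all_in_one
compute-server01: compute
EOF

host="all-in-one-test1"                  # substitute your actual hostname
if ! grep -q "^${host}:" "$role_map"; then
    echo "${host}: all_in_one" >> "$role_map"   # AIO role name from the COI docs
fi
grep "^${host}:" "$role_map"
```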

Export 'cisco' as the vendor:

export vendor=cisco

Export the AIO scenario:

export scenario=all_in_one

Change directory to where the install script is located and start the installation (this will take a while depending on your Internet connection):

cd ~/puppet_openstack_builder/install-scripts
./install.sh 2>&1 | tee install.log

After the install script and Puppet run complete, you should be back at the prompt with a "Finished catalog run" message. You can verify that all of the OpenStack Nova services were installed and are running correctly by checking the Nova service list:

root@all-in-one:~# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth all-in-one                           internal         enabled    :-)   2014-03-11 17:34:17
nova-scheduler   all-in-one                           internal         enabled    :-)   2014-03-11 17:34:16
nova-conductor   all-in-one                           internal         enabled    :-)   2014-03-11 17:34:13
nova-compute     all-in-one                           nova             enabled    :-)   2014-03-11 17:34:13
nova-cert        all-in-one                           internal         enabled    :-)   2014-03-11 17:34:17
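The State column is the thing to check: ":-)" means healthy, "XXX" means the service has stopped checking in. A small sketch of that check is below; it parses the sample output above rather than calling a live "nova-manage service list", so the service_list function is a stand-in for the real command.

```shell
# Sketch: flag any Nova service whose State column (field 5) is not ":-)".
# service_list stands in for "nova-manage service list" using the sample output.
service_list() { cat <<'EOF'
nova-consoleauth all-in-one internal enabled :-) 2014-03-11 17:34:17
nova-scheduler   all-in-one internal enabled :-) 2014-03-11 17:34:16
nova-compute     all-in-one nova     enabled :-) 2014-03-11 17:34:13
EOF
}
unhappy=$(service_list | awk '$5 != ":-)" { print $1 }')
echo "unhappy services: ${unhappy:-none}"
```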

Neutron Networking for Models 1 & 2

This section will walk through building a Per-Tenant Router with Private Networks Neutron setup. You can perform all of the steps below in the OpenStack Dashboard or via the CLI; the CLI steps are shown below. Also, consult the Figure 1 diagram so that you can easily follow the network layout used by Neutron in our example.

Before running OpenStack client commands, you need to source the installed openrc file located in the /root/ directory:

source openrc

Create a public network to be used for instances (VMs) to gain external (public) connectivity:

neutron net-create Public_Network --router:external=True

Create a subnet that is associated with the previously created public network. Note: If you have existing hosts on the same subnet that you are about to use for the public subnet, you must use an allocation pool that starts in a range that will not conflict with other network nodes. For example, if you have HSRP/VRRP/GLBP upstream using addresses in the public subnet range (e.g. 192.168.81.1, 192.168.81.2, 192.168.81.3), your allocation range must start in a non-overlapping range.

neutron subnet-create --name Public_Subnet --allocation-pool start=192.168.81.10,end=192.168.81.254 Public_Network 192.168.81.0/24

Create a private network and subnet that instances will be attached to:

neutron net-create Private_Net10
neutron subnet-create --name Private_Net10_Subnet Private_Net10 10.10.10.0/24 --dns_nameservers list=true 8.8.8.8 8.8.4.4

Create a Neutron router:

neutron router-create os-router-1

Associate a Neutron router interface with the previously created private subnet:

neutron router-interface-add os-router-1 Private_Net10_Subnet

Set the default gateway (previously created public network) for the Neutron router:

neutron router-gateway-set os-router-1 Public_Network

Modify the default Neutron security group to allow ICMP (for pings) and SSH (for access to the instances):

neutron security-group-rule-create --protocol icmp --direction ingress default
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default

SSH Keys for Models 1, 2 and 3

Create SSH keys from the node that will be used to SSH into the OpenStack instances (example keypair name is "aio-key"):

ssh-keygen
cd /root/.ssh/
nova keypair-add --pub_key id_rsa.pub aio-key

Upload Image into Glance for use to Launch Instances for Models 1, 2 and 3

Download the image of your choice. Below are examples for downloading Cirros, Fedora and Ubuntu 12.04:

  • Cirros:
    wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
  • Fedora:
    wget http://download.fedoraproject.org/pub/fedora/linux/releases/19/Images/x86_64/Fedora-x86_64-19-20130627-sda.qcow2
  • Ubuntu:
    wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

Upload the images into Glance:

  • Cirros:
    glance image-create --name cirros-x86_64 --is-public True --disk-format qcow2 --container-format ovf --file cirros-0.3.1-x86_64-disk.img --progress
  • Fedora:
    glance image-create --name Fedora19 --is-public True --disk-format qcow2 --container-format bare --file Fedora-x86_64-19-20130627-sda.qcow2 --progress
  • Ubuntu:
    glance image-create --name precise-x86_64 --is-public True --disk-format qcow2 --container-format bare --file precise-server-cloudimg-amd64-disk1.img --progress

Boot an Instance

1. Boot an instance (Cirros image example shown below). Run the "neutron net-list" command to get a list of networks, and use the ID of the Private_Net10 network from the net-list output in the --nic net-id= field:

root@all-in-one:~# neutron net-list
+--------------------------------------+----------------+------------------------------------------------------+
| id                                   | name           | subnets                                              |
+--------------------------------------+----------------+------------------------------------------------------+
| 42823c88-bb86-4e9a-9f7b-ef1c0631ee5e | Private_Net10  | f48bca75-7fe4-4510-b9fd-c0323e416376 10.10.10.0/24   |
| 85650115-093b-49be-9fe1-ba2d34b4d3e2 | Public_Network | 2d89ac21-3611-44ef-b5d7-924fd7854e0d 192.168.81.0/24 |
+--------------------------------------+----------------+------------------------------------------------------+
nova boot --image cirros-x86_64 --flavor m1.tiny --key_name aio-key --nic net-id=42823c88-bb86-4e9a-9f7b-ef1c0631ee5e test-vm1
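Rather than copying the UUID by hand, the net-id can be pulled out of the table with awk. The sketch below demonstrates the parsing against the sample "neutron net-list" output above (net_list is a stand-in for the real command); the nova boot line is left commented since it needs a live cloud.

```shell
# Sketch: extract the Private_Net10 UUID from "neutron net-list" table output.
# net_list stands in for the real command, echoing the sample table above.
net_list() { cat <<'EOF'
| 42823c88-bb86-4e9a-9f7b-ef1c0631ee5e | Private_Net10  | f48bca75-7fe4-4510-b9fd-c0323e416376 10.10.10.0/24   |
| 85650115-093b-49be-9fe1-ba2d34b4d3e2 | Public_Network | 2d89ac21-3611-44ef-b5d7-924fd7854e0d 192.168.81.0/24 |
EOF
}
# Column 2 is the id, column 3 the name; strip padding spaces from the id.
net_id=$(net_list | awk -F'|' '$3 ~ /Private_Net10/ { gsub(/ /, "", $2); print $2 }')
echo "$net_id"
# nova boot --image cirros-x86_64 --flavor m1.tiny --key_name aio-key \
#   --nic net-id="$net_id" test-vm1
```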

Verify that your instance has spawned successfully. Note: The first time an instance is launched on the system it can take a bit longer to boot than subsequent launches of instances:

nova show test-vm1

2. Verify connectivity to the instance from the AIO node. Since network namespaces are being used in this model, you will need to run the commands from the context of the qrouter namespace using the "ip netns exec qrouter-<id>" syntax. List the routers to get the router ID, connect to the qrouter namespace to get a list of its addresses, ping the instance from the qrouter, and then SSH into the instance from the qrouter:

root@all-in-one:~# neutron router-list
+--------------------------------------+-------------+-----------------------------------------------------------------------------+
| id                                   | name        | external_gateway_info                                                       |
+--------------------------------------+-------------+-----------------------------------------------------------------------------+
| 58d8840a-74ca-48a2-a6f2-7853eef9a36e | os-router-1 | {"network_id": "85650115-093b-49be-9fe1-ba2d34b4d3e2", "enable_snat": true} |
+--------------------------------------+-------------+-----------------------------------------------------------------------------+

Alternatively, you can get the qrouter ID via:

ip netns
ip netns exec qrouter-<neutron-router-id> ip addr list
ip netns exec qrouter-<neutron-router-id> ping <fixed-ip-of-instance>
ip netns exec qrouter-<neutron-router-id> ssh cirros@<fixed-ip-of-instance>

Note: You can get the internal fixed IP of your instance with the following command: nova show <your_instance_name>
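The qrouter namespace name can also be discovered with a one-liner instead of reading it off "neutron router-list". The sketch below assumes a single router on the node and parses sample "ip netns" output (netns_output stands in for the real command); the ping is left commented since it needs the live namespace.

```shell
# Sketch: find the qrouter namespace name from "ip netns" output
# (assumes exactly one Neutron router exists on this node).
netns_output() { cat <<'EOF'
qdhcp-42823c88-bb86-4e9a-9f7b-ef1c0631ee5e
qrouter-58d8840a-74ca-48a2-a6f2-7853eef9a36e
EOF
}
ns=$(netns_output | grep '^qrouter-' | head -n 1)
echo "$ns"
# ip netns exec "$ns" ping -c 3 <fixed-ip-of-instance>
```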

3. Create and associate a floating IP. You will need to get a list of the networks and ports, and copy the correct IDs:

neutron net-list
neutron port-list
neutron floatingip-create --port_id <internal VM port-id> <public net-id>
  • Example:
root@all-in-one:~# neutron floatingip-create --port_id 5510471e-2b48-4736-9112-aee22f3c6ecb e1a31822-26f1-461a-85b9-7d1e084e619c
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    | 10.10.10.2                           |
| floating_ip_address | 192.168.81.12                        |
| floating_network_id | e1a31822-26f1-461a-85b9-7d1e084e619c |
| id                  | 48ffe524-43bc-44fc-85b5-9c568ed64af1 |
| port_id             | 5510471e-2b48-4736-9112-aee22f3c6ecb |
| router_id           | 526f262b-225e-4e1d-9a5b-1619f806960a |
| tenant_id           | 5ed6e50345bb49cfa0090746fdb68533     |
+---------------------+--------------------------------------+
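For scripting the next step, the assigned address can be extracted from that table. The sketch below parses the sample "neutron floatingip-create" output above (fip_output is a stand-in for the real command); the ping/SSH line is commented since it needs a reachable instance.

```shell
# Sketch: pull floating_ip_address out of the floatingip-create table.
# fip_output stands in for the real command, echoing the sample rows above.
fip_output() { cat <<'EOF'
| fixed_ip_address    | 10.10.10.2                           |
| floating_ip_address | 192.168.81.12                        |
EOF
}
# Column 2 is the field name, column 3 the value; strip padding spaces.
floating_ip=$(fip_output | awk -F'|' '$2 ~ /floating_ip_address/ { gsub(/ /, "", $3); print $3 }')
echo "$floating_ip"
# ping -c 3 "$floating_ip" && ssh cirros@"$floating_ip"
```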

4. Ping and SSH to your instance using the "floating_ip_address" from an external host.



Model 2

This section describes the process for deploying OpenStack with the COI All-in-One node plus a second node acting as a dedicated OpenStack compute node. This model uses Per-Tenant Routers with Private Networks and builds upon the AIO install from Model 1. This section does not repeat the Neutron networking, image upload, SSH key, or instance boot steps, because everything in the Model 1 example applies directly to Model 2. The focus of this section is only on adding the second compute node.

Reference Figure 2 to align the topology with the configuration steps below.

Assumptions

  • In the Model 2 example, the AIO node will act as the 'networking node' and all traffic into and out of the instances running on the 2nd compute node will traverse a GRE tunnel between the br-tun interfaces on the AIO node and the compute node. Based on this example, you do not need a 2nd physical NIC on the compute node.
  • The compute node is built on Ubuntu 12.04 LTS, which can be installed via manual ISO/DVD or PXE setup and can be deployed on physical bare-metal hardware (e.g. Cisco UCS) or as virtual machines (e.g. VMware ESXi).
  • You have added a local hosts entry or DNS entry for the AIO node. In our example, there should be a hosts entry on the compute node that looks like this:
    192.168.80.140  all-in-one.example.com  all-in-one
  • You are using hostnames for the various OpenStack roles that match those in the /root/puppet_openstack_builder/data/role_mappings.yaml file. If you are not using the default hostnames, you must add your custom hostname and role to that file before running the installation script. For example, if your compute node hostname is "compute-server01-test1", the role_mappings.yaml file should have an entry that looks like this:
    compute-server01-test1: compute


Building the OpenStack Compute Node

The deployment of the compute node in Model 2 begins after a fresh install of Ubuntu 12.04 LTS, with the network configuration based on the example shown in Figure 2, and after you have completed the full Model 1 walk-through (the setup of the compute node requires that the AIO node is already deployed).

On the node that you just built, become root:

sudo su - 

Install git:

apt-get install -y git

Clone the Cisco OpenStack Installer repository:

cd /root && git clone -b havana https://github.com/CiscoSystems/puppet_openstack_builder && cd puppet_openstack_builder && git checkout h.2

Change to the install-scripts directory:

cd install-scripts/

Export the IP address of your AIO node (which is also acting as the Puppet master/Build Server):

export build_server_ip=192.168.80.140

Run the setup.sh file to prep the node for the Puppet agent run:

bash setup.sh

Begin the OpenStack compute node build by starting the Puppet agent (note the all-in-one.example.com hostname used; modify it for your environment):

puppet agent -td --server=all-in-one.example.com --pluginsync

After the Puppet agent runs, you will end up at the prompt after a "Finished catalog run" message.

Verify that the OpenStack Nova services are running on the all-in-one node and the compute-server01 node:

root@all-in-one:~# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth all-in-one                           internal         enabled    :-)   2014-03-11 19:00:52
nova-scheduler   all-in-one                           internal         enabled    :-)   2014-03-11 19:00:52
nova-conductor   all-in-one                           internal         enabled    :-)   2014-03-11 19:00:52
nova-compute     all-in-one                           nova             enabled    :-)   2014-03-11 19:00:53
nova-cert        all-in-one                           internal         enabled    :-)   2014-03-11 19:00:52
nova-compute     compute-server01                     nova             enabled    :-)   2014-03-11 19:00:50

You can also verify that the new compute node appears in the Nova hypervisor list:

root@all-in-one:~# nova hypervisor-list
+----+------------------------------+
| ID | Hypervisor hostname          |
+----+------------------------------+
| 1  | all-in-one.example.com       |
| 2  | compute-server01.example.com |
+----+------------------------------+
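That check can be scripted too. The sketch below greps the sample "nova hypervisor-list" table above for the new node (hypervisor_list is a stand-in for the real command).

```shell
# Sketch: confirm compute-server01 registered as a hypervisor.
# hypervisor_list stands in for "nova hypervisor-list", echoing the table above.
hypervisor_list() { cat <<'EOF'
| 1  | all-in-one.example.com       |
| 2  | compute-server01.example.com |
EOF
}
if hypervisor_list | grep -q 'compute-server01'; then
    echo "compute-server01 is registered as a hypervisor"
fi
```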

Launch an Instance

You can follow all of the steps in Model 1 to set up your Neutron network, Glance images, SSH keys and instance launch, as they apply directly to Model 2. However, if you want to verify that an instance launches successfully on your new compute node, you can boot an instance and target the compute node by name:

nova boot --image cirros-x86_64 --flavor m1.tiny --key_name aio-key --nic net-id=42823c88-bb86-4e9a-9f7b-ef1c0631ee5e --availability-zone nova:compute-server01 test-vm2

Check to see if the instance has launched against the compute node:

root@all-in-one:~# nova hypervisor-servers compute-server01
+--------------------------------------+-------------------+---------------+------------------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname          |
+--------------------------------------+-------------------+---------------+------------------------------+
| ba995773-bb1c-419c-9aa1-be67d6967345 | instance-00000006 | 2             | compute-server01.example.com |
+--------------------------------------+-------------------+---------------+------------------------------+
