Cisco OpenStack Edition: Folsom Manual Install

OpenStack Folsom Manual Installation

Introduction

There are two common ways of installing OpenStack: manually or via automation.  Much attention is focused on fully automating OpenStack deployment with tools such as Puppet, Chef, Juju and others, and while these offer great advantages over manual configuration, they hide the inner workings from those who need to learn what is really happening during an OpenStack setup.  This document is for those who want to learn more about the OpenStack installation process on the Folsom release using the following OpenStack components:

Dependencies

Critical Reminders

Two of the most common issues people have in manual OpenStack deployments are basic mistakes: typos, incorrect IP addresses in configuration files, and incorrect passwords in configuration files.  To save yourself many troubleshooting steps later down the road, ENSURE you double- and triple-check the configuration files and commands wherever an account, password and/or IP address is used.  You will likely be using your own IP addressing and passwords in your setup, and it is critical to get them right on each node.
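
A lightweight way to catch such mistakes is to grep the configuration files on each node for the expected addresses and passwords after finishing each section (a sketch; limit the paths to the services actually installed on that node):

grep -r "192.168.220.43" /etc/nova /etc/quantum /etc/glance /etc/keystone
grep -r "Cisco123" /etc/nova /etc/quantum /etc/glance /etc/keystone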

The password used in this setup is Cisco123.  Every single account, service and configuration file uses this one password.  You will want to change this in your setup, and you certainly want a strong, distinct password for each account/service if this system is going into production. 

Operating System

The operating system used for this installation is Ubuntu 12.04 LTS (Precise).

Nodes

This document uses three physical servers (Cisco UCS B or C-series) to serve the roles of Controller, Compute and Network.  While physical servers are used in the instructions, there is nothing preventing you from using three virtual machines running on your virtualization/hypervisor of choice.  The three distinct node types that are used in this document are:

  • Controller Node
    • Runs Nova API, Nova Cert, Nova Consoleauth, Nova Novncproxy, Nova Scheduler, Novnc, Quantum Server, Quantum Plugin OVS, Glance API/Registry, and Keystone services
    • Provides control plane functionality for managing the OpenStack environment
  • Compute Node
    • Runs Nova Compute, Quantum Plugin OVS, and OVS Plugin Agent services
    • Provides the hypervisor role for running Nova instances (Virtual Machines)
  • Network Node
    • Runs Quantum DHCP, Quantum L3 Agent, Quantum Plugin OVS, OVS Plugin Agent, DNSMASQ Base and Util services
    • Provides network services such as DHCP, network access and routing for Nova instances running on the Compute node

Network

The network design referenced in this document has three physically or logically (VLAN) separate networks.  While VLANs are used in this setup for access to the nodes, in Quantum environments with Open vSwitch (OVS) deployments, connectivity between multiple hosts on virtual networks uses either VLANs or tunneling (GRE). GRE is easier to deploy, especially in larger environments, and does not suffer from the scalability limitations that VLANs do.  The networks are defined below:

  • Management and CIMC (Cisco Integrated Management Controller for UCS) Network
    • This network is used to perform management functions against the nodes. Examples include SSH access to the nodes; the Controller node hosting Horizon listens for incoming connections on this network.
    • An IP address for each node is required for this network.
    • This network typically employs private (RFC1918) IP addressing.
  • Public/API Network
    • This network is used for assigning Floating IP addresses to instances for communicating outside of the OpenStack Cloud
    • The metadata service that is used for injecting information into instances (i.e. SSH keys) is attached to this network on the Controller node
    • The Controller node and Network node will have an interface attached to this network
    • An IP address for the Controller node is required for this network
    • This network typically employs publicly routable IP addressing if no external NATs are used upstream towards the Internet edge (Note: in this document all IP addressing for all interfaces comes out of various private addressing blocks)
  • Data Network (AKA: Private Network)
    • This network is used for providing connectivity to OpenStack instances (Virtual Machines)
    • The node interfaces attached to this network are used for Open vSwitch (OVS) GRE tunnel termination
    • In this document an IP address for each node is assigned
    • This network typically employs private (RFC1918) IP addressing

Figure 1 is used to help visualize the setup and to act as a reference for configuration steps later on in the document.  A summary of the network topology is as follows:

  • Controller Node
    • Hostname = control03
    • Single physical NIC used to logically separate three networks
      • eth0 connects to the Management/CIMC network which is on VLAN220 (VLAN 220 is the Native VLAN on the upstream Layer 2 switch)
        • eth0 IP address = 192.168.220.43
      • eth0.221 connects to the Public/API network on VLAN221
        • eth0.221 IP address = 192.168.221.43
      • eth0.223 connects to the Data network
        • eth0.223 IP address = 10.0.0.43
      • CIMC 0 connects to the Management/CIMC network
        • CIMC 0 IP address = 192.168.220.13
  • Compute Node
    • Hostname = compute01
    • Single physical NIC used to logically separate three networks
      • eth0 connects to the Management/CIMC network which is on VLAN220 (VLAN 220 is the Native VLAN on the upstream Layer 2 switch)
        • eth0 IP address = 192.168.220.51
      • eth0.223 connects to the Data network
        • eth0.223 IP address = 10.0.0.51
      • CIMC 0 connects to the Management/CIMC network
        • CIMC 0 IP address = 192.168.220.4
  • Network Node
    • Hostname = control02
    • Single physical NIC used to logically separate three networks
      • eth0 connects to the Management/CIMC network which is on VLAN220 (VLAN 220 is the Native VLAN on the upstream Layer 2 switch)
        • eth0 IP address = 192.168.220.42
      • eth0.221 connects to the Public/API network on VLAN221
        • eth0.221 No IP address is set for this interface (see notes later in document on OVS/Quantum setup)
      • eth0.223 connects to the Data network
        • eth0.223 IP address = 10.0.0.42
      • CIMC 0 connects to the Management/CIMC network
        • CIMC 0 IP address = 192.168.220.3


[Figure 1: Network-topology-v1.0.png - network topology diagram]
  • Other Network Services
    • DNS: In this setup an external DNS server is used for OpenStack node name resolution and external resolution.
    • NTP: In this setup external NTP servers are used for time synchronization
    • Physical Network Switches: Each node in this setup is physically attached to a Cisco Nexus switch acting as a Top-of-Rack access layer device. Trunking is configured on each interface connecting to the eth0 NIC of each node. Note: Upstream routers/aggregation-layer switches will most likely terminate the L3 VLAN interfaces, and if they are deployed in a redundant fashion with a First Hop Redundancy Protocol such as HSRP or VRRP, be careful which IP addresses the physical L3 switches/routers use, as they may conflict with the IP address of the Quantum router on the public subnet (usually assigned the .3 address). For example, if you are using HSRP with .1 as the standby IP address, .2 as the first L3 switch IP and .3 as the second L3 switch IP, you will receive a duplicate IP address error on the second L3 switch. This can be worked around by using high-order IPs on your upstream L3 device or by altering the Quantum subnet configuration at creation time, as sketched below (more on this later).
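
For example, here is a hedged sketch of constraining the floating IP allocation pool at subnet-creation time so Quantum never hands out addresses owned by the upstream L3 devices (the network and subnet names are hypothetical; the actual Quantum network setup comes later in this document):

quantum subnet-create --name public221_subnet --allocation-pool start=192.168.221.100,end=192.168.221.200 public221_net 192.168.221.0/24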

Installation

The installation of the nodes will be in the following order:

  1. Controller Node
  2. Network Node
  3. Compute Node

Install the Controller Node (control03)

Preparing Ubuntu 12.04

Install Ubuntu 12.04 (AMD 64-bit) from CD/ISO or automated install (i.e. kickstart). Use the networking information above to configure your network properties. Select ssh-server as the only additional package.

  • Use sudo mode or run from root account for the entire installation:
sudo su
  • You will receive the following error when trying to run update:
GPG error: http://128.107.252.163 folsom-proposed InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E8CC67053ED3B199
  • Add the apt key to remove the error:
gpg --keyserver hkp://pgpkeys.mit.edu --recv-keys E8CC67053ED3B199
gpg --armor --export E8CC67053ED3B199 | apt-key add -
  • Note: If you have issues using the pgpkeys.mit.edu site you can use keyserver.ubuntu.com instead:
gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys E8CC67053ED3B199
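  • Optionally, confirm the key is now in apt's keyring (a quick hedged check using the key's short ID):
apt-key list | grep 3ED3B199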
  • Update your system:
apt-get update && apt-get dist-upgrade -y

Networking

  • Our implementation uses VLANS for network separation.  Make sure you have the vlan package installed and your network switches have been configured for VLAN's: 
apt-get install vlan
  • Controller Node (control03) /etc/network/interfaces:
# The loopback network interface
auto lo
iface lo inet loopback

#Management Network
auto eth0 
iface eth0 inet static 
 address 192.168.220.43
 netmask 255.255.255.0
 gateway 192.168.220.1
 dns-nameservers 192.168.220.254
 dns-search dmz-pod2.lab

#VM Network with OVS in tunnel mode

auto eth0.223 
iface eth0.223 inet static 
 vlan-raw-device eth0
 address 10.0.0.43
 netmask 255.255.255.0

#Public/API Network: Bridged Interface

auto eth0.221
iface eth0.221 inet static 
 vlan-raw-device eth0
 address 192.168.221.43
 netmask 255.255.255.0
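  • Bring the new VLAN subinterfaces up and verify their addresses (a quick sketch; a reboot accomplishes the same):
ifup eth0.221
ifup eth0.223
ip addr show eth0.221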

Time Synchronization

  • Install NTP:
apt-get install -y ntp
  • Configure the NTP:
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
service ntp restart
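  • Optionally, verify that NTP is running and has peers (a quick hedged check):
ntpq -p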

MySQL & RabbitMQ

  • Install MySQL. #Note: You will be prompted for the root password for MySQL. Document this password, as it will be needed later when we log in and create databases:
apt-get install -y mysql-server python-mysqldb
  • Configure mysql to accept all incoming requests:
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
service mysql restart
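  • You can confirm MySQL is now listening on all interfaces (a quick hedged check):
netstat -ntlp | grep 3306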
  • Install RabbitMQ:
apt-get install -y rabbitmq-server
  • Create a RabbitMQ user account that will be used by OpenStack services:
rabbitmqctl add_user openstack_rabbit_user Cisco123
  • Create the RabbitMQ vhost for Quantum:
rabbitmqctl add_vhost /quantum
  • Set the permissions for the new RabbitMQ user account:
rabbitmqctl set_permissions -p / openstack_rabbit_user ".*" ".*" ".*"
rabbitmqctl set_permissions -p /quantum openstack_rabbit_user ".*" ".*" ".*"
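  • Verify the new user, vhosts, and permissions (quick illustrative checks):
rabbitmqctl list_users
rabbitmqctl list_vhosts
rabbitmqctl list_permissions -p /quantum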

Keystone Installation

  • Start by installing the Keystone packages:
apt-get install -y keystone
  • Create a MySQL database for Keystone (use root password that was created during original MySQL install) # Note: ALL services and DB accounts will use Cisco123:
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystone_admin'@'%' IDENTIFIED BY 'Cisco123';
quit;
  • Edit the /etc/keystone/keystone.conf to the new database:
[DEFAULT]
admin_token   = keystone_admin_token

[sql]
connection = mysql://keystone_admin:Cisco123@192.168.220.43/keystone
  • Test whether MySQL is listening on 192.168.220.43 for the Keystone database:
mysql -h192.168.220.43 -ukeystone_admin -pCisco123 keystone
  • Restart the identity service, then synchronize the database:
service keystone restart
keystone-manage db_sync
  • Export the variables used by the Keystone data script:
ADMIN_PASSWORD=${ADMIN_PASSWORD:-Cisco123}
export SERVICE_TOKEN="keystone_admin_token"
export SERVICE_ENDPOINT="http://192.168.220.43:35357/v2.0/"
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-services}
  • Run the script to populate the Keystone database with data (users, tenants, services). Ensure the definitions at the top of the script match your environment:
# MySQL definitions
MYSQL_USER=keystone_admin
MYSQL_DATABASE=keystone
MYSQL_HOST=192.168.220.43
MYSQL_PASSWORD=Cisco123

# Keystone definitions
KEYSTONE_REGION=RegionOne
SERVICE_TOKEN=keystone_admin_token
SERVICE_ENDPOINT="http://192.168.220.43:35357/v2.0"

# other definitions.  This should be your Controller Node IP address.
MASTER="192.168.220.43"
./keystone-data.sh
  • Run the script to populate the Keystone database with service endpoints. #Note: If you logout or reboot after running the keystone-data.sh script then you must re-export the following before running the keystone-endpoints.sh script:
export SERVICE_TOKEN="keystone_admin_token"
export SERVICE_ENDPOINT="http://192.168.220.43:35357/v2.0/"
./keystone-endpoints.sh
  • Create a simple credential file named openrc and load it so you won't be bothered later:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=Cisco123
export OS_AUTH_URL="http://192.168.220.43:5000/v2.0/"
export OS_AUTH_STRATEGY=keystone
export SERVICE_TOKEN=keystone_admin_token
export SERVICE_ENDPOINT=http://192.168.220.43:35357/v2.0/
  • Load it:
source openrc
  • To test Keystone, install curl and use a curl request:
apt-get install curl openssl -y
curl -d '{"auth": {"tenantName": "admin", "passwordCredentials":{"username": "admin", "password": "Cisco123"}}}' -H "Content-type: application/json" http://192.168.220.43:35357/v2.0/tokens | python -mjson.tool
  • Or you can use the Keystone command-line:
keystone user-list
keystone tenant-list
keystone service-list
keystone endpoint-list

Glance Installation

  • Install Glance packages:
apt-get install -y glance
  • Create a MySQL database for Glance (use root password that was created during original MySQL install):
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'Cisco123';
quit;
  • Update /etc/glance/glance-api-paste.ini with:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.220.43
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = glance
admin_password = Cisco123
  • Update the /etc/glance/glance-registry-paste.ini with:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.220.43
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = glance
admin_password = Cisco123
  • Update /etc/glance/glance-api.conf with:
sql_connection = mysql://glance:Cisco123@192.168.220.43/glance

[paste_deploy]
flavor = keystone
  • Update the /etc/glance/glance-registry.conf with:
sql_connection = mysql://glance:Cisco123@192.168.220.43/glance

[paste_deploy]
flavor = keystone
  • Restart the glance-api and glance-registry services:
service glance-api restart; service glance-registry restart
  • Synchronize the glance database (you may get a deprecation message, which you can ignore):
glance-manage db_sync
  • Restart the services again to take into account the new modifications:
service glance-registry restart; service glance-api restart
  • Upload an image to Glance. Start by downloading the Ubuntu Precise cloud image to the Controller node and then uploading it to Glance:
wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

glance add name="precise" is_public=true container_format=ovf disk_format=qcow2 < precise-server-cloudimg-amd64-disk1.img
  • Now list the images to see what you have just uploaded:
glance image-list

Quantum Installation

  • Install the Quantum Server on the Controller Node:
apt-get install -y quantum-server quantum-plugin-openvswitch
  • Create a database (use root password that was created during original MySQL install):
mysql -u root -p
CREATE DATABASE quantum;
GRANT ALL ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'Cisco123';
quit;
  • Edit /etc/quantum/quantum.conf. As part of the configuration, we will disable overlapping ip address support. This is needed to support the Nova metadata service and/or Nova security groups. More can be found at Quantum Limitations:
allow_overlapping_ips = False
fake_rabbit = False
rabbit_virtual_host=/quantum
rabbit_userid=openstack_rabbit_user
rabbit_password=Cisco123
rabbit_host=192.168.220.43
rabbit_port=5672
  • Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
#Under the database section
[DATABASE]
sql_connection = mysql://quantum:Cisco123@192.168.220.43/quantum

#Under the OVS section
[OVS]
enable_tunneling = True
tunnel_id_ranges = 1:1000
integration_bridge=br-int
tunnel_bridge = br-tun
network_vlan_ranges=
tenant_network_type=gre
  • Edit /etc/quantum/api-paste.ini:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host=192.168.220.43
auth_port = 35357
auth_protocol = http
admin_tenant_name=services
admin_user=quantum
admin_password=Cisco123
  • Restart the quantum server:
service quantum-server restart
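  • As a quick sanity check, verify that the Quantum API answers requests (a hedged check; assumes the python-quantumclient package is installed and you have sourced openrc, and the list will be empty at this point):
quantum net-list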

Nova Installation

  • Start by installing nova components:
apt-get install -y nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy
  • Prepare a Mysql database for Nova (use root password that was created during original MySQL install):
mysql -u root -p
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'Cisco123';
quit;
  • Now modify authtoken section in the /etc/nova/api-paste.ini file to this:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.220.43
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.220.43:35357/v2.0
admin_tenant_name = services
admin_user = nova
admin_password = Cisco123
  • Replace the contents of the /etc/nova/nova.conf with the following. Take note that the IP address of "metadata_listen" is the "control03" Controller Node eth0.221 interface in the diagram:
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
ec2_listen=192.168.220.43
rabbit_port=5672
rabbit_virtual_host=/
rabbit_password=Cisco123
rabbit_userid=openstack_rabbit_user
rabbit_host=192.168.220.43
metadata_listen=192.168.221.43
sql_connection=mysql://nova:Cisco123@192.168.220.43/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Imaging service
glance_api_servers=192.168.220.43:9292
image_service=nova.image.glance.GlanceImageService

# VNC configuration
novncproxy_port=6080
novncproxy_host=0.0.0.0
novnc_enabled=true
novncproxy_base_url=http://192.168.220.43:6080/vnc_auto.html

# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://192.168.220.43:9696
quantum_auth_strategy=keystone
quantum_admin_auth_url=http://192.168.220.43:35357/v2.0
quantum_admin_password=Cisco123
quantum_admin_username=quantum
quantum_admin_tenant_name=services
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
  • Synchronize the Nova database (you may get a DEBUG message, which you can ignore):
nova-manage db sync
  • Restart nova-* services:
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
  • Check for the smiling faces on nova services to confirm your installation:
nova-manage service list
  • Also check that nova-api is running:
service nova-api status

Horizon Installation

  • To install Horizon, proceed as follows:
apt-get install openstack-dashboard memcached -y
  • If you don't like the OpenStack Ubuntu theme, you can disable it and go back to the default look:
vi /etc/openstack-dashboard/local_settings.py
# Comment these lines
# Enable the Ubuntu theme if it is present.
# try:
#    from ubuntu_theme import *
# except ImportError:
#    pass
  • Reload Apache and memcached:
service apache2 restart; service memcached restart
  • Access Horizon by using the following URL in your web browser:
http://192.168.220.43/horizon
  • Use admin / Cisco123 for your login credentials. Note: A reboot might be needed for a successful login

Install the Network Node (control02)

Preparing Ubuntu 12.04

Install Ubuntu 12.04 (AMD 64-bit) from CD/ISO or automated install (i.e. kickstart). Use the networking information above to configure your network properties. Select ssh-server as the only additional package.

  • Use sudo mode or run from root account for the entire installation:
sudo su
  • You will receive the following error when trying to run update:
GPG error: http://128.107.252.163 folsom-proposed InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E8CC67053ED3B199
  • Add the apt key to remove the error:
gpg --keyserver hkp://pgpkeys.mit.edu --recv-keys E8CC67053ED3B199
gpg --armor --export E8CC67053ED3B199 | apt-key add -
  • Note: If you have issues using the pgpkeys.mit.edu site you can use keyserver.ubuntu.com instead:
gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys E8CC67053ED3B199
  • Update your system:
apt-get update && apt-get dist-upgrade -y

Networking

  • Our implementation uses VLANS for network separation. Make sure you have the vlan package installed and your network switches have been configured for VLAN's:
apt-get install vlan
  • Network Node (control02) /etc/network/interfaces. # Note: The Public/API facing NIC on the Network node does not have an IP address assigned:
# The loopback network interface
auto lo
iface lo inet loopback

# Management Network
auto eth0
iface eth0 inet static
 address 192.168.220.42
 netmask 255.255.255.0
 gateway 192.168.220.1
 dns-nameservers 192.168.220.254
 dns-search dmz-pod2.lab

# VM Network with OVS in tunnel mode
auto eth0.223
iface eth0.223 inet static
 vlan-raw-device eth0
 address 10.0.0.42
 netmask 255.255.255.0

# Public/API Network: Bridged Interface
auto eth0.221
iface eth0.221 inet manual
 vlan-raw-device eth0
 up ifconfig $IFACE 0.0.0.0 up
 up ip link set $IFACE promisc on 
 down ifconfig $IFACE down
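  • After bringing the interfaces up, verify that eth0.221 is up with no IP address and in promiscuous mode (look for the PROMISC flag; a quick hedged check):
ip link show eth0.221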

Time Synchronization

  • Install NTP:
apt-get install -y ntp
  • Configure the NTP:
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
service ntp restart

Quantum Installation

  • Install the Quantum openvswitch plugin, openvswitch agent, l3_agent, and dhcp_agent:
apt-get -y install quantum-plugin-openvswitch quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent
  • Quantum dhcp_agent uses dnsmasq by default. Verify that dnsmasq is installed:
dpkg -l | grep dnsmasq
ii  dnsmasq-base                     2.59-4                       Small caching DNS proxy and DHCP/TFTP server
ii  dnsmasq-utils                    2.59-4                       Utilities for manipulating DHCP leases
  • If dnsmasq-base and dnsmasq-utils packages are not installed, then install them manually:
apt-get install -y dnsmasq-base
apt-get install -y dnsmasq-utils
  • The Network node running quantum-plugin-openvswitch-agent also requires that OVS bridges named "br-int" and "br-ex" exist and that "br-ex" is associated with the Public/API interface (eth0.221 in this setup). In order for the commands below to take effect, you must reboot the Network node at this point. If you don't and you attempt to add the bridges, you will receive errors related to db.sock. Once the node is rebooted, create the bridges by running:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth0.221
  • If you still get db.sock errors, you will have to use a workaround for a bug (https://answers.launchpad.net/quantum/+question/210248):
kernel_version=`cat /proc/version | cut -d " " -f3`
apt-get install -y dkms openvswitch-switch openvswitch-datapath-dkms linux-headers-$kernel_version
apt-get autoremove openvswitch-datapath-dkms
apt-get install -y dkms openvswitch-switch openvswitch-datapath-dkms linux-headers-$kernel_version
/etc/init.d/openvswitch-switch restart
    • Now add the bridges (only if the above workaround was needed):
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth0.221
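  • Verify the bridge layout; br-ex should list eth0.221 as a port (a quick check):
ovs-vsctl show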
  • Edit /etc/quantum/api-paste.ini:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host=192.168.220.43
auth_port = 35357
auth_protocol = http
admin_tenant_name=services
admin_user=quantum
admin_password=Cisco123
  • Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
#Under the database section
[DATABASE]
sql_connection=mysql://quantum:Cisco123@192.168.220.43/quantum
#Under the OVS section - Ensure the "local_ip" value is correct. In this case it is the eth0.223 address on the Network node. If this is wrong, GRE, and therefore Quantum, won't work correctly:
[OVS]
enable_tunneling = True
tunnel_id_ranges = 1:1000
integration_bridge=br-int
tunnel_bridge = br-tun
network_vlan_ranges=
tenant_network_type=gre
local_ip = 10.0.0.42
  • Update the /etc/quantum/l3_agent.ini. (Ensure that the "metadata_ip" is the same value set in the "metadata_listen" entry in the nova.conf file on the Controller):
auth_url = http://192.168.220.43:35357/v2.0
auth_region = RegionOne
admin_tenant_name = services
admin_user = quantum
admin_password = Cisco123
metadata_ip = 192.168.221.43
metadata_port = 8775
use_namespaces = True
  • Update the /etc/quantum/dhcp_agent.ini:
use_namespaces = True
  • Also, update your RabbitMQ settings in /etc/quantum/quantum.conf:
allow_overlapping_ips = False
fake_rabbit = False
rabbit_virtual_host=/quantum
rabbit_userid=openstack_rabbit_user
rabbit_password=Cisco123
rabbit_host=192.168.220.43
rabbit_port=5672
  • Restart all the services:
service quantum-plugin-openvswitch-agent restart
service quantum-dhcp-agent restart
service quantum-l3-agent restart
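  • Because use_namespaces = True, each router and DHCP server will run in its own network namespace once networks and routers are created later on. You can list namespaces with the command below (requires an iproute build with namespace support; the list is empty until then):
ip netns list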

Install the Compute Node (compute01)

Preparing Ubuntu 12.04

Install Ubuntu 12.04 (AMD 64-bit) from CD/ISO or automated install (i.e. kickstart). Use the networking information above to configure your network properties. Select ssh-server as the only additional package.

  • Use sudo mode or run from root account for the entire installation:
sudo su
  • You will receive the following error when trying to run update:
GPG error: http://128.107.252.163 folsom-proposed InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E8CC67053ED3B199
  • Add the apt key to remove the error:
gpg --keyserver hkp://pgpkeys.mit.edu --recv-keys E8CC67053ED3B199
gpg --armor --export E8CC67053ED3B199 | apt-key add -
  • Note: If you have issues using the pgpkeys.mit.edu site you can use keyserver.ubuntu.com instead:
gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys E8CC67053ED3B199
  • Update your system:
apt-get update && apt-get dist-upgrade -y

Networking

  • Our implementation uses VLANS for network separation. Make sure you have the vlan package installed and your network switches have been configured for VLAN's:
apt-get install vlan
  • Compute Node (compute01) /etc/network/interfaces. # Note: The Compute node does not have a NIC attached to the Public/API network (VLAN221):
# The loopback network interface
auto lo
iface lo inet loopback

# Management Network
auto eth0
iface eth0 inet static
 address 192.168.220.51
 netmask 255.255.255.0
 gateway 192.168.220.1
 dns-nameservers 192.168.220.254
 dns-search dmz-pod2.lab

# Data Network
auto eth0.223
iface eth0.223 inet static
 vlan-raw-device eth0
 address 10.0.0.51
 netmask 255.255.255.0

Time Synchronization

  • Install NTP:
apt-get install -y ntp
  • Configure the NTP:
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
service ntp restart

KVM Installation

  • Make sure that your hardware supports virtualization:
apt-get install -y cpu-checker
kvm-ok
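  • On supported hardware, kvm-ok typically reports something like (illustrative output):
INFO: /dev/kvm exists
KVM acceleration can be used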
  • Assuming you get a good response, install KVM and configure it:
apt-get install -y qemu-kvm libvirt-bin
  • Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to add the "/dev/net/tun":
cgroup_device_acl = [
 "/dev/null", "/dev/full", "/dev/zero",
 "/dev/random", "/dev/urandom",
 "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
 "/dev/rtc", "/dev/hpet","/dev/net/tun"
]
  • Restart the libvirt service to load the new values:
service libvirt-bin restart

Quantum Installation

  • Install the Quantum openvswitch agent:
apt-get -y install quantum-plugin-openvswitch quantum-plugin-openvswitch-agent
  • Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
#Under the database section
[DATABASE]
sql_connection=mysql://quantum:Cisco123@192.168.220.43/quantum
#Under the OVS section edit the following - Again, be careful to add the correct "local_ip" entry. In this case it is the address for eth0.223 on compute01:
[OVS]
enable_tunneling = True
tunnel_id_ranges = 1:1000
integration_bridge=br-int
tunnel_bridge = br-tun
network_vlan_ranges=
tenant_network_type=gre
local_ip = 10.0.0.51
  • Edit /etc/quantum/quantum.conf:
allow_overlapping_ips = False
fake_rabbit = False
rabbit_virtual_host=/quantum
rabbit_userid=openstack_rabbit_user
rabbit_password=Cisco123
rabbit_host=192.168.220.43
rabbit_port=5672
  • All hosts running quantum-plugin-openvswitch-agent require that an OVS bridge named "br-int" exists. Reboot the Compute node at this point. If you don't and you attempt to add the bridge, you will receive errors related to db.sock. Once the node is rebooted, create the bridge by running:
ovs-vsctl add-br br-int
  • Restart all the services:
service quantum-plugin-openvswitch-agent restart

Nova Installation

  • Install nova's required components for the compute node:
apt-get install -y nova-compute
  • Now modify authtoken section in the /etc/nova/api-paste.ini file to this:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.220.43
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.220.43:35357/v2.0
admin_tenant_name = services
admin_user = nova
admin_password = Cisco123
  • Edit the /etc/nova/nova-compute.conf file:
[DEFAULT]
libvirt_type=kvm
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True
  • Replace the contents of the /etc/nova/nova.conf with the contents below:
    • Note: Ensure you re-verify the IP addresses going into metadata_host, vncserver_proxyclient_address, and vncserver_listen, as they are NOT the same as 192.168.220.43 (eth0 on control03). metadata_host is eth0.221 on control03 (192.168.221.43), and the VNC proxy and listen addresses are eth0 on compute01 (192.168.220.51):
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
s3_host=192.168.220.43
ec2_host=192.168.220.43
rabbit_port=5672
rabbit_virtual_host=/
rabbit_password=Cisco123
rabbit_userid=openstack_rabbit_user
rabbit_host=192.168.220.43
metadata_host=192.168.221.43
sql_connection=mysql://nova:Cisco123@192.168.220.43/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
connection_type=libvirt
   
# Auth
use_deprecated_auth=false
auth_strategy=keystone
keystone_ec2_url=http://192.168.220.43:5000/v2.0/ec2tokens
   
# Imaging service
glance_api_servers=192.168.220.43:9292
image_service=nova.image.glance.GlanceImageService

# VNC configuration
vnc_enabled=true
vncserver_proxyclient_address=192.168.220.51
novncproxy_base_url=http://192.168.220.43:6080/vnc_auto.html
vncserver_listen=192.168.220.51

# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://192.168.220.43:9696
quantum_auth_strategy=keystone
quantum_admin_auth_url=http://192.168.220.43:35357/v2.0
quantum_connection_host=localhost
quantum_admin_password=Cisco123
quantum_admin_username=quantum
quantum_admin_tenant_name=services
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

# Compute 
compute_driver=libvirt.LibvirtDriver
  • Restart all nova-* services:
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
  • On the Control node, check for the smiling faces on nova-* services to confirm your installation (remember to run "source openrc" first):
nova-manage service list
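  • Output similar to the following indicates healthy services (illustrative; hosts and timestamps will differ):
Binary           Host       Zone   Status     State  Updated_At
nova-scheduler   control03  nova   enabled    :-)    2012-12-08 15:00:27
nova-compute     compute01  nova   enabled    :-)    2012-12-08 15:00:30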
