Cisco OpenStack Edition: Folsom Manual Install

OpenStack Folsom Manual Installation

Introduction

There are two common ways of installing OpenStack: manually or via automation.  Much attention is given to fully automating OpenStack deployment with tools such as Puppet, Chef, Juju, and others, and while these offer great advantages over manual configuration, they hide the inner workings from those who need to learn what is really happening during an OpenStack setup.  This document is for those who want to learn more about the OpenStack installation process on the Folsom release using the following OpenStack components:

Dependencies

Critical Reminders

Two of the most common issues people have in manual OpenStack deployments are basic mistakes: typos, incorrect IP addresses in configuration files, and incorrect passwords in configuration files.  To save yourself many troubleshooting steps later down the road, ENSURE you double and triple check the configuration files and commands wherever an account, password, and/or IP address is used.  You will likely be using your own IP addressing and passwords in your setup, and it is critical to get them right on each node.

The password used throughout this setup is Cisco123.  Every account, service, and configuration file uses this one password.  You will want to change this in your setup, and you should certainly use a strong password, and a different password for each account/service, if this system is going into production.

Operating System

The operating system used for this installation is Ubuntu 12.04 LTS (Precise).

Nodes

This document uses three physical servers (Cisco UCS B-series or C-series) to serve the roles of Controller, Compute, and Network node.  While physical servers are used in these instructions, nothing prevents you from using three virtual machines running on your virtualization/hypervisor of choice.  The three distinct node types used in this document are:

  • Controller Node
    • Runs Nova API, Nova Cert, Nova Consoleauth, Nova Novncproxy, Nova Scheduler, Novnc, Quantum Server, Quantum Plugin OVS, Glance API/Registry, and Keystone services
    • Provides control plane functionality for managing the OpenStack environment
  • Compute Node
    • Runs Nova Compute, Quantum Plugin OVS, and OVS Plugin Agent services
    • Provides the hypervisor role for running Nova instances (Virtual Machines)
  • Network Node
    • Runs Quantum DHCP, Quantum L3 Agent, Quantum Plugin OVS, OVS Plugin Agent, DNSMASQ Base and Util services
    • Provides network services such as DHCP, network access and routing for Nova instances running on the Compute node

Network

The network design referenced in this document has three physically or logically (VLAN) separate networks.  While VLANs are used in this setup for access to the nodes, in Quantum deployments with Open vSwitch (OVS) the connectivity between multiple hosts on virtual networks uses either VLANs or tunneling (GRE). GRE is easier to deploy, especially in larger environments, and does not suffer from the scalability limitations that VLANs do.  The networks are defined below:

  • Management and CIMC (Cisco Integrated Management Controller for UCS) Network
    • This network is used to perform management functions against the nodes. Examples include SSH access to the nodes; the Controller node hosting Horizon also listens for incoming connections on this network.
    • An IP address for each node is required for this network.
    • This network typically employs private (RFC1918) IP addressing.
  • Public/API Network
    • This network is used for assigning Floating IP addresses to instances for communication outside of the OpenStack Cloud
    • The metadata service used for injecting information into instances (e.g. SSH keys) is attached to this network on the Controller node
    • The Controller node and Network node will have an interface attached to this network
    • An IP address for the Controller node is required for this network
    • This network typically employs publicly routable IP addressing if no external NATs are used upstream towards the Internet edge (Note: in this document all IP addressing for all interfaces comes out of various private addressing blocks)
  • Data Network (AKA: Private Network)
    • This network is used to provide connectivity to OpenStack instances (Virtual Machines)
    • The node interfaces attached to this network are used for Open vSwitch (OVS) GRE tunnel termination
    • In this document an IP address for each node is assigned
    • This network typically employs private (RFC1918) IP addressing

Figure 1 is used to help visualize the setup and to act as a reference for configuration steps later on in the document.  A summary of the network topology is as follows:

  • Controller Node
    • Hostname = control03
    • Single physical NIC used to logically separate three networks
      • eth0 connects to the Management/CIMC network which is on VLAN220 (VLAN 220 is the Native VLAN on the upstream Layer 2 switch)
        • eth0 IP address = 192.168.220.43
      • eth0.221 connects to the Public/API network on VLAN221
        • eth0.221 IP address = 192.168.221.43
      • eth0.223 connects to the Data network
        • eth0.223 IP address = 10.0.0.43
      • CIMC 0 connects to the Management/CIMC network
        • CIMC 0 IP address = 192.168.220.13
  • Compute Node
    • Hostname = compute01
    • Single physical NIC used to logically separate two networks
      • eth0 connects to the Management/CIMC network which is on VLAN220 (VLAN 220 is the Native VLAN on the upstream Layer 2 switch)
        • eth0 IP address = 192.168.220.51
      • eth0.223 connects to the Data network
        • eth0.223 IP address = 10.0.0.51
      • CIMC 0 connects to the Management/CIMC network
        • CIMC 0 IP address = 192.168.220.4
  • Network Node
    • Hostname = control02
    • Single physical NIC used to logically separate three networks
      • eth0 connects to the Management/CIMC network which is on VLAN220 (VLAN 220 is the Native VLAN on the upstream Layer 2 switch)
        • eth0 IP address = 192.168.220.42
      • eth0.221 connects to the Public/API network on VLAN221
        • eth0.221 No IP address is set for this interface (see notes later in document on OVS/Quantum setup)
      • eth0.223 connects to the Data network
        • eth0.223 IP address = 10.0.0.42
      • CIMC 0 connects to the Management/CIMC network
        • CIMC 0 IP address = 192.168.220.3


Figure 1: Network topology (Network-topology-v1.0.png)

  • Other Network Services
    • DNS: In this setup an external DNS server is used both for OpenStack node name resolution and for external resolution.
    • NTP: In this setup external NTP server(s) are used for time synchronization.
    • Physical Network Switches: Each node in this setup is physically attached to a Cisco Nexus switch acting as a Top-of-Rack access layer device. Trunking is configured on each interface connecting to the eth0 NIC of each node. Note: Upstream routers/aggregation-layer switches will most likely terminate the L3 VLAN interfaces, and if they are deployed in a redundant fashion with a First Hop Redundancy Protocol such as HSRP or VRRP, be careful which IP addresses are assigned to the physical L3 switches/routers, as they may conflict with the IP address of the Quantum router on the public subnet (usually the .3 address). For example, if you are using HSRP with .1 as the standby IP address, .2 as the first L3 switch IP, and .3 as the second L3 switch IP, you will receive a duplicate IP address error on the second L3 switch. This can be worked around by using high-order IPs on your upstream L3 device or by altering the Quantum subnet configuration at the time of creation (more on this later). A reference trunk configuration is shown below.
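    • For reference only, a trunk configuration on a Nexus access port facing a node's eth0 NIC might look like the following sketch (the interface name and description are examples, and the VLAN numbers assume the topology above; adjust for your environment):
interface Ethernet1/10
 description control03 eth0
 switchport mode trunk
 switchport trunk native vlan 220
 switchport trunk allowed vlan 220-221,223
 spanning-tree port type edge trunk
 no shutdown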

Installation

The installation of the nodes will be in the following order:

  1. Controller Node
  2. Network Node
  3. Compute Node

Install the Controller Node (control03)

Preparing Ubuntu 12.04

Install Ubuntu 12.04 (AMD 64-bit) from CD/ISO or automated install (i.e. kickstart). Use the networking information above to configure your network properties. Select ssh-server as the only additional package.

  • Use sudo mode or run from root account for the entire installation:
sudo su
  • You will receive the following error when trying to run update:
GPG error: http://128.107.252.163 folsom-proposed InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E8CC67053ED3B199
  • Add the apt key to remove the error:
gpg --keyserver hkp://pgpkeys.mit.edu --recv-keys E8CC67053ED3B199
gpg --armor --export E8CC67053ED3B199 | apt-key add -
  • Note: If you have issues using the pgpkeys.mit.edu site you can use keyserver.ubuntu.com instead:
gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys E8CC67053ED3B199
  • Update your system:
apt-get update && apt-get dist-upgrade -y

Networking

  • Our implementation uses VLANs for network separation.  Make sure you have the vlan package installed and that your network switches have been configured for VLANs:
apt-get install vlan
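  • The vlan package's ifupdown hooks normally load the 8021q kernel module when a VLAN subinterface comes up; if you bring VLAN interfaces up by hand and the module is missing, you can load it and make it persistent (a minimal sketch):
modprobe 8021q
echo "8021q" >> /etc/modules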
  • Controller Node (control03) /etc/network/interfaces:
# The loopback network interface
auto lo
iface lo inet loopback

#Management Network
auto eth0 
iface eth0 inet static 
 address 192.168.220.43
 netmask 255.255.255.0
 gateway 192.168.220.1
 dns-nameservers 192.168.220.254
 dns-search dmz-pod2.lab

#VM Network with OVS in tunnel mode

auto eth0.223 
iface eth0.223 inet static 
 vlan-raw-device eth0
 address 10.0.0.43
 netmask 255.255.255.0

#Public/API Network: Bridged Interface

auto eth0.221
iface eth0.221 inet static 
 vlan-raw-device eth0
 address 192.168.221.43
 netmask 255.255.255.0

Time Synchronization

  • Install NTP:
apt-get install -y ntp
  • Configure the NTP:
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
service ntp restart
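  • Optionally, verify that NTP is reaching its configured servers:
ntpq -p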

MySQL & RabbitMQ

  • Install MySQL. #Note: You will be prompted for the root password for mysql. Document this password as it will be needed later on when we login and create databases:
apt-get install -y mysql-server python-mysqldb
  • Configure mysql to accept all incoming requests:
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
service mysql restart
  • Install RabbitMQ:
apt-get install -y rabbitmq-server
  • Create a RabbitMQ user account that will be used by OpenStack services:
rabbitmqctl add_user openstack_rabbit_user Cisco123
  • Create the RabbitMQ vhost for Quantum:
rabbitmqctl add_vhost /quantum
  • Set the permissions for the new RabbitMQ user account:
rabbitmqctl set_permissions -p / openstack_rabbit_user ".*" ".*" ".*"
rabbitmqctl set_permissions -p /quantum openstack_rabbit_user ".*" ".*" ".*"
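  • Optionally, confirm that the RabbitMQ user, vhosts, and permissions were created:
rabbitmqctl list_users
rabbitmqctl list_vhosts
rabbitmqctl list_permissions -p /quantum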

Keystone Installation

  • Start by installing the Keystone packages:
apt-get install -y keystone
  • Create a MySQL database for Keystone (use root password that was created during original MySQL install) # Note: ALL services and DB accounts will use Cisco123:
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystone_admin'@'%' IDENTIFIED BY 'Cisco123';
quit;
  • Edit /etc/keystone/keystone.conf to set the admin token and point to the new database:
[DEFAULT]
admin_token   = keystone_admin_token

[sql]
connection = mysql://keystone_admin:Cisco123@192.168.220.43/keystone
  • Test whether MySQL is listening on 192.168.220.43 for the Keystone database:
mysql -h192.168.220.43 -ukeystone_admin -pCisco123 keystone
  • Restart the identity service then synchronize the database:
service keystone restart
keystone-manage db_sync
  • Export the environment variables that the Keystone data and endpoint scripts expect:
ADMIN_PASSWORD=${ADMIN_PASSWORD:-Cisco123}
export SERVICE_TOKEN="keystone_admin_token"
export SERVICE_ENDPOINT="http://192.168.220.43:35357/v2.0/"
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-services}
  • Run the keystone-data.sh script to populate the Keystone database with data (users, tenants, services). Before running it, make sure the definitions at the top of the script match your environment:
# MySQL definitions
MYSQL_USER=keystone_admin
MYSQL_DATABASE=keystone
MYSQL_HOST=192.168.220.43
MYSQL_PASSWORD=Cisco123

# Keystone definitions
KEYSTONE_REGION=RegionOne
SERVICE_TOKEN=keystone_admin_token
SERVICE_ENDPOINT="http://192.168.220.43:35357/v2.0"

# Other definitions. This should be your Controller Node IP address.
MASTER="192.168.220.43"
  • Run the script:
./keystone-data.sh
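  • The body of keystone-data.sh is not reproduced here. As a rough illustration only (not the actual script contents), it runs keystone CLI commands along these lines to create the tenants, users, and roles referenced in this guide:
keystone tenant-create --name admin
keystone tenant-create --name services
keystone user-create --name admin --pass Cisco123 --email root@localhost
keystone role-create --name admin
keystone user-role-add --user-id <admin-user-id> --role-id <admin-role-id> --tenant-id <admin-tenant-id>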
  • Run the keystone-endpoints.sh script to populate the Keystone database with service endpoints. #Note: If you log out or reboot after running the keystone-data.sh script, you must re-export the following before running the keystone-endpoints.sh script:
export SERVICE_TOKEN="keystone_admin_token"
export SERVICE_ENDPOINT="http://192.168.220.43:35357/v2.0/"
./keystone-endpoints.sh
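  • Likewise, keystone-endpoints.sh is not reproduced here. As an illustration only, it registers each deployed OpenStack service and its endpoints with commands roughly like these (shown here for the identity service):
keystone service-create --name keystone --type identity --description 'OpenStack Identity Service'
keystone endpoint-create --region RegionOne --service-id <identity-service-id> --publicurl http://192.168.220.43:5000/v2.0 --adminurl http://192.168.220.43:35357/v2.0 --internalurl http://192.168.220.43:5000/v2.0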
  • Create a simple credential file named openrc and load it so you won't have to supply credentials on every command:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=Cisco123
export OS_AUTH_URL="http://192.168.220.43:5000/v2.0/"
export OS_AUTH_STRATEGY=keystone
export SERVICE_TOKEN=keystone_admin_token
export SERVICE_ENDPOINT=http://192.168.220.43:35357/v2.0/
  • Load it:
source openrc
  • To test Keystone, install curl and issue a curl request:
apt-get install curl openssl -y
curl -d '{"auth": {"tenantName": "admin", "passwordCredentials":{"username": "admin", "password": "Cisco123"}}}' -H "Content-type: application/json" http://192.168.220.43:35357/v2.0/tokens | python -mjson.tool
  • Or you can use the Keystone command-line:
keystone user-list
keystone tenant-list
keystone service-list
keystone endpoint-list

Glance Installation

  • Install Glance packages:
apt-get install -y glance
  • Create a MySQL database for Glance (use root password that was created during original MySQL install):
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'Cisco123';
quit;
  • Update /etc/glance/glance-api-paste.ini with:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.220.43
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = glance
admin_password = Cisco123
  • Update the /etc/glance/glance-registry-paste.ini with:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.220.43
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = glance
admin_password = Cisco123
  • Update /etc/glance/glance-api.conf with:
sql_connection = mysql://glance:Cisco123@192.168.220.43/glance

[paste_deploy]
flavor = keystone
  • Update the /etc/glance/glance-registry.conf with:
sql_connection = mysql://glance:Cisco123@192.168.220.43/glance

[paste_deploy]
flavor = keystone
  • Restart the glance-api and glance-registry services:
service glance-api restart; service glance-registry restart
  • Synchronize the glance database (You may get a message about deprecation - you can ignore):
glance-manage db_sync
  • Restart the services again to take into account the new modifications:
service glance-registry restart; service glance-api restart
  • Upload an image to Glance. Start by downloading the Ubuntu Precise cloud image to the Controller node and then uploading it to Glance:
wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

glance add name="precise" is_public=true container_format=ovf disk_format=qcow2 < precise-server-cloudimg-amd64-disk1.img
  • Now list the images to see what you have just uploaded:
glance image-list

Quantum Installation

  • Install the Quantum Server on the Controller Node:
apt-get install -y quantum-server quantum-plugin-openvswitch
  • Create a database (use root password that was created during original MySQL install):
mysql -u root -p
CREATE DATABASE quantum;
GRANT ALL ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'Cisco123';
quit;
  • Edit /etc/quantum/quantum.conf. As part of the configuration, we disable overlapping IP address support, which is needed to support the Nova metadata service and/or Nova security groups. More can be found at Quantum Limitations:
allow_overlapping_ips = False
fake_rabbit = False
rabbit_virtual_host=/quantum
rabbit_userid=openstack_rabbit_user
rabbit_password=Cisco123
rabbit_host=192.168.220.43
rabbit_port=5672
  • Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
#Under the database section
[DATABASE]
sql_connection = mysql://quantum:Cisco123@192.168.220.43/quantum

#Under the OVS section
[OVS]
enable_tunneling = True
tunnel_id_ranges = 1:1000
integration_bridge=br-int
tunnel_bridge = br-tun
network_vlan_ranges=
tenant_network_type=gre
  • Edit /etc/quantum/api-paste.ini:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host=192.168.220.43
auth_port = 35357
auth_protocol = http
admin_tenant_name=services
admin_user=quantum
admin_password=Cisco123
  • Restart the quantum server:
service quantum-server restart

Nova Installation

  • Start by installing nova components:
apt-get install -y nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy
  • Prepare a Mysql database for Nova (use root password that was created during original MySQL install):
mysql -u root -p
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'Cisco123';
quit;
  • Now modify the authtoken section in the /etc/nova/api-paste.ini file to this:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.220.43
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.220.43:35357/v2.0
admin_tenant_name = services
admin_user = nova
admin_password = Cisco123
  • Replace the contents of /etc/nova/nova.conf with the following. Take note that the IP address for "metadata_listen" is the "control03" Controller node eth0.221 interface in the diagram:
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
ec2_listen=192.168.220.43
rabbit_port=5672
rabbit_virtual_host=/
rabbit_password=Cisco123
rabbit_userid=openstack_rabbit_user
rabbit_host=192.168.220.43
metadata_listen=192.168.221.43
sql_connection=mysql://nova:Cisco123@192.168.220.43/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Imaging service
glance_api_servers=192.168.220.43:9292
image_service=nova.image.glance.GlanceImageService

# VNC configuration
novncproxy_port=6080
novncproxy_host=0.0.0.0
novnc_enabled=true
novncproxy_base_url=http://192.168.220.43:6080/vnc_auto.html

# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://192.168.220.43:9696
quantum_auth_strategy=keystone
quantum_admin_auth_url=http://192.168.220.43:35357/v2.0
quantum_admin_password=Cisco123
quantum_admin_username=quantum
quantum_admin_tenant_name=services
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
  • Synchronize the Nova database (You may get a DEBUG message - You can ignore this):
nova-manage db sync
  • Restart nova-* services:
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
  • Check for the smiling faces on nova services to confirm your installation:
nova-manage service list
  • Also check that nova-api is running:
service nova-api status

Horizon Installation

  • To install Horizon, install the dashboard and memcached packages:
apt-get install openstack-dashboard memcached -y
  • If you don't like the OpenStack Ubuntu theme, you can disable it and go back to the default look:
vi /etc/openstack-dashboard/local_settings.py
# Comment these lines
# Enable the Ubuntu theme if it is present.
# try:
#    from ubuntu_theme import *
# except ImportError:
#    pass
  • Reload Apache and memcached:
service apache2 restart; service memcached restart
  • Access Horizon by using the following URL in your web browser:
http://192.168.220.43/horizon
  • Log in with username admin and password Cisco123. Note: A reboot might be needed for a successful login.

Install the Network Node (control02)

Preparing Ubuntu 12.04

Install Ubuntu 12.04 (AMD 64-bit) from CD/ISO or automated install (i.e. kickstart). Use the networking information above to configure your network properties. Select ssh-server as the only additional package.

  • Use sudo mode or run from root account for the entire installation:
sudo su
  • You will receive the following error when trying to run update:
GPG error: http://128.107.252.163 folsom-proposed InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E8CC67053ED3B199
  • Add the apt key to remove the error:
gpg --keyserver hkp://pgpkeys.mit.edu --recv-keys E8CC67053ED3B199
gpg --armor --export E8CC67053ED3B199 | apt-key add -
  • Note: If you have issues using the pgpkeys.mit.edu site you can use keyserver.ubuntu.com instead:
gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys E8CC67053ED3B199
  • Update your system:
apt-get update && apt-get dist-upgrade -y

Networking

  • Our implementation uses VLANs for network separation. Make sure you have the vlan package installed and that your network switches have been configured for VLANs:
apt-get install vlan
  • Network Node (control02) /etc/network/interfaces. # Note: The Public/API facing NIC on the Network node does not have an IP address assigned:
# The loopback network interface
auto lo
iface lo inet loopback

# Management Network
auto eth0
iface eth0 inet static
 address 192.168.220.42
 netmask 255.255.255.0
 gateway 192.168.220.1
 dns-nameservers 192.168.220.254
 dns-search dmz-pod2.lab

# VM Network with OVS in tunnel mode
auto eth0.223
iface eth0.223 inet static
 vlan-raw-device eth0
 address 10.0.0.42
 netmask 255.255.255.0

# Public/API Network: Bridged Interface
auto eth0.221
iface eth0.221 inet manual
 vlan-raw-device eth0
 up ifconfig $IFACE 0.0.0.0 up
 up ip link set $IFACE promisc on 
 down ifconfig $IFACE down

Time Synchronization

  • Install NTP:
apt-get install -y ntp
  • Configure the NTP:
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
service ntp restart

Quantum Installation

  • Install the Quantum openvswitch plugin, openvswitch agent, l3_agent, and dhcp_agent:
apt-get -y install quantum-plugin-openvswitch quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent
  • Quantum dhcp_agent uses dnsmasq by default. Verify that dnsmasq is installed:
dpkg -l | grep dnsmasq
ii  dnsmasq-base                     2.59-4                       Small caching DNS proxy and DHCP/TFTP server
ii  dnsmasq-utils                    2.59-4                       Utilities for manipulating DHCP leases
  • If dnsmasq-base and dnsmasq-utils packages are not installed, then install them manually:
apt-get install -y dnsmasq-base
apt-get install -y dnsmasq-utils
  • The Network node running quantum-plugin-openvswitch-agent also requires OVS bridges named "br-int" and "br-ex", with "br-ex" associated with the Public/API interface (eth0.221 in this setup). In order for the commands below to take effect, you must reboot the Network node at this point. If you don't and you attempt to add the bridges, you will receive errors related to db.sock. Once the node has rebooted, create the bridges by running:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth0.221
    • If the bridge commands fail with Open vSwitch errors, rebuild the OVS datapath DKMS module against your running kernel and restart Open vSwitch:
kernel_version=`cat /proc/version | cut -d " " -f3`
apt-get install -y dkms openvswitch-switch openvswitch-datapath-dkms linux-headers-$kernel_version
apt-get autoremove openvswitch-datapath-dkms
apt-get install -y dkms openvswitch-switch openvswitch-datapath-dkms linux-headers-$kernel_version
/etc/init.d/openvswitch-switch restart
    • Now add the bridges (only if the above workaround was needed):
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth0.221
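    • Verify the bridges and the eth0.221 port on br-ex:
ovs-vsctl show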
  • Edit /etc/quantum/api-paste.ini:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host=192.168.220.43
auth_port = 35357
auth_protocol = http
admin_tenant_name=services
admin_user=quantum
admin_password=Cisco123
  • Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
#Under the database section
[DATABASE]
sql_connection=mysql://quantum:Cisco123@192.168.220.43/quantum
  • Under the OVS section, ensure the "local_ip" value is correct. In this case it is the eth0.223 address on the Network node. If this is wrong, GRE tunneling, and therefore Quantum, won't work correctly:
[OVS]
enable_tunneling = True
tunnel_id_ranges = 1:1000
integration_bridge=br-int
tunnel_bridge = br-tun
network_vlan_ranges=
tenant_network_type=gre
local_ip = 10.0.0.42
  • Update the /etc/quantum/l3_agent.ini. (Ensure that the "metadata_ip" is the same value set in the "metadata_listen" entry in the nova.conf file on the Controller):
auth_url = http://192.168.220.43:35357/v2.0
auth_region = RegionOne
admin_tenant_name = services
admin_user = quantum
admin_password = Cisco123
metadata_ip = 192.168.221.43
metadata_port = 8775
use_namespaces = True
  • Update the /etc/quantum/dhcp_agent.ini:
use_namespaces = True
  • Also, update your RabbitMQ settings in /etc/quantum/quantum.conf:
allow_overlapping_ips = False
fake_rabbit = False
rabbit_virtual_host=/quantum
rabbit_userid=openstack_rabbit_user
rabbit_password=Cisco123
rabbit_host=192.168.220.43
rabbit_port=5672
  • Restart all the services:
service quantum-plugin-openvswitch-agent restart
service quantum-dhcp-agent restart
service quantum-l3-agent restart

Install the Compute Node (compute01)

Preparing Ubuntu 12.04

Install Ubuntu 12.04 (AMD 64-bit) from CD/ISO or automated install (i.e. kickstart). Use the networking information above to configure your network properties. Select ssh-server as the only additional package.

  • Use sudo mode or run from root account for the entire installation:
sudo su
  • You will receive the following error when trying to run update:
GPG error: http://128.107.252.163 folsom-proposed InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E8CC67053ED3B199
  • Add the apt key to remove the error:
gpg --keyserver hkp://pgpkeys.mit.edu --recv-keys E8CC67053ED3B199
gpg --armor --export E8CC67053ED3B199 | apt-key add -
  • Note: If you have issues using the pgpkeys.mit.edu site you can use keyserver.ubuntu.com instead:
gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys E8CC67053ED3B199
  • Update your system:
apt-get update && apt-get dist-upgrade -y

Networking

  • Our implementation uses VLANs for network separation. Make sure you have the vlan package installed and that your network switches have been configured for VLANs:
apt-get install vlan
  • Compute Node (compute01) /etc/network/interfaces. # Note: The Compute node does not have a NIC attached to the Public/API network (VLAN221):
# The loopback network interface
auto lo
iface lo inet loopback

# Management Network
auto eth0
iface eth0 inet static
 address 192.168.220.51
 netmask 255.255.255.0
 gateway 192.168.220.1
 dns-nameservers 192.168.220.254
 dns-search dmz-pod2.lab

# Data Network
auto eth0.223
iface eth0.223 inet static
 vlan-raw-device eth0
 address 10.0.0.51
 netmask 255.255.255.0

Time Synchronization

  • Install NTP:
apt-get install -y ntp
  • Configure the NTP:
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
service ntp restart

KVM Installation

  • Make sure that your hardware supports virtualization:
apt-get install -y cpu-checker
kvm-ok
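  • On hardware with virtualization extensions enabled, kvm-ok typically reports something like:
INFO: /dev/kvm exists
KVM acceleration can be used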
  • Normally you will get a good response like the one above. Now install KVM and configure it:
apt-get install -y qemu-kvm libvirt-bin
  • Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to add the "/dev/net/tun":
cgroup_device_acl = [
 "/dev/null", "/dev/full", "/dev/zero",
 "/dev/random", "/dev/urandom",
 "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
 "/dev/rtc", "/dev/hpet","/dev/net/tun"
]
  • Restart the libvirt service to load the new values:
service libvirt-bin restart

Quantum Installation

  • Install the Quantum openvswitch agent:
apt-get -y install quantum-plugin-openvswitch quantum-plugin-openvswitch-agent
  • Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
#Under the database section
[DATABASE]
sql_connection=mysql://quantum:Cisco123@192.168.220.43/quantum
  • Under the OVS section edit the following - Again be careful to add the correct "local_ip" entry. In this case it is the address for eth0.223 on compute01:
[OVS]
enable_tunneling = True
tunnel_id_ranges = 1:1000
integration_bridge=br-int
tunnel_bridge = br-tun
network_vlan_ranges=
tenant_network_type=gre
local_ip = 10.0.0.51
  • Edit /etc/quantum/quantum.conf:
allow_overlapping_ips = False
fake_rabbit = False
rabbit_virtual_host=/quantum
rabbit_userid=openstack_rabbit_user
rabbit_password=Cisco123
rabbit_host=192.168.220.43
rabbit_port=5672
  • All hosts running quantum-plugin-openvswitch-agent require an OVS bridge named "br-int". Reboot the Compute node at this point; if you don't and you attempt to add the bridge, you will receive errors related to db.sock. Once the node has rebooted, create the bridge by running:
ovs-vsctl add-br br-int
  • Restart all the services:
service quantum-plugin-openvswitch-agent restart

Nova Installation

  • Install nova's required components for the compute node:
apt-get install -y nova-compute
  • Now modify the authtoken section in the /etc/nova/api-paste.ini file to this:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.220.43
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.220.43:35357/v2.0
admin_tenant_name = services
admin_user = nova
admin_password = Cisco123
  • Edit the /etc/nova/nova-compute.conf file:
[DEFAULT]
libvirt_type=kvm
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True
  • Replace the contents of the /etc/nova/nova.conf with the contents below:
    • Note: Ensure you re-verify the IP address going into metadata_host, vncserver_proxyclient_address, and vncserver_listen as they are NOT the same as the 192.168.220.43 (eth0 on control03). metadata_host is eth0.221 on control03 (192.168.221.43) and the VNC proxy and listen address are eth0 on compute01 (192.168.220.51):
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
s3_host=192.168.220.43
ec2_host=192.168.220.43
rabbit_port=5672
rabbit_virtual_host=/
rabbit_password=Cisco123
rabbit_userid=openstack_rabbit_user
rabbit_host=192.168.220.43
metadata_host=192.168.221.43
sql_connection=mysql://nova:Cisco123@192.168.220.43/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
connection_type=libvirt
   
# Auth
use_deprecated_auth=false
auth_strategy=keystone
keystone_ec2_url=http://192.168.220.43:5000/v2.0/ec2tokens
   
# Imaging service
glance_api_servers=192.168.220.43:9292
image_service=nova.image.glance.GlanceImageService

# VNC configuration
vnc_enabled=true
vncserver_proxyclient_address=192.168.220.51
novncproxy_base_url=http://192.168.220.43:6080/vnc_auto.html
vncserver_listen=192.168.220.51

# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://192.168.220.43:9696
quantum_auth_strategy=keystone
quantum_admin_auth_url=http://192.168.220.43:35357/v2.0
quantum_connection_host=localhost
quantum_admin_password=Cisco123
quantum_admin_username=quantum
quantum_admin_tenant_name=services
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

# Compute 
compute_driver=libvirt.LibvirtDriver
  • Restart all nova-* services:
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
  • On the Control node, check for the smiling faces on nova-* services to confirm your installation (remember to run "source openrc" first):
nova-manage service list

Your First VM

  • Run the following commands from either the Network node or Controller node. If something has to be done on a specific node it will be called out.
  • Create a Quantum external network. Note: The eth0.221 (Public/API network) interface on the Network node (control02) should not have an IP assigned:
quantum net-create public --router:external=True
  • Create a Quantum subnet. We are using 192.168.221.0/24 as our external network:
quantum subnet-create public 192.168.221.0/24
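  • If the default address allocation would conflict with addresses already in use upstream (see the HSRP/VRRP note in the Network section), you can instead set the gateway and allocation pool explicitly when creating the subnet. For example (the gateway and pool range below are assumptions; adjust to your environment):
quantum subnet-create --gateway 192.168.221.1 --allocation-pool start=192.168.221.100,end=192.168.221.200 public 192.168.221.0/24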
  • Create the internal (data) network used by the instances. Create additional networks and associated subnets as needed:
quantum net-create net1
quantum subnet-create net1 10.10.10.0/24
  • Create a virtual router and an associated interface used for the subnet created in the previous step. #Note: the <subnet-net1-id> value is found in the output generated by the previous "quantum subnet-create net1" command (or from "quantum subnet-list"):
quantum router-create router1
quantum router-interface-add router1 <subnet-net1-id>
  • Connect the virtual router to your external network. #Note: the <net-public-id> value is found in the output that was generated in the previous "quantum net-create public --router:external=True" command:
quantum router-gateway-set router1 <net-public-id>
  • The following commands are entered on the Controller node. Instances/VMs gain access to the metadata service running on the Controller node via the external network. To create the necessary connection, perform the following:
    • Get the ID of the quantum router:
quantum router-list
    • Copy the id from the command output. Issue the following command, replacing <router-id> with the id found in the output from the previous step:
quantum port-list -- --device_id <router-id> --device_owner network:router_gateway
    • Copy the IP address from the command output above. Now create your static route on the Controller Node:
      • For a permanent static route add the following to /etc/network/interfaces on the Controller Node:
up route add -net 10.10.10.0/24 gw <router-ip-address>
      • Restart networking for the new route to take effect:
/etc/init.d/networking restart
      • Use the route command to verify the route has been added:
route -n
      • For a temporary static route (without having to restart networking), use the following command:
route add -net 10.10.10.0/24 gw <router-ip-address>
  • If you skipped the earlier step of downloading an image and uploading it to glance, do that now:
wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

glance add name="precise" is_public=true container_format=ovf disk_format=qcow2 < precise-server-cloudimg-amd64-disk1.img
  • On the Network node, create an SSH keypair and add the public key to Nova. Note: Leave the passphrase empty when creating the keypair. You will need to install the nova client first:
apt-get -y install python-novaclient
ssh-keygen

cd /root/.ssh/
nova keypair-add --pub_key id_rsa.pub <key_name>
    • Example:
nova keypair-add --pub_key id_rsa.pub net-key
  • Before booting the instance, check for the ID of the network we created earlier, then boot the instance:
quantum net-list
nova boot --image precise --flavor m1.small --key_name <key_name> --nic net-id=<quantum-net-id> <instance_name>
    • Example:
nova boot --image precise --flavor m1.small --key_name net-key --nic net-id=f9035744-72a9-42cf-bd46-73d54c0cea06 vm1
  • Watch the status of the instance:
nova show <instance_name>
    • Example:
nova show vm1
      • The instance is booted completely when the OS-EXT-STS:vm_state is "active". Make note of the IP address of the VM.
  • Alternatively, you can watch the complete log of the VM by doing:
nova console-log --length=25 vm1
  • Verify connectivity to the instance from the Network node (running the Quantum L3 agent):
    • Get the ID of the Quantum router:
quantum router-list
    • From the router's network namespace, list the interfaces and ping the instance's fixed IP:
ip netns exec qrouter-<UUID_of_quantum_router> ip addr list
ip netns exec qrouter-<UUID_of_quantum_router> ping <fixed-ip-of-instance>
    • Example:
ip netns exec qrouter-215445ab-49a7-4a38-992b-e99bc1e26ec2 ip addr list
  • From the Network node, SSH into the instance booted earlier:
ip netns exec qrouter-<UUID_of_quantum_router> ssh ubuntu@<fixed-ip-of-instance>
    • Example:
ip netns exec qrouter-64cde9d5-0a2e-48cd-8b4d-ddd3cfc82c36 ssh ubuntu@10.10.10.3

The authenticity of host '10.10.10.3 (10.10.10.3)' can't be established.
ECDSA key fingerprint is 49:f4:59:90:d9:7e:ae:b9:5f:f9:d2:a5:67:ba:7b:15.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.10.3' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-34-virtual x86_64)

* Documentation:  https://help.ubuntu.com/
 System information as of Thu Dec  6 21:04:03 UTC 2012
 System load:  0.04              Processes:           64
Usage of /:   33.3% of 1.96GB   Users logged in:     0
Memory usage: 8%                IP address for eth0: 10.10.10.3
Swap usage:   0%
 Graph this data and manage this system at https://landscape.canonical.com/

0 packages can be updated. 0 updates are security updates.

Get cloud support with Ubuntu Advantage Cloud Guest

 http://www.ubuntu.com/business/services/cloud

The programs included with the Ubuntu system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law.

To run a command as administrator (user "root"), use "sudo <command>". See "man sudo_root" for details.

ubuntu@vm1:~$


  • To test from the Controller node (or another node besides the Network node), edit the default security group or create a new security group to allow ICMP and SSH:

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

  • From the Controller node, boot an instance using the keypair you created earlier, then create and associate a Floating IP:

quantum net-list
quantum port-list
quantum floatingip-create <ext-net-id>
quantum floatingip-associate <floatingip-id> <internal VM port-id>

  • Or, in one step:

quantum floatingip-create --port_id <internal VM port-id> <ext-net-id>

  • Ping and SSH to the address returned in the "floating_ip_address" field:
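    • For example (substitute the floating IP address returned above; the private key created earlier must be available on the node you connect from):
ping <floating_ip_address>
ssh ubuntu@<floating_ip_address>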
