Cisco OpenStack Edition: Folsom Manual Install


OpenStack Folsom Manual Installation

Introduction

There are two common ways of installing OpenStack: manually or via automation.  Much attention is given to fully automating OpenStack deployment with tools such as Puppet, Chef, Juju and others, and while these offer great advantages over manual configuration, they also hide the inner workings from those who need to learn what is really happening during an OpenStack setup.  This document is for those who want to learn more about the OpenStack installation process on the Folsom release, using the OpenStack components described in the sections that follow.

Dependencies

Operating System

The operating system used for this installation is Ubuntu 12.04 LTS (Precise).

Nodes

This document uses three physical servers (Cisco UCS B- or C-Series) to serve the roles of Controller, Compute and Network node.  While physical servers are used in the instructions, there is nothing preventing you from using three virtual machines running on your virtualization/hypervisor of choice.  The three distinct node types used in this document are:

  • Controller Node
    • Runs Nova API, Nova Cert, Nova Consoleauth, Nova Novncproxy, Nova Scheduler, Novnc, Quantum Server, Quantum Plugin OVS, Glance API/Registry, and Keystone services
    • Provides control plane functionality for managing the OpenStack environment
  • Compute Node
    • Runs Nova Compute, Quantum Plugin OVS, and OVS Plugin Agent services
    • Provides the hypervisor role for running Nova instances (Virtual Machines)
  • Network Node
    • Runs Quantum DHCP, Quantum L3 Agent, Quantum Plugin OVS, OVS Plugin Agent, DNSMASQ Base and Util services
    • Provides network services such as DHCP, network access and routing for Nova instances running on the Compute node

Network

The network design referenced in this document has three physically or logically (VLAN) separate networks.  While VLANs are used in this setup for access to the nodes, in Quantum deployments with Open vSwitch (OVS) the connectivity between multiple hosts on virtual networks uses either VLANs or tunneling (GRE). GRE is easier to deploy, especially in larger environments, and does not suffer from the scalability limitations of VLANs (a sketch of the relevant OVS plugin settings follows the network list below).  The networks are defined below:

  • Management and CIMC (Cisco Integrated Management Controller for UCS) Network
    • This network is used to perform management functions against the nodes. Examples include SSH access to the nodes; the Controller node hosting Horizon also listens for incoming connections on this network.
    • An IP address for each node is required for this network.
    • This network typically employs private (RFC1918) IP addressing.
  • Public/API Network
    • This network is used for assigning Floating IP addresses to instances so they can communicate outside of the OpenStack Cloud
    • The metadata service used for injecting information into instances (e.g. SSH keys) is attached to this network on the Controller node
    • The Controller node and Network node will have an interface attached to this network
    • An IP address for the Controller node is required for this network
    • This network typically employs publicly routable IP addressing if no external NATs are used upstream towards the Internet edge (Note: in this document all IP addressing for all interfaces comes out of various private addressing blocks)
  • Data Network (AKA: Private Network)
    • This network is used for providing connectivity to OpenStack instances (Virtual Machines)
    • The node interfaces attached to this network are used for Open vSwitch (OVS) GRE tunnel termination
    • In this document an IP address for each node is assigned
    • This network typically employs private (RFC1918) IP addressing
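
As mentioned above, GRE tunneling is used between hosts on the Data network. For reference, this choice surfaces later in the Quantum OVS plugin configuration on the Network and Compute nodes (typically /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini). A minimal sketch of the relevant [OVS] settings, assuming the Data network addressing above, looks roughly like this; the values are illustrative and the actual configuration is covered in the node-specific sections:

[OVS]
# use GRE tunnels rather than VLANs for tenant networks
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
# local_ip is this node's Data network (eth0.223) address, e.g. 10.0.0.42 on the Network node
local_ip = 10.0.0.42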

Figure 1 is used to help visualize the setup and to act as a reference for configuration steps later on in the document.  A summary of the network topology is as follows:

  • Controller Node
    • Hostname = control03
    • Single physical NIC used to logically separate three networks
      • eth0 connects to the Management/CIMC network which is on VLAN220 (VLAN 220 is the Native VLAN on the upstream Layer 2 switch)
        • eth0 IP address = 192.168.220.43
      • eth0.221 connects to the Public/API network on VLAN221
        • eth0.221 IP address = 192.168.221.43
      • eth0.223 connects to the Data network
        • eth0.223 IP address = 10.0.0.43
      • CIMC 0 connects to the Management/CIMC network
        • CIMC 0 IP address = 192.168.220.13
  • Compute Node
    • Hostname = compute01
    • Single physical NIC used to logically separate three networks
      • eth0 connects to the Management/CIMC network which is on VLAN220 (VLAN 220 is the Native VLAN on the upstream Layer 2 switch)
        • eth0 IP address = 192.168.220.51
      • eth0.223 connects to the Data network
        • eth0.223 IP address = 10.0.0.51
      • CIMC 0 connects to the Management/CIMC network
        • CIMC 0 IP address = 192.168.220.4
  • Network Node
    • Hostname = control02
    • Single physical NIC used to logically separate three networks
      • eth0 connects to the Management/CIMC network which is on VLAN220 (VLAN 220 is the Native VLAN on the upstream Layer 2 switch)
        • eth0 IP address = 192.168.220.42
      • eth0.221 connects to the Public/API network on VLAN221
        • eth0.221 No IP address is set for this interface (see notes later in document on OVS/Quantum setup)
      • eth0.223 connects to the Data network
        • eth0.223 IP address = 10.0.0.42
      • CIMC 0 connects to the Management/CIMC network
        • CIMC 0 IP address = 192.168.220.3
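
Although an external DNS server handles name resolution in this setup (see the Other Network Services notes below), it can be convenient to map the node names locally as well. A minimal /etc/hosts fragment built from the Management network addresses above might look like the following; this is purely illustrative and is not required if DNS already resolves the hostnames:

# OpenStack node addresses on the Management network (illustrative)
192.168.220.43   control03
192.168.220.42   control02
192.168.220.51   compute01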


Figure 1: Network topology (Network-topology-v1.0.png)

  • Other Network Services
    • DNS: In this setup an external DNS server is used for name resolution, both for the OpenStack nodes and for external names.
    • NTP: In this setup one or more external NTP servers are used for time synchronization.
    • Physical Network Switches: Each node in this setup is physically attached to a Cisco Nexus switch acting as a Top-of-Rack access layer device. Trunking is configured on each interface connecting to the eth0 NIC of each node. Note: upstream routers/aggregation layer switches will most likely be terminating the Layer 3 VLAN interfaces, and if they are deployed redundantly with a First Hop Redundancy Protocol such as HSRP or VRRP, be careful which IP addresses the physical L3 switches/routers use, as they may conflict with the IP address of the Quantum router on the public subnet (usually assigned the .3 address). For example, if you are using HSRP with .1 as the standby IP address, .2 as the first L3 switch IP and .3 as the second L3 switch IP, you will receive a duplicate IP address error on the second L3 switch. This can be worked around by using high-order IPs on your upstream L3 devices or by altering the Quantum subnet configuration at the time of creation (an illustrative command is shown after this list; more on this later).
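
One way to alter the subnet configuration at creation time is to restrict the allocation pool so that Quantum never hands out the low-order addresses used by the upstream HSRP/VRRP devices. The command below is a hypothetical sketch; the network and subnet names and the address range are illustrative, and the actual public network is created later in this document:

# keep .1-.99 free for upstream L3 devices by restricting the allocation pool
quantum subnet-create --name public-subnet \
  --allocation-pool start=192.168.221.100,end=192.168.221.200 \
  --gateway 192.168.221.1 public 192.168.221.0/24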

Installation

The installation of the nodes will be in the following order:

  1. Controller Node
  2. Network Node
  3. Compute Node

Install the Controller Node (control03)

Preparing Ubuntu 12.04

Install Ubuntu 12.04 (64-bit/amd64) from CD/ISO or via an automated install (e.g. kickstart). Use the networking information above to configure your network properties. Select ssh-server as the only additional package.

  • Use sudo mode or run from root account for the entire installation:
sudo su
  • You will receive the following error when running apt-get update:
GPG error: http://128.107.252.163 folsom-proposed InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E8CC67053ED3B199
  • Add the apt key to remove the error:
gpg --keyserver hkp://pgpkeys.mit.edu --recv-keys E8CC67053ED3B199
gpg --armor --export E8CC67053ED3B199 | apt-key add -
  • Note: If you have issues using the pgpkeys.mit.edu site you can use keyserver.ubuntu.com instead:
gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys E8CC67053ED3B199
  • Update your system:
apt-get update && apt-get dist-upgrade -y

Networking

  • Controller Node (control03) /etc/network/interfaces:
# The loopback network interface
auto lo
iface lo inet loopback

#Management Network
auto eth0 
iface eth0 inet static 
 address 192.168.220.43
 netmask 255.255.255.0
 gateway 192.168.220.1
 dns-nameservers 192.168.220.254
 dns-search dmz-pod2.lab

#VM Network with OVS in tunnel mode

auto eth0.223 
iface eth0.223 inet static 
 vlan-raw-device eth0
 address 10.0.0.43
 netmask 255.255.255.0

#Public/API Network: Bridged Interface

auto eth0.221
iface eth0.221 inet static 
 vlan-raw-device eth0
 address 192.168.221.43
 netmask 255.255.255.0
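  • Note: depending on how the base system was installed, the tagged sub-interfaces (eth0.221, eth0.223) may require the vlan package and the 8021q kernel module. If bringing up a sub-interface fails with a VLAN-related error, the following is a common fix (a minimal sketch; adjust to your environment):
apt-get install -y vlan
modprobe 8021q
echo "8021q" >> /etc/modules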

Time Synchronization

  • Install the NTP service:
apt-get install -y ntp
  • Configure NTP so that the controller synchronizes with an upstream server and can act as a time source for the compute nodes, falling back to its local clock if the upstream server is unreachable:
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
 service ntp restart
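  • To confirm that time synchronization is working, you can list the peers the NTP daemon is using (both the upstream server and the local clock entry should appear):
ntpq -p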

MySQL & RabbitMQ

  • Install MySQL. Note: you will be prompted for a root password for MySQL; record this password, as it will be needed later when logging in and creating databases:
apt-get install -y mysql-server python-mysqldb
  • Configure MySQL to listen on all interfaces rather than only on 127.0.0.1:
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
service mysql restart
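  • To verify that MySQL is now accepting connections on all interfaces, check the listening socket (the LISTEN line should show 0.0.0.0:3306):
netstat -ntlp | grep 3306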
  • Install RabbitMQ:
apt-get install -y rabbitmq-server
  • Create a RabbitMQ user account that will be used by OpenStack services:
rabbitmqctl add_user openstack_rabbit_user Cisco123
  • Create the RabbitMQ vhost for Quantum:
rabbitmqctl add_vhost /quantum
  • Set the permissions for the new RabbitMQ user account:
rabbitmqctl set_permissions -p / openstack_rabbit_user ".*" ".*" ".*"
rabbitmqctl set_permissions -p /quantum openstack_rabbit_user ".*" ".*" ".*"
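  • To verify that the RabbitMQ user, vhost and permissions were created correctly:
rabbitmqctl list_users
rabbitmqctl list_vhosts
rabbitmqctl list_permissions -p /quantum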

Keystone Installation

  • Start by installing the Keystone packages:
apt-get install -y keystone
  • Create a MySQL database for Keystone (use the root password created during the MySQL installation). Note: all service and database accounts in this document use the password Cisco123:
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystone_admin'@'%' IDENTIFIED BY 'Cisco123';
quit;
  • Edit /etc/keystone/keystone.conf to point to the new database:
[DEFAULT]
admin_token   = keystone_admin_token

[sql]
connection = mysql://keystone_admin:Cisco123@192.168.220.43/keystone
  • Test whether MySQL is listening on 192.168.220.43 for the Keystone database:
mysql -h192.168.220.43 -ukeystone_admin -pCisco123 keystone
  • Restart the identity service, then synchronize the database:
service keystone restart
keystone-manage db_sync
  • The keystone-data.sh script populates Keystone with users, tenants and services. Edit the variables at the top of the script to match your environment:
ADMIN_PASSWORD=${ADMIN_PASSWORD:-Cisco123}
export SERVICE_TOKEN="keystone_admin_token"
export SERVICE_ENDPOINT="http://192.168.220.43:35357/v2.0/"
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-services}
  • Run the script to populate the Keystone database with data (users, tenants, services):
./keystone-data.sh
  • The keystone-endpoints.sh script registers the service endpoints. Edit the variables at the top of the script to match your environment:
# MySQL definitions
MYSQL_USER=keystone_admin
MYSQL_DATABASE=keystone
MYSQL_HOST=192.168.220.43
MYSQL_PASSWORD=Cisco123

# Keystone definitions
KEYSTONE_REGION=RegionOne
SERVICE_TOKEN=keystone_admin_token
SERVICE_ENDPOINT="http://192.168.220.43:35357/v2.0"

# Other definitions.  This should be your Controller Node IP address.
MASTER="192.168.220.43"
  • Run the script to populate the Keystone database with service endpoints. Note: if you log out or reboot after running the keystone-data.sh script, you must re-export the following before running keystone-endpoints.sh:
export SERVICE_TOKEN="keystone_admin_token"
export SERVICE_ENDPOINT="http://192.168.220.43:35357/v2.0/"
./keystone-endpoints.sh
  • Create a simple credential file (named openrc in this example) with the following contents, and load it so you won't have to supply credentials for every command:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=Cisco123
export OS_AUTH_URL="http://192.168.220.43:5000/v2.0/"
export OS_AUTH_STRATEGY=keystone
export SERVICE_TOKEN=keystone_admin_token
export SERVICE_ENDPOINT=http://192.168.220.43:35357/v2.0/
  • Load it:
source openrc
  • To test Keystone, install curl and issue a token request:
apt-get install curl openssl -y
curl -d '{"auth": {"tenantName": "admin", "passwordCredentials":{"username": "admin", "password": "Cisco123"}}}' -H "Content-type: application/json" http://192.168.220.43:35357/v2.0/tokens | python -mjson.tool
  • Or you can use the Keystone command-line:
keystone user-list
keystone tenant-list
keystone service-list
keystone endpoint-list

Glance Installation

  • Install Glance packages:
apt-get install -y glance
  • Create a MySQL database for Glance (use root password that was created during original MySQL install):
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'Cisco123';
quit;
  • Update /etc/glance/glance-api-paste.ini with:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.220.43
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = glance
admin_password = Cisco123
  • Update the /etc/glance/glance-registry-paste.ini with:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.220.43
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = glance
admin_password = Cisco123
  • Update /etc/glance/glance-api.conf with:
sql_connection = mysql://glance:Cisco123@192.168.220.43/glance

[paste_deploy]
flavor = keystone
  • Update the /etc/glance/glance-registry.conf with:
sql_connection = mysql://glance:Cisco123@192.168.220.43/glance

[paste_deploy]
flavor = keystone
  • Restart the glance-api and glance-registry services:
service glance-api restart; service glance-registry restart
  • Synchronize the glance database (you may see a deprecation warning, which can be ignored):
glance-manage db_sync
  • Restart the services again to take into account the new modifications:
service glance-registry restart; service glance-api restart
  • Upload an image to Glance. Start by downloading the Ubuntu Precise cloud image to the Controller node and then uploading it to Glance:
wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

glance add name="precise" is_public=true container_format=ovf disk_format=qcow2 < precise-server-cloudimg-amd64-disk1.img
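  • The glance add syntax above comes from the legacy Glance CLI. If your installed python-glanceclient no longer accepts it, the equivalent upload with the newer syntax should be roughly as follows (a sketch, not verified against every client version):
glance image-create --name "precise" --is-public True --container-format ovf --disk-format qcow2 < precise-server-cloudimg-amd64-disk1.img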
  • Now list the images to see what you have just uploaded:
glance image-list
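  • To inspect the uploaded image in more detail and confirm its status is active, you can show it by ID (a sketch; substitute the ID reported by glance image-list):
glance image-show <image-id>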
