Cisco OpenStack Edition: Folsom Manual Install



OpenStack Folsom Manual Installation

Introduction

There are two common ways of installing OpenStack: manually or via automation.  Much attention is given to fully automating OpenStack deployment with tools such as Puppet, Chef, Juju, and others, and while these offer great advantages over manual configuration, they hide the inner workings from those who need to learn what is really happening during an OpenStack setup.  This document is for those who want to learn more about the OpenStack installation process for the Folsom release, using the OpenStack components described in the node roles below.

Dependencies

Operating System

The operating system used for this installation is Ubuntu 12.04 LTS (Precise).

Nodes

This document uses three physical servers (Cisco UCS B- or C-Series) to serve the roles of Controller, Compute, and Network node.  While physical servers are used in the instructions, nothing prevents you from using three virtual machines running on your virtualization/hypervisor of choice.  The three distinct node types used in this document are:

  • Controller Node
    • Runs Nova API, Nova Cert, Nova Consoleauth, Nova Novncproxy, Nova Scheduler, Novnc, Quantum Server, Quantum Plugin OVS, Glance API/Registry, and Keystone services
    • Provides control plane functionality for managing the OpenStack environment
  • Compute Node
    • Runs Nova Compute, Quantum Plugin OVS, and OVS Plugin Agent services
    • Provides the hypervisor role for running Nova instances (Virtual Machines)
  • Network Node
    • Runs Quantum DHCP, Quantum L3 Agent, Quantum Plugin OVS, OVS Plugin Agent, DNSMASQ Base and Util services
    • Provides network services such as DHCP, network access and routing for Nova instances running on the Compute node
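  • Note: As an aside not in the original steps, once the installation steps later in this document are complete, each node's services can be checked with Ubuntu 12.04's standard service command (the service names shown are the stock Ubuntu Folsom package names), for example on the Controller node:
    service nova-api status
    service quantum-server status
    service keystone status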

Network

The network design referenced in this document has three physically or logically (VLAN) separate networks.  While VLANs are used in this setup for access to the nodes, in Quantum environments with Open vSwitch (OVS) deployments, the connectivity between multiple hosts on virtual networks uses either VLANs or tunneling (GRE); a brief OVS GRE sketch follows the network list below.  GRE is easier to deploy, especially in larger environments, and does not suffer from the scalability limitations that VLANs do.  The networks are defined below:

  • Management and CIMC (Cisco Integrated Management Controller for UCS) Network
    • This network is used to perform management functions against the nodes. Examples include SSH access to the nodes; the controller node hosting Horizon also listens for incoming connections on this network.
    • An IP address for each node is required for this network.
    • This network typically employs private (RFC1918) IP addressing.
  • Public/API Network
    • This network is used for assigning floating IP addresses to instances for communication outside of the OpenStack cloud
    • The metadata service used for injecting information into instances (e.g., SSH keys) is attached to this network on the Controller node
    • The Controller node and Network node will have an interface attached to this network
    • An IP address for the Controller node is required for this network
    • This network typically employs publicly routable IP addressing if no external NATs are used upstream towards the Internet edge (Note: in this document all IP addressing for all interfaces comes out of various private addressing blocks)
  • Data Network (AKA: Private Network)
    • This network is used to provide connectivity to OpenStack instances (Virtual Machines)
    • The node interfaces attached to this network are used for Open vSwitch (OVS) GRE tunnel termination
    • In this document an IP address for each node is assigned
    • This network typically employs private (RFC1918) IP addressing
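  • Note: For illustration only (Quantum creates equivalent tunnels automatically once configured later in this document), a GRE tunnel between two hosts on the Data network can be expressed directly with OVS. The bridge and port names below are arbitrary examples, using the Controller and Compute node Data IPs from this setup:
    ovs-vsctl add-br br-tun
    ovs-vsctl add-port br-tun gre-compute01 -- set interface gre-compute01 type=gre options:remote_ip=10.0.0.51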

Figure 1 is used to help visualize the setup and to act as a reference for configuration steps later on in the document.  A summary of the network topology is as follows:

  • Controller Node
    • Hostname = control03
    • Single physical NIC used to logically separate three networks
      • eth0 connects to the Management/CIMC network which is on VLAN220 (VLAN 220 is the Native VLAN on the upstream Layer 2 switch)
        • eth0 IP address = 192.168.220.43
      • eth0.221 connects to the Public/API network on VLAN221
        • eth0.221 IP address = 192.168.221.43
      • eth0.223 connects to the Data network
        • eth0.223 IP address = 10.0.0.43
      • CIMC 0 connects to the Management/CIMC network
        • CIMC 0 IP address = 192.168.220.13
  • Compute Node
    • Hostname = compute01
    • Single physical NIC used to logically separate three networks
      • eth0 connects to the Management/CIMC network which is on VLAN220 (VLAN 220 is the Native VLAN on the upstream Layer 2 switch)
        • eth0 IP address = 192.168.220.51
      • eth0.223 connects to the Data network
        • eth0.223 IP address = 10.0.0.51
      • CIMC 0 connects to the Management/CIMC network
        • CIMC 0 IP address = 192.168.220.4
  • Network Node
    • Hostname = control02
    • Single physical NIC used to logically separate three networks
      • eth0 connects to the Management/CIMC network which is on VLAN220 (VLAN 220 is the Native VLAN on the upstream Layer 2 switch)
        • eth0 IP address = 192.168.220.42
      • eth0.221 connects to the Public/API network on VLAN221
        • eth0.221 No IP address is set for this interface (see notes later in document on OVS/Quantum setup)
      • eth0.223 connects to the Data network
        • eth0.223 IP address = 10.0.0.42
      • CIMC 0 connects to the Management/CIMC network
        • CIMC 0 IP address = 192.168.220.3


Network-topology-v1.0.png







  • Other Network Services
    • DNS: In this setup an external DNS server is used for name resolution, both for the OpenStack nodes themselves and for external names.
    • NTP: In this setup external NTP server(s) are used for time synchronization.
    • Physical Network Switches: Each node in this setup is physically attached to a Cisco Nexus switch acting as a Top-of-Rack access layer device. Trunking is configured on each interface connecting to the eth0 NIC of each node. Note: Upstream routers/aggregation-layer switches will most likely terminate the L3 VLAN interfaces. If they are deployed redundantly with a First Hop Redundancy Protocol such as HSRP or VRRP, be careful which IP addresses are assigned to the physical L3 switches/routers, as they may conflict with the IP address of the Quantum router on the public subnet (usually the .3 address). For example, if you are using HSRP with .1 as the standby IP address, .2 as the first L3 switch IP, and .3 as the second L3 switch IP, you will receive a duplicate IP address error on the second L3 switch. This can be worked around by using high-order IPs on your upstream L3 device or by altering the Quantum subnet configuration at creation time (more on this later).
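    • Note: If your external DNS server does not resolve the node hostnames, one workaround (an addition to this guide, not an original step) is to add static entries to /etc/hosts on each node, using the management IPs from the topology above:
      192.168.220.43  control03
      192.168.220.42  control02
      192.168.220.51  compute01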

Installation

The installation of the nodes will be in the following order:

  1. Controller Node
  2. Network Node
  3. Compute Node

Install the Controller Node (control03)

Preparing Ubuntu 12.04

Install Ubuntu 12.04 (AMD 64-bit) from CD/ISO or via an automated install (e.g., kickstart). Use the networking information above to configure your network properties. Select ssh-server as the only additional package.

  • Use sudo or run from the root account for the entire installation:
    sudo su
  • You will receive the following error when trying to run apt-get update, because the repository's public key is not yet installed:
    GPG error: http://128.107.252.163 folsom-proposed InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E8CC67053ED3B199
  • Add the apt key to remove the error:
    gpg --keyserver hkp://pgpkeys.mit.edu --recv-keys E8CC67053ED3B199
    gpg --armor --export E8CC67053ED3B199 | apt-key add -
  • Note: If you have issues using the pgpkeys.mit.edu site you can use keyserver.ubuntu.com instead:
    gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys E8CC67053ED3B199
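  • To confirm the key was imported, you can list the apt keys and look for the short key ID (the last 8 characters of the long ID):
    apt-key list | grep 3ED3B199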
  • Update your system:
    apt-get update && apt-get dist-upgrade -y

Networking

  • Controller Node (control03) /etc/network/interfaces:
    # The loopback network interface
    auto lo
    iface lo inet loopback

    # Management Network
    auto eth0
    iface eth0 inet static
        address 192.168.220.43
        netmask 255.255.255.0
        gateway 192.168.220.1
        dns-nameservers 192.168.220.254
        dns-search dmz-pod2.lab

    # VM Network with OVS in tunnel mode
    auto eth0.223
    iface eth0.223 inet static
        vlan-raw-device eth0
        address 10.0.0.43
        netmask 255.255.255.0

    # Public/API Network: Bridged Interface
    auto eth0.221
    iface eth0.221 inet static
        vlan-raw-device eth0
        address 192.168.221.43
        netmask 255.255.255.0
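  • Note: The eth0.221 and eth0.223 subinterfaces require 802.1q VLAN support. If the vlan package is not already present, install it and load the kernel module before bringing the subinterfaces up (a standard Ubuntu step, not from the original guide):
    apt-get install -y vlan
    modprobe 8021q
    echo "8021q" >> /etc/modules
    ifup eth0.221 eth0.223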



MySQL & RabbitMQ

  • Install MySQL. Note: You will be prompted for the MySQL root password. Document this password, as it will be needed later when we log in and create databases:
    apt-get install -y mysql-server python-mysqldb
  • Configure MySQL to accept all incoming requests, then restart the service:
    sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
    service mysql restart
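  • To verify that MySQL is now listening on all interfaces, check the bind_address variable (you will be prompted for the MySQL root password; it should report 0.0.0.0):
    mysql -u root -p -e "SHOW VARIABLES LIKE 'bind_address';"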

  • Install RabbitMQ:
    apt-get install -y rabbitmq-server
  • Create a RabbitMQ user account that will be used by OpenStack services:
    rabbitmqctl add_user openstack_rabbit_user Cisco123
  • Create the RabbitMQ vhost for Quantum:
    rabbitmqctl add_vhost /quantum
  • Set the permissions for the new RabbitMQ user account on both vhosts:
    rabbitmqctl set_permissions -p / openstack_rabbit_user ".*" ".*" ".*"
    rabbitmqctl set_permissions -p /quantum openstack_rabbit_user ".*" ".*" ".*"
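  • You can verify the RabbitMQ user, vhosts, and permissions with:
    rabbitmqctl list_users
    rabbitmqctl list_vhosts
    rabbitmqctl list_permissions -p /quantum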
