OpenStack:Havana:2 role


This document walks you through a sample configuration to get a 2-role (multi-node) deployment working with the Cisco OpenStack Installer (COI) Havana Release 2 (H.2).

Assumptions:

  • Cisco OpenStack Installer requires that you have at least two physically or logically separated IP networks.
  • This deployment has three nodes: a build node, a control node, and a compute node.
  • All nodes run Ubuntu 12.04 LTS, installed manually or PXE-provisioned via Cobbler, on physical hardware or virtual machines.
  • Use this document in conjunction with the Havana deployment guide.

Build node Configuration:

  • Follow the build node setup instructions from the Havana installation guide. After install.sh completes, you will need to configure the data model to reflect your topology.
  • Update /etc/puppet/data/role_mappings.yaml to map the short hostname of each node to its build, controller, or compute role:
build-server: build
control-server: controller
compute-server: compute
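The role is keyed on each node's short hostname as reported by Facter. A quick sanity check (just a sketch, assuming the hostnames used above) is to compare the two values on each node:

 # Both commands should print the key used in role_mappings.yaml
 hostname -s
 facter hostname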
  • Update /etc/puppet/data/hiera_data/user.common.yaml

diff --git a/data/hiera_data/user.common.yaml b/data/hiera_data/user.common.yaml
index 600fc31..a45bbbc 100644
--- a/data/hiera_data/user.common.yaml
+++ b/data/hiera_data/user.common.yaml
@@ -61,7 +61,7 @@ coe::base::controller_hostname: control-server
 # services on the control node.  In the compressed_ha or full_ha scenarios,
 # this will be an address to be configured as a VIP on the HAProxy
 # load balancers, not the address of the control node itself.
-controller_public_address: 192.168.242.10
+controller_public_address: 192.168.255.191
 
 # The protocol used to access API services on the control node.
 # Can be 'http' or 'https'.
@@ -71,13 +71,13 @@ controller_public_protocol: 'http'
 # In the compressed_ha or full_ha scenarios, this will be an address
 # to be configured as a VIP on the HAProxy load balancers, not the address
 # of the control node itself.
-controller_internal_address: 192.168.242.10
+controller_internal_address: 192.168.255.191
 
 # The IP address used for management functions (such as monitoring)
 # on the control node.  In the compressed_ha or full_ha scenarios, this will
 # be an address to be configured as a VIP on the HAProxy
 # load balancers, not the address of the control node itself.
-controller_admin_address: 192.168.242.10
+controller_admin_address: 192.168.255.191
 
 # Control node interfaces.
 # internal_ip be used for the ovs local_ip setting for GRE tunnels.
@@ -86,7 +86,7 @@ controller_admin_address: 192.168.242.10
 # (which is predefined already in $controller_node_internal, and the internal
 # interface for compute nodes.  It is generally also the IP address
 # used in Cobbler node definitions.
-internal_ip: "%{ipaddress_eth3}"
+internal_ip: "%{ipaddress_eth0}"
 
 # The external_interface is used to provide a Layer2 path for
 # the l3_agent external router interface.  It is expected that
@@ -95,34 +95,34 @@ internal_ip: "%{ipaddress_eth3}"
 # assuming that the first non "network" address in the external
 # network IP subnet will be used as the default forwarding path
 # if no more specific host routes are added.
-external_interface: eth2
+external_interface: eth1
 
 # The public_interface will have an IP address reachable by
 # all other nodes in the openstack cluster.  This address will
 # be used for API Access, for the Horizon UI, and as an endpoint
 # for the default GRE tunnel mechanism used in the OVS network
 # configuration.
-public_interface: eth1
+public_interface: eth0
 
 # The interface used for VM networking connectivity.  This will usually
 # be set to the same interface as public_interface.
-private_interface: eth1
+private_interface: eth0
 
 ### Cobbler config
 # The IP address of the node on which Cobbler will be installed and
 # on which it will listen.
-cobbler_node_ip: 192.168.242.10
+cobbler_node_ip: 192.168.255.194
 
 # The subnet address of the subnet on which Cobbler should serve DHCP
 # addresses.
-node_subnet: '192.168.242.0'
+node_subnet: '192.168.255.0'
 
 # The netmask of the subnet on which Cobbler should serve DHCP addresses.
 node_netmask: '255.255.255.0'
 
 # The default gateway that should be provided to DHCP clients that acquire
 # an address from Cobbler.
-node_gateway: '192.168.242.1'
+node_gateway: '192.168.255.1'
 
 # The admin username and crypted password used to authenticate to Cobbler.
 admin_user: localadmin
@@ -159,7 +159,7 @@ install_drive: /dev/sda
 # This should generally be an address that is accessible via
 # horizon.  You can set it to an actual IP address (e.g. "192.168.1.1"),
 # or use facter to get the IP address assigned to a particular interface.
-nova::compute::vncserver_proxyclient_address: "%{ipaddress_eth3}"
+nova::compute::vncserver_proxyclient_address: "%{ipaddress_eth0}"
 
 ### The following are passwords and usernames used for
 ### individual services.  You may wish to change the passwords below
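Because user.common.yaml pulls several values from Facter facts such as %{ipaddress_eth0}, it is worth confirming what Facter actually reports on the build node before applying the manifest. A minimal check, assuming eth0 and eth1 are the interfaces used in the diff above:

 facter interfaces
 facter ipaddress_eth0 ipaddress_eth1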

  • Run puppet apply to get your build node configured.
puppet apply -v /etc/puppet/manifests/site.pp

Once puppet finishes, your build node should be ready to serve as a puppet master and Cobbler server.
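Before moving on, you can spot-check the two services that the other nodes will depend on. This is only a sketch, assuming the default puppet master port (8140) and the standard Cobbler CLI:

 # Is the puppet master listening?
 netstat -tlnp | grep 8140
 # Has Cobbler imported a distro and profile?
 cobbler list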

Control and compute nodes:

Now that the build server is up, let's bring up the control and compute nodes. If you are using Cobbler, you should be able to provision these nodes via PXE.

If you prefer to do a manual setup:

  • Install Ubuntu 12.04 on the control and compute nodes.
  • Make sure the hostnames match the ones defined in role_mappings.yaml
  • Install git
 apt-get install git 
  • Clone the Cisco OpenStack Installer repository and run setup
 
cd /root && git clone -b havana https://github.com/CiscoSystems/puppet_openstack_builder && cd puppet_openstack_builder && git checkout h.2
  • Export your build server IP address
cd install_scripts
export build_server=192.168.255.194
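Before running the setup script, it can save time to confirm that this node can actually reach the build server on the puppet master port. A minimal check, assuming the address exported above and the default port 8140:

 ping -c 3 $build_server
 nc -zv $build_server 8140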
  • Now run the setup script to get your node ready to run the right version of puppet. This script also makes sure you have the correct hostname entries in /etc/hosts.
bash setup.sh
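Once setup.sh completes, a quick check that it did what the step above describes (a sketch, assuming the build-server hostname used in this guide):

 puppet --version
 grep build-server /etc/hosts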
  • Now your control and compute nodes are ready to run puppet. Begin the control/compute build by running the puppet agent:
 puppet agent -td --server=build-server.domain.name --pluginsync
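If the agent run stalls waiting for a certificate, the build node may not have signed it yet; whether this happens depends on how autosigning is configured on your build node. In that case, sign the certificate manually from the build node (the hostname below is an assumption matching this example):

 puppet cert list
 puppet cert sign control-server.domain.name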

After the puppet runs finish, you should have a successful OpenStack install.

Verification:

You can verify that all of the OpenStack Nova services were installed and running correctly by checking the Nova service list:

root@control-server:~# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth control-server                       internal         enabled    :-)   2014-03-11 17:34:17
nova-scheduler   control-server                       internal         enabled    :-)   2014-03-11 17:34:16
nova-conductor   control-server                       internal         enabled    :-)   2014-03-11 17:34:13
nova-compute     compute-server                       nova             enabled    :-)   2014-03-11 17:34:13
nova-cert        control-server                       internal         enabled    :-)   2014-03-11 17:34:17
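You can also verify the deployment from the command line. The commands below are only a sketch: they assume the installer placed an admin credentials file at /root/openrc on the control node (adjust the path if yours differs) and that the default Neutron/OVS networking referenced in user.common.yaml is in use:

 source /root/openrc
 nova service-list
 neutron agent-list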

You can connect to the OpenStack Dashboard by entering:


http://ip-of-your-control-server

using username admin and password Cisco123.
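If the dashboard does not load, a quick way to check whether the web server on the control node is answering at all (a sketch, using the controller_public_address configured earlier) is:

 curl -sI http://192.168.255.191/ | head -n 1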
