Openstack:Havana-Openstack-Installer

Cisco Openstack Installer

The Havana release represents a significant departure from previous Cisco Openstack Installers. In the past, users were required to edit 'site.pp', a puppet manifest that describes their environment; this has been replaced by a yaml-based data model. The change provides significantly more flexibility, as a user can now modify almost any aspect of their Openstack install without needing to know any puppet.
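For example, the Openstack release and the package repository to install from can now be chosen purely through yaml data rather than by editing a manifest. The excerpt below is illustrative only, using keys that are documented later on this page; the file itself is created in the Configuration and Customisation section:

# /etc/puppet/data/hiera_data/user.yaml (illustrative excerpt)
coe::base::openstack_release: 'havana'
coe::base::package_repo: 'cisco_repo'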

These instructions require git to be installed. On RedHat, run 'yum install git'; on Ubuntu, run 'apt-get install git'.
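For example (run as root, or prefix with sudo):

# RedHat and derivatives
yum install git

# Ubuntu
apt-get install git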

Build server

First, clone the Cisco openstack installer repository:

git clone -b coi-development https://github.com/CiscoSystems/puppet_openstack_builder

And select cisco as the vendor:

export vendor=cisco

A 'scenario' is a collection of roles that perform different tasks within an Openstack cluster. For example, if you want an environment where some nodes run all services, the 'all_in_one' scenario is suitable; if you want to separate control and compute nodes, the '2_role' scenario does that. Here are the currently available scenarios and their roles:

- Scenario: all_in_one
  - Roles: all_in_one, compute
  - This scenario has a node with puppet services + control services + compute services, and optionally additional nodes with just compute services
- Scenario: 2_role
  - Roles: build, control, compute, swift_proxy, swift_storage
  - This scenario separates the puppet, control, compute, and swift services onto separate nodes
- Scenario: full_ha
  - Roles: build, control, compute, swift_proxy, swift_storage, load_balancer
  - This scenario is similar to 2_role, but adds a load balancer role and is used for HA deployments

To select a scenario, export the appropriate environment variable like this:

export scenario=2_role
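
If you instead want the single-node scenario described above, export its name the same way:

export scenario=all_in_one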

Now run a script that will prepare puppet, modules and repositories on the build node:

cd puppet_openstack_builder/install-scripts
./install.sh

Configuration and Customisation

At this point the data model has been installed, and any required customizations should now be made to it. If you selected the all_in_one scenario, some sample data has already been placed in
/etc/puppet/data/hiera_data/user.yaml
which you should review for correctness. If the file is not there, create it and add any config options that are required. Some of the common ones are shown below:
vi /etc/puppet/data/hiera_data/user.yaml

Required Config

# Set the hostname of the build node
coe::base::build_node_name: build-server

# Set the hostname of the control node
coe::base::controller_hostname: control-server

# Set the IP address of the control node
coe::base::controller_node_internal: 192.168.100.20

Software Repository Config

# Configure which repository to get packages from. Can be 'cisco_repo' or 'cloud_archive'
# The former is an apt repo maintained by Cisco and the latter is the Ubuntu Cloud Archive
coe::base::package_repo: 'cisco_repo'

# Which Openstack release to install - can be 'grizzly' or 'havana'. Other versions currently untested.
coe::base::openstack_release: 'havana'

# If cisco_repo is used, the mirror location can be configured as either ftp or http:
coe::base::openstack_repo_location: 'ftp://ftpeng.cisco.com/openstack'
coe::base::openstack_repo_location: 'http://openstack-repo.cisco.com/openstack'

# Cisco maintains a supplemental repo with packages that aren't core to Openstack, but
# are frequently used, such as ceph and mysql-galera. It can be enabled using ftp or http
coe::base::supplemental_repo: 'ftp://ftpeng.cisco.com/openstack/cisco_supplemental'
coe::base::supplemental_repo: 'http://openstack-repo.cisco.com/openstack/cisco_supplemental'

# If you are using the ubuntu repo, you can change from 'main' (default) to 'updates'
coe::base::ubuntu_repo: 'updates'

# Set a proxy server
coe::base::proxy: '192.168.100.100'

# Set a gateway server
coe::base::node_gateway: '192.168.100.101'

Connectivity Config

# The DNS Domain
domain: domain.name

# A list of NTP servers
ntp_servers: 
  - time-server.domain.name

# Used to tell Neutron agents which IP to bind to.
# We can get the ip address of a particular interface
# using a fact.
internal_ip: "%{ipaddress_eth1}"

# Similarly, VNC needs to be told the IP to bind to
nova::compute::vncserver_proxyclient_address: "%{ipaddress_eth1}"
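
If you are unsure which fact name matches the interface you want, you can check it on the node with facter, which is installed alongside puppet (a quick sanity check rather than a required step):

facter ipaddress_eth1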

# This interface will be used to NAT to the outside world. It
# only needs to be on the control node(s)
external_interface: eth2

# This interface is used for communication between openstack
# components, such as the database and message queue
public_interface: eth1

# This interface is used for VM network traffic
private_interface: eth1

# This will be used to tell openstack services where the DB and
# other internally consumed services are
controller_internal_address: 192.168.100.10

# This is used in the HA scenario to set the public keystone
# endpoint
controller_public_address: 192.168.100.10

Passwords

# Users can either set a single password for all services, plus
# the secret token for keystone:
secret_key: secret
password: password123

# Or passwords can be specified for each service
cinder_db_password: cinder_pass
glance_db_password: glance_pass
keystone_db_password: key_pass
nova_db_password: nova_pass
network_db_password: quantum_pass
database_root_password: mysql_pass
cinder_service_password: cinder_pass
glance_service_password: glance_pass
nova_service_password: nova_pass
ceilometer_service_password: ceilometer_pass
admin_password: Cisco123
admin_token: keystone_admin_token
network_service_password: quantum_pass
rpc_password: openstack_rabbit_password
metadata_shared_secret: metadata_shared_secret
horizon_secret_key: horizon_secret_key
ceilometer_metering_secret: ceilometer_metering_secret
ceilometer_db_password: ceilometer
heat_db_password: heat
heat_service_password: heat_pass

Nodes

You will also need to map the roles in your selected scenario to hostnames. This is done in /etc/puppet/data/role_mappings.yaml like so:

vi /etc/puppet/data/role_mappings.yaml
control-server: controller
control-server01: controller
control-server02: controller
control-server03: controller

compute-server: compute
compute-server01: compute
compute-server02: compute
compute-server03: compute

all-in-one: all_in_one

build-server: build

load-balancer01: load_balancer
load-balancer02: load_balancer

swift-proxy01: swift_proxy
swift-proxy02: swift_proxy

swift-storage01: swift_storage
swift-storage02: swift_storage
swift-storage03: swift_storage

Now run the master script to turn the build node into a puppet master:

sh master.sh

Other Nodes

After setting up the build node, all other nodes can be deployed using these commands:

export build_server_ip=<YOUR_BUILD_NODE_IP>
bash <(curl -fsS https://raw.github.com/stackforge/puppet_openstack_builder/master/install-scripts/setup.sh)
puppet agent -td --server=build-server.domain.name
