OpenStack: Icehouse Installer Testing

Verifying the Compute Nodes

After you complete the installation, all the defined compute nodes should be running.

Procedure

At the command line on the control node, view the running compute nodes by entering:

nova-manage service list

This command verifies that the OpenStack Nova services were installed and are running correctly.

The system returns a table that looks like the following:

Binary   Host                 Zone             Status     State Updated_At
nova-consoleauth all-in-one           internal         enabled    :-)   2014-03-11 17:34:17
nova-scheduler   all-in-one           internal         enabled    :-)   2014-03-11 17:34:16
nova-conductor   all-in-one           internal         enabled    :-)   2014-03-11 17:34:13
nova-compute     all-in-one           nova             enabled    :-)   2014-03-11 17:34:13
nova-cert        all-in-one           internal         enabled    :-)   2014-03-11 17:34:17

Using the Monitoring Interface

With the OpenStack deployment running, the Horizon monitoring interface is available. To log into the monitoring interface, do the following.

Procedure

Step 1: In your browser, navigate to http://control-node-IP/horizon/.

Step 2: Log into Horizon with the admin username and password defined in the site.pp file.

If you did not change the defaults, the username is admin, and the password is Cisco123.

Step 3: Examine the compute nodes in the interface:

a) In the Navigation pane on the left side of the interface, click the Admin tab.
b) In the System Panel on the Admin tab, choose Hypervisors.
The Work pane shows a table of all the running compute nodes.
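
Alternatively, you can list the same hypervisors from the command line. This assumes the admin credentials are loaded in your shell environment (see the note at the start of the next section):

nova hypervisor-list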

Creating a Network

This section describes how to create a public network to be used for instances (also called virtual machines, or VMs) to gain external (public) connectivity. VMs are connected externally through a router you create on the control node and are connected to the router through a private GRE network.
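
Note: The neutron, nova, and glance commands in the following steps require admin credentials in your shell environment. The exact values depend on your deployment; a minimal sketch, assuming the default admin account from site.pp, looks like the following (alternatively, source the openrc file if your installer created one):

# Example only: adjust these values to match your deployment before running client commands.
export OS_USERNAME=admin
export OS_PASSWORD=Cisco123
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://control-node-IP:5000/v2.0/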

Step 1: Create a public network. On a control node, enter the following:

neutron net-create public_network_name \
--router:external=True

Step 2: Create a subnet that is associated with the new public network.

Note: The range of IP addresses in your subnet must not conflict with addresses already in use by other devices on the subnet. For example, if an upstream gateway uses addresses in the public subnet range (192.168.81.1, 192.168.81.2, and so on), your allocation range must start in a nonoverlapping range.

IP addresses are examples only. Use IP addresses that are consistent with your network configuration and policies.

neutron subnet-create --name public_subnet_name \
--allocation-pool start=192.168.220.20,end=192.168.220.253 \
public_network_name 192.168.220.0/24
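
To confirm that the public network and subnet were created, you can list them; the exact output format depends on your neutron client version:

neutron net-list
neutron subnet-list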

Step 3: Create a private network and subnet to attach instances to.

neutron net-create private_network_name
neutron subnet-create --name private_subnet_name \
private_network_name 10.10.10.0/24 \
--dns_nameservers list=true nameserver1 nameserver2

Step 4: Create a Neutron router.

neutron router-create os_router_name

Step 5: Associate the Neutron router interface with the previously created private subnet.

neutron router-interface-add os_router_name \
private_subnet_name

Step 6: Set the default gateway (previously created public network) for the Neutron router.

neutron router-gateway-set os_router_name \
public_network_name
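
To confirm that the gateway and interface were attached, you can display the router; the external_gateway_info field should reference the public network:

neutron router-show os_router_name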

Step 7: Modify the default Neutron security group to allow for ICMP and SSH (for access to the instances).

neutron security-group-rule-create --protocol icmp --direction ingress default
neutron security-group-rule-create --protocol tcp --port-range-min 22 \
--port-range-max 22 --direction ingress default
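
To confirm that the rules were added to the default security group, you can list them:

neutron security-group-rule-list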

Creating a Tenant Instance

If you have one or more compute nodes running, you can create a VM on the OpenStack cloud.

Prerequisites

You must have a running compute node and a node running the Glance image database.

You must have a control node with private and public networks configured as described in [[OpenStack: Installing Icehouse#Creating a Network|Creating a Network]].

Procedure

Step 1: Load a VM image into Glance:

a) Download an image to deploy.
A popular small test image is CirrOS.
wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
b) Store the image in Glance.
glance image-create --name cirros-x86_64 --is-public True \
--disk-format qcow2 --container-format ovf --file cirros-0.3.1-x86_64-disk.img \
--progress
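
To confirm that the image was registered and is active, you can list the images in Glance:

glance image-list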

Step 2: Boot an instance:

a) Enter the neutron net-list command to get a list of networks.
neutron net-list
b) Boot the instance, using the ID of the private network (for example, private_network_name from #Creating a Network) from the table displayed by the net-list command in the previous step as the value of --nic net-id=:
nova boot --image cirros-x86_64 --flavor m1.tiny --key_name aio-key \
--nic net-id=private_net-id test_vm_name
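
Note: The boot command above assumes that a Nova keypair named aio-key already exists in your environment. If it does not, you can generate and register one first (the key name and file name below are examples only):

nova keypair-add aio-key > aio-key.pem
chmod 600 aio-key.pem
nova keypair-list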

Step 3: Verify that your instance has spawned successfully.

Note: The first instance launched on the system can take longer to boot than subsequent instances.

nova show test_vm_name

Step 4: Get the internal fixed IP of your instance with the following command:

nova show test_vm_name
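
The fixed IP also appears in the Networks column of the instance list, which can be quicker when several instances are running:

nova list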

Step 5: Verify connectivity to the instance from the control node.

Note: Because namespaces are being used in this model, you must run the commands from the context of the qrouter using the ip netns exec qrouter syntax.

a) List the routers to get the router ID (the qrouter namespace name is qrouter- followed by this ID).
neutron router-list
Alternatively, you can get the qrouter ID using the ip command.
ip netns
b) Connect to the qrouter and get a list of its addresses.
ip netns exec qrouter-neutron_router_id ip addr list
c) Ping the instance from the qrouter.
ip netns exec qrouter-neutron_router_id ping fixed_ip_of_instance
d) Use SSH to get into the instance from the qrouter.
ip netns exec qrouter-neutron_router_id ssh cirros@fixed_ip_of_instance

Step 6: Create a floating IP address for the VM.

a) Get a list of the networks.
neutron net-list
b) Get a list of the ports.
neutron port-list
c) Using the port ID of the instance's private interface and the ID of the public network from the previous lists, create and associate the floating IP.
neutron floatingip-create --port_id internal_VM_port-id \
public_net-id
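
To confirm the association, you can list the floating IPs; the instance's fixed IP and the new floating IP should appear in the same row:

neutron floatingip-list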

Step 7: From an external host, ping and SSH to your instance using the floating IP address.

ping floating_ip
ssh cirros@floating_ip
