OpenStack/sandbox/DevStack Installation

Introduction

The CSR1kv can be used in Openstack in a variety of ways:

A: CSR1kv can be used to implement Neutron’s L3 Routing service API. In this case, the life cycle of the CSR1kv VMs is managed by an L3 routing service plugin using Nova. This service plugin also takes care of making the appropriate configurations in the CSR1kv VMs to realize the service API. The CSR1kv VMs are therefore not visible to regular users; they are hidden in a separate admin tenant. The only abstraction visible to the user is the Neutron Router. Neutron's L3 Routing service API in CSR1kv can be used either with the ML2 plugin or with the N1kv plugin. The installation steps are described in Sections 4.4.15 and 4.4.16, respectively.

B: CSR1kv can also be used to implement Neutron’s Firewall service API. This implementation is dependent on the L3 Routing service in CSR1kv described in section A above. Consequently, the CSR1kv VMs are never visible to regular users in this case either. The users only see the Neutron Firewall resources. The installation steps for FWaaS in CSR1kv are described in Section 8.

C: CSR1kv can also be used to implement Neutron’s IPSEC VPN service API. This implementation is dependent on the L3 Routing service in CSR1kv described in section A above. As described in section B above, the CSR1kv VMs are never visible to regular users, who only see the Neutron VPN resources. The installation steps for VPNaaS in CSR1kv are described in Section 9.

D: Finally, CSR1kv can be used as a regular Nova VM managed directly by the user. The CSR1kv VM is then like any other VM, e.g., Cirros or Ubuntu. The user is fully responsible for making configurations inside the CSR1kv, as well as for managing its life cycle. The installation steps for the CSR1kv as a tenant VM are described in Section 6.

Setting Up Devstack in Kilo

This page describes how to create and run a Devstack setup with the Cisco Cloud Service Router 1000V (CSR1kv) routing service plugin for the OpenStack Juno release.

The routing service plugin for Cisco CSR1kv is available in the upstream community Openstack Neutron repo on Github.

Note: Devstack is a constantly evolving set of bash scripts used to build a complete Openstack environment. The primary target user group is OpenStack developers.

While Devstack is very useful, its high rate of change means that you will sometimes encounter problems.

Understanding of DevStack and various tools, including pip and apt-get, is essential to efficient use of DevStack. If you are not comfortable with these tools then you will probably have limited success running the CSR plugin using what is, after all, a development environment.

Installing Base OS

Ubuntu

Install Ubuntu 14.04.1 LTS Linux Server

Install Ubuntu

Red Hat

Install the RHEL 7, CentOS 7, or Fedora 20 version of Linux Server.

[1]

Installing OpenStack

Ubuntu Devstack Installation and Setup for CSR1kv with N1kv

Prerequisites

The host on which you install DevStack must meet the following prerequisites:

  • The server must be running one of the operating system flavors listed above (Ubuntu or Red Hat) as the host OS
  • Ensure that any previous DevStack run is removed. In particular, ensure that any Openvswitch (OVS) or Linux bridges (and ports on such bridges) are removed (see the example after this list).
  • The Ubuntu NetworkManager service should not be running on the host. NetworkManager attempts to manage the virtual interfaces that N1KV creates, and that leads to installation failure. It is best to stop (and possibly remove) NetworkManager by entering the following:
sudo service network-manager stop

Warning: Stopping the NetworkManager service causes the host to lose network connectivity. Connectivity can be restored by doing xxx
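
For example, one way to check for and remove leftover OVS bridges from an earlier run (a minimal sketch; br-int and br-ex are the usual DevStack bridge names, but delete whatever list-br actually reports on your system):

# List existing OVS bridges left over from a previous stack
sudo ovs-vsctl list-br
# Delete the leftovers (repeat for each bridge reported above)
sudo ovs-vsctl del-br br-int
sudo ovs-vsctl del-br br-ex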

Installation

Follow this procedure to install and configure DevStack with the CSR routing service plugin.

Step 1. Download the DevStack branch from the csr1kv_for_routing_juno_minimal Cisco Devstack repo:

git clone -b csr1kv_for_routing_juno_minimal https://github.com/CiscoSystems/devstack.git

This devstack branch points to the master branch in the Openstack Neutron repo.

Step 2. Download the Neutron master branch from the Openstack Neutron repo on Github
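
For example (the /opt/stack destination is an assumption; it is DevStack's usual default):

git clone https://github.com/openstack/neutron.git /opt/stack/neutron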

Step 3. Select and edit the localrc file. There are three localrc files in the devstack branch. We recommend you use localrc.n1kv_and_csr1kv, which starts a full DevStack instance with all services and with the n1kv plugin.

Copy this file to ~/localrc and make the changes noted in the file.
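
A minimal example, assuming you are in the top-level directory of the devstack checkout:

cp localrc.n1kv_and_csr1kv ~/localrc
vi ~/localrc    # make the changes noted in the file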

Step 4. Run the stack script:

./stack.sh

Once stack completes, you should have a working environment. The local.sh script automatically runs the configuration scripts needed to create the L3 Admin Tenant, the management network, register images, flavors, and so on. A CSR1kv VM is created for each Neutron router that is created.
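
As a quick sanity check (names here are illustrative), creating a Neutron router as the demo user should cause a CSR1kv VM to appear in the hidden admin tenant shortly afterwards:

source openrc admin demo
neutron router-create test-router
# As admin, list instances across all tenants; a new CSR1kv VM should show up for the router
source openrc admin admin
nova list --all-tenants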

Note: If you encounter problems, see the Troubleshooting section below.

Configuration

There are three localrc files in the devstack branch. We recommend you use localrc.n1kv_and_csr1kv, which starts a full DevStack instance with all services and with the n1kv plugin.

Note: Another localrc file is available; see bottom of page.

Post Installation:

  • Enable the login password and SSH in the Ubuntu cloud image by pasting the following into the post-creation script data field in the Horizon launch instance window. The default user is ubuntu.
#cloud-config
password: cisco1
chpasswd: { expire: false }
ssh_pwauth: true
  • Remove the distribution-packaged six module by entering the following:
sudo mv /usr/lib/python2.7/dist-packages/six.py /usr/lib/python2.7/dist-packages/six.py.old
sudo rm /usr/lib/python2.7/dist-packages/six.pyc

Note: Failing to remove these files will result in an installation error.

  • We recommend that you use the Ubuntu 14.04 cloud image for your VMs. Cirros VMs may fail to obtain DHCP addresses.
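
If DevStack has not already registered the Ubuntu 14.04 cloud image for you, one way to add a downloaded copy to Glance manually (the file name here is an assumption; use whatever image file you downloaded):

glance image-create --name ubuntu-14.04-server-cloudimg-amd64 --disk-format qcow2 \
    --container-format bare --is-public true --file trusty-server-cloudimg-amd64-disk1.img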

Obtaining Images

N1kv VSM, VEM and CSR1kv

Currently, unsolved data plane problems in N1kV prevent a single virtual supervisor module (VSM) and virtual Ethernet module (VEM) combination from working on all servers. We therefore recommend that you try the following VSMs and VEMs in combination until you find ones that work on your system.

Download VSMs from the path below on an XDM2 machine: /auto/n1k_barracuda/daily_build/barracuda/nexus/<version_number>/src/build/images/gdb The images, in order of preference, are:

n1000v-dk9.5.2.1.SK1.3.0.135.iso
n1000v-dk9.5.2.1.SK1.3.0.124.iso
n1000v-dk9.5.2.1.SK1.3.0.105.iso

Download VSMs from the path below on an XDM2 machine: /auto/n1k_barracuda/daily_build/barracuda/nexus/<version_number>/src/swordfish/build/images/ubuntu-12.04 The images, in order of preference, are:

nexus_1000v_vem-12.04-5.2.1.SK1.3.0.135.S0-0gdb.deb
nexus_1000v_vem-12.04-5.2.1.SK1.3.0.124.S0-1gdb.deb
nexus_1000v_vem-12.04-5.2.1.SK1.3.0.105.S0-4.deb

(In case none of the above images are found, please check with Abhishek Raut (abhraut@cisco.com) from the N1kv team to get a pointer to the best VSM/VEM version to use.)

Download the CSR image from the following directory: /ws/pcm-bxb/OS/csr/in-band/

Server: bxb-xdm-102 IP: 161.44.90.109

The image file is: csr1000v-universalk9.BLD_MCP_DEV_LATEST_20140531_013025.qcow2

Browse to the Automatic Build System (ABS) page, and from the mcp_dev branch pick the latest image with lots of green checks. Once selected, it shows the location under Archive Location. Go there, then chdir to linkfarm/ios1-ultra/ and get the qcow2 image.

Older Images:

VSM: /auto/n1k_barracuda/daily_build/albacore_throttle/nexus/50/src/build/images/gdb/n1000v-dk9.5.2.1.SK1.2.2.gbin
     /auto/n1k_barracuda/daily_build/albacore_throttle/nexus/50/src/build/images/gdb/n1000v-dk9.5.2.1.SK1.2.2.iso

VEM: /auto/n1k_barracuda/daily_build/albacore_throttle/nexus/50/src/swordfish/build/images/ubuntu-12.04/nexus_1000v_vem-12.04-5.2.1.SK1.2.2.S0-23gdb.deb

Recording of presentation about CSR router plugin, installation and plugging drivers

[2]

Update environment variables

If you have your setup running through a proxy server, you will need to set the following environment variables. You will need to substitute your server IP address for 172.19.148.60. You can also add the following to your localrc.


export PROXY_HOST=proxy.esl.cisco.com:80
export https_proxy=[3]
export http_proxy=[4]
export ftp_proxy=[5]
export HTTPS_PROXY=[6]
export HTTP_PROXY=[7]
export FTP_PROXY=[8]

printf -v csr_ips '%s,' 10.0.100.{1..100};

export no_proxy="cisco.com,172.19.148.60,localhost,127.0.0.1,192.168.168.2,${csr_ips%,}";

Run stack.sh

Run ./stack.sh. Once stack completes, you should have a working environment.

FAQs

Click the link to view the FAQs [9]

Ubuntu Devstack Installation and Setup for CSR1kv with OVS/ML2

The instruction for installation and setup are at http://wikicentral.cisco.com/display/OPENSTACK/Installation+Instructions+for+CSR+with+ML2

Red Hat OpenStack Installation and Setup

The Red Hat Quickstart guide explains how to use their Packstack installer - [10]

CSR Instantiation as a neutron router

This is only available in the Juno release.

Since the CSR routing plugin is available, the neutron router APIs can be used. For the Ubuntu devstack solution above, this should already have been done.

Step 1. Source the environment

It may be required to source the correct environment variables in order to execute the OpenStack CLI commands

Ubuntu

    source /home/stack/devstack/openrc admin demo

Red Hat

    source keystonerc_admin


Step 2. Create two networks

Create two networks. The examples below create an internal (private) and an external (public) network.
    External Network Creation:
        neutron net-create public --router:external True
        neutron subnet-create public 172.16.1.0/24

    Internal Network Creation:
        neutron net-create private
        neutron subnet-create private 10.0.0.0/24

Step 3. Create the neutron router

Create the neutron router. The CSR acting as this neutron router will be launched in the tenant "L3AdminTenant" (username/password = viewer/viewer).

    neutron router-create router1

Step 4. Attach the interfaces

Attach the public and private interfaces. The CSR acting as this neutron router will be launched in the tenant "L3AdminTenant"; use username/password = viewer/viewer to access it via Horizon.

    neutron router-gateway-set <router1 ID> <public network ID>
    neutron router-interface-add <router1 ID> <private subnet ID>

CSR Instantiation as a tenant VM

Step 1. Source the environment

It may be required to source the correct environment variables in order to execute the OpenStack CLI commands

Ubuntu

    source /home/stack/devstack/openrc admin demo

Red Hat

    source keystonerc_admin

Step 2. Add CSR to image Store

Pick up an available CSR QCOW2-format image from CCO. Add the image file to the Glance image store through the Glance APIs.


Example:
    glance image-create --name csr


Save the following information in a script (for example, csr-image.sh) and run it to create the glance image:

csr1kvImageSrc="/root/csr_images/csr1000v-universalk9.03.13.00.S.154-3.S-ext.qcow2"
csr1kvImageName="csr-313-cco"
csr1kvDiskFormat="qcow2"
csr1kvContainerFormat="bare"
csr1kvGlanceExtraParams="--property hw_vif_model=virtio --property hw_disk_bus=virtio --property hw_cdrom_bus=ide"

tenantId=`keystone tenant-list | grep " admin " | cut -f 2 -d'|'`

glance image-create --name $csr1kvImageName --owner $tenantId --disk-format $csr1kvDiskFormat --container-format $csr1kvContainerFormat --file $csr1kvImageSrc $csr1kvGlanceExtraParams --is-public true

Verify that the image added above has status "active" in the Glance repository:

   glance image-list
   glance image-show <img_id>

Step 3. Create a Flavor

Create a CSR-specific flavor with the following options using the nova flavor-create CLI (ID: 100, Memory: 2 or 4 GB, Disk Space: 0 GB, VCPUs: 2 or 4).

Example:
   nova flavor-create csr.2vcpu.4gb 100 4096 0 2

Step 4. Create Neutron networks

Create two networks, if this hasn't already been done. The examples below create an internal (private) and an external (public) network.

    External Network Creation:
        neutron net-create public --router:external True
        neutron subnet-create public 172.19.148.0/24

    Internal Network Creation:
        neutron net-create private
        neutron subnet-create private 10.11.12.0/24

Step 5. Create Neutron Router and attach interfaces

Next, create the neutron router and attach the public and private interfaces. The neutron router is needed to allow the tenant VM (CSR) to reach the external network: the tenant VM (CSR) is always connected to br-int and cannot reach br-ex, which is the gateway to the external network.

    neutron router-create router1
    neutron router-gateway-set <router1 ID> <public network ID>
    neutron router-interface-add <router1 ID> <private subnet ID>

Step 6. Create ports to attach to the CSR

Creating a port with a fixed IP address allows it to be used when booting the CSR with the config drive.

    Port Creation Example:
        neutron port-create private --fixed-ip ip_address=10.11.12.2
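
Optionally, instead of running the command above by itself, you can capture the new port's ID for the boot step that follows (a minimal sketch; the awk parsing is just one way to pull the id field out of the CLI's table output):

PORT_ID=$(neutron port-create private --fixed-ip ip_address=10.11.12.2 | awk '/ id / {print $4}')
echo $PORT_ID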

The following diagram from Horizon shows what will be created after the CSR is booted.


        (Horizon network topology screenshot: Screen Shot 2014-10-27 at 5.13.15 PM.png)

Step 7. CSR boot

The CSR can now be booted and attached to the port created above.

nova boot <instance_name> --image <image_id> --flavor <flavor_id> --nic port-id=<port_id>

The CSR can also be booted with Config drive

The --config-drive option can be used to specify the configuration that is loaded on the Cisco CSR 1000V when it comes up. Set the --config-drive option to "true" and specify the configuration file name. The configuration file can either be the "ovf-env.xml" file using the OVF format, or the "iosxe_config.txt" file in which you enter the router configuration to be booted.

nova boot <instance_name> --image <image_id> --flavor <flavor_id> --nic net-id=<uuid> --config-drive=<true/false> --file <configuration_file_name>

nova boot <instance_name> --image <image_id> --flavor <flavor_id> --nic port-id=<uuid> --config-drive=<true/false> --file <configuration_file_name>

Note: These file names are hard-coded and required for the config-drive settings to boot.


The following example boots the Cisco CSR 1000V image on OpenStack with the “iosxe_config.txt” file containing the router configuration:

nova boot csr_instance --image csr_image --flavor 6 --nic net-id=546af738-bc0f-43cf-89f2-1e2c747d1764 --config-drive=true --file iosxe_config.txt=/opt/stack/iosxe_config.txt

Example iosxe_config.txt


hostname csr

line con 0
 logging synchronous
 transport preferred none

line vty 0 4
 login local
 transport preferred none
 transport input ssh telnet

username stack priv 15 secret cisco

interface GigabitEthernet1
 ip address 10.11.12.2 255.255.255.0
 no shutdown

ip route 0.0.0.0 0.0.0.0 GigabitEthernet1 10.11.12.1

virtual-service csr_mgmt
 ip shared host-interface GigabitEthernet1
 activate

license accept end user agreement
license boot level premium

Step 8. Associate floating IP to CSR tenant VM

A floating IP from the public net to the CSR's private IP can be created if direct access from the external/public network is desired.

    neutron floatingip-create <external network ID>
    neutron floatingip-associate <floating-ip ID> <port ID of CSR interface>

Step 9. Route to host

If you need to access the CSR from the host, add a route on the host to direct traffic for the above subnet to br-ex (the OVS bridging construct):

    sudo route add -net 10.11.12.0/24 gw 172.24.4.226 dev br-ex
    sudo route add -net 172.19.148.0/24 gw 0.0.0.0 dev br-ex

Anti-spoof for CSR as a tenant VM

In Neutron, the anti-spoofing rules prevent a VM from transmitting traffic that isn't from it, and from receiving traffic that isn't addressed to it. To disable this, the anti-spoofing patch below needs to be applied. The user can then create a security group with the name defined in neutron.conf, and any instances spun up and associated with that security group will have anti-spoofing disabled.

1. Download the patch file. ([11])
2. Copy the file to /root/iptables_firewall.py.patch (on both controller and compute nodes).
3. Diff the files to ensure that no other changes are being pulled in:
   diff <OPENSTACK_ROOT_DIR>/neutron/agent/linux/iptables_firewall.py iptables_firewall.py.patch
4. On controller and compute nodes:

    a.  cd <OPENSTACK_ROOT_DIR>/neutron/agent/linux
    b.  cp -p iptables_firewall.py iptables_firewall.py.orig
    c.  cp /root/iptables_firewall.py.patch iptables_firewall.py
    d.  Add configuration parameters:
        i.  vi /etc/neutron/neutron.conf
        ii. Under the [default] section, add:
             disable_anti_spoofing = True
             sec_group_svc_VM_name = security-group-name  (default in the case of ATT NetBond)
    e.  Restart the Neutron server:
        service neutron-server restart (Red Hat); if using devstack, restart the q-agt process
    f.  Restart the Neutron OVS agent:
        service neutron-openvswitch-agent restart (Red Hat)

CSR config drive on RHEL Openstack

With some CSR images, the config drive currently does not get applied to the CSR in RHEL Openstack. The DDTS for this is CSCuq95156. If the CSR image you are using does not contain the fix for CSCuq95156, a workaround is to modify the /etc/nova/release file on the compute nodes. Change product = "OpenStack Compute" to product = "OpenStack Nova", as in the example release file below:

[Nova]
vendor = Red Hat
product = OpenStack Compute
package = 4.el7ost
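
One way to apply the workaround on each compute node (a sketch; it assumes the file uses the unquoted layout shown above and that nova-compute needs a restart to pick up the change):

# Change the product string in /etc/nova/release
sudo sed -i 's/^product = OpenStack Compute/product = OpenStack Nova/' /etc/nova/release
# Restart the compute service so the new value takes effect (RHEL 7 / systemd)
sudo systemctl restart openstack-nova-compute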

Cisco FWaaS (ACL-based)

Enable Cisco FWaaS

Before starting stack.sh,

  • Enable Cisco FWaaS in the localrc file under the devstack directory:


enable_service q-fwaas
  • (Double check) Make sure the following lines are in the ~/devstack/lib/neutron_plugins/services/firewall file


#FWAAS_PLUGIN=neutron.services.firewall.fwaas_plugin.FirewallPlugin
CISCO_FWAAS_PLUGIN=neutron.services.firewall.plugins.cisco.cisco_fwaas_plugin.CSRFirewallPlugin

function neutron_fwaas_configure_common {
  #_neutron_service_plugin_class_add $FWAAS_PLUGIN
  _neutron_service_plugin_class_add $CISCO_FWAAS_PLUGIN
}

Create / Update / Delete Firewall on CSR

Cisco firewall rules and policies can be provisioned via the Neutron CLI. (Dashboard support is coming.) Refer to the following link for more details on the "neutron firewall-xxx" CLI: [12]

Provisioning the Cisco firewall using the Neutron CLI is described below.

  • Create Firewall
  • Step 1: create firewall rule(s):
usage: neutron firewall-rule-create [-h] [-f {shell,table,value}] [-c COLUMN]
                                    [--max-width <integer>]
                                    [--variable VARIABLE] [--prefix PREFIX]
                                    [--request-format {json,xml}]
                                    [--tenant-id TENANT_ID] [--name NAME]
                                    [--description DESCRIPTION] [--shared]
                                    [--source-ip-address SOURCE_IP_ADDRESS]
                                    [--destination-ip-address DESTINATION_IP_ADDRESS]
                                    [--source-port SOURCE_PORT]
                                    [--destination-port DESTINATION_PORT]
                                    [--enabled {True,False}] --protocol
                                    {tcp,udp,icmp,any} --action {allow,deny}
Example: 
neutron firewall-rule-create --protocol icmp --action deny --name r1
  • Step 2: create firewall policy
usage: neutron firewall-policy-create [-h] [-f {shell,table,value}]
                                      [-c COLUMN] [--max-width <integer>]
                                      [--variable VARIABLE] [--prefix PREFIX]
                                      [--request-format {json,xml}]
                                      [--tenant-id TENANT_ID]
                                      [--description DESCRIPTION] [--shared]
                                      [--firewall-rules FIREWALL_RULES]
                                      [--audited]
                                      NAME
Example: 
neutron firewall-policy-create --firewall-rules r1 p1
  • Step 3: Identify firewall insertion point, i.e. port-id (interface) where the firewall to be applied
neutron port-list


  • Step 4: create firewall, with port-id and firewall direction
The firewall is created with the neutron firewall-create command, passing the policy from Step 2 together with the Cisco-specific --port-id and --direction options (the same options shown for firewall-update below).
Example: 
neutron firewall-create --name fw1 --port-id <port_uuid> --direction inside p1
  • Delete firewall
neutron firewall-delete <firewall_name or firewall_uuid>
neutron firewall-policy-delete <firewall_policy_name or firewall_policy_uuid>
neutron firewall-rule-delete <firewall_rule_name or firewall_rule_uuid>
  • Update Firewall
  • Update firewall rule
usage: neutron firewall-rule-update [-h] [--request-format {json,xml}]
                                    [--protocol {tcp,udp,icmp,any}]
                                    FIREWALL_RULE

Example:

neutron firewall-rule-update --protocol tcp r1
  • Update firewall policy


usage: neutron firewall-policy-update [-h] [--request-format {json,xml}]
                                      FIREWALL_POLICY

Example:

neutron firewall-policy-update --firewall-rules r2 p1
  • Update firewall
usage: neutron firewall-update [-h] [--request-format {json,xml}]
                               [--policy POLICY]
                               [--port-id PORT_ID]
                               [--direction {inside,outside,both}]
                               FIREWALL

Example:


neutron firewall-update c34f68a0-efe9-484a-8a37-63dd5e06b2e2 --port-id e3103dba-3aa9-4ec0-a152-306ed7f38677 --direction inside
  • List firewall rules, firewall policies, firewalls
neutron firewall-rule-list
neutron firewall-policy-list
neutron firewall-list
  • Show firewall rule, firewall policy, firewall
neutron firewall-rule-show <rule_uuid or rule_name>
neutron firewall-policy-show <policy_uuid or policy_name>
neutron firewall-show <firewall_uuid or firewall_name>

Cisco VPNaaS

Enable Cisco VPNaaS

Before starting stack.sh,

  • Enable Cisco VPNaaS in the localrc file under the devstack directory:


enable_service cisco_vpn


  • (Double check) Make sure the following line is uncommented in the /etc/neutron/neutron.conf file:

service_provider = VPN:cisco:neutron.services.vpn.service_drivers.cisco_ipsec.CiscoCsrIPsecVPNDriver:default

  • Uncomment the line vpn_device_driver=neutron.services.vpn.device_drivers.cisco_ipsec.CiscoCsrIPsecDriver in /etc/neutron/vpn_agent.ini
  • Uncomment the same line in /opt/stack/neutron/etc/vpn_agent.ini (one way to do both edits is sketched below)
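
A minimal sketch of the uncomment step (it assumes the driver line is present in both files, commented out with a leading '#'):

for f in /etc/neutron/vpn_agent.ini /opt/stack/neutron/etc/vpn_agent.ini; do
    # Strip the leading '#' from the Cisco IPSec device driver line
    sudo sed -i 's|^#\s*\(vpn_device_driver=neutron.services.vpn.device_drivers.cisco_ipsec.CiscoCsrIPsecDriver\)|\1|' "$f"
done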

Create / Update / Delete Site-to-Site VPN on CSR

Site-to-Site VPN can be created via the Dashboard or the Neutron CLI. A complete CLI example follows the usage summaries below.

  • Create Site-to-Site VPN
  • Step 1: Create ike policy:
usage: neutron vpn-ikepolicy-create [-h] [-f {shell,table,value}] [-c COLUMN]
                                    [--max-width <integer>] [--prefix PREFIX]
                                    [--request-format {json,xml}]
                                    [--tenant-id TENANT_ID]
                                    [--description DESCRIPTION]
                                    [--auth-algorithm {sha1}]
                                    [--encryption-algorithm ENCRYPTION_ALGORITHM]
                                    [--phase1-negotiation-mode {main}]
                                    [--ike-version {v1,v2}]
                                    [--pfs {group2,group5,group14}]
                                    [--lifetime units=UNITS,value=VALUE]
                                    NAME
  • Step 2: Create ipsec policy
usage: neutron vpn-ipsecpolicy-create [-h] [-f {shell,table,value}]
                                      [-c COLUMN] [--max-width <integer>]
                                      [--prefix PREFIX]
                                      [--request-format {json,xml}]
                                      [--tenant-id TENANT_ID]
                                      [--description DESCRIPTION]
                                      [--transform-protocol {esp,ah,ah-esp}]
                                      [--auth-algorithm {sha1}]
                                      [--encryption-algorithm ENCRYPTION_ALGORITHM]
                                      [--encapsulation-mode {tunnel,transport}]
                                      [--pfs {group2,group5,group14}]
                                      [--lifetime units=UNITS,value=VALUE]
                                      NAME
  • Step 3: Create vpn-service
usage: neutron vpn-service-create [-h] [-f {shell,table,value}] [-c COLUMN]
                                  [--max-width <integer>] [--prefix PREFIX]
                                  [--request-format {json,xml}]
                                  [--tenant-id TENANT_ID] [--admin-state-down]
                                  [--name NAME] [--description DESCRIPTION]
                                  ROUTER SUBNET
  • Step 4: Create the site-to-site connection using the vpn-service
usage: neutron ipsec-site-connection-create [-h] [-f {shell,table,value}]
                                            [-c COLUMN]
                                            [--max-width <integer>]
                                            [--prefix PREFIX]
                                            [--request-format {json,xml}]
                                            [--tenant-id TENANT_ID]
                                            [--admin-state-down] [--name NAME]
                                            [--description DESCRIPTION]
                                            [--mtu MTU]
                                            [--initiator {bi-directional,response-only}]
                                            [--dpd action=ACTION,interval=INTERVAL,timeout=TIMEOUT]
                                            --vpnservice-id VPNSERVICE
                                            --ikepolicy-id IKEPOLICY
                                            --ipsecpolicy-id IPSECPOLICY
                                            --peer-address PEER_ADDRESS
                                            --peer-id PEER_ID --peer-cidr
                                            PEER_CIDRS --psk PSK
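
For example, an end-to-end sequence on one cloud (the names, addresses, and pre-shared key are illustrative and mirror the devstack-32 example later on this page):

neutron vpn-ikepolicy-create ikepolicy1
neutron vpn-ipsecpolicy-create ipsecpolicy1
neutron vpn-service-create --name myvpn --description "My vpn service" router1 private-subnet
neutron ipsec-site-connection-create --name vpnconnection1 --vpnservice-id myvpn \
        --ikepolicy-id ikepolicy1 --ipsecpolicy-id ipsecpolicy1 \
        --peer-address 172.32.1.21 --peer-id 172.32.1.21 --peer-cidr 10.2.0.0/24 --psk secret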

Show VPN related information

neutron vpn-ikepolicy-list
neutron vpn-ipsecpolicy-list
neutron vpn-service-list
neutron ipsec-site-connection-list




Objectives

This guide provides instructions on how to set up a CSR to provide site-to-site IPSec VPNaaS for an OpenStack cloud computing environment, using the Cisco Juno+ OpenStack repository.

For simplicity, the guide will use DevStack as a way to start up OpenStack. However, the same procedures can be applied to a traditional OpenStack deployment.

The OpenStack VPNaaS feature is an experimental release, and APIs may change in the future. The Cisco VPNaaS implementation requires a Cisco CSR and N1kV. Contact Cisco Sales for more information on these products.

Note: This software is provided "as is," and in no event does Cisco warrant that the software is error free or that customer will be able to operate the software without problems or interruptions.

Prerequisites

There are several things needed to setup an OpenStack cloud for VPN operation with CSR:

  • A CSR 3.13 .qcow2 image.
  • Latest N1kV .iso and .deb images
  • CSR licenses (contact Cisco Sales)
  • Cisco Juno+ Devstack repository
  • Community Kilo or Cisco Juno+ repos for OpenStack projects.
  • Adequate hardware (CPU/memory) for running OpenStack with CSR images (TODO link to UCS...)
  • Connectivity to another OpenStack (e.g. DevStack) cloud or compatible IPSec VPN site-to-site device (virtual or physical).


Assumptions:

  • You have a basic understanding of how to start up DevStack (http://devstack.org/).
  • You have some familiarity with how to configure (from a user's perspective) OpenStack's VPN with either Horizon or the Neutron client.
  • You're using Ubuntu Server 14.04 LTS (64 bit, obviously) on your host with all current updates. Using a different operating system is left as an exercise for the reader.


In this guide, we'll focus on the detailed steps needed to set up one of the two sites for the IPSec site-to-site VPN connectivity. The example uses the same method for the other site (with just brief info on the different IPs used), and you can follow the same instructions to create a complete site-to-site connection with two clouds using a CSR. Alternately, you can employ the reference OpenStack VPN implementation or a compatible virtual or physical VPN device for the other end of the site-to-site VPN connection.

Topology

In the example configuration, we have this physical topology:

Physical Topology.png

There are two UCS systems, which will each host an OpenStack (using DevStack) cloud for the site-to-site VPN end-points. A physical connection between the two nodes will be used for the "public" network (172.32.1.0/24), where traffic between the private networks (10.1.0.0/24 and 10.2.0.0/24) will be encrypted. In this lab setup, the physical switch used for the public network has its interfaces set up as trunk ports, allowing VLAN 321.

There is an internal sub-network on each node (10.0.32.0/24 and 10.0.33.0/24) for management of the CSR during operation, and another network (192.168.200.0/24) for setup of the clouds. Note: Internally, OpenStack uses 192.168.168.2 to communicate with the N1kV that is created.

Logically, we have the following topology for the first cloud, running on the devstack-32 host:

Devstack-32.png

The IPs shown for the CSR and the VMs are just for illustration purposes. They may/can be different, when you follow these procedures. For the other host, devstack-33, we have:

Devstack-33.png

The key information in these drawings is the subnets used for the CSR (private and public) and the physical ethernet interface being used.

Install Process

Step 1: Obtaining DevStack

The first thing we need to do is obtain all of the OpenStack code and install it on the system. We'll use DevStack to do that, and will use the Cisco Juno+ repository. This has additional scripts that will set up a CSR router and an N1kV switch. Obtain the repo with:

cd 
git clone https://github.com/cisco-openstack/devstack
cd devstack
git checkout stable/junoplus

TODO: Verify that the repo location is correct and up-to-date.

Step 2: Preparing Your Environment

Download your CSR .qcow2 image into the ~/csr/3.13/ directory and the N1kV .iso and .deb images into the ~/n1kv/ directory.

If you are using a firewall and have a proxy server set up, you may have to disable the proxy for the local nodes, the public subnet, the private subnet, the CSR management IPs, the N1kV (192.168.168.2), and the L3 config agent (10.0.32.2). Make sure the command is entered in the same terminal window where DevStack will be run (or put it in your .bashrc).

printf -v lan '%s,' 192.168.200.{2..3};
printf -v public '%s,' 172.32.1.{10,11,12,13,20,21,22,23};
printf -v private '%s,' 10.1.0.{1..10};
printf -v admin_ips '%s,' 10.0.32.{10..19};
export no_proxy="cisco.com,${lan%,},${public%,},${private%,},${admin_ips%,},192.168.168.2,10.0.32.2";

Note: Substitute the IPs with the ones that apply for your system (10.0.32.*, 172.32.1.*, 10.1.0.*, 192.168.200.*)

Step 3: Configuring DevStack

For the devstack-32 host, we have these attributes to consider:

  • Local network (FIXED_RANGE) will be 10.1.0.0/24 with router (GW) using 10.1.0.1
  • Public GW at 172.32.1.10 on the 172.32.1.0/24 public network.
  • Set of floating IPs reserved on the public network in the range 172.32.1.11 - 172.32.1.19
  • Allocated IPs 10.0.32.10 - 10.0.32.254 for management interfaces for CSRs (the L3 config agent will use 10.0.32.2).
  • Using eth3 for the public network interface.
  • Chosen VLAN range for use by N1kV: 320-339.

Here is the localrc file for devstack-32, using the above information:

OFFLINE=False
RECLONE=Yes
# RECLONE=No

DEBUG=True
VERBOSE=True

HOST_IP=192.168.200.2

FIXED_RANGE=10.1.0.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth3
NETWORK_GATEWAY=10.1.0.1
FLOATING_RANGE=172.32.1.0/24
PUBLIC_NETWORK_GATEWAY=172.32.1.10
Q_FLOATING_ALLOCATION_POOL="start=172.32.1.11,end=172.32.1.19"
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

# Use br-int as bridge to reach external networks
PUBLIC_BRIDGE=br-int

our_pw=<password you want>
# Must use hard coded value, as scripts grep for the following variables.
MYSQL_USER=root
MYSQL_PASSWORD=<password you want>
RABBIT_PASSWORD=$our_pw
SERVICE_TOKEN=$our_pw
SERVICE_PASSWORD=$our_pw
ADMIN_PASSWORD=$our_pw

disable_service n-net
enable_service neutron
enable_service q-svc
disable_service q-agt
enable_service q-dhcp
enable_service ciscocfgagent
enable_service q-ciscorouter
enable_service cisco_vpn

# Destination path for installation of the OpenStack components.
# There is no need to specify it unless you want the code in
# some particular location (like in a directory shared by all VMs).
DEST=/opt/stack
SCREEN_LOGDIR=$DEST/screen-logs
LOGFILE=~/devstack/stack.sh.log

# Settings to get NoVNC to work.
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP

# Type of virtualization to use. Options: kvm, lxc, qemu
LIBVIRT_TYPE=kvm
# Uncomment this to use LXC virtualization.
#LIBVIRT_TYPE=lxc

# List of images to use.
# ----------------------
case "$LIBVIRT_TYPE" in
    lxc) # the cirros root disk in the uec tarball is empty, so it will not work for lxc
	IMAGE_URLS="http://cloud-images.ubuntu.com/releases/14.04.1/release/ubuntu-14.04-server-cloudimg-amd64.tar.gz,http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-rootfs.img.gz";;
    *)  # otherwise, use the uec style image (with kernel, ramdisk, disk)
	IMAGE_URLS="http://cloud-images.ubuntu.com/releases/14.04.1/release/ubuntu-14.04-server-cloudimg-amd64.tar.gz,http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-uec.tar.gz";;
esac

Q_PLUGIN=cisco
declare -a Q_CISCO_PLUGIN_SUBPLUGINS=(n1kv)
Q_CISCO_PLUGIN_RESTART_VSM=yes
Q_CISCO_PLUGIN_VSM_IP=192.168.168.2
Q_CISCO_PLUGIN_VSM_USERNAME=admin
Q_CISCO_PLUGIN_VSM_PASSWORD=<enter password here>
# Below are example images that can be used...
Q_CISCO_PLUGIN_VSM_ISO_IMAGE=$HOME/n1kv/n1000v-dk9.5.2.1.SK1.3.0.135.iso
Q_CISCO_PLUGIN_UVEM_DEB_IMAGE=$HOME/n1kv/nexus_1000v_vem-12.04-5.2.1.SK1.3.0.135.S0-0gdb.deb

Q_CISCO_PLUGIN_HOST_MGMT_INTF=eth2
Q_CISCO_PLUGIN_UPSTREAM_INTF=eth3
Q_CISCO_CSR1KV_SETUP_SCRIPT_DIR=$HOME/devstack/lib/neutron_plugins/services/csr1kv_l3_setup/
Q_CISCO_MGMT_SUBNET=10.0.32.0
Q_CISCO_MGMT_CFG_AGENT_IP=10.0.32.2
Q_CISCO_MGMT_SUBNET_USAGE_RANGE_START=10.0.32.10
Q_CISCO_MGMT_SUBNET_USAGE_RANGE_END=10.0.32.254

NOVA_USE_QUANTUM_API=v2
N1KV_VLAN_NET_PROFILE_NAME=default_network_profile
# Using host 32 and 33
N1KV_VLAN_NET_SEGMENT_RANGE=320-339

Q_CISCO_ROUTER_PLUGIN=yes
Q_CISCO_CSR1KV_QCOW2_IMAGE=$HOME/csr/3.13/<csr100v 3.13 image>.qcow2

GIT_BASE=https://github.com

# Until the ncclient PyPI package contains the latest changes for CSR1kv we fetch the needed version like this.
NCCLIENT_VERSION=0.4.1
NCCLIENT_REPO=${GIT_BASE}/leopoul/ncclient.git
NCCLIENT_COMMIT_ID=bafd9b22e2fb423a577ed9c91d28272adbff30d3

In this file, set your password for services, MySQL, and the VSM (N1kV). Use the filename for the CSR and N1kV images that were downloaded in the previous step.

Note: This localrc file enables the ciscocfgagent and q-ciscorouter services, which are required by the cisco_vpn service (also enabled). The cisco_vpn service will run the q-vpn service using Cisco service and device drivers and does NOT use the q-l3 service (the ciscocfgagent and q-ciscorouter are used instead).

Note: The N1kV VM will be created when stacking, and 192.168.168.2 (as specified in the localrc) will be used to communicate with the N1kV. A VLAN range is configured for the N1kV and it will use the second entry in the range for the external subnet interfaces. In this example, the range is 320-339, and the public network will use VLAN 321 (hence the reason for setting up the physical switch between clouds with this VLAN allowed).


If you want to use the latest (and moving) Kilo development code for OpenStack projects, you can leave this as-is. However, if you want a fixed (stable) Juno version with updates for an integrated VPN/L3 router plugin, you can add these lines to the localrc:

# Networking Service
NEUTRON_REPO=${GIT_BASE}/cisco-openstack/neutron.git
NEUTRON_BRANCH=stable/junoplus

# The following will be picked up from the community 'openstack' area
# Compute service
NOVA_BRANCH=stable/juno

# Volume Service
CINDER_BRANCH=stable/juno

# Image Service
GLANCE_BRANCH=stable/juno

# Web UI (Dashboard)
HORIZON_BRANCH=stable/juno

# Auth Services
KEYSTONE_BRANCH=stable/juno

# Any others desired...

Step 4: OpenStack Startup

At this point, you can start DevStack by using ./stack.sh. Once everything is installed and running, you can modify localrc to turn off recloning for future stacking runs to save time:

# RECLONE=yes
RECLONE=No


Next, you can create some VMs to use for testing out the VPN. Here is a script that creates two VMs (of different types):

cat << EOT | tee build-vms.32
source ~/devstack/openrc admin admin
glance image-update cirros-0.3.3-x86_64-uec --property hw_vif_model=e1000

source ~/devstack/openrc admin demo
PRIVATE_NET=\`neutron net-list | grep private | cut -f 2 -d'|' | cut -f 2 -d' '\`

nova boot --flavor 1 --image cirros-0.3.3-x86_64-uec --nic net-id=\$PRIVATE_NET peter
nova boot --flavor 3 --image ubuntu-14.04-server-cloudimg-amd64 --user-data \$HOME/devstack/user_data.txt --nic net-id=\$PRIVATE_NET paul
EOT

chmod 755 build-vms.32

cat << EOT | tee user_data.txt
#cloud-config
password: cisco1
chpasswd: { expire: false}
ssh_pwauth: true
EOT

./build-vms.32

This script will create a Cirros and a Ubuntu VM on the private subnet (10.1.0.0/24). In your setup, you can create as many or as few VMs as desired. You can use "nova list" to see what IPs were assigned to each VM. You may want to rename the script for the node you are working on.

Step 5: Setting up the other host

Clone the DevStack repo and download the images for CSR and N1kV.

Set the proxy information, if needed. For example:

printf -v lan '%s,' 192.168.200.{2..3};
printf -v public '%s,' 172.32.1.{10,11,12,13,20,21,22,23};
printf -v private '%s,' 10.2.0.{1..10};
printf -v admin_ips '%s,' 10.0.33.{10..19};
export no_proxy="cisco.com,${lan%,},${public%,},${private%,},${admin_ips%,},192.168.168.2,10.0.33.2";

The second host, devstack-33, has different attributes:

  • Local network (FIXED_RANGE) will be 10.2.0.0/24 with router (GW) using 10.2.0.1
  • Public GW at 172.32.1.20 on the 172.32.1.0/24 public network.
  • Set of floating IPs reserved on the public network in the range 172.32.1.21 - 172.32.1.29
  • Using eth5 for the public network interface, and eth4 for the host admin network.
  • We specify to use the cisco_vpn_agent.ini file via the Q_VPN_EXTRA_CONF_FILES

For reference, here is the localrc used:

OFFLINE=False
# RECLONE=Yes
RECLONE=No

DEBUG=True
VERBOSE=True

HOST_IP=192.168.200.3

FIXED_RANGE=10.2.0.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth5
NETWORK_GATEWAY=10.2.0.1
FLOATING_RANGE=172.32.1.0/24
# The following doesn't exist on this setup
PUBLIC_NETWORK_GATEWAY=172.32.1.20
Q_FLOATING_ALLOCATION_POOL="start=172.32.1.21,end=172.32.1.29"
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

# Use br-int as bridge to reach external networks
PUBLIC_BRIDGE=br-int

our_pw=<password you want>
# Must use hard coded value, as scripts grep for the following variables.
MYSQL_USER=root
MYSQL_PASSWORD=<password you want>
RABBIT_PASSWORD=$our_pw
SERVICE_TOKEN=$our_pw
SERVICE_PASSWORD=$our_pw
ADMIN_PASSWORD=$our_pw

disable_service n-net
enable_service neutron
enable_service q-svc
disable_service q-agt
enable_service q-dhcp
enable_service ciscocfgagent
enable_service q-ciscorouter
# enable_service q-ciscodevicemanager
enable_service cisco_vpn

# Destination path for installation of the OpenStack components.
# There is no need to specify it unless you want the code in
# some particular location (like in a directory shared by all VMs).
DEST=/opt/stack
SCREEN_LOGDIR=$DEST/screen-logs
LOGFILE=~/devstack/stack.sh.log

# Settings to get NoVNC to work.
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP

# Type of virtualization to use. Options: kvm, lxc, qemu
LIBVIRT_TYPE=kvm
# Uncomment this to use LXC virtualization.
#LIBVIRT_TYPE=lxc

# List of images to use.
# ----------------------
case "$LIBVIRT_TYPE" in
    lxc) # the cirros root disk in the uec tarball is empty, so it will not work for lxc
	IMAGE_URLS="http://cloud-images.ubuntu.com/releases/14.04.1/release/ubuntu-14.04-server-cloudimg-amd64.tar.gz,http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-rootfs.img.gz";;
    *)  # otherwise, use the uec style image (with kernel, ramdisk, disk)
	IMAGE_URLS="http://cloud-images.ubuntu.com/releases/14.04.1/release/ubuntu-14.04-server-cloudimg-amd64.tar.gz,http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-uec.tar.gz";;
esac

Q_PLUGIN=cisco
declare -a Q_CISCO_PLUGIN_SUBPLUGINS=(n1kv)
Q_CISCO_PLUGIN_RESTART_VSM=yes
Q_CISCO_PLUGIN_VSM_IP=192.168.168.2
Q_CISCO_PLUGIN_VSM_USERNAME=admin
Q_CISCO_PLUGIN_VSM_PASSWORD=<enter password here>
# Below are example images that can be used...
Q_CISCO_PLUGIN_VSM_ISO_IMAGE=$HOME/n1kv/n1000v-dk9.5.2.1.SK1.3.0.135.iso
Q_CISCO_PLUGIN_UVEM_DEB_IMAGE=$HOME/n1kv/nexus_1000v_vem-12.04-5.2.1.SK1.3.0.135.S0-0gdb.deb

Q_CISCO_PLUGIN_HOST_MGMT_INTF=eth4
Q_CISCO_PLUGIN_UPSTREAM_INTF=eth5
Q_CISCO_CSR1KV_SETUP_SCRIPT_DIR=$HOME/devstack/lib/neutron_plugins/services/csr1kv_l3_setup/
Q_CISCO_MGMT_SUBNET=10.0.33.0
Q_CISCO_MGMT_CFG_AGENT_IP=10.0.33.2
Q_CISCO_MGMT_SUBNET_USAGE_RANGE_START=10.0.33.10
Q_CISCO_MGMT_SUBNET_USAGE_RANGE_END=10.0.33.254

NOVA_USE_QUANTUM_API=v2
N1KV_VLAN_NET_PROFILE_NAME=default_network_profile
# Using host 32 and 33
N1KV_VLAN_NET_SEGMENT_RANGE=320-339

Q_CISCO_ROUTER_PLUGIN=yes
Q_CISCO_CSR1KV_QCOW2_IMAGE=$HOME/csr/3.13/<csr100v 3.13 image>.qcow2

GIT_BASE=https://github.com

# Until the ncclient PyPI package contains the latest changes for CSR1kv we fetch the needed version like this.
NCCLIENT_VERSION=0.4.1
NCCLIENT_REPO=${GIT_BASE}/leopoul/ncclient.git
NCCLIENT_COMMIT_ID=bafd9b22e2fb423a577ed9c91d28272adbff30d3

Again, set the passwords for MySQL and the services, and the passwords and filenames used for the CSR and N1kV. Adjust IPs as well for this host. If you want the fixed Juno+ OpenStack repos, add the same lines as were mentioned above for devstack-32.

Stack, and then create any VM images desired. For example:

cat << EOT | tee build-vms.33
source ~/devstack/openrc admin admin
glance image-update cirros-0.3.3-x86_64-uec --property hw_vif_model=e1000

source ~/devstack/openrc admin demo
PRIVATE_NET=\`neutron net-list | grep private | cut -f 2 -d'|' | cut -f 2 -d' '\`

nova boot --flavor 1 --image cirros-0.3.3-x86_64-uec --nic net-id=\$PRIVATE_NET mary
nova boot --flavor 3 --image ubuntu-14.04-server-cloudimg-amd64 --user-data \$HOME/devstack/user_data.txt --nic net-id=\$PRIVATE_NET thomas
EOT

chmod 755 build-vms.33

cat << EOT | tee user_data.txt
#cloud-config
password: cisco1
chpasswd: { expire: false}
ssh_pwauth: true
EOT

./build-vms.33

This script will create a Cirros and an Ubuntu VM on the private subnet (10.2.0.0/24). In your setup, you can create as many or as few VMs as desired. You can use "nova list" to see which IPs were assigned to each VM. As with the other node, you need to create the user_data.txt file for the Ubuntu image.

Check Basic Operation

You can verify that the VMs can ping the CSR router that was created (router1), by SSHing into the VM and then pinging the local IP of the CSR.

You can also SSH into the CSR and check that the remote CSR (via the public subnet) and the local subnet VMs can be pinged. You can select the internal tenant used for the CSR ("source ~/devstack/openrc neutron L3AdminTenant") and do a "nova list". To SSH in, use:

ssh stack@10.0.32.10 -o KexAlgorithms=diffie-hellman-group14-sha1

Use the IP from the nova list output, and use a password of 'cisco'.

Use 'show running' to identify the VRF used for the public and private networks (it will be the same VRF), and then you can do 'ping vrf nrouter-xxxxxx 172.32.1.21', for example, to check the far-end CSR from devstack-32's CSR (assuming it was assigned the 172.32.1.21 IP for its public interface).
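
A minimal sketch of that check from the CSR console (the VRF name nrouter-xxxxxx is illustrative; use the one shown in your running configuration):

csr# show running-config | include vrf
csr# ping vrf nrouter-xxxxxx 172.32.1.21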

To verify access to the CSR's REST APIs use (assuming that the CSR's management IP is 10.0.32.10):

curl  -X POST https://10.0.32.10:55443/api/v1/auth/token-services -H "Accept:application/json" -H "Content-Length: 0" -u "stack:cisco" -d "" -k -3 -v

This should succeed and return an authorization token for the CSR:

[verbose output elided]
{"kind": "object#auth-token", "expiry-time": "Tue Nov 18 13:37:10 2014", "token-id": "rbfYhwy/OKpGAYUL02DW03g+moWpXkUbehYBOK5YPmI=", "link": "https://10.0.32.10:55443/api

Now that we know there is REST management access to the CSR from the host, and the CSR has connectivity on the public and private interfaces, we can use the Neutron client CLI commands or Horizon to configure a site-to-site IPSec connection between the two clouds. On devstack-32, these commands can be issued:

source openrc admin demo

neutron vpn-ikepolicy-create ikepolicy1
neutron vpn-ipsecpolicy-create ipsecpolicy1
neutron vpn-service-create --name myvpn --description "My vpn service" router1 private-subnet

neutron ipsec-site-connection-create --name vpnconnection1 --vpnservice-id myvpn --ikepolicy-id ikepolicy1 \
        --ipsecpolicy-id ipsecpolicy1 --peer-address 172.32.1.21 --peer-id 172.32.1.21 \
        --peer-cidr 10.2.0.0/24 --psk secret


Here are the steps to setup the connection on devstack-33:

source openrc admin demo

neutron vpn-ikepolicy-create ikepolicy1
neutron vpn-ipsecpolicy-create ipsecpolicy1
neutron vpn-service-create --name myvpn --description "My vpn service" router1 private-subnet

neutron ipsec-site-connection-create --name vpnconnection1 --vpnservice-id myvpn --ikepolicy-id ikepolicy1 \
        --ipsecpolicy-id ipsecpolicy1 --peer-address 172.32.1.11 --peer-id 172.32.1.11 \
        --peer-cidr 10.1.0.0/24 --psk secret


You should be able to ping from a VM in the devstack-32 cloud to a VM in the devstack-33 cloud. The neutron vpn-service-list and ipsec-site-connection-list commands can be used to see the status of the IPSec connection. You can also SSH into the CSR and look at the configuration and logs, and enable debugging.
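
For example (the target VM address is illustrative; use "nova list" in the devstack-33 cloud to find the real one):

# From a VM in the devstack-32 cloud (10.1.0.0/24), ping a VM in devstack-33 (10.2.0.0/24)
ping -c 3 10.2.0.3

# From the devstack-32 host, check negotiation status
neutron vpn-service-list
neutron ipsec-site-connection-list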

Note: It takes time for the VPN connection to be negotiated.

Reference Information

TODO: OS link, VPN APIs, CSR pages,...
