L3 functionality in OpenStack via namespace routers has some limitations. One is the inability to easily leverage hardware-based routing technologies, which provide scale, throughput, and in some cases functionality that is lacking from existing software-based solutions. Another is the lack of the high-availability models prevalent in most datacenter environments, especially for a service as important as the gateway forwarding function. (Note: this is partially addressed with DVR, but some issues remain.) The OpenStack ASR1000 plugin addresses these shortfalls via the standard OpenStack Neutron plugin architecture.
To provide a fully functional L3 service, the plugin supports these features:
1) Support for static L3 forwarding between associated tenant L2 networks (and their associated L3 subnets).
2) Support for overlapping IP address ranges between different tenants (so each tenant can use the same RFC-1918 IPv4 address space).
3) Support for NAT overload (PAT) for connections originating behind the tenant router and targeting a device on (or through) an "external" network.
4) Support for NAT for connections originating from (or through) an "external" network and targeting a specific device (VM) attached to a tenant network.
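To make items 1, 2 and 4 concrete, the following sketch uses the standard OpenStack CLI to give two tenants the same RFC-1918 subnet behind separate routers, plus a floating IP for inbound access. The network, router and server names and the external network "public" are illustrative assumptions, not names the plugin requires:

```shell
# Two tenants reuse the same RFC-1918 range; each tenant router
# provides its own isolated routing context, so the ranges can overlap.
openstack network create tenant-a-net
openstack subnet create tenant-a-subnet --network tenant-a-net \
    --subnet-range 10.0.0.0/24

openstack network create tenant-b-net
openstack subnet create tenant-b-subnet --network tenant-b-net \
    --subnet-range 10.0.0.0/24   # same range, different tenant

# Each router gets a gateway on the shared external network;
# outbound traffic is PAT'ed to the gateway address (item 3).
openstack router create tenant-a-router
openstack router set tenant-a-router --external-gateway public
openstack router add subnet tenant-a-router tenant-a-subnet

# Inbound access to a specific VM via a floating IP (item 4);
# "my-vm" and the allocated address are placeholders.
openstack floating ip create public
openstack server add floating ip my-vm 172.16.10.5
```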
In addition, to support data center resiliency models, a high availability feature is provided for the above functions. For L3 forwarding, this takes the form of multiple-router redundancy using an L3 redundancy protocol, namely HSRP (Hot Standby Router Protocol).
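In IOS terms, the HSRP arrangement on each tenant-facing interface of the ASR1K pair resembles the sketch below. The interface, VLAN, group number and addresses are illustrative assumptions, not the exact configuration the plugin generates:

```
! Active router (its standby peer carries the same "standby 1 ip"
! with a different physical address and a lower priority)
interface GigabitEthernet0/0/0.1001
 description illustrative tenant router interface
 encapsulation dot1Q 1001
 ip address 10.0.0.3 255.255.255.0
 standby 1 ip 10.0.0.1
 standby 1 priority 100
 standby 1 preempt
```

Tenant VMs use the HSRP virtual address (10.0.0.1 here) as their default gateway, so forwarding survives the failure of either physical router.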
The ASR1K routing service plugin, in conjunction with the hosting device manager plugin, provides functionality to map L3 services in a Neutron network to an ASR1000. The hosting device manager seamlessly manages the mapping of L3 services to a variety of Cisco back ends, both virtual and physical. The plugin works in conjunction with the Cisco Config Agent, which translates the Neutron configuration to the appropriate representation in the back end; in this case, the Neutron configuration is realized as a Cisco IOS configuration on the ASR. As an example, floating IPs are supported as static and dynamic NAT configuration, with translation occurring in hardware.
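The floating-IP-to-NAT mapping described above corresponds roughly to IOS NAT statements such as the following. This is a hedged sketch: the addresses, ACL, pool and VRF names are illustrative, not the identifiers the plugin actually emits:

```
! Floating IP 172.16.10.5 mapped to VM 10.0.0.12 (static NAT)
ip nat inside source static 10.0.0.12 172.16.10.5 vrf nrouter-1234 match-in-vrf

! Outbound tenant traffic PAT'ed to the gateway pool (NAT overload)
ip nat inside source list neutron-acl-1001 pool nat-pool-1 vrf nrouter-1234 overload
```

Because each tenant router maps to its own VRF, the same inside addresses can appear in the NAT rules of different tenants without conflict.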
In addition to satisfying scale requirements, most customers also require deployment in a high availability environment. The plugin handles the creation of backup routers, which are mapped to ASR1K devices deployed as HSRP-based HA pairs.
The deployment also supports multiple controllers and Cisco Config Agents for HA with Neutron. An ASR1000 pair can also be shared across multiple OpenStack deployments as multiple regions, for better utilization of the hardware. The Cisco Config Agent additionally monitors the ASR1000 back ends and notifies the plugin if a device becomes unreachable, whether for a transient interval or longer term, so that appropriate action can be taken.
The OpenStack ASR1000 Plugin was first supported in the Liberty release. There are versions which support Mitaka and Newton, and Ocata support is planned. See details below.
For DevStack-based deployments, the usual requirements apply: controller and compute nodes running RHEL or Ubuntu.
ASR1K Routers deployed as HA pairs.
ASR1K plugin and Cisco Config Agent.
ASR1K configured for netconf:
asr-1(config)#netconf max-sessions 16
asr-1(config)#netconf max-message 1000000
asr-1(config)#netconf ssh
Plugin and associated platform defects are tracked in OpenStack Launchpad at https://bugs.launchpad.net/networking-cisco/+bugs?field.tag=asr1k. Please review it for known issues.