IWAN Solutions:IWAN2.1




PfR Enterprise WAN Application Control






Version Used - PfRv3

Last Update: April 24, 2015

Performance Routing v3 (PfRv3) is the Cisco next-generation Intelligent Path Control. IOS 15.4(3)M and IOS-XE 3.13 are the minimum releases required to support PfRv3. This guide is based on IOS 15.5(2)T, which adds support for Transit Sites. This is a major release that enables PfRv3 to support advanced topologies, multiple next hops per DMVPN cloud, and multiple Transit Sites advertising the same prefix.



IWAN Overview

The Cisco Intelligent WAN (IWAN) solution provides design and implementation guidance for organizations looking to deploy a transport independent WAN with intelligent path control, application optimization, and secure connectivity to the Internet and branch locations while reducing the operating cost of the WAN. IWAN takes full advantage of premium WAN and cost-effective Internet services to increase bandwidth capacity without compromising performance, reliability, or security of collaboration or cloud-based applications.

For more information, refer to IWAN Overview and read about IWAN at www.cisco.com/go/iwan or contact your local Cisco account representative.


Enterprise Deployment Used in this Lab

Overview

An enterprise is beginning its WAN redesign and wants to augment bandwidth with an Internet-based link. Its primary path is MPLS; it wants to use this preferred path for all critical applications and fall back to the secondary path when there is a performance issue.

IWAN uses a prescriptive Hybrid Transport Independent design based on DMVPN, deployed across both the MPLS and Internet transports. This greatly simplifies routing by using a single routing domain that encompasses both transports. The DMVPN routers use tunnel interfaces that support IP unicast as well as IP multicast and broadcast traffic, including the use of dynamic routing protocols. After the initial spoke-to-hub tunnel is active, dynamic spoke-to-spoke tunnels can be created when site-to-site IP traffic flows require them.

The Transport Independent Design is based on one DMVPN cloud per provider. In this guide we use two providers, one considered primary (MPLS) and one secondary (Internet). Branch sites are connected to both DMVPN clouds, and both tunnels are up.


Transport and Overlay Backbones

The transport topology is based on two service providers: MPLS and Internet. A DMVPN overlay is built on top of each provider WAN, and each hub Border Router supports a single provider to simplify the routing configuration. MPLS is considered the primary transport because its SLAs are known; it is therefore the preferred path for voice/video and critical applications:


Pfrv3-topology-physical.png


The overlay topology is based on two DMVPN clouds:


Pfrv3-topology-overlay.png


This overlay design with two DMVPN clouds can accommodate any kind of transports. The primary path can connect to an MPLS-VPN or even to the Public Internet. The configuration of PfR (and QoS) will remain the same even if the transport design changes.


Datacenter Design

Addressing Plan:

  • 172.16.0.0/16 - MPLS Transport
  • 100.64.0.0/16 - INET Transport
  • 192.168.100.0/24 - DMVPN Overlay for MPLS
  • 192.168.200.0/24 - DMVPN Overlay for INET
  • Site1: 10.1.0.0/16
  • Site2: 10.2.0.0/16
  • Site3: 10.3.3.0/24
  • Site4: 10.4.4.0/24
  • Site5: 10.5.5.0/24

Site1: Datacenter1 (10.1.0.0/16):

  • A dedicated Master Controller (MC) R10 and two Border Routers (BRs) R11 (MPLS) and R12 (INET)

Site2: Datacenter2 (10.2.0.0/16):

  • A dedicated Master Controller (MC) R20 and two Border Routers (BRs) R21 (MPLS) and R22 (INET)

Branch Sites:

  • Site3: Single CPE branch. R31 (MC/BR)
  • Site4: Single CPE branch. R41 (MC/BR)
  • Site5: Dual CPE branch. R51 (MC/BR) and R52 (BR)


Traffic Generation

VOICE: Between Site3 and Site1 and between Site3 and Site4 (spoke to spoke)

    • UDP dest-port 20000
    • UDP src-port 30000
    • DSCP = EF (46, 0x2E)
    • TOS = 0xB8, 184


CRITICAL: Critical application running on TCP port 7000 with DSCP AF21, between Site3 and Site1.

  • TCP port 7000
  • DSCP = AF21 (18, 0x12)
  • TOS = 0x48, 72


BEST EFFORT: Best-effort application between Site3 and Site1.

    • TCP port 25 and 80
    • DSCP 0



Transport Independent Design (Dual DMVPN)

Design Summary

The design provides active-active WAN paths that take full advantage of DMVPN for a consistent IPsec overlay. The MPLS and Internet connections can be terminated on a single router, or on two separate routers for additional resiliency. The same design can be used over MPLS, Internet, or 3G/4G transports, making the design transport-independent.

Each DMVPN cloud has two DMVPN hubs:

  • R11 and R21 for DMVPN over MPLS
  • R12 and R22 for DMVPN over Internet

For the current IWAN release, only one transport is supported per DMVPN hub.

DMVPN requires the use of Internet Key Exchange version 2 (IKEv2) keepalive intervals for Dead Peer Detection (DPD), which is essential for fast reconvergence and for spoke registration to function properly if a DMVPN hub is reloaded. DPD enables a spoke to detect that an encryption peer has failed and that the IKEv2 session with that peer is stale, which then allows a new one to be created. Without DPD, the IPsec SA must time out (the default is 60 minutes) before the router renegotiates a new SA and initiates a new IKEv2 session, so the maximum wait time is approximately 60 minutes.
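
On the spokes, this translates into the IKEv2 dpd command. A minimal sketch, assuming on-demand DPD in an IKEv2 profile (the complete spoke profiles appear later in this guide):

!
crypto ikev2 profile DMVPN-IKE-PROFILE-MPLS
 ! On-demand DPD: probe the peer after 40 seconds of idle, retry every 5 seconds
 dpd 40 5 on-demand
!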



DMVPN Phase Summary

DMVPN has multiple phases that are summarized below:

Pfr-dmvpn-phases.png


DMVPN Phase 2 has no summarization on the hub:

  • Each spoke has the next-hop (spoke address) for each spoke destination prefix.
  • PfR has all information to enforce the path with dynamic PBR and the correct next-hop information


DMVPN Phase 3 allows route summarization:

  • When the parent route lookup is performed, only the route to the hub is available
  • NHRP dynamically installs the shortcut tunnel and hence populates the RIB/CEF.
  • PfR still has the hub next-hop information and is currently unaware of the next-hop change.


PfRv3 supports all DMVPN Phases.


Front Door VRF

Virtual Routing and Forwarding (VRF) is a technology that allows multiple instances of a routing table to coexist on the same router at the same time. Because the routing instances are independent, the same or overlapping IP addresses can be used without conflict. The simplest form of VRF implementation is VRF-Lite. In this implementation, each router within the network participates in the virtual routing environment on a peer-by-peer basis, and VRF-Lite configurations are only locally significant. The global VRF corresponds to the traditional routing table, and additional VRFs are given names and route distinguishers (RDs). Certain features on the router are VRF-aware, including static routing and routing protocols, interface forwarding, and IPsec tunneling.

The IP routing policy used in this design for the WAN remote sites does not allow direct Internet access for web browsing or other uses; any remote-site hosts that access the Internet must do so via the Internet edge at the primary site. The end hosts require a default route for all Internet destinations; however, this route must force traffic across the primary or secondary WAN transport DMVPN tunnels. This requirement conflicts with the more general VPN spoke router requirement for an Internet-facing default route to bring up the VPN tunnel.

The multiple-default-route conflict is solved through the use of a front-door VRF (FVRF) on the router, used in conjunction with DMVPN to permit multiple default routes on both the DMVPN hub routers and the DMVPN spoke routers. The combination is called front-door because the VRF faces the Internet, while the router's internal interfaces and the mGRE tunnel all remain in the global VRF.

Pfr-iwan-fvrf.png


Note:

  • PfRv3 is VRF-aware, but in this scenario PfRv3 does not act on traffic in the front-door VRF, which is used only to build the DMVPN tunnels.
  • Tunnel IP addresses are still in the global routing table.


The DMVPN hub requires a connection to the Internet, and it is usually connected through a firewall using a DMZ interface specifically created and configured for a VPN termination router. This is not represented here.

The Front Door VRF implementation requires the following steps:

  • Creating the VRF
  • Assigning the external interface to the FVRF
  • Defining a default route in the FVRF to allow the creation of the DMVPN tunnel


Front Door VRF Configuration on R11:

!
vrf definition MPLS01
 !
 address-family ipv4
 exit-address-family
!
interface GigabitEthernet0/1
 description MPLS01-TRANSPORT
 bandwidth 1000000
 vrf forwarding MPLS01
 ip address 172.16.11.1 255.255.255.252
 delay 1
 load-interval 30
 no shutdown
!
ip route vrf MPLS01 0.0.0.0 0.0.0.0 172.16.11.2
!


Front Door VRF Configuration on R12:

!
vrf definition INET01
 !
 address-family ipv4
 exit-address-family
!
interface GigabitEthernet0/1
 description INET01-TRANSPORT
 bandwidth 1000000
 vrf forwarding INET01
 ip address 100.64.12.1 255.255.255.252
 delay 1
 load-interval 30
!
ip route vrf INET01 0.0.0.0 0.0.0.0 100.64.12.2
!


The DMVPN spoke routers at the WAN remote sites connect to the Internet directly through a router interface without a separate firewall. This connection is secured in two ways. Because the Internet interface is in a separate VRF, no traffic can access the global VRF except traffic sourced through the DMVPN tunnel. This design provides implicit security. Additionally, an IP access list permits only the traffic required for an encrypted tunnel, as well as DHCP and various ICMP protocols for troubleshooting. The IP access list must permit the protocols specified in the following configuration sample. The access list is applied inbound on the WAN interface, so filtering is done on traffic destined to the router.


interface Ethernet0/1
 ip access-group ACL-INET-PUBLIC in
!
interface Ethernet0/2
 ip access-group ACL-INET-PUBLIC in
!
ip access-list extended ACL-INET-PUBLIC
 permit udp any any eq non500-isakmp         ! IPsec via NAT-T 
 permit udp any any eq isakmp                ! ISAKMP (UDP 500)
 permit esp any any                          ! IPSEC
 permit udp any any eq bootpc                ! DHCP


The additional protocols listed in the following table may assist in troubleshooting, but are not explicitly required to allow DMVPN to function properly.

ip access-list extended ACL-INET-PUBLIC
 permit icmp any any echo                     ! Allow remote pings 
 permit icmp any any echo-reply               ! Allow ping replies (from our requests) 
 permit icmp any any ttl-exceeded             ! Allow traceroute replies (from our requests) 
 permit icmp any any port-unreachable         ! Allow traceroute replies (from our requests) 
 permit udp any any gt 1023 ttl eq 1          ! Allow remote traceroute 


Front Door VRF Configuration on R31 which is dual homed:

!
vrf definition INET01
 !
 address-family ipv4
 exit-address-family
!
vrf definition MPLS01
 !
 address-family ipv4
 exit-address-family
 !
!
interface GigabitEthernet0/1
 description MPLS01-TRANSPORT
 bandwidth 1000000
 vrf forwarding MPLS01
 ip address 172.16.31.1 255.255.255.252
 load-interval 30
 delay 1
 duplex auto
 speed auto
 media-type rj45
 no shutdown
!
interface GigabitEthernet0/2
 description INET01-TRANSPORT
 bandwidth 1000000
 vrf forwarding INET01
 ip address dhcp
 load-interval 30
 delay 1
 duplex auto
 speed auto
 media-type rj45
 no shutdown
!
ip route vrf MPLS01 0.0.0.0 0.0.0.0 172.16.31.2

!


Note that there is no static route for the INET transport: because DHCP is used, R31 gets its IP address and default gateway from DHCP.
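
If the INET interface were statically addressed instead, a static default route in the FVRF would be needed, mirroring the MPLS side. A minimal sketch (the next-hop address below is hypothetical):

!
! Only needed when the INET interface does not learn its gateway from DHCP
ip route vrf INET01 0.0.0.0 0.0.0.0 100.64.31.2
!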

The next step is to build the DMVPN tunnels. Tunnel source and destination IP addresses will be in the Front-Door VRF.


DMVPN Configuration - Hub IKEv2 and IPSec

The primary goal of encryption is to provide data confidentiality, integrity, and authenticity by encrypting IP packets as the data travels across a network. This is not mandatory in the IWAN design, especially for DMVPN over MPLS transport. The encrypted payloads are then encapsulated with a new header (or multiple headers) and transmitted across the network.


The MPLS and Internet connections are terminated on two separate routers for additional resiliency.

  • R11 and R21 are DMVPN hubs for the DMVPN cloud over MPLS.
  • R12 and R22 are DMVPN hubs for the DMVPN cloud over INET.
  • A branch with a single CPE has two tunnels, one per DMVPN cloud.
  • A branch with dual CPEs also has two tunnels, one per CPE.


In this solution guide, we use IKEv2 with smart defaults and simplified NHRP commands to further simplify the configuration. Pre-shared keys are used here for simplicity's sake and as a first step to test IWAN. The PKI infrastructure and design are fully described and explained in the IWAN Cisco Validated Design (CVD).

1. Configure the crypto keyring

  • The crypto keyring defines a pre-shared key (or password) valid for IP sources reachable within a particular VRF.
  • This key is a wildcard pre-shared key if it applies to any IP source.
  • A wildcard key is configured using the 0.0.0.0 0.0.0.0 network/mask combination.

2. IKE Proposal

  • The IKE proposal is based on smart defaults and therefore not defined here.

3. Configure the IKE Profile

  • The IKE profile creates an association between an identity address, a VRF, and a crypto keyring.
  • A wildcard address within a VRF is referenced with 0.0.0.0.

4. Define the IPSec transform set

  • A transform set is an acceptable combination of security protocols, algorithms, and other settings to apply to IPsec-protected traffic.
  • Peers agree to use a particular transform set when protecting a particular data flow.

5. Create the IPSec profile

  • The IPsec profile creates an association between an IKE profile and an IPsec transform-set.


Let's look at the R11 DMVPN configuration. A similar configuration applies to R21.

!------------------------------------------------------------
! KEYRING 
! Use pre-share key here
!------------------------------------------------------------
!
crypto ikev2 keyring DMVPN-KEYRING-MPLS
 peer ANY
  address 0.0.0.0 0.0.0.0
  pre-shared-key CISCO456
 !
!
!
!------------------------------------------------------------
! IKEv2 PROPOSAL
!
! Removed IKEv2 proposal, will use smart default
!------------------------------------------------------------
!
!
!------------------------------------------------------------
! IKEv2 PROFILE
!------------------------------------------------------------
!
crypto ikev2 profile DMVPN-IKE-PROFILE-MPLS
 match fvrf MPLS01
 match identity remote address 0.0.0.0 
 authentication remote pre-share
 authentication local pre-share
 keyring local DMVPN-KEYRING-MPLS
!
!
!------------------------------------------------------------
! IPSEC
!------------------------------------------------------------
!
! It is recommended that you use the maximum window size to eliminate future anti-replay problems. 
! On the Cisco ASR 1000 router platform, the maximum replay window size is 512
! If you do not increase the window size, the router may drop packets 
! and you may see the following error message on the router CLI:
! %CRYPTO-4-PKT_REPLAY_ERR:  decrypt: replay check failed
!
crypto ipsec security-association replay window-size 512
!
!
crypto ipsec transform-set AES256/SHA/TRANSPORT esp-aes 256 esp-sha-hmac 
 mode transport
!
crypto ipsec profile DMVPN-IPSEC-PROFILE-MPLS
 set transform-set AES256/SHA/TRANSPORT 
 set ikev2-profile DMVPN-IKE-PROFILE-MPLS
!


Let's look at the R12 DMVPN configuration. A similar configuration applies to R22.

!------------------------------------------------------------
! KEYRING
! Use pre-share key here
!------------------------------------------------------------
!
crypto ikev2 keyring DMVPN-KEYRING-INET
 peer ANY
  address 0.0.0.0 0.0.0.0
  pre-shared-key CISCO123
 !
!
!------------------------------------------------------------
! IKEv2 PROPOSAL
!
! Removed IKEv2 proposal, will use smart default
!------------------------------------------------------------
!
!
!------------------------------------------------------------
! IKEv2 PROFILE
!------------------------------------------------------------
!
crypto ikev2 profile DMVPN-IKE-PROFILE-INET
 match fvrf INET01
 match identity remote address 0.0.0.0 
 authentication remote pre-share
 authentication local pre-share
 keyring local DMVPN-KEYRING-INET
!
!
!------------------------------------------------------------
! IPSEC
!------------------------------------------------------------
!
! To avoid %CRYPTO-4-PKT_REPLAY_ERR
crypto ipsec security-association replay window-size 512
!
!
crypto ipsec transform-set AES256/SHA/TRANSPORT esp-aes 256 esp-sha-hmac 
 mode transport
!
crypto ipsec profile DMVPN-IPSEC-PROFILE-INET
 set transform-set AES256/SHA/TRANSPORT 
 set ikev2-profile DMVPN-IKE-PROFILE-INET
!
!



DMVPN Configuration - Hub Interfaces

The additional headers introduce a certain amount of overhead to the overall packet length. The following table highlights the packet overhead associated with encryption based on the additional headers required for various combinations of IPsec and GRE.

  • GRE only 24 bytes
  • IPsec (Transport Mode): 36 bytes
  • IPsec (Tunnel Mode): 52 bytes
  • IPsec (Transport Mode) + GRE: 60 bytes
  • IPsec (Tunnel Mode) + GRE: 76 bytes


There is a maximum transmission unit (MTU) parameter for every link in an IP network and typically the MTU is 1500 bytes. IP packets larger than 1500 bytes must be fragmented when transmitted across these links. Fragmentation is not desirable and can impact network performance. To avoid fragmentation, the original packet size plus overhead must be 1500 bytes or less, which means that the sender must reduce the original packet size. To account for other potential overhead, Cisco recommends that you configure tunnel interfaces with a 1400 byte MTU.

There are dynamic methods for network clients to discover the path MTU, which allow the clients to reduce the size of packets they transmit. However, in many cases, these dynamic methods are unsuccessful, typically because security devices filter the necessary discovery traffic. This failure to discover the path MTU drives the need for a method that can reliably inform network clients of the appropriate packet size. The solution is to implement the ip tcp adjust-mss [size] command on the WAN routers, which influences the TCP maximum segment size (MSS) value reported by end hosts.

The MSS defines the maximum amount of data that a host is willing to accept in a single TCP/IP datagram. The MSS value is sent as a TCP header option only in TCP SYN segments. Each side of a TCP connection reports its MSS value to the other side. The sending host is required to limit the size of data in a single TCP segment to a value less than or equal to the MSS reported by the receiving host.

The IP and TCP headers combine for 40 bytes of overhead, so the typical MSS value reported by network clients will be 1460. This design includes encrypted tunnels with a 1400-byte MTU, so the MSS used by endpoints should be configured to 1360 to minimize any impact of fragmentation. In this solution, you implement the ip tcp adjust-mss 1360 command on all WAN-facing router interfaces.
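
To recap, every mGRE tunnel interface in this design carries these two commands. A minimal excerpt (the full tunnel configurations are shown later):

!
interface Tunnel100
 ! 1400 = 1500-byte link MTU minus tunnel/IPsec overhead, with margin
 ip mtu 1400
 ! 1360 = 1400 minus the 20-byte IP and 20-byte TCP headers
 ip tcp adjust-mss 1360
!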


DMVPN uses multipoint GRE (mGRE) tunnels. This type of tunnel requires a source interface only.

  • Use the interface that connects to the transport as the tunnel source.
  • Set the tunnel vrf command to the VRF defined previously for FVRF.
  • Configure basic interface settings
    • The bandwidth setting should be set to match the bandwidth of the respective primary or secondary carrier.
    • The IP MTU should be configured to 1400
    • The ip tcp adjust-mss should be configured to 1360.
    • There is a 40 byte difference which corresponds to the combined IP and TCP header length.
  • Configure NHRP
    • Enable dynamic multicast mapping for the spokes
    • Enable NHRP redirect for direct spoke-to-spoke tunnels
    • Set the NHRP holdtime to 600 seconds
  • Apply the IPSec profile to the tunnel


This is the configuration on R11, hub for DMVPN over MPLS. A similar configuration is applied to R21.

!
interface Tunnel100
 description DMVPN-MPLS
 bandwidth 1000
 ip address 192.168.100.11 255.255.255.0
 no ip redirects
 ip mtu 1400
 ip flow monitor MONITOR-STATS input
 ip flow monitor MONITOR-STATS output
 ip nhrp authentication CISCO
 ip nhrp map multicast dynamic
 ip nhrp network-id 100
 ip nhrp holdtime 600
 ip nhrp shortcut
 ip nhrp redirect
 ip tcp adjust-mss 1360
 load-interval 30
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint
 tunnel key 100
 tunnel vrf MPLS01
 tunnel protection ipsec profile DMVPN-IPSEC-PROFILE-MPLS
 domain IWAN path MPLS path-id 1
!


This is the configuration on R12, hub for DMVPN over INET. A similar configuration is applied to R22.

interface Tunnel200
 description DMVPN-Internet
 bandwidth 1000
 ip address 192.168.200.12 255.255.255.0
 no ip redirects
 ip mtu 1400
 ip flow monitor MONITOR-STATS input
 ip flow monitor MONITOR-STATS output
 ip nhrp authentication CISCO2
 ip nhrp map multicast dynamic
 ip nhrp network-id 200
 ip nhrp holdtime 600
 ip nhrp shortcut
 ip nhrp redirect
 ip tcp adjust-mss 1360
 load-interval 30
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint
 tunnel key 200
 tunnel vrf INET01
 tunnel protection ipsec profile DMVPN-IPSEC-PROFILE-INET
 domain IWAN path INET path-id 2
!


DMVPN Configuration - Spokes IKEv2 and IPSec

IKEv2 and IPsec configuration is basically the same as explained in the hub section, with the following additions:

  • A best practice is to enable Dead Peer Detection (DPD) on the spokes. DPD detects unreachable IKE peers, and each peer's DPD state is independent of the others. DPD is not recommended on hub routers because it increases CPU overhead with a large number of peers.
    • Keepalives are sent at 40-second intervals,
    • with a 5-second retry interval, which is considered a reasonable setting to detect a failed hub.
  • We also enable if-state nhrp so that the tunnel line protocol goes down when the NHS is unreachable.


R31 Spoke Configuration with 2 DMVPN tunnels is as follows.

!------------------------------------------------------------
! KEYRING
!------------------------------------------------------------
!
crypto ikev2 keyring DMVPN-KEYRING-INET
 peer ANY
  address 0.0.0.0 0.0.0.0
  pre-shared-key CISCO123
 !
!
crypto ikev2 keyring DMVPN-KEYRING-MPLS
 peer ANY
  address 0.0.0.0 0.0.0.0
  pre-shared-key CISCO456
 !
!
!
!------------------------------------------------------------
! IKEv2 PROPOSAL
!
! Removed IKEv2 proposal, will use smart default
!------------------------------------------------------------
!
!
!------------------------------------------------------------
! IKEv2 PROFILE
!------------------------------------------------------------
!
crypto ikev2 profile DMVPN-IKE-PROFILE-INET
 match fvrf INET01
 match identity remote address 0.0.0.0
 authentication local pre-share
 authentication remote pre-share
 keyring local DMVPN-KEYRING-INET
 dpd 40 5 on-demand
!
crypto ikev2 profile DMVPN-IKE-PROFILE-MPLS
 match fvrf MPLS01
 match identity remote address 0.0.0.0
 authentication local pre-share
 authentication remote pre-share
 keyring local DMVPN-KEYRING-MPLS
 dpd 40 5 on-demand
!
!
!
!------------------------------------------------------------
! IPSEC
!------------------------------------------------------------
!
! To avoid %CRYPTO-4-PKT_REPLAY_ERR
crypto ipsec security-association replay window-size 512
!
!
crypto ipsec transform-set AES256/SHA/TRANSPORT esp-aes 256 esp-sha-hmac
 mode transport
!
crypto ipsec profile DMVPN-IPSEC-PROFILE-INET
 set transform-set AES256/SHA/TRANSPORT
 set ikev2-profile DMVPN-IKE-PROFILE-INET
!
crypto ipsec profile DMVPN-IPSEC-PROFILE-MPLS
 set transform-set AES256/SHA/TRANSPORT
 set ikev2-profile DMVPN-IKE-PROFILE-MPLS
!


DMVPN Configuration - Spokes Interfaces

DMVPN uses multipoint GRE (mGRE) tunnels. This type of tunnel requires a source interface only.

  • Use the interface connected to the respective transport (MPLS or Internet) as the tunnel source.
  • Set the tunnel vrf command to the VRF defined previously for FVRF.
  • Configure basic interface settings
    • The IP MTU should be configured to 1400
    • The ip tcp adjust-mss should be configured to 1360.
    • There is a 40 byte difference which corresponds to the combined IP and TCP header length.
  • Configure NHRP for DMVPN1 (MPLS) and DMVPN2 (INET)
    • R11 and R21 are defined as Next Hop Servers (NHS) on Tunnel100
    • R12 and R22 are defined as Next Hop Servers (NHS) on Tunnel200
    • Note the simplified syntax that enables NHS with a single line. Both are active, so both tunnels will be up. Routing will decide which one will be used.
    • Note that PfRv3 currently supports only one next-hop per DMVPN interface.
    • Enable NHRP shortcut for direct spoke-to-spoke tunnels
    • Set NHRP holdtime to 600
  • Apply the IPSec profile to the tunnel


Here is the configuration on R31, single CPE branch:

!
interface Tunnel100
 description DMVPN-MPLS
 bandwidth 400
 ip address 192.168.100.31 255.255.255.0
 no ip redirects
 ip mtu 1400
 ip flow monitor MONITOR-STATS input
 ip flow monitor MONITOR-STATS output
 ip nhrp authentication CISCO
 ip nhrp network-id 100
 ip nhrp holdtime 600
 ip nhrp nhs 192.168.100.11 nbma 172.16.11.1 multicast
 ip nhrp nhs 192.168.100.21 nbma 172.16.21.1 multicast
 ip nhrp shortcut
 ip tcp adjust-mss 1360
 load-interval 30
 if-state nhrp
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint
 tunnel key 100
 tunnel vrf MPLS01
 tunnel protection ipsec profile DMVPN-IPSEC-PROFILE-MPLS
!
interface Tunnel200
 description DMVPN-INET
 bandwidth 400
 ip address 192.168.200.31 255.255.255.0
 no ip redirects
 ip mtu 1400
 ip flow monitor MONITOR-STATS input
 ip flow monitor MONITOR-STATS output
 ip nhrp authentication CISCO2
 ip nhrp network-id 200
 ip nhrp holdtime 600
 ip nhrp nhs 192.168.200.12 nbma 100.64.12.1 multicast
 ip nhrp nhs 192.168.200.22 nbma 100.64.22.1 multicast
 ip nhrp registration no-unique
 ip nhrp shortcut
 ip tcp adjust-mss 1360
 load-interval 30
 no nhrp route-watch
 if-state nhrp
 tunnel source GigabitEthernet0/2
 tunnel mode gre multipoint
 tunnel key 200
 tunnel vrf INET01
 tunnel protection ipsec profile DMVPN-IPSEC-PROFILE-INET
!
!

Notes:

  • NHRP requires all devices within a DMVPN cloud to use the same network ID and authentication key. The NHRP cache holdtime should be configured to 600 seconds.
  • ip nhrp registration no-unique: for designs where DMVPN spoke routers receive their external IP addresses through DHCP. It is possible for these routers to acquire different IP addresses after a reload. When the router attempts to register with the NHRP server, it may appear as a duplicate to an entry already in the cache and be rejected. The registration no-unique option allows you to overwrite existing cache entries. This feature is only required on NHRP clients (DMVPN spoke routers).
  • The if-state nhrp option ties the tunnel line-protocol state to the reachability of the NHRP NHS, and if the NHS is unreachable the tunnel line-protocol state changes to down.
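
The NHS reachability and tunnel state described in these notes can be verified on a spoke with the standard DMVPN show commands (sample output omitted here):

R31-Site3-Spoke#show dmvpn
R31-Site3-Spoke#show ip nhrp nhs detail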


Routing on the Overlay Backbone

Routing Principles

PfRv3 always checks for a parent route before it can create a Channel or control a Traffic Class. The parent route check is done as follows:

  • Check whether there is an NHRP shortcut route
  • If not, check in the order of BGP, EIGRP, static routes, and the RIB
  • If at any point an NHRP shortcut route appears, PfRv3 picks it up and relinquishes the parent route from one of the routing protocols.


BGP Routing on the Overlay Backbone

BGP Routing Overview

BGP can be deployed as the IWAN / DMVPN routing protocol as an alternative to EIGRP. BGP is a popular choice for network operators that require a rich set of features to customize path selection in complex topologies and large-scale deployments. While traditionally positioned at the provider WAN edge, recent enhancements such as BGP dynamic neighbors make it a viable choice for IWAN deployment: static peers no longer need to be defined, allowing for zero-touch deployment.

  • Hub Border Routers are BGP Route Reflectors and use BGP Dynamic Neighbors to simplify the configuration:
    • BGP dynamic neighbor support allows BGP peering to a group of remote neighbors that are defined by a range of IP addresses. Each range can be configured as a subnet IP address.
    • This allows spokes to initiate the BGP peering without having to preconfigure remote peers on the route-reflectors.
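
As a minimal sketch, the listening range on an MPLS hub would look like this (matching the full hub configurations shown later in this guide):

!
router bgp 10
 ! Accept incoming iBGP sessions from any spoke tunnel address in the MPLS overlay
 bgp listen range 192.168.100.0/24 peer-group MPLS-SPOKES
 neighbor MPLS-SPOKES peer-group
 neighbor MPLS-SPOKES remote-as 10
!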

Pfr-bgp-dynamic.jpg


Principles:

  • A single iBGP routing domain is used
  • Define appropriate Hello/Hold timers for IWAN (20/60)
  • Hub:
    • DMVPN hub routers function as BGP route reflectors for the spokes.
    • No BGP peering between the route reflectors.
    • The BGP dynamic-neighbor feature is configured on the route reflectors.
    • Advertise the default and internal summary routes to the spokes.
    • Set the community and local preference for all prefixes.
    • Redistribute BGP into the local IGP.
  • Spokes:
    • Peer to the hub/transit BRs in each DMVPN cloud.
    • Redistribute between OSPF and BGP.
    • Set a route tag to identify routes redistributed from BGP.
    • The preferred path is MPLS due to its higher local preference.


When BGP is used, PfRv3 checks the BGP database and uses the best path as computed by BGP. This path needs to be via an external (WAN) interface. If that is not the case, PfRv3 chooses, in sequence, the path with the highest weight, then the highest local preference, and finally the path with the smallest IP address.


Hub Configuration

R11 and R21 are BGP Route Reflectors over DMVPN-MPLS, R12 and R22 are BGP Route Reflectors over DMVPN-INET. There is no iBGP peering between them to simplify the configuration and policies.

With BGP dynamic neighbors, R11, R12, R21 and R22 simply listen for incoming BGP connections, which avoids manually configuring every remote-site neighbor. In this design there is no mutual redistribution; BGP is only redistributed into OSPF.

Tasks include the following on the hub routers:

  • Enable the BGP process for DMVPN routing
  • Configure BGP route advertisement and community tagging.
  • Configure BGP to OSPF redistribution. The routing policy redistribution design is constructed so that an MPLS outbound DMVPN path is preferred over the Internet DMVPN path, when both are available.


R11 Hub Configuration:


!--------------------------------------------------------------------
! OSPF
!--------------------------------------------------------------------
!
router ospf 1
 redistribute connected subnets
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
 network 10.0.0.0 0.255.255.255 area 0
!
!
!--------------------------------------------------------------------
! BGP
!--------------------------------------------------------------------
!
router bgp 10
 bgp router-id 10.1.0.11
 bgp log-neighbor-changes
 bgp listen range 192.168.100.0/24 peer-group MPLS-SPOKES
 neighbor MPLS-SPOKES peer-group
 neighbor MPLS-SPOKES remote-as 10
 neighbor MPLS-SPOKES timers 20 60
 !
 address-family ipv4
  bgp redistribute-internal
  network 0.0.0.0 route-map BGP-DEFAULT-ROUTE
  aggregate-address 10.1.0.0 255.255.0.0 summary-only
  aggregate-address 10.0.0.0 255.0.0.0 summary-only
  redistribute connected route-map REDIST-CONNECTED-TO-BGP
  neighbor MPLS-SPOKES activate
  neighbor MPLS-SPOKES send-community
  neighbor MPLS-SPOKES route-reflector-client
  neighbor MPLS-SPOKES weight 50000
  neighbor MPLS-SPOKES soft-reconfiguration inbound
  neighbor MPLS-SPOKES route-map BGP-MPLS-SPOKES-OUT out
  distance bgp 201 19 19
 exit-address-family
!
!
!
!
!--------------------------------------------------------------------
! BGP ROUTE MAP
!--------------------------------------------------------------------
!
route-map REDIST-CONNECTED-TO-BGP deny 10
 description Block redistribution of DMVPN Tunnel Interfaces
 match interface Tunnel100
!
route-map REDIST-CONNECTED-TO-BGP permit 20
 description Redistribute all other prefixes
!
route-map BGP-MPLS-SPOKES-OUT permit 10
 description First Priority MPLS via R11
 set local-preference 100000
!
route-map BGP-DEFAULT-ROUTE permit 10
 description Set Next Hop Address to Tunnel IP
 set ip next-hop 192.168.100.11
!
route-map REDIST-BGP-TO-OSPF permit 10
 description Modify Metric to Prefer MPLS over Internet
 set metric 1000
 set metric-type type-1
!
!

Notes:

  • All spokes are iBGP peers
  • R11 listens for incoming connections from the range 192.168.100.0/24
  • R11 advertises the datacenter prefix summaries, the enterprise network summary and the default route
  • An outbound route-map is used to tag BGP announcements to the spokes with a local preference. R11 sets the highest local preference and is the preferred hub.
  • R11 redistributes BGP routes into OSPF with metric 1000 to be the primary next hop from the hub site. It is a best practice to summarize IP routes from the WAN distribution layer towards the core (not implemented here; a sketch follows below).
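
As an illustration only (not configured in this lab), external-route summarization toward the core could be sketched on the hub ASBRs with an OSPF summary-address; the prefix below is hypothetical:

!
router ospf 1
 ! Summarize redistributed (external) branch routes toward the core
 summary-address 10.3.0.0 255.255.0.0
!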


R12 Hub Configuration:

!--------------------------------------------------------------------
! OSPF
!--------------------------------------------------------------------
!
router ospf 1
 redistribute connected subnets
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
 network 10.0.0.0 0.255.255.255 area 0
!
!
!--------------------------------------------------------------------
! BGP
!--------------------------------------------------------------------
!
router bgp 10
 bgp router-id 10.1.0.12
 bgp log-neighbor-changes
 bgp listen range 192.168.200.0/24 peer-group INET-SPOKES
 neighbor INET-SPOKES peer-group
 neighbor INET-SPOKES remote-as 10
 neighbor INET-SPOKES timers 20 60
 !
 address-family ipv4
  bgp redistribute-internal
  network 0.0.0.0 route-map BGP-DEFAULT-ROUTE
  aggregate-address 10.1.0.0 255.255.0.0 summary-only
  aggregate-address 10.0.0.0 255.0.0.0 summary-only
  redistribute connected route-map REDIST-CONNECTED-TO-BGP
  neighbor INET-SPOKES activate
  neighbor INET-SPOKES send-community
  neighbor INET-SPOKES route-reflector-client
  neighbor INET-SPOKES weight 50000
  neighbor INET-SPOKES soft-reconfiguration inbound
  neighbor INET-SPOKES route-map BGP-INET-SPOKES-OUT out
  distance bgp 201 19 19
 exit-address-family
!
!
!
!--------------------------------------------------------------------
! BGP ROUTE MAP
!--------------------------------------------------------------------
!
route-map REDIST-CONNECTED-TO-BGP deny 10
 description Block redistribution of DMVPN Tunnel Interfaces
 match interface Tunnel200
!
route-map REDIST-CONNECTED-TO-BGP permit 20
 description Redistribute all other prefixes
!
route-map BGP-INET-SPOKES-OUT permit 10
 description Third Priority INET via R12
 set local-preference 3000
!
route-map BGP-DEFAULT-ROUTE permit 10
 description Set Next Hop Address to Tunnel IP
 set ip next-hop 192.168.200.12
!
route-map REDIST-BGP-TO-OSPF permit 10
 description Modify Metric to Prefer MPLS over Internet
 set metric 2000
 set metric-type type-1
!

Notes:

  • All spokes are iBGP peers
  • R12 listens for incoming connections from the range 192.168.200.0/24
  • R12 advertises the datacenter prefix summaries, the enterprise network summary and the default route
  • An outbound route-map is used to tag BGP announcements to the spokes with a local preference
  • R12 redistributes BGP routes into OSPF with metric 2000 to be the secondary next hop from the hub site. It is a best practice to summarize IP routes from the WAN distribution layer towards the core (not implemented here).


Site2 configurations for R21 and R22 are very similar.


Branch with Single BR

The following example demonstrates a single-router spoke site with one interface connected to the Internet transport and the other to MPLS, each in a different DMVPN cloud. The spoke router is a BGP route-reflector client peering to a redundant pair of route reflectors in each DMVPN cloud. To enforce a priority order for the next hop, the hub BGP peers set the local preference. Routes redistributed from BGP to OSPF are tagged with 1; this tag is used to filter routes when redistributing from OSPF back into BGP.


Tasks include the following on the spoke routers:

  • Originate their local site routes into BGP, which include connected router interfaces (LAN and loopback) and any site LAN prefixes that are learned by OSPF from a Layer 3 LAN switch or router.
  • BGP route origination is accomplished by redistribution of connected and OSPF routes.
  • Outbound route-maps are used to filter INET DMVPN subnets from outbound BGP advertisements towards the MPLS hubs.
  • Outbound route-maps are used to filter MPLS DMVPN subnets from outbound BGP advertisements towards the INET hubs.


R41 Spoke Configuration:

!------------------------------------------------------------
! ROUTER OSPF
!------------------------------------------------------------
!
router ospf 1
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
 network 10.0.0.0 0.255.255.255 area 0
 default-information originate
!
!
!--------------------------------------------------------------------
! BGP
!--------------------------------------------------------------------
!
router bgp 10
 bgp router-id 10.4.0.41
 bgp log-neighbor-changes
 neighbor MPLS-HUB peer-group
 neighbor MPLS-HUB remote-as 10
 neighbor MPLS-HUB timers 20 60
 neighbor INET-HUB peer-group
 neighbor INET-HUB remote-as 10
 neighbor INET-HUB timers 20 60
 neighbor 192.168.100.11 peer-group MPLS-HUB
 neighbor 192.168.100.21 peer-group MPLS-HUB
 neighbor 192.168.200.12 peer-group INET-HUB
 neighbor 192.168.200.22 peer-group INET-HUB
 !
 address-family ipv4
  bgp redistribute-internal
  redistribute ospf 1 route-map REDIST-OSPF-TO-BGP
  neighbor MPLS-HUB send-community
  neighbor MPLS-HUB next-hop-self all
  neighbor MPLS-HUB weight 50000
  neighbor MPLS-HUB soft-reconfiguration inbound
  neighbor INET-HUB send-community
  neighbor INET-HUB next-hop-self all
  neighbor INET-HUB weight 50000
  neighbor INET-HUB soft-reconfiguration inbound
  neighbor 192.168.100.11 activate
  neighbor 192.168.100.21 activate
  neighbor 192.168.200.12 activate
  neighbor 192.168.200.22 activate
  distance bgp 201 19 19
 exit-address-family
!
!
!
!--------------------------------------------------------------------
! ROUTE MAPS
!--------------------------------------------------------------------
!
route-map REDIST-BGP-TO-OSPF permit 10
 description Set a route tag to identify routes redistributed from BGP
 set tag 1
!
!
route-map REDIST-OSPF-TO-BGP deny 10
 description Block all routes redistributed from BGP
 match tag 1
!
route-map REDIST-OSPF-TO-BGP permit 20
 description Redistribute all other traffic
 match route-type internal
 match route-type external type-1
 match route-type external type-2
!



Branch with Dual BR

The following example demonstrates a dual-router spoke site with one router connected to the Internet transport and the other to MPLS, each in a different DMVPN cloud. BGP peering to a redundant pair of route reflectors in each DMVPN cloud is shown. Each spoke router runs OSPF to a Layer 3 LAN switch, where it learns the spoke site routes and sends redistributed WAN routes.

Spoke routers are required to originate their local site routes into BGP, which include connected router interfaces (LAN and loopback) and any site LAN prefixes that are learned by OSPF from a Layer 3 LAN switch or router. BGP route origination can be accomplished by redistribution of OSPF routes, using route-maps that permit only what is necessary. This gives a more generic configuration and removes site specifics, allowing an easier deployment.

The configuration on R51 is the following:

!------------------------------------------------------------
! ROUTER OSPF
!------------------------------------------------------------
!
router ospf 1
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
 network 10.0.0.0 0.255.255.255 area 0
 default-information originate
!
!
!
!------------------------------------------------------------
! ROUTER BGP
!------------------------------------------------------------
!
router bgp 10
 bgp router-id 10.5.0.51
 bgp log-neighbor-changes
 neighbor MPLS-HUB peer-group
 neighbor MPLS-HUB remote-as 10
 neighbor MPLS-HUB timers 20 60
 neighbor 192.168.100.11 peer-group MPLS-HUB
 neighbor 192.168.100.21 peer-group MPLS-HUB
 !
 address-family ipv4
  bgp redistribute-internal
  redistribute ospf 1 route-map REDIST-OSPF-TO-BGP
  neighbor MPLS-HUB send-community
  neighbor MPLS-HUB next-hop-self all
  neighbor MPLS-HUB weight 50000
  neighbor MPLS-HUB soft-reconfiguration inbound
  neighbor 192.168.100.11 activate
  neighbor 192.168.100.21 activate
  distance bgp 201 19 19
 exit-address-family
!
ip bgp-community new-format
!
!
route-map REDIST-BGP-TO-OSPF permit 10
 description Set a route tag to identify routes redistributed from BGP
 set tag 1
!
!
route-map REDIST-OSPF-TO-BGP deny 10
 description Block all routes redistributed from BGP
 match tag 1
!
route-map REDIST-OSPF-TO-BGP permit 20
 description Redistribute all other traffic
 match route-type internal
 match route-type external type-1
 match route-type external type-2
!


The configuration on R52 is the following:

!------------------------------------------------------------
! ROUTER OSPF
!------------------------------------------------------------
!
router ospf 1
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
 network 10.0.0.0 0.255.255.255 area 0
 default-information originate
!
!
!
!------------------------------------------------------------
! ROUTER BGP
!------------------------------------------------------------
!
router bgp 10
 bgp router-id 10.5.0.52
 bgp log-neighbor-changes
 neighbor INET-HUB peer-group
 neighbor INET-HUB remote-as 10
 neighbor INET-HUB timers 20 60
 neighbor 192.168.200.12 peer-group INET-HUB
 neighbor 192.168.200.22 peer-group INET-HUB
 !
 address-family ipv4
  bgp redistribute-internal
  redistribute ospf 1 route-map REDIST-OSPF-TO-BGP
  neighbor INET-HUB send-community
  neighbor INET-HUB next-hop-self all
  neighbor INET-HUB weight 50000
  neighbor INET-HUB soft-reconfiguration inbound
  neighbor 192.168.200.12 activate
  neighbor 192.168.200.22 activate
  distance bgp 201 19 19
 exit-address-family
!
ip bgp-community new-format
!
!
route-map REDIST-BGP-TO-OSPF permit 10
 description Set a route tag to identify routes redistributed from BGP
 set tag 1
!
!
route-map REDIST-OSPF-TO-BGP deny 10
 description Block all routes redistributed from BGP
 match tag 1
!
route-map REDIST-OSPF-TO-BGP permit 20
 description Redistribute all other traffic
 match route-type internal
 match route-type external type-1
 match route-type external type-2
!



Check BGP Routing

With PfR not active, the parent routes on R11 and R12 are BGP routes, as seen below.


On R11 (hub MPLS-DMVPN):



R11-DC1-Hub1#sh bgp
BGP table version is 28, local router ID is 10.1.0.11
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
              x best-external, a additional-path, c RIB-compressed,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

     Network          Next Hop            Metric LocPrf Weight Path
 *>  0.0.0.0          192.168.100.11           1         32768 i
 *>  10.0.0.0         0.0.0.0                            32768 i
 *>  10.1.0.0/16      0.0.0.0                            32768 i
 s>  10.1.0.11/32     0.0.0.0                  0         32768 ?
 s>  10.1.12.0/24     0.0.0.0                  0         32768 ?
 s>  10.1.111.0/24    0.0.0.0                  0         32768 ?
 s>i 10.3.0.31/32     192.168.100.31           0    100  50000 ?
 s>i 10.3.3.0/24      192.168.100.31           0    100  50000 ?
 s>i 10.4.0.41/32     192.168.100.41           0    100  50000 ?
 s>i 10.4.4.0/24      192.168.100.41           0    100  50000 ?
 s>i 10.5.0.51/32     192.168.100.51           0    100  50000 ?
 s>i 10.5.0.52/32     192.168.100.51           2    100  50000 ?
 s>i 10.5.5.0/24      192.168.100.51           0    100  50000 ?
 s>i 10.5.12.0/24     192.168.100.51           0    100  50000 ?
R11-DC1-Hub1#

What to check:

  • All branch routes available in the BGP topology table
  • Single/Best route is directly MPLS-DMVPN
  • Not re-advertised (aggregate route is sent to spokes)


On R12 (hub INET-DMVPN):


R12-DC1-Hub2#sh bgp
BGP table version is 26, local router ID is 10.1.0.12
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
              x best-external, a additional-path, c RIB-compressed,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

     Network          Next Hop            Metric LocPrf Weight Path
 *>  0.0.0.0          192.168.200.12           1         32768 i
 *>  10.0.0.0         0.0.0.0                            32768 i
 *>  10.1.0.0/16      0.0.0.0                            32768 i
 s>  10.1.0.12/32     0.0.0.0                  0         32768 ?
 s>  10.1.12.0/24     0.0.0.0                  0         32768 ?
 s>  10.1.112.0/24    0.0.0.0                  0         32768 ?
 s>i 10.3.0.31/32     192.168.200.31           0    100  50000 ?
 s>i 10.3.3.0/24      192.168.200.31           0    100  50000 ?
 s>i 10.4.0.41/32     192.168.200.41           0    100  50000 ?
 s>i 10.4.4.0/24      192.168.200.41           0    100  50000 ?
 s>i 10.5.0.51/32     192.168.200.52           2    100  50000 ?
 s>i 10.5.0.52/32     192.168.200.52           0    100  50000 ?
 s>i 10.5.5.0/24      192.168.200.52           0    100  50000 ?
 s>i 10.5.12.0/24     192.168.200.52           0    100  50000 ?
R12-DC1-Hub2#

What to check:

  • All branch routes available in the BGP topology table
  • Single/Best route is directly INET-DMVPN
  • Not re-advertised (aggregate route is sent to spokes)



On R31 (spoke) - BGP Peering:


R31-Site3-Spoke#sh bgp sum
BGP router identifier 10.3.0.31, local AS number 10
BGP table version is 18, main routing table version 18
6 network entries using 864 bytes of memory
26 path entries using 2080 bytes of memory
9/4 BGP path/bestpath attribute entries using 1368 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 4312 total bytes of memory
12 received paths for inbound soft reconfiguration
BGP activity 18/12 prefixes, 66/40 paths, scan interval 60 secs

Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
192.168.100.11  4           10     195     193       18    0    0 00:57:28        3
192.168.100.21  4           10     199     192       18    0    0 00:58:11        3
192.168.200.12  4           10     203     196       18    0    0 00:59:19        3
192.168.200.22  4           10     200     193       18    0    0 00:58:45        3
R31-Site3-Spoke#


On R31 (spoke) - Routes to the hub:


R31-Site3-Spoke#sh bgp
BGP table version is 18, local router ID is 10.3.0.31
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
              x best-external, a additional-path, c RIB-compressed,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

     Network          Next Hop            Metric LocPrf Weight Path
 *>i 0.0.0.0          192.168.100.11           1 100000  50000 i
 * i                  192.168.100.21           1  20000  50000 i
 * i                  192.168.200.22           1    400  50000 i
 * i                  192.168.200.12           1   3000  50000 i
 *>i 10.0.0.0         192.168.100.11           0 100000  50000 i
 * i                  192.168.100.21           0  20000  50000 i
 * i                  192.168.200.22           0    400  50000 i
 * i                  192.168.200.12           0   3000  50000 i
 *>i 10.1.0.0/16      192.168.100.11           0 100000  50000 i
 * i                  192.168.200.12           0   3000  50000 i
 *>i 10.2.0.0/16      192.168.100.21           0  20000  50000 i
 * i                  192.168.200.22           0    400  50000 i
 *>  10.3.0.31/32     0.0.0.0                  0         32768 ?
 *>  10.3.3.0/24      0.0.0.0                  0         32768 ?
R31-Site3-Spoke#

What to check:

  • Site1 subnets 10.1.0.0/16 advertised from R11 and R12 - MPLS-DMVPN preferred (R11).
  • Site2 subnets 10.2.0.0/16 advertised from R21 and R22 - MPLS-DMVPN preferred (R21).
  • Enterprise summary prefix announced from all DMVPN hubs. R11 preferred due to local preference used.
  • Default route for internet prefixes announced from all DMVPN hubs. R11 preferred due to local preference used.


EIGRP Routing on the Overlay Backbone

EIGRP Routing Overview

Enhanced IGRP (EIGRP) is a protocol of choice over DMVPN as the primary routing protocol because it is easy to configure and does not require a large amount of planning. EIGRP has flexible summarization and filtering capabilities and can scale to large networks. As networks grow, the number of IP prefixes or routes in the routing tables grows as well. By performing IP summarization, you can reduce the amount of bandwidth, processor, and memory necessary to carry large route tables, and reduce the convergence time associated with a link failure.

This design uses a single EIGRP autonomous system for the LAN, the WAN and all of the remote sites. Every remote site is dual-connected for resiliency. However, due to the multiple paths that exist within this topology, you must take care to avoid routing loops and to prevent remote sites from becoming transit sites if WAN failures occur. Delay is configured to make sure the WAN interfaces are always preferred and MPLS is the preferred path:

  • Site LAN: delay 25000
  • Site cross link: delay 20000
  • MPLS tunnel: delay 1000
  • INET tunnel: delay 2000
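
Because the interface-level delay feeds the EIGRP composite metric, a minimal sketch would be to set it directly on the tunnel interfaces (values from the list above; the IOS delay unit is tens of microseconds):

!
interface Tunnel100
 ! MPLS tunnel - preferred path
 delay 1000
!
interface Tunnel200
 ! INET tunnel - secondary path
 delay 2000
!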

The following logic is used to control the routing.

  • Single EIGRP process for branch, WAN and POP/hub sites
  • Extend Hello/Hold timers for the WAN
  • Adjust the tunnel interface “delay” to ensure WAN path preference
    • MPLS primary, INET secondary
  • Hubs
    • Branch prefix summary route for spoke-to-spoke tunnels
  • Spokes
    • EIGRP Stub-Site functionality builds on stub functionality: it allows a router to advertise itself as a stub to peers on specified WAN interfaces, while still exchanging routes learned on LAN interfaces


Hub Configuration

Here is the configuration on R11 - primary hub for MPLS DMVPN.

router eigrp IWAN
 !
 address-family ipv4 unicast autonomous-system 1
  !
  af-interface Tunnel100
   summary-address 10.0.0.0 255.0.0.0
   summary-address 10.1.0.0 255.255.0.0
   hello-interval 20
   hold-time 60
   no split-horizon
  exit-af-interface
  !
  topology base
   distribute-list prefix EIGRPSUMMARY in Tunnel100
   summary-metric 10.1.0.0/16 10000000 1 255 0 1500 distance 250
   summary-metric 10.0.0.0/8 10000000 1 255 0 1500 distance 250
  exit-af-topology
  network 10.1.0.0 0.0.255.255
  network 192.168.100.0
  eigrp router-id 10.1.0.11
 exit-address-family
!

Notes:

  • <TBC>


A similar configuration is applied on R12 - primary hub for INET DMVPN.

router eigrp IWAN
 !
 address-family ipv4 unicast autonomous-system 1
  !
  af-interface Tunnel200
   summary-address 10.0.0.0 255.0.0.0
   summary-address 10.1.0.0 255.255.0.0
   hello-interval 20
   hold-time 60
   no split-horizon
  exit-af-interface
  !
  topology base
   distribute-list prefix EIGRPSUMMARY in Tunnel200
   summary-metric 10.1.0.0/16 10000000 1 255 0 1500 distance 250
   summary-metric 10.0.0.0/8 10000000 1 255 0 1500 distance 250
  exit-af-topology
  network 10.1.0.0 0.0.255.255
  network 192.168.200.0
  eigrp router-id 10.1.0.12
 exit-address-family
!

Notes:

  • <TBC>


Branch with Single BR

EIGRP stub functionality conserves router resources and improves network stability. EIGRP stubs receive routes from other routers but only advertise their directly attached routes; they do not advertise routes that they learn from other EIGRP peers. To keep a common configuration on all branch sites, and to be able to templatize that configuration, the stub-site feature is also used for single-CPE branches. EIGRP Stub-Site functionality builds on stub functionality: it allows a router to advertise itself as a stub to peers on specified WAN interfaces, while still exchanging routes learned on LAN interfaces.

router eigrp IWAN
 !
 address-family ipv4 unicast autonomous-system 1
  !
  af-interface Tunnel100
   hello-interval 20
   hold-time 60
   stub-site wan-interface
  exit-af-interface
  !
  af-interface Tunnel200
   hello-interval 20
   hold-time 60
   stub-site wan-interface
  exit-af-interface
  !
  topology base
  exit-af-topology
  network 10.0.0.0
  network 192.168.100.0
  network 192.168.200.0
  eigrp router-id 10.3.0.31
  eigrp stub-site 1:3
 exit-address-family
!

Notes:

  • <TBC>


Branch with Dual BRs

Stub sites are non-transit and only advertise local routes. EIGRP Stub-Site functionality builds on stub functionality: it allows a router to advertise itself as a stub to peers on specified WAN interfaces, while still exchanging routes learned on LAN interfaces. This feature removes the need for complex route leaking with route tags and filtering.

EIGRP Stub-Site provides the following key benefits:

  • EIGRP neighbors on WAN links do not send EIGRP queries to the remote site when a route goes active.
  • Additional routers can be placed deeper in the site and still receive routes from the WAN through the stub router.
  • The stub-site router is prevented from becoming a transit router.

The EIGRP Stub-Site feature works by identifying the WAN interfaces and then setting an EIGRP stub-site identifier. Routes received from a peer on a WAN interface are tagged with the stub-site identifier attribute. When EIGRP advertises network prefixes out of an identified WAN interface, it checks for the stub-site identifier: if one is found, the route is not advertised; if one is not found, the route is advertised.


R51 Configuration sample would be:

router eigrp IWAN
 !
 address-family ipv4 unicast autonomous-system 1
  !
  af-interface Tunnel100
   hello-interval 20
   hold-time 60
   stub-site wan-interface
  exit-af-interface
  !
  topology base
  exit-af-topology
  network 10.0.0.0
  network 192.168.100.0
  eigrp router-id 10.5.0.51
  eigrp stub-site 1:5
 exit-address-family
!


R52 Configuration sample would be:

router eigrp IWAN
 !
 address-family ipv4 unicast autonomous-system 1
  !
  af-interface Tunnel200
   hello-interval 20
   hold-time 60
   stub-site wan-interface
  exit-af-interface
  !
  topology base
  exit-af-topology
  network 10.0.0.0
  network 192.168.200.0
  eigrp router-id 10.5.0.52
  eigrp stub-site 1:5
 exit-address-family
!

Notes:

  • <TBC>



Check EIGRP Routing

With PfR not active, the parent routes on R11 and R12 are EIGRP routes, as seen below.

On R11 (hub MPLS-DMVPN):


R11#sh ip eigrp topology

R11#

What to check:

  • Single/Best route is directly DMVPN-MPLS Tunnel 100


On R12 (hub INET-DMVPN):

R12#sh ip eigrp topology 

R12#

What to check:

  • Single/Best route is directly DMVPN-INET Tunnel 200



On R31 (single CPE branch):

R31#sh ip eigrp topology 

R31#

What to check:

  • Site1 subnets 10.1.0.0/16 advertised from R11 and R12 - MPLS-DMVPN preferred (R11).
  • Site2 subnets 10.2.0.0/16 advertised from R21 and R22 - MPLS-DMVPN preferred (R21).
  • Enterprise summary prefix announced from all DMVPN hubs. R11 preferred due to the lower delay configured on the MPLS tunnel.
  • Default route for Internet prefixes announced from all DMVPN hubs. R11 preferred due to the lower delay configured on the MPLS tunnel.



Checking flows (Optional)

This section is optional: there is no need to configure Flexible NetFlow (FNF) for PfR to run. FNF is used here to check the active flows between datacenters and branch offices, and it shows that FNF can be used as a troubleshooting tool. Keep in mind that PfRv3 itself is based on Unified Monitoring (Performance Monitor).


Create the flow record:

!*******************************
! FLOW RECORD
! What data do I want to meter?
! First define the flow record that you want.
! ‘match’ is used to identify key fields, ie, those fields used to define what a flow is
! ‘collect’ is used to define what fields we want to monitor
!
flow record RECORD-STATS
 match ipv4 dscp
 match ipv4 protocol
 match ipv4 source address
 match ipv4 destination address
 match transport source-port
 match transport destination-port
 match interface input
 match flow direction
 collect routing next-hop address ipv4
 collect counter bytes
!

Create the flow monitor:

!*******************************
! FLOW MONITOR
! Creates a new NetFlow cache
! Attach the flow record
! Exporter is attached to the cache
! Potential sampling configuration
!
flow monitor MONITOR-STATS
 cache timeout inactive 60
 cache timeout active 60
 cache timeout update 1
 record RECORD-STATS
!

Then apply the flow monitor on the tunnel interfaces (Tunnel 100 shown below; repeat for Tunnel 200, whose flows also appear in the cache output that follows):

!
interface Tunnel 100
 description -- TO BORDER ROUTERS --
 ip flow monitor MONITOR-STATS input
 ip flow monitor MONITOR-STATS output
!

You can now check the NetFlow cache on a branch (R31 for example) and verify that flows are running across the CPE over the tunnels (voice traffic is marked DSCP 0x2E, the critical application DSCP 0x12):

R31-Site3-Spoke#sh flow monitor MONITOR-STATS cache format table
  Cache type:                               Normal
  Cache size:                                 4096
  Current entries:                              35
  High Watermark:                               38

  Flows added:                                2567
  Flows aged:                                 2532
    - Active timeout      (    60 secs)       2532
    - Inactive timeout    (    60 secs)          0
    - Event aged                                 0
    - Watermark aged                             0
    - Emergency aged                             0

IPV4 SRC ADDR    IPV4 DST ADDR    TRNS SRC PORT  TRNS DST PORT  INTF INPUT            FLOW DIRN  IP DSCP  IP PROT  ipv4 next hop addr       bytes
===============  ===============  =============  =============  ====================  =========  =======  =======  ==================  ==========
10.1.100.10      10.3.3.100               20000          30000  Tu100                 Input      0x2E          17  10.3.3.100              173460
10.3.3.100       10.1.100.10              30000          20000  Gi0/3                 Output     0x2E          17  192.168.100.11          114120
10.3.3.101       10.1.101.10                  0           2048  Gi0/3                 Output     0x00           1  192.168.200.12            4000
10.3.3.103       10.4.4.103               30000          20000  Gi0/3                 Output     0x2E          17  192.168.100.41          111240
10.4.4.103       10.3.3.103               20000          30000  Tu100                 Input      0x2E          17  10.3.3.103              111180
10.4.4.103       10.3.3.103               30000          20000  Tu100                 Input      0x2E          17  10.3.3.103              111060
10.3.3.103       10.4.4.103               20000          30000  Gi0/3                 Output     0x2E          17  192.168.100.41          111060
10.3.3.103       10.4.4.103               30000           1967  Gi0/3                 Output     0x2E          17  192.168.100.41             160
10.4.4.103       10.3.3.103                1967          30000  Tu100                 Input      0x2E          17  10.3.3.103                 104
10.3.3.102       10.1.102.10               7000           1967  Gi0/3                 Output     0x12          17  192.168.100.11            2160
10.1.102.10      10.3.3.102                1967           7000  Tu100                 Input      0x12          17  0.0.0.0                   1404
10.3.3.102       10.1.102.10               7000           7000  Gi0/3                 Output     0x12           6  192.168.100.11            4428
10.1.102.10      10.3.3.102                7000           7000  Tu100                 Input      0x12           6  10.3.3.102                3348
10.1.101.10      10.3.3.101                   0              0  Tu200                 Input      0x00           1  10.3.3.101                2100
10.4.4.103       10.3.3.103               30000           1967  Tu100                 Input      0x2E          17  10.3.3.103                  80
10.3.3.103       10.4.4.103                1967          30000  Gi0/3                 Output     0x2E          17  192.168.100.41              52
10.3.3.100       10.1.100.10              30000           1967  Gi0/3                 Output     0x2E          17  192.168.100.11              80
10.1.100.10      10.3.3.100                1967          30000  Tu100                 Input      0x2E          17  10.3.3.100                  52

R31-Site3-Spoke#



PfR Configuration

Overview

This topology includes two datacenters that can work in two main modes:

  • Same prefix advertised: both datacenters advertise a common set of prefixes (10.1.0.0/16 and 10.2.0.0/16).
  • Different prefixes advertised: Site1 announces 10.1.0.0/16 and Site2 announces 10.2.0.0/16.

In releases prior to IOS-XE 3.15 and IOS 15.5(2)T, a prefix can only belong to one site. With the Transit Site support introduced in these releases, a prefix can belong to multiple sites. PfRv3 also now supports multiple next hops over DMVPN. In this lab, DC1 is the hub (POP-ID 0 by default) and DC2 is a Transit Site (POP-ID 1 defined).


IWAN Sites

An IWAN domain includes a mandatory Hub site, optional Transit sites, as well as Branch sites. Each site has a unique identifier called a Site-Id that is derived from the loopback address of the local MC.

Branch Sites

  • A branch site is always a DMVPN spoke and is a stub site: transit traffic is not allowed.
  • The local MC peers with the logical domain controller (aka Hub MC) to get its policies and monitoring guidelines.

Transit Sites

  • Located in an enterprise central site or headquarters location
  • Can act as a transit site to access servers in the datacenters or for spoke-to-spoke traffic.
  • Datacenters may or may not be collocated with the transit site
  • A POP Identifier (POP-ID) is configured for each transit site. This POP-ID has to be unique in the domain.
  • The local MC peers with the Hub MC (aka Domain Controller) to get its policies, monitor configuration and timers.

Hub Site

  • The logical domain controller functionality resides on this site's master controller (MC).
  • Only one Hub site exists per IWAN domain because of the uniqueness of the logical domain controller. The master controller for this site is known as the Hub master controller (Hub MC), making this site the Hub site.
  • MCs from all other sites (transit or branch) connect to the Hub MC for PfR configuration and policies.
  • A POP Identifier (POP-ID) of 0 is automatically assigned to the Hub site.
  • Can have all the other properties of a Transit site as defined above.


Device Components and Role

PfR comprises two major Cisco IOS components: a Master Controller (MC) and a Border Router (BR). The MC is the policy decision point at which policies are defined and applied to the traffic classes that traverse the BRs. The MC can be configured to learn and control traffic classes on the network:

  • Border Routers (BRs) are in the data forwarding path. BRs collect data from their Performance Monitor cache and smart probe results, provide a degree of aggregation of this information, and influence the packet forwarding path as directed by the site local MC to manage user traffic.
  • Master Controller (MC) is the policy decision maker. At a large site, such as a data center or campus, the MC is a standalone chassis. For smaller branch locations, the MC is typically collocated (configured) on the same platform as the border router. As a general rule, large locations manage more network prefixes and applications than a branch deployment, consuming more CPU and memory resources for the master controller function. It is therefore good design practice to dedicate a chassis to the master controller at large sites.

Each site in the PfR Domain must include a local MC and at least one BR.

There are five different roles a device can have in an IWAN domain:

  • Hub Master Controller (Hub MC) – The MC at the hub site. It acts as MC for the site, makes optimization decisions for that site, and provides the path control policies to all the other MCs. The Hub MC contains the logical PfR domain controller role.
  • Transit Master Controller (Transit MC) – The MC at a transit site, making optimization decisions for that site. There is no policy configuration on Transit MCs because they receive their policy from the Hub MC.
  • Branch Master Controller (Branch MC) – The MC for a branch site, making optimization decisions for that site. There is no policy configuration on branch MCs because they receive their policy from the Hub MC.
  • Transit Border Router (Transit BR) – The border router at a Hub or Transit site, where the WAN interfaces terminate. PfR is enabled on these interfaces. At the time of this writing, only one WAN interface is supported per BR; this limitation is overcome by using multiple BR devices.
  • Branch Border Router (Branch BR) – The border router at a branch site. PfR is enabled automatically on its external WAN interfaces once they are discovered.


Policy Configuration

Policy is configured on the Hub MC and then distributed to all MC peers (including Transit MCs). Configuring policies for PfRv3 involves two steps:

  • Identify the traffic you want to optimize, based on either application or DSCP.
  • Determine the priority and threshold values for the network parameters delay, loss and/or jitter. You can either use pre-defined sets of priorities and thresholds or customize them as required.


Hub Master Controller (R10)

The hub master controller is the master controller at the hub site (Site1 in our deployment example). This is the device where all policies are configured. It also acts as master controller for that site and makes its optimization decisions. It is important to note that the Hub MC is NOT a centralized Master Controller for all Border Routers on all sites; it is, however, the central point of policy provisioning for the entire Enterprise Domain.

Configuring Master Controller (MC) for hub includes the following:

  • Domain name definition
  • Policy definition globally for the entire domain

You can use the global routing table (default VRF) or define specific VRFs for hub MC.
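
For reference, here is a minimal sketch of the same hub MC definition scoped to a VRF instead of the global table (assuming a VRF named RED is already defined and Loopback0 belongs to it):

domain IWAN
 vrf RED
  master hub
   source-interface Loopback0

The rest of this guide uses the global routing table (vrf default).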

In this deployment example, Site1 is the primary datacenter and R10 is configured as the Hub MC.


The basic configuration includes the following:

domain IWAN
 vrf default
  master hub
   source-interface Loopback0
   enterprise-prefix  prefix-list ENTERPRISE_PREFIX
   site-prefixes prefix-list SITE_PREFIX
   monitor-interval 4 dscp ef
   monitor-interval 4 dscp af41
   monitor-interval 4 dscp cs4
   monitor-interval 4 dscp af31
   load-balance
   advanced
    channel-unreachable-timer 10
   collector 10.151.1.95 port 2055
   class VOICE sequence 10
    match dscp ef policy custom
     priority 2 loss threshold 5
     priority 1 one-way-delay threshold 150
    path-preference MPLS fallback INET
   class VIDEO sequence 20
    match dscp af41 policy custom
     priority 2 loss threshold 5
     priority 1 one-way-delay threshold 150
    match dscp cs4 policy custom
     priority 2 loss threshold 5
     priority 1 one-way-delay threshold 150
    path-preference MPLS fallback INET
   class CRITICAL sequence 30
    match dscp af31 policy custom
     priority 2 loss threshold 10
     priority 1 one-way-delay threshold 600
    path-preference MPLS fallback INET
  !
! 
ip prefix-list ENTERPRISE_PREFIX seq 10 permit 10.0.0.0/8
ip prefix-list SITE_PREFIX seq 10 permit 10.1.0.0/16
!

Notes:

  • Enterprise-prefix: the main use of the enterprise prefix list is to define the enterprise boundary.
    • With enterprise-prefix: if a prefix doesn't match any site-prefix but matches the enterprise-prefix, the prefix belongs to a site that is not participating in PfRv3 but does belong to the enterprise. PfR will not influence traffic towards sites that have NOT enabled PfR.
    • Without enterprise-prefix: all traffic going towards a spoke that is NOT PfR enabled will be learned as an Internet traffic class and therefore subjected to load balancing.
  • Site-prefix: a set of prefixes that belongs to a particular site. This allows configuring site prefixes manually instead of learning them. This configuration should be used at a site that is used for transit. For example, Site A reaches Site B via the Hub site, the Hub site being a transit site. The configuration prevents learning Site A's prefix as a Hub site prefix when it transits the hub.
  • Use of le/ge or "deny" is not supported in the Site-prefix or Enterprise-prefix list.
  • Domain policies are only defined on the Hub Master Controller and then sent over the peering infrastructure to all MC peers. Policies can be defined per application or per DSCP. You cannot mix and match DSCP and application based policies in the same class group. Traffic that doesn't match any of the classification and match statements falls into a default group, which is load-balanced (no performance measurements done).
    • You can either select an existing template as the domain policy type or use custom mode. The available templates for domain policy types are listed below (a sketch using one of these templates follows these notes):
      • best-effort
      • bulk-data
      • low-latency-data
      • real-time-video
      • scavenger
      • voice
      • custom - defines customized user-defined policy values.
  • Policies are configured on a per-DSCP basis only - the assumption is that DSCP marking is done on ingress (LAN interface of the BRs) or even within the site (access switch).
  • Path preference is MPLS for all voice/video and critical applications.
  • Predefined or custom policies can be used.
  • Monitor interval set to 4 seconds for critical applications. The default is 30 seconds. You can lower the monitor interval for a few critical applications in order to achieve a fast failover to the secondary path. This is called quick monitor.
  • load-balance: if this is enabled, all traffic that falls into the default class is load balanced. If this is NOT enabled, the default class is uncontrolled and traffic follows the routing information.
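
As an illustration of the predefined templates listed in the notes above, here is a sketch (not part of this lab's configuration) of the VOICE class using the voice template instead of custom thresholds:

   class VOICE sequence 10
    match dscp ef policy voice
    path-preference MPLS fallback INET

With a template, the priorities and thresholds come from the predefined set, so no explicit priority/threshold lines are entered.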



Hub Border Routers (R11 and R12)

A Hub Border Router is a border router at the hub site. This is where the WAN interfaces terminate, and PfR is enabled on these interfaces. There can be one or more WAN interfaces on the same device, and one or more Hub BRs.

On the Hub Border Routers, PfR must be configured with:

  • The address of the local MC
  • The path name on external interfaces
  • The path identifier on external interfaces. Must be unique per site.

The border routers on the central site register to the central MC with their external interface definition together with their path names. You can use the global routing table (default VRF) or define specific VRFs for hub border routers.

R11 example configuration:

!
domain IWAN
 vrf default
  border
   source-interface Loopback0
   master 10.1.0.10
!
interface tunnel100
 domain IWAN path MPLS path-id 1
!


R12 example configuration:

!
domain IWAN
 vrf default
  border
   source-interface Loopback0
   master 10.1.0.10
!
interface tunnel200
 domain IWAN path INET path-id 2
!

This is a one-time configuration. Once done, all changes will be centralized on the Hub MC.


Transit Master Controller (R20)

The Transit Master Controller is the master controller at Datacenter2 in our deployment example.

Configuring Master Controller (MC) for Transit includes the following:

  • Domain name definition
  • POP ID configuration
  • Peering with the hub MC
  • You can use the global routing table (default VRF) or define specific VRFs


The configuration is as follows:

domain IWAN
 vrf default
  master transit 1
   source-interface Loopback0
   site-prefixes prefix-list SITE_PREFIX
   hub 10.1.0.10
!
!
ip prefix-list SITE_PREFIX seq 10 permit 10.1.0.0/16
ip prefix-list SITE_PREFIX seq 20 permit 10.2.0.0/16
!
!

Notes:

  • The Transit Master Controller is configured with a POP-ID that must be unique per domain. R20 is configured with a POP-ID of 1.
  • Transit MC peers with the Hub MC to get the policies and monitor configurations.
  • Site-prefix: a set of prefixes that belongs to a particular site. This allows configuring site prefixes manually instead of learning them. This configuration must be used at a site that is used for transit. For example, Site A reaches Site B via DC2. The configuration prevents learning Site A's prefix as a DC2 prefix when it transits DC2.


Transit Border Routers (R21 and R22)

A Transit Border Router is a border router at a Transit site. This is where the WAN interfaces terminate, and PfR is enabled on these interfaces. There can be one or more WAN interfaces on the same device, and one or more Transit BRs.

On a Transit Border Router, PfR must be configured with:

  • The address of the local MC
  • The path name on external interfaces
  • The path identifier on external interfaces. Must be unique per site.

The border routers on the transit site register to the site MC with their external interface definition together with their path names. You can use the global routing table (default VRF) or define specific VRFs for transit border routers.

R21 example configuration:

!
domain IWAN
 vrf default
  border
   source-interface Loopback0
   master 10.2.0.20
!
interface tunnel100
 domain IWAN path MPLS path-id 1
!


R22 example configuration:

!
domain IWAN
 vrf default
  border
   source-interface Loopback0
   master 10.2.0.20
!
interface tunnel200
 domain IWAN path INET path-id 2
!

This is a one-time configuration. Once done, all changes will be centralized on the Hub MC.


Branch Routers

The Branch Master Controller is the master controller at the branch site. There is no policy configuration on this device; it receives its policy from the Hub MC. This device acts as master controller for the site and makes its optimization decisions. The configuration includes the IP address of the Hub MC.


Example configuration for single CPE branch (R31, R41):

!
domain IWAN
 vrf default
  border
   source-interface Loopback0
   master local
  master branch
   source-interface Loopback0
   hub 10.1.0.10
!


Example configuration for dual CPE branch (R51 and R52):

R51 Configuration - Includes the MC definition (with the IP address of the Hub MC, ie R10) and the BR definition (using local MC):

!
domain IWAN
 vrf default
  border
   source-interface Loopback0
   master local
  master branch
   source-interface Loopback0
   hub 10.1.0.10
!

R52 Configuration - Includes the BR definition that points to the site MC, ie R51:

!
domain IWAN
 vrf default
  border
   source-interface Loopback0
   master 10.5.0.51
!



Checking Domain Discovery

Hub Site

First check Hub MC status.

R10-DC1-MC#sh domain IWAN master status

  *** Domain MC Status ***

 Master VRF: Global

  Instance Type:    Hub
  Instance id:      0
  Operational status:  Up
  Configured status:  Up
  Loopback IP Address: 10.1.0.10
  Global Config Last Publish status: Peering Success
  Load Balancing:
   Admin Status: Enabled
   Operational Status: Up
   Enterprise top level prefixes configured: 1
   Max Calculated Utilization Variance: 7%
   Last load balance attempt: never
   Last Reason:  Variance less than 20%
   Total unbalanced bandwidth:
         External links: 0 Kbps  Internet links: 0 Kbps
  External Collector: 10.151.1.95 port: 2055
  Route Control: Enabled
  Transit Site Affinity: Enabled
  Load Sharing: Enabled
  Mitigation mode Aggressive: Disabled
  Policy threshold variance: 20
  Minimum Mask Length: 28
  Syslog TCA suppress timer: 180 seconds
  Traffic-Class Ageout Timer: 5 minutes
  Channel Unreachable Threshold Timer: 10 seconds
  Minimum Packet Loss Calculation Threshold: 15 packets
  Minimum Bytes Loss Calculation Threshold: 1 bytes

  Borders:
    IP address: 10.1.0.12
    Version: 2
    Connection status: CONNECTED (Last Updated 01:47:49 ago )
    Interfaces configured:
      Name: Tunnel200 | type: external | Service Provider: INET path-id:2 | Status: UP | Zero-SLA: NO | Path of Last Resort: Disabled
          Number of default Channels: 2


    Tunnel if: Tunnel0

    IP address: 10.1.0.11
    Version: 2
    Connection status: CONNECTED (Last Updated 01:47:39 ago )
    Interfaces configured:
      Name: Tunnel100 | type: external | Service Provider: MPLS path-id:1 | Status: UP | Zero-SLA: NO | Path of Last Resort: Disabled
          Number of default Channels: 2


    Tunnel if: Tunnel0

--------------------------------------------------------------------------------
R10-DC1-MC#

Notes:

  • Check Operational status is up
  • Check Configured status is up
  • Check that all Border Routers are correctly listed
  • Check external interfaces are correctly defined with appropriate path names
  • Check load-balancing is the correct state:
    • Disabled – MC will un-control default class Traffic Classes
    • Enabled – MC will control and load-share default-class Traffic Classes among all external interfaces


Check BR status on the hub. If everything is fine on the MC, this step is optional:


R11-DC1-Hub1#sh domain IWAN border  status

Mon Jan 25 18:06:19.702
--------------------------------------------------------------------
  **** Border Status ****

Instance Status: UP
Present status last updated: 01:48:18 ago
Loopback: Configured Loopback0 UP (10.1.0.11)
Master: 10.1.0.10
Master version: 2
Connection Status with Master: UP
MC connection info: CONNECTION SUCCESSFUL
Connected for: 01:48:17
External Collector: 10.151.1.95  port: 2055
Route-Control: Enabled
Asymmetric Routing: Disabled
Minimum Mask length: 28
Sampling: off
Channel Unreachable Threshold Timer: 10 seconds
Minimum Packet Loss Calculation Threshold: 15 packets
Minimum Byte Loss Calculation Threshold: 1 bytes
Monitor cache usage: 2000 (20%) Auto allocated
Minimum Requirement: Met
External Wan interfaces:
     Name: Tunnel100 Interface Index: 10 SNMP Index: 7 SP: MPLS path-id: 1 Status: UP Zero-SLA: NO Path of Last Resort: Disabled

Auto Tunnel information:

   Name:Tunnel0 if_index: 11
   Virtual Template: Not Configured
   Borders reachable via this tunnel:  10.1.0.12
--------------

Notes:

  • Check external interfaces are correctly defined
  • Check that connection status with MC is up
  • Check that remote BR is discovered
  • Check that the minimum requirement is met


Check the remote sites being discovered. All sites that belong to the domain should appear. This shows that the SAF peering is correctly set up.


R10-DC1-MC#show eigrp service-family ipv4 neighbors
EIGRP-SFv4 VR(#AUTOCFG#) Service-Family Neighbors for AS(59501)
H   Address                 Interface              Hold Uptime   SRTT   RTO  Q  Seq
                                                   (sec)         (ms)       Cnt Num
5   10.3.0.31               Lo0                     508 01:47:15    5   100  0  14
4   10.5.0.51               Lo0                     505 01:47:51    5   100  0  38
3   10.4.0.41               Lo0                     500 01:48:26    5   100  0  20
2   10.2.0.20               Lo0                     543 01:49:11    1   100  0  24
1   10.1.0.11               Lo0                     536 01:49:23    1   100  0  11
0   10.1.0.12               Lo0                     500 01:49:34    1   100  0  8
R10-DC1-MC#

If a site is missing, check that its site-id (loopback address of its Master Controller) is reachable.
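
For example, to verify that the R31 site-id is reachable from the Hub MC (addresses from this lab):

R10-DC1-MC#ping 10.3.0.31 source Loopback0

The ping must succeed: the SAF peering is built between the MC loopback addresses, as shown in the neighbor output above.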

You can list all sites together with the DSCPs discovered:

R10-DC1-MC#sh domain IWAN master discovered-sites

  *** Domain MC DISCOVERED sites ***

  Number of sites:  5
 *Traffic classes [Performance based][Load-balance based]

 Site ID: 10.2.0.20
  Site Discovered:01:48:39 ago
    Off-limits: Disabled
    DSCP :default[0]-Number of traffic classes[0][0]
    DSCP :af21[18]-Number of traffic classes[0][0]
    DSCP :af31[26]-Number of traffic classes[0][0]
    DSCP :cs4[32]-Number of traffic classes[0][0]
    DSCP :af41[34]-Number of traffic classes[0][0]
    DSCP :ef[46]-Number of traffic classes[0][0]

 Site ID: 10.3.0.31
  Site Discovered:01:46:44 ago
    Off-limits: Disabled
    DSCP :default[0]-Number of traffic classes[1][1]
    DSCP :af21[18]-Number of traffic classes[1][1]
    DSCP :af31[26]-Number of traffic classes[0][0]
    DSCP :cs4[32]-Number of traffic classes[0][0]
    DSCP :af41[34]-Number of traffic classes[0][0]
    DSCP :ef[46]-Number of traffic classes[1][0]

 Site ID: 10.4.0.41
  Site Discovered:01:47:55 ago
    Off-limits: Disabled
    DSCP :default[0]-Number of traffic classes[0][0]
    DSCP :af21[18]-Number of traffic classes[0][0]
    DSCP :af31[26]-Number of traffic classes[0][0]
    DSCP :cs4[32]-Number of traffic classes[0][0]
    DSCP :af41[34]-Number of traffic classes[0][0]
    DSCP :ef[46]-Number of traffic classes[0][0]

 Site ID: 10.5.0.51
  Site Discovered:01:47:19 ago
    Off-limits: Disabled
    DSCP :default[0]-Number of traffic classes[0][0]
    DSCP :af21[18]-Number of traffic classes[0][0]
    DSCP :af31[26]-Number of traffic classes[0][0]
    DSCP :cs4[32]-Number of traffic classes[0][0]
    DSCP :af41[34]-Number of traffic classes[0][0]
    DSCP :ef[46]-Number of traffic classes[0][0]

 Site ID: 255.255.255.255
  Site Discovered:01:49:04 ago
    Off-limits: Disabled
    DSCP :default[0]-Number of traffic classes[0][0]
    DSCP :af21[18]-Number of traffic classes[0][0]
    DSCP :af31[26]-Number of traffic classes[0][0]
    DSCP :cs4[32]-Number of traffic classes[0][0]
    DSCP :af41[34]-Number of traffic classes[0][0]
    DSCP :ef[46]-Number of traffic classes[0][0]
--------------------------------------------------------------------------------
R10-DC1-MC#

At this point the Hub BRs know the remote site-ids. They can generate Discovery Probes (Smart Probes) to all remote sites to help them discover their external interfaces and their path names. These Smart Probes use the local Site-Id as the source IP address and the remote Site-Id as the destination IP address, so it's very important to check that all destination Site-Id addresses are correctly discovered.


Check Branch Sites

This check includes the configured and operational state, the correct discovery of external interfaces, the correct path names, and that the domain policies are received.

First check the site MC status:


R31-Site3-Spoke#sh domain IWAN master status

  *** Domain MC Status ***

 Master VRF: Global

  Instance Type:    Branch
  Instance id:      0
  Operational status:  Up
  Configured status:  Up
  Loopback IP Address: 10.3.0.31
  Load Balancing:
   Operational Status: Up
   Max Calculated Utilization Variance: 43%
   Last load balance attempt: 00:00:08 ago
   Last Reason:  No Controlled Traffic Classes Yet for load balancing
   Total unbalanced bandwidth:
         External links: 86 Kbps  Internet links: 0 Kbps
  External Collector: 10.151.1.95 port: 2055
  Route Control: Enabled
  Transit Site Affinity: Enabled
  Load Sharing: Enabled
  Mitigation mode Aggressive: Disabled
  Policy threshold variance: 20
  Minimum Mask Length: 28
  Syslog TCA suppress timer: 180 seconds
  Traffic-Class Ageout Timer: 5 minutes
  Minimum Packet Loss Calculation Threshold: 15 packets
  Minimum Bytes Loss Calculation Threshold: 1 bytes
  Minimum Requirement: Met

  Borders:
    IP address: 10.3.0.31
    Version: 2
    Connection status: CONNECTED (Last Updated 01:52:17 ago )
    Interfaces configured:
      Name: Tunnel200 | type: external | Service Provider: INET | Status: UP | Zero-SLA: NO | Path of Last Resort: Disabled
          Number of default Channels: 4

          Path-id list: 1:2 0:2

      Name: Tunnel100 | type: external | Service Provider: MPLS | Status: UP | Zero-SLA: NO | Path of Last Resort: Disabled
          Number of default Channels: 4

          Path-id list: 0:1 1:1

    Tunnel if: Tunnel0

--------------------------------------------------------------------------------
R31-Site3-Spoke#


External Interfaces: Check that the external interfaces are listed with their correct path names. This means smart probes are correctly received and decoded by the local BRs. If the external interfaces are not correctly discovered, smart probes are not correctly received:

  • Check that the remote MC address is reachable over all external interfaces
  • Check that Smart Probes are correctly received


Path Names: Check that the path names are correct. If path names are not listed, check that smart probes are received from the hub; the branch MC loopback address has to be announced and routable from the hub Border Routers. To check that smart probe packets (SMP) are correctly received on each external interface, you can use the Flow Monitor defined in a previous section or define an access-list to match SMP packets:

access-list 100 permit udp any eq 18000 any eq 19000

Then use the debug command:

# debug ip cef packet tunnel 100 in 100 rate 0 detail


Policies: Check that the policy is received from the hub MC: show domain IWAN master policy

R31-Site3-Spoke#show domain IWAN master policy
--------------------------------------------------------------------------------

  class VOICE sequence 10
    path-preference MPLS fallback INET
    class type: Dscp Based
      match dscp ef policy custom
        priority 2 packet-loss-rate threshold 5.0 percent
        priority 1 one-way-delay threshold 150 msec
        priority 2 byte-loss-rate threshold 5.0 percent
        Number of Traffic classes using this policy: 2

  class VIDEO sequence 20
    path-preference MPLS fallback INET
    class type: Dscp Based
      match dscp af41 policy custom
        priority 2 packet-loss-rate threshold 5.0 percent
        priority 1 one-way-delay threshold 150 msec
        priority 2 byte-loss-rate threshold 5.0 percent
      match dscp cs4 policy custom
        priority 2 packet-loss-rate threshold 5.0 percent
        priority 1 one-way-delay threshold 150 msec
        priority 2 byte-loss-rate threshold 5.0 percent

  class CRITICAL sequence 30
    path-preference MPLS fallback INET
    class type: Dscp Based
      match dscp af31 policy custom
        priority 2 packet-loss-rate threshold 10.0 percent
        priority 1 one-way-delay threshold 600 msec
        priority 2 byte-loss-rate threshold 10.0 percent

  class default
      match dscp all
        Number of Traffic classes using this policy: 2
--------------------------------------------------------------------------------
R31-Site3-Spoke#

Notes:

  • Domain Policies are defined on the Hub MC and sent over the SAF infrastructure to all MC peers.
  • Each time Domain policies are updated on the hub MC, they are refreshed and sent over to all MC peers.
  • The default class is listed, which means that traffic belonging to the default class is controlled and load balanced.


Check local BR status:


R31-Site3-Spoke#sh domain IWAN border  status

Mon Jan 25 18:10:11.814
--------------------------------------------------------------------
  **** Border Status ****

Instance Status: UP
Present status last updated: 01:53:18 ago
Loopback: Configured Loopback0 UP (10.3.0.31)
Master: 10.3.0.31
Master version: 2
Connection Status with Master: UP
MC connection info: CONNECTION SUCCESSFUL
Connected for: 01:53:18
External Collector: 10.151.1.95  port: 2055
Route-Control: Enabled
Asymmetric Routing: Disabled
Minimum Mask length: 28
Sampling: off
Channel Unreachable Threshold Timer: 10 seconds
Minimum Packet Loss Calculation Threshold: 15 packets
Minimum Byte Loss Calculation Threshold: 1 bytes
Monitor cache usage: 2000 (20%) Auto allocated
Minimum Requirement: Met
External Wan interfaces:
     Name: Tunnel200 Interface Index: 11 SNMP Index: 8 SP: INET Status: UP Zero-SLA: NO Path of Last Resort: Disabled Path-id List: 1:2, 0:2
     Name: Tunnel100 Interface Index: 10 SNMP Index: 7 SP: MPLS Status: UP Zero-SLA: NO Path of Last Resort: Disabled Path-id List: 0:1, 1:1

Auto Tunnel information:

   Name:Tunnel0 if_index: 12
   Virtual Template: Not Configured
   Borders reachable via this tunnel:
--------------------------------------------------------------------
R31-Site3-Spoke#

Step 1. Check that external interfaces are listed with their correct path names. This means smart probes are correctly received and decoded by the local BRs.


Step 2. Check that path names are correct. If path names are not listed, check that smart probes are received from the hub; the branch MC loopback address has to be announced and routable from the hub Border Routers.


Step 3. Check that the minimum requirement is met

If the minimum requirement is not MET, check the SAF peering on the site Master Controller – it should correctly peer with the hub MC:


R31-Site3-Spoke#show eigrp service-family ipv4 neighbors
EIGRP-SFv4 VR(#AUTOCFG#) Service-Family Neighbors for AS(59501)
H   Address                 Interface              Hold Uptime   SRTT   RTO  Q  Seq
                                                   (sec)         (ms)       Cnt Num
1   10.1.0.10               Lo0                     512 01:50:25    8   100  0  36
R31-Site3-Spoke#


Step 4. Check that the monitor specification is received from the hub MC -> show domain IWAN border pmi

R31-Site3-Spoke#show domain IWAN border pmi

****CENT PMI INFORMATION****


Ingress policy CENT-Policy-Ingress-0-2:
Ingress policy activated on:
  Tunnel200 Tunnel100
-------------------------------------------------------------------------
PMI[Ingress-per-DSCP]-FLOW MONITOR[MON-Ingress-per-DSCP-0-0-0]
    monitor-interval:30
    key-list:
      pfr site source id ipv4
      pfr site destination id ipv4
      ip dscp
      interface input
      policy performance-monitor classification hierarchy
      pfr label identifier
    Non-key-list:
      transport packets lost rate
      transport bytes lost rate
      pfr one-way-delay
      network delay average
      transport rtp jitter inter arrival mean
      counter bytes long
      counter packets long
      timestamp absolute monitoring-interval start
    DSCP-list:N/A

    Exporter-list:
      10.151.1.95
-------------------------------------------------------------------------
PMI[Ingress-per-DSCP-quick ]-FLOW MONITOR[MON-Ingress-per-DSCP-quick -0-0-1]
    monitor-interval:4
    key-list:
      pfr site source id ipv4
      pfr site destination id ipv4
      ip dscp
      interface input
      policy performance-monitor classification hierarchy
      pfr label identifier
    Non-key-list:
      transport packets lost rate
      transport bytes lost rate
      pfr one-way-delay
      network delay average
      transport rtp jitter inter arrival mean
      counter bytes long
      counter packets long
      timestamp absolute monitoring-interval start
    DSCP-list:
      ef-[class:CENT-Class-Ingress-DSCP-ef-0-2]
        packet-loss-rate:react_id[2]-priority[2]-threshold[5.0 percent]
        one-way-delay:react_id[3]-priority[1]-threshold[150 msec]
        network-delay-avg:react_id[4]-priority[1]-threshold[300 msec]
        byte-loss-rate:react_id[5]-priority[2]-threshold[5.0 percent]
      af41-[class:CENT-Class-Ingress-DSCP-af41-0-3]
        packet-loss-rate:react_id[6]-priority[2]-threshold[5.0 percent]
        one-way-delay:react_id[7]-priority[1]-threshold[150 msec]
        network-delay-avg:react_id[8]-priority[1]-threshold[300 msec]
        byte-loss-rate:react_id[9]-priority[2]-threshold[5.0 percent]
      cs4-[class:CENT-Class-Ingress-DSCP-cs4-0-4]
        packet-loss-rate:react_id[10]-priority[2]-threshold[5.0 percent]
        one-way-delay:react_id[11]-priority[1]-threshold[150 msec]
        network-delay-avg:react_id[12]-priority[1]-threshold[300 msec]
        byte-loss-rate:react_id[13]-priority[2]-threshold[5.0 percent]
      af31-[class:CENT-Class-Ingress-DSCP-af31-0-5]
        packet-loss-rate:react_id[14]-priority[2]-threshold[10.0 percent]
        one-way-delay:react_id[15]-priority[1]-threshold[600 msec]
        network-delay-avg:react_id[16]-priority[1]-threshold[1200 msec]
        byte-loss-rate:react_id[17]-priority[2]-threshold[10.0 percent]

    Exporter-list:None
-------------------------------------------------------------------------

Egress policy CENT-Policy-Egress-0-3:
Egress policy activated on:
  Tunnel200 Tunnel100
-------------------------------------------------------------------------
PMI[Egress-aggregate]-FLOW MONITOR[MON-Egress-aggregate-0-0-2]
    monitor-interval:30
    Trigger Nbar:No
    minimum-mask-length:28
    key-list:
      ipv4 destination prefix
      ipv4 destination mask
      pfr site destination prefix ipv4
      pfr site destination prefix mask ipv4
      ip dscp
      interface output
    Non-key-list:
      timestamp absolute monitoring-interval start
      counter bytes long
      counter packets long
      ip protocol
      pfr site destination id ipv4
      pfr site source id ipv4
      pfr br ipv4 address
      interface output physical snmp
    DSCP-list:N/A
    Class:CENT-Class-Egress-ANY-0-6

    Exporter-list:
      10.3.0.31
      10.151.1.95
-------------------------------------------------------------------------
PMI[Egress-prefix-learn]-FLOW MONITOR[MON-Egress-prefix-learn-0-0-3]
    monitor-interval:30
    minimum-mask-length:28
    key-list:
      ipv4 source prefix
      ipv4 source mask
      routing vrf input
    Non-key-list:
      counter bytes long
      counter packets long
      timestamp absolute monitoring-interval start
      interface input
    DSCP-list:N/A
    Class:CENT-Class-Egress-ANY-0-6

    Exporter-list:
      10.3.0.31
-------------------------------------------------------------------------
R31-Site3-Spoke#

Notes:

  • PMI stands for Performance Monitoring Instance.
  • Performance monitor definitions are received from the Hub MC.
  • One ingress performance monitor with a 30 sec interval for default TCs.
  • One ingress performance monitor with a 4 sec interval for the critical applications (DSCP EF, AF41, CS4 and AF31) - this is the "quick" monitor. It collects performance metrics per channel (per pair of site and DSCP value); note the key fields used to identify a channel.
  • One egress performance monitor to collect bandwidth per Traffic Class - note the key fields used to identify a Traffic Class.
  • One egress performance monitor to catch new source prefixes and advertise them to all peers. Again, note the key fields used to identify new source prefixes.


Step 5. Check that performance monitors are correctly applied on the external interfaces, on ingress and egress:


R31-Site3-Spoke#show domain IWAN border pmi

****CENT PMI INFORMATION****

Ingress policy CENT-Policy-Ingress-0-2:
Ingress policy activated on:
  Tunnel200 Tunnel100                  <---- HERE

[Output omitted for brevity]

Egress policy CENT-Policy-Egress-0-3:
Egress policy activated on:
  Tunnel200 Tunnel100                 <---- HERE

[Output omitted for brevity]

Notes: Performance Monitoring Instances are received on the BR and applied on each external interface discovered.


Monitoring Operations

Check Traffic Classes and Sites

One can quickly check Traffic Classes and their associated sites. For each site, the DSCPs are listed together with the number of Traffic Classes mapped to that DSCP/site pair. You can check the discovery on all Master Controllers.

R31-Site3-Spoke#show domain IWAN master discovered-sites

  *** Domain MC DISCOVERED sites ***

  Number of sites:  5
 *Traffic classes [Performance based][Load-balance based]

 Site ID: 10.1.0.10
  Site Discovered:01:57:21 ago
    Off-limits: Disabled
    DSCP :default[0]-Number of traffic classes[1][1]
    DSCP :af21[18]-Number of traffic classes[1][1]
    DSCP :ef[46]-Number of traffic classes[1][0]

 Site ID: 10.2.0.20
  Site Discovered:01:57:21 ago
    Off-limits: Disabled
    DSCP :default[0]-Number of traffic classes[1][1]
    DSCP :af21[18]-Number of traffic classes[1][1]
    DSCP :ef[46]-Number of traffic classes[1][0]

 Site ID: 10.4.0.41
  Site Discovered:01:57:21 ago
    Off-limits: Disabled
    DSCP :default[0]-Number of traffic classes[0][0]
    DSCP :af21[18]-Number of traffic classes[0][0]
    DSCP :ef[46]-Number of traffic classes[1][0]

 Site ID: 10.5.0.51
  Site Discovered:01:57:21 ago
    Off-limits: Disabled
    DSCP :default[0]-Number of traffic classes[0][0]
    DSCP :af21[18]-Number of traffic classes[0][0]
    DSCP :ef[46]-Number of traffic classes[0][0]

 Site ID: 255.255.255.255
  Site Discovered:02:00:49 ago
    Off-limits: Disabled
    DSCP :default[0]-Number of traffic classes[0][0]
    DSCP :af21[18]-Number of traffic classes[0][0]
    DSCP :ef[46]-Number of traffic classes[0][0]
-----------------

Notes:

  • Check Internet-based Traffic Classes (if any) - Site ID = 255.255.255.255
  • Check the Traffic Classes: quickly verify that you have traffic for voice, video and critical apps
  • Check the load-balanced TCs - in this deployment this is only DSCP 0 (default)
  • Check the performance-based TCs - in this deployment, these are the Traffic Classes with DSCP EF and AF21 (no video traffic running).


One can also check the site prefix database and verify the mapping between Site IDs and destination prefixes. Make sure there are no errors. You can check the site prefix database on all Master Controllers:

R31-Site3-Spoke#sh domain IWAN master site-prefix
  Change will be published between 5-60 seconds
  Next Publish 00:02:14 later
  Prefix DB Origin: 10.3.0.31
  Last publish Status : Peering Success
  Total publish errors : 0
  Prefix Flag: S-From SAF; L-Learned; T-Top Level; C-Configured; M-shared

Site-id              Site-prefix          Last Updated         DC Bitmap  Flag
--------------------------------------------------------------------------------
10.1.0.10             10.1.0.10/32         00:00:29 ago         0x1         S
10.1.0.10             10.1.0.0/16          00:00:04 ago         0x3         S,C,M
10.2.0.20             10.1.0.0/16          00:00:04 ago         0x3         S,C,M
10.2.0.20             10.2.0.20/32         00:00:04 ago         0x2         S
10.2.0.20             10.2.0.0/16          00:00:04 ago         0x2         S,C,M
10.3.0.31             10.3.0.31/32         02:01:44 ago         0x0         L
10.3.0.31             10.3.3.0/24          00:00:25 ago         0x0         L
10.4.0.41             10.4.0.41/32         01:58:16 ago         0x0         S
10.4.0.41             10.4.4.0/24          01:58:16 ago         0x0         S
10.5.0.51             10.5.0.51/32         00:01:13 ago         0x0         S
255.255.255.255      *10.0.0.0/8           00:00:29 ago         0x1         S,T
--------------------------------------------------------------------------------
R31-Site3-Spoke#


Monitor Operation – Traffic Classes

Traffic Class Summary

Check traffic-class learning and control. This has to be done on the Master Controllers. Remember that all Master Controllers make local path decisions; there is no centralized MC making global decisions. Start with the traffic-class summary to get a summary view of all traffic, how it is controlled, and the current path.

Here are the possible states for a Traffic Class:

  • UNCONTROLLED: No parent route is found
  • CONTROLLED: found a path that meets the criteria
  • OUT OF POLICY: No path meets the criteria set in the policy


The traffic-class summary command helps to quickly check all Traffic Classes:

Here is an example from branch R31:


R31-Site3-Spoke#show domain IWAN master traffic-classes summary

APP - APPLICATION, TC-ID - TRAFFIC-CLASS-ID, APP-ID - APPLICATION-ID
SP - SERVICE PROVIDER, PC = PRIMARY CHANNEL ID,
BC - BACKUP CHANNEL ID, BR - BORDER, EXIT - WAN INTERFACE
UC - UNCONTROLLED, PE - PICK-EXIT, CN - CONTROLLED, UK - UNKNOWN

Dst-Site-Pfx      Dst-Site-Id       APP           DSCP    TC-ID APP-ID    State  SP      PC/BC       BR/EXIT

10.4.4.0/24       10.4.0.41         N/A            ef      1     N/A       CN     MPLS    6/5         10.3.0.31/Tunnel100
10.1.0.0/16       10.1.0.10         N/A            af21    4     N/A       CN     INET    12/11       10.3.0.31/Tunnel200
10.1.0.0/16       10.1.0.10         N/A            default 3     N/A       CN     INET    2/3         10.3.0.31/Tunnel200
10.1.0.0/16       10.1.0.10         N/A            ef      2     N/A       CN     MPLS    8/7         10.3.0.31/Tunnel100
 Total Traffic Classes: 4 Site: 4  Internet: 0
R31-Site3-Spoke#

Notes:

  • Check the destination prefix (column Dst-Site-Pfx)
  • Check the App Id if application policies are used - this will show N/A when DSCP based policies are used
  • Check the DSCP value
  • Check the associated state - should be controlled (CN)
  • Check that each TC has a primary and a backup channel. If a channel is missing you may want to check the reason: either the channel itself is unreachable, or no channel complies with the policies. See the section on monitoring channels.
  • Check the path used - this is the path for all traffic going to this prefix with this DSCP value, i.e. the Traffic Class. In our example all performance based policies should go to the preferred path MPLS.
  • Check the local Border Router used and the external exit used.
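
To focus on a single traffic class, the command also accepts filters; a sketch (assuming the dscp filter is available in this release):

R31-Site3-Spoke#show domain IWAN master traffic-classes dscp ef

This restricts the detailed output shown in the next section to the EF traffic classes.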


Traffic Class Details

Check Traffic Class to get all details.


R31-Site3-Spoke#show domain IWAN master traffic-classes

[SNIP]

 Dst-Site-Prefix: 10.1.0.0/16         DSCP: af21 [18] Traffic class id:4
  Clock Time:                 18:20:10 (CET) 01/25/2016
  TC Learned:                 00:32:39 ago
  Present State:              CONTROLLED
  Current Performance Status: not monitored (default class)
  Current Service Provider:   INET since 00:32:08
  Previous Service Provider:  Unknown
  (Network event - traffic class will be re-evaluated 00:00:47 later)
  BW Used:                    0 Kbps
  Present WAN interface:      Tunnel200 in Border 10.3.0.31
  Present Channel (primary):  12 INET pfr-label:0:2 | 0:0 [0x20000]
  Backup Channel:             11 MPLS pfr-label:0:1 | 0:0 [0x10000]
  Destination Site ID bitmap: 3
  Destination Site ID:        10.1.0.10
  Alternate Destination site: 10.2.0.20
  Class-Sequence in use:      default
  Class Name:                 default
  BW Updated:                 00:00:09 ago
  Reason for Latest Route Change:    Uncontrolled to Controlled Transition
  Route Change History:
             Date and Time                   Previous Exit                                     Current Exit                                Reason

    1:  17:48:02 (CET) 01/25/16   None(0:0|0:0)/0.0.0.0/None (Ch:0)                  INET(0:2|0:0)/10.3.0.31/Tu200 (Ch:12)              Uncontrolled to Controlled Transition

[SNIP]

R31-Site3-Spoke#

Notes:

  • Check present state
  • Check current and (if any) previous provider
  • Check the Traffic Class bandwidth
  • Check current WAN interface
  • Check present and backup channels. Performance measurements are extracted from channels. You will not get performance directly from the traffic class itself. If a channel is missing you may want to check the reason (see section Monitoring Channels).
  • Check that TC is correctly mapped to the policy
  • With the new release, note that the Route Change History lists the last five route changes under each Traffic Class, with the reason for each. If a specific Traffic Class has experienced performance issues, you will see it there; in the output above the reason was the initial Uncontrolled to Controlled transition.


In the Traffic Class display you can get the primary and backup channels and check the associated performance (see section Monitoring Channels)


Monitor Operation – Channels

Channel Overview

PfRv3 monitors performance per channel. A channel represents all traffic to a specific destination site, with a specific DSCP value, via a specific next hop. With Transit Site support, PfRv3 is now able to track individual BR performance on transit sites.


Direction from Transit Sites to spokes:

  • Each POP is a unique site by itself, so it will only control traffic towards the spoke on the WAN links that belong to that Transit Site.
  • PfRv3 will NOT redirect traffic between Transit Sites across the DCI or WAN core. If all links from every POP to a spoke must be considered together, a single MC is required.


Direction from spoke to hub: traffic can be load balanced from a branch to multiple central sites (transit sites) if the destination prefix is shared and advertised by both.

  • The spoke considers all the paths (multiple next hops) towards the Transit Sites and maintains a list of active/standby candidate next hops per prefix and WAN interface, derived from the routing configuration:
    • Active next hop: a next hop is considered active for a given prefix if it has the best metric.
    • Standby next hop: a next hop is considered standby for a given prefix if it advertises a route for the prefix but does not have the best metric.
    • Unreachable next hop: a next hop is considered unreachable for a given prefix if it does not advertise any route for the prefix.
  • With path preference: PfR gives path preference the higher priority, so the candidate channels for a traffic class first include all links belonging to the preferred path (active, then standby) before going to the fallback provider links.
  • Without path preference: PfR prefers the active channels and then the standby channels (active/standby being per prefix), subject to the performance and policy decisions.
  • Note that the active and standby channels for a prefix can span POPs.
  • The spoke chooses the active channel randomly (by hash).


Pfrv3-channel.png


Channel - Check State

If traffic for a destination prefix is marked with a DSCP value, PfR creates the corresponding channels on the Border Router over all paths to the corresponding destination site(s).


Let's assume you have traffic to 10.8.0.0/16 with DSCP EF (voice). The corresponding destination site is DC1 (site-id 10.8.3.3) or DC2 (10.9.3.3); both advertise the 10.8.0.0/16 prefix.


R10#show domain IWAN master site-prefix 
  Change will be published between 5-60 seconds 
  Next Publish 01:34:28 later
  Prefix DB Origin: 10.2.10.10
  Prefix Flag: S-From SAF; L-Learned; T-Top Level; C-Configured; M-shared

Site-id              Site-prefix          Last Updated         DC Bitmap  Flag      
--------------------------------------------------------------------------------
10.2.10.10            10.1.10.0/24         00:00:28 ago         0x0         L
10.2.11.11            10.1.11.0/24         00:25:35 ago         0x0         S
10.2.12.12            10.1.12.0/24         00:20:05 ago         0x0         S
10.2.10.10            10.2.10.10/32        00:25:32 ago         0x0         L
10.2.11.11            10.2.11.11/32        00:25:35 ago         0x0         S
10.2.12.12            10.2.12.12/32        00:20:05 ago         0x0         S
10.8.3.3              10.8.3.3/32          00:27:42 ago         0x1         S
10.8.3.3              10.8.0.0/16          00:27:31 ago         0x3         S,C,M  <-- HERE
10.9.3.3              10.8.0.0/16          00:27:31 ago         0x3         S,C,M  <-- HERE
10.9.3.3              10.9.3.3/32          00:27:31 ago         0x2         S
10.8.3.3              10.9.0.0/16          00:27:31 ago         0x3         S,C,M
10.9.3.3              10.9.0.0/16          00:27:31 ago         0x3         S,C,M
255.255.255.255      *10.0.0.0/8           00:27:42 ago         0x1         S,T
--------------------------------------------------------------------------------
R10#

In our example both R84 and R94 (Tunnel 100) advertise 10.8.0.0/16 with Local Preference of 200. So both are active for PfR:


R10#test domain border prefix-lookup interface tunnel 100 prefix 10.8.0.0/16 next-hop 10.0.100.84
Prot: BGP, Network: 10.8.0.0/16, Gateway(A): 10.0.100.84,10.0.100.94, Gateway(s):  Interface: Tunnel100
 Prefix API Status : Active
 Overall status for this prefix lookup is Active
R10#

Notes:

  • Gateway(A): active next-hop
  • Gateway(s): standby next-hop


Same for R85 and R95 (Tunnel 200):


R10#test domain border prefix-lookup interface tunnel 200 prefix 10.8.0.0/16 next-hop 10.0.200.85
Prot: BGP, Network: 10.8.0.0/16, Gateway(A): 10.0.200.85,10.0.200.95, Gateway(s):  Interface: Tunnel200
 Prefix API Status : Active
 Overall status for this prefix lookup is Active
R10#


Let's check the Traffic Class for 10.8.0.0/16 with DSCP EF:


R10#show domain IWAN master traffic-classes  summary 

APP - APPLICATION, TC-ID - TRAFFIC-CLASS-ID, APP-ID - APPLICATION-ID
SP - SERVICE PROVIDER, PC = PRIMARY CHANNEL ID, 
BC - BACKUP CHANNEL ID, BR - BORDER, EXIT - WAN INTERFACE
UC - UNCONTROLLED, PE - PICK-EXIT, CN - CONTROLLED, UK - UNKNOWN

Dst-Site-Pfx      Dst-Site-Id APP            DSCP    TC-ID APP-ID    State  SP      PC/BC       BR/EXIT

10.9.0.0/16       10.8.3.3    N/A            default 3     N/A       CN     INET    6/11        10.2.10.10/Tunnel200
10.9.0.0/16       10.9.3.3    N/A            ef      1     N/A       CN     INET    16/8        10.2.10.10/Tunnel200
10.8.0.0/16       10.8.3.3    N/A            default 4     N/A       CN     INET    6/11        10.2.10.10/Tunnel200
10.8.0.0/16       10.9.3.3    N/A            ef      2     N/A       CN     INET    16/8        10.2.10.10/Tunnel200   <--- HERE
10.8.0.0/16       10.9.3.3    N/A            af31    5     N/A       CN     MPLS    12/9        10.2.10.10/Tunnel100
 Total Traffic Classes: 5 Site: 5  Internet: 0
R10#

Notes:

  • Destination Site is 10.9.3.3, so this is DC2
  • Corresponding channels for this Traffic Class are 16 (primary) and 8 (secondary). The next step is to check the channels on the Master Controller:

R10#show domain IWAN master channels | beg Id: 16                
Channel Id: 16  Dst Site-Id: 10.9.3.3  Link Name: INET  DSCP: ef [46] pfr-label: 1:2 | 0:0 [0x1020000] TCs: 2
  Channel Created: 1d00h ago
  Provisional State: Initiated and open
  Operational state: Available
  Channel to hub: TRUE
  Interface Id: 12 
  Supports Zero-SLA: Yes
  Muted by Zero-SLA: No
  Estimated Channel Egress Bandwidth: 32 Kbps
  Immitigable Events Summary: 
   Total Performance Count: 0, Total BW Count: 0
  Site Prefix List
    10.8.0.0/16 (Active)
    10.9.3.3/32 (Active)
    10.9.0.0/16 (Active)
  ODE Stats Bucket Number: 1
   Last Updated  : 00:00:00 ago
    Packet Count  : 162
    Byte Count    : 9792
    One Way Delay : 2 msec*
    Loss Rate Pkts: 0.0 %
    Loss Rate Byte: 0.0 %
    Jitter Mean   : 0 usec
    Unreachable   : FALSE
  ODE Stats Bucket Number: 2
   Last Updated  : 00:00:02 ago
    Packet Count  : 162
    Byte Count    : 9792
    One Way Delay : 1 msec*
    Loss Rate Pkts: 0.0 %
    Loss Rate Byte: 0.0 %
    Jitter Mean   : 1666 usec
    Unreachable   : FALSE
   TCA Statistics:
      Received:0 ; Processed:0 ; Unreach_rcvd:0
     
[SNIP]
      

Notes:

  • Check that Provisional State is "Initiated and open" (BR created this channel) or "Discovered and open" (BR received traffic and therefore created the channel).
  • Check that Operational state is "Available"
  • Note that the estimated egress bandwidth for channel 16 is 32 Kbps.
  • You can check the performance metrics for the channel.
  • Note that no TCAs have been received and no unreachable events have occurred for this channel.

Transit Site support introduces a more granular channel: a channel represents all traffic to a site for a specific DSCP, but also per next hop. A channel is identified by a pfr-label that you can check in the channel details. The label is built from the POP-ID (upper field) and the Path-Id (lower field). In this example the pfr-label is 1:2 | 0:0, which means POP-ID 1 (DC2) and Path-Id 2, i.e. R95 on DC2. This matches the hexadecimal form: in 0x1020000 the upper byte shown is the POP-ID (0x01) and the next byte the Path-Id (0x02); compare 0x20000 for 0:2 and 0x10000 for 0:1 in the other outputs.


You can also check channel 16 on the Border Router where you get more details (R10 is also the BR on this branch):


R10#show domain IWAN border channels
--------------------------------------------------------------------
Border Smart Probe Stats:

 Smart probe parameters:
   Source address used in the Probe: 10.2.10.10
   Unreach time: 1000 ms
   Probe source port: 18000
   Probe destination port: 19000
   Interface Discovery: ON
   Probe freq for channels with traffic :0 secs
   Discovery Probes: OFF
   Number of transit probes consumed :0
   Number of transit probes re-routed: 0
   DSCP's using this: [26] [32] [34] [46] [64] 
   All the other DSCPs use the default interval: 10 secs

[SNIP]

 Channel id: 16
  Channel create time: 1d00h ago
  Site id : 10.9.3.3
  DSCP : ef[46]                                      <--- Channel for DSCP EF
  Service provider : INET                            <--- Channel over INET Transport
  Pfr-Label : 1:2 | 0:0 [0x1020000]
  exit path-id: 0
  Exit path-id sent on wire: 0
  Number of Probes sent : 1575114                        <--- Check this channel send and receive probes
  Number of Probes received : 1540466                        <--- Check this channel send and receive probes
  Last Probe sent : 00:00:00 ago
  Last Probe received : 00:00:00 ago
  Channel state : Initiated and open
  Channel next_hop : 10.0.200.95
  RX Reachability : Reachable                        <--- Check this is OK
  TX Reachability : Reachable                        <--- Check this is OK
  Channel is sampling 0 flows
  Channel remote end point: 10.0.200.95
  Channel to hub: TRUE
  Version: 3
  Supports Zero-SLA: Yes
  Muted by Zero-SLA: No
  Probe freq with traffic : 1 in 666 ms
 
[SNIP]
         

Notes:

  • Note that the source address for smart probes is 10.2.10.10 (the loopback address of R10, the MC for this site).
  • Note that the source and destination ports are 18000/19000.
  • Check reachability (RX and TX).
  • Check the number of probes sent and received. Everything is fine for channel 16.
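
If you monitor many branches, these per-channel probe counters are easy to check programmatically. Below is a minimal Python sketch, assuming you have captured the text of "show domain IWAN border channels"; the field names are copied from the output above:

import re

# Minimal sketch: flag channels whose probe counters or reachability look
# unhealthy in a captured "show domain IWAN border channels" output. The
# field names match the output above; how you capture the text is up to you.

def check_probe_health(output: str):
    # Split the output so that each chunk describes one channel.
    for chunk in re.split(r"(?=Channel id: \d+)", output):
        cid = re.search(r"Channel id: (\d+)", chunk)
        if not cid:
            continue
        sent = re.search(r"Number of Probes sent : (\d+)", chunk)
        rcvd = re.search(r"Number of Probes received : (\d+)", chunk)
        if sent and rcvd and int(rcvd.group(1)) == 0:
            print(f"Channel {cid.group(1)}: probes sent but none received")
        # Both reachability fields read "Reachable" on a healthy channel.
        for direction in ("RX", "TX"):
            m = re.search(rf"{direction} Reachability : (\S+)", chunk)
            if m and m.group(1) != "Reachable":
                print(f"Channel {cid.group(1)}: {direction} {m.group(1)}")

# Channel 16 above produces no output: probes flow in both directions and
# RX/TX reachability are both "Reachable".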



You can check the backup channel 8 in the same way. Only the information relevant to DSCP EF is displayed:

R10#show domain IWAN master channels

[SNIP]

Channel Id: 8  Dst Site-Id: 10.8.3.3  Link Name: INET  DSCP: ef [46] pfr-label: 0:2 | 0:0 [0x20000] TCs: 0
  Channel Created: 1d00h ago
  Provisional State: Initiated and open
  Operational state: Available
  Channel to hub: TRUE
  Interface Id: 12 
  Supports Zero-SLA: Yes
  Muted by Zero-SLA: No
  Estimated Channel Egress Bandwidth: 0 Kbps
  Immitigable Events Summary: 
   Total Performance Count: 0, Total BW Count: 0
  Site Prefix List
    10.8.3.3/32 (Active)
    10.8.0.0/16 (Active)
    10.9.0.0/16 (Active)
  ODE Stats Bucket Number: 1
   Last Updated  : 00:00:01 ago
    Packet Count  : 36
    Byte Count    : 3024
    One Way Delay : 54 msec*
    Loss Rate Pkts: 0.0 %
    Loss Rate Byte: 0.0 %
    Jitter Mean   : 3388 usec
    Unreachable   : FALSE
  ODE Stats Bucket Number: 2
   Last Updated  : 00:00:05 ago
    Packet Count  : 38
    Byte Count    : 3192
    One Way Delay : 55 msec*
    Loss Rate Pkts: 0.0 %
    Loss Rate Byte: 0.0 %
    Jitter Mean   : 1921 usec
    Unreachable   : FALSE
   TCA Statistics:
      Received:4 ; Processed:4 ; Unreach_rcvd:0
  Latest TCA Bucket
   Last Updated  : 11:46:41 ago
    One Way Delay : NA
    Loss Rate Pkts: 5.40 %
    Loss Rate Byte: NA
    Jitter Mean   : NA
    Unreachability: FALSE

[SNIP]

Notes:

  • In this example the pfr-label is 0:2 | 0:0, which means POP-ID 0 (DC1) and Path-Id 2, i.e. R85 on DC1.
  • Note that this channel has experienced a high loss rate and received a TCA from the destination site.
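
The TCA was raised because the measured loss crossed the loss threshold defined in the PfR policy for this traffic. As a worked illustration only (the 1% threshold below is a hypothetical placeholder, not the value from this lab's policy):

# Worked illustration of why channel 8 received a TCA. The threshold is a
# hypothetical placeholder for whatever loss threshold the PfR policy
# defines; the 5.40 % measurement comes from the Latest TCA Bucket above.

POLICY_LOSS_THRESHOLD_PCT = 1.0   # hypothetical policy value
measured_loss_pct = 5.40          # "Loss Rate Pkts" from the TCA bucket

if measured_loss_pct > POLICY_LOSS_THRESHOLD_PCT:
    print(f"TCA expected: {measured_loss_pct} % loss exceeds the "
          f"{POLICY_LOSS_THRESHOLD_PCT} % policy threshold")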


Channel - Troubleshooting State

If there is no backup channel for a specific traffic class, you want to check the reason. From the traffic class output you know the site-id and DSCP; you can then check the corresponding channel information. Each Border Router should have a parent route for each destination site-id. For example, on the POP, each BR should have a parent route for site10 (10.2.10.10), site11 (10.2.11.11) and site12 (10.2.12.12).

On R84 – connected to MPLS:

R84#show domain one border parent
Border Parent Route Details:

Prot: BGP, Network: 10.2.10.10/32, Gateway: 10.0.100.10, Interface: Tunnel100, Ref count: 2
Prot: BGP, Network: 10.1.11.0/24, Gateway: 10.0.100.11, Interface: Tunnel100, Ref count: 1
Prot: BGP, Network: 10.2.11.11/32, Gateway: 10.0.100.11, Interface: Tunnel100, Ref count: 2
Prot: BGP, Network: 10.2.12.12/32, Gateway: 10.0.100.12, Interface: Tunnel100, Ref count: 2
Prot: BGP, Network: 10.9.3.3/32, Gateway: 10.0.100.94, Interface: Tunnel100, Ref count: 1
R84#

On R85 – connected to INET:

R85#sh domain one border parent-route 
Border Parent Route Details:

Prot: BGP, Network: 10.2.10.10/32, Gateway: 10.0.200.10, Interface: Tunnel200, Ref count: 2
Prot: BGP, Network: 10.2.11.11/32, Gateway: 10.0.200.11, Interface: Tunnel200, Ref count: 2
Prot: BGP, Network: 10.2.12.12/32, Gateway: 10.1.12.252, Interface: Tunnel200, Ref count: 2
Prot: BGP, Network: 10.1.13.0/24, Gateway: 10.0.200.13, Interface: Tunnel200, Ref count: 1
Prot: BGP, Network: 10.9.3.3/32, Gateway: 10.0.200.95, Interface: Tunnel200, Ref count: 1
R85#
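
A quick way to spot a missing parent route is to compare these outputs against the list of expected branch site-ids. Below is a minimal Python sketch, assuming a captured "show domain one border parent-route" output in the line format shown above:

import re

# Minimal sketch: report expected site-ids that have no /32 parent route in
# a captured "show domain one border parent-route" output. The expected
# site-ids are the three branches used in this lab.

EXPECTED_SITE_IDS = {"10.2.10.10", "10.2.11.11", "10.2.12.12"}

def missing_parent_routes(output: str) -> set:
    found = set(re.findall(r"Network: (\d+\.\d+\.\d+\.\d+)/32", output))
    return EXPECTED_SITE_IDS - found

# With the R84 and R85 outputs above, this returns an empty set: every
# branch site-id has a /32 parent route on both BRs.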


Each channel should also have a corresponding parent route over each external interface.


Let’s check the default and EF channels for destination site12 (10.2.12.12) over the preferred path, MPLS.

R84#sh domain one border channels parent-route 
Border Channel Parent Route Details:

Channel id: 6, Dscp: defa [0], Site-Id: 10.2.12.12, Path: MPLS, Interface: Tunnel100
  Nexthop: 10.0.100.12
  Protocol: BGP

Channel id: 11, Dscp: ef   [2E], Site-Id: 10.2.12.12, Path: MPLS, Interface: Tunnel100
  Nexthop: 10.0.100.12
  Protocol: BGP


Let’s check the default and EF channels for destination site12 (10.2.12.12) over the secondary path, INET.


R85#sh domain one border channels parent-route 
Border Channel Parent Route Details:

Channel id: 5, Dscp: defa [0], Site-Id: 10.2.12.12, Path: INET, Interface: Tunnel200
  Nexthop: 10.0.200.13
  Protocol: BGP

Channel id: 18, Dscp: ef   [2E], Site-Id: 10.2.12.12, Path: INET, Interface: Tunnel200
  Nexthop: 10.0.200.13
  Protocol: BGP


If the Primary or Backup Channel is not available (N/A) for a Traffic Class, it means something is wrong with the corresponding channel. Check the channel state on the MC first, then on each Border Router if needed.

In the following output, TC (10.1.12.0, DSCP EF) has no backup channel:


R83#show domain one master traffic-classes summ

APP - APPLICATION, TC-ID - TRAFFIC-CLASS-ID, APP-ID - APPLICATION-ID
SP - SERVICE PROVIDER, PC = PRIMARY CHANNEL ID, 
BC - BACKUP CHANNEL ID, BR - BORDER, EXIT - WAN INTERFACE
UC - UNCONTROLLED, PE - PICK-EXIT, CN - CONTROLLED, UK - UNKNOWN

Dst-Site-Pfx      Dst-Site-Id APP            DSCP    TC-ID APP-ID    State  SP      PC/BC       BR/EXIT

10.1.12.0/24      10.2.12.12  N/A            ef      5     N/A       CN     MPLS    11/NA       10.8.4.4/Tunnel100     <--- HERE
10.2.12.12/32     10.2.12.12  N/A            default 4     N/A       CN     MPLS    5/NA        10.8.4.4/Tunnel100
10.1.11.0/24      10.2.11.11  N/A            ef      7     N/A       CN     MPLS    14/15       10.8.4.4/Tunnel100
10.1.10.0/24      10.2.10.10  N/A            ef      6     N/A       CN     MPLS    13/12       10.8.4.4/Tunnel100
 Total Traffic Classes: 4 Site: 4  Internet: 0
R83#


The corresponding site is 10.2.12.12:


R83#show domain one master site-prefix 
  Change will be published between 5-60 seconds 
  Next Publish 01:06:15 later
  Prefix DB Origin: 10.8.3.3
  Prefix Flag: S-From SAF; L-Learned; T-Top Level; C-Configured;

Site-id              Site-prefix          Last Updated         Flag      
--------------------------------------------------------------------------------
10.2.10.10            10.1.10.0/24         00:51:31 ago         S,
10.2.11.11            10.1.11.0/24         00:51:33 ago         S,
10.2.12.12            10.1.12.0/24         00:30:29 ago         S,   <--- HERE
10.2.12.12            10.1.13.0/24         00:30:29 ago         S,
10.2.10.10            10.2.10.10/32        00:51:31 ago         S,
10.2.11.11            10.2.11.11/32        00:51:33 ago         S,
10.2.12.12            10.2.12.12/32        00:30:29 ago         S,
10.8.3.3              10.8.3.3/32          00:53:59 ago         L,
10.8.3.3              10.8.0.0/16          00:53:59 ago         C,
10.9.3.3              10.9.3.3/32          00:52:04 ago         S,
10.9.3.3              10.9.0.0/16          00:52:04 ago         S,C,
255.255.255.255      *10.0.0.0/8           00:53:59 ago         T,
--------------------------------------------------------------------------------
R83#


We can now find and check the channel that goes to this remote site for the corresponding DSCP value, EF.


R83#show domain one master channels 
Channel Id: 18  Dst Site-Id: 10.2.12.12  Link Name: INET  DSCP: ef [46] TCs: 0
  Channel Created: 00:11:29 ago
  Provisional State: Initiated and open
  Operational state: Not-Available(no next hop)(Channel in Initial state)   <--- HERE
  Interface Id: 11 
  Estimated Channel Egress Bandwidth: 0 Kbps
  Immitigable Events Summary: 
   Total Performance Count: 0, Total BW Count: 0
   TCA Statistics:
      Received:0 ; Processed:0 ; Unreach_rcvd:0
          
[SNIP]


The next step is to check the corresponding BR. Notice that there is no next hop for 10.2.12.12: the routing information is missing.


R84#show domain one border channel parent
Border Channel Parent Route Details:

Channel id: 2, Dscp: defa [0], Site-Id: 255.255.255.255, Path: INET, Interface: Tunnel200
  Nexthop: 0.0.0.0
  Protocol: None

Channel id: 4, Dscp: defa [0], Site-Id: 10.9.3.3, Path: INET, Interface: Tunnel200
  Nexthop: 10.0.200.95
  Protocol: BGP

Channel id: 6, Dscp: defa [0], Site-Id: 10.2.12.12, Path: INET, Interface: Tunnel200
  Nexthop: 0.0.0.0 (Next lookup in 18432 msec)
  Protocol: None

Channel id: 8, Dscp: defa [0], Site-Id: 10.2.10.10, Path: INET, Interface: Tunnel200
  Nexthop: 10.0.200.10
  Protocol: BGP

Channel id: 10, Dscp: defa [0], Site-Id: 10.2.11.11, Path: INET, Interface: Tunnel200
  Nexthop: 10.0.200.11
  Protocol: BGP

Channel id: 12, Dscp: ef   [2E], Site-Id: 10.2.10.10, Path: INET, Interface: Tunnel200
  Nexthop: 10.0.200.10
  Protocol: BGP

Channel id: 15, Dscp: ef   [2E], Site-Id: 10.2.11.11, Path: INET, Interface: Tunnel200
  Nexthop: 10.0.200.11
  Protocol: BGP

Channel id: 18, Dscp: ef   [2E], Site-Id: 10.2.12.12, Path: INET, Interface: Tunnel200
  Nexthop: 0.0.0.0 (Next lookup in 28672 msec)                                                        <--- HERE
  Protocol: None

R84#


Check the routing topology – 10.2.12.12 is missing:


R85#sh ip bgp
BGP table version is 19, local router ID is 10.8.5.5
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal, 
              r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter, 
              x best-external, a additional-path, c RIB-compressed, 
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

     Network          Next Hop            Metric LocPrf Weight Path
     0.0.0.0          0.0.0.0                                0 i
 *>i 10.1.10.0/24     10.0.200.10              0    100      0 i
 *>i 10.1.11.0/24     10.0.200.11              0    100      0 i
 *>i 10.1.13.0/24     10.0.200.13              0    100      0 i   <-- No 10.2.12.12 (Branch12 site-id)
 *>i 10.2.10.10/32    10.0.200.10              0    100      0 i
 *>i 10.2.11.11/32    10.0.200.11              0    100      0 i
 *>i 10.2.13.13/32    10.0.200.13              0    100      0 i
 *>  10.8.3.3/32      10.8.25.2               21         32768 i
 *>  10.8.101.0/24    10.8.25.2               21         32768 i
 *>  10.8.102.0/24    10.8.25.2               21         32768 i
 *>  10.8.103.0/24    10.8.25.2               21         32768 i
 *>  10.8.104.0/24    10.8.25.2               21         32768 i
 *>i 10.9.3.3/32      10.0.200.95             21    100      0 i
 *>i 10.9.101.0/24    10.0.200.95             21    100      0 i
     Network          Next Hop            Metric LocPrf Weight Path
 *>i 10.9.102.0/24    10.0.200.95             21    100      0 i
 *>i 10.9.103.0/24    10.0.200.95             21    100      0 i
 *>i 10.9.104.0/24    10.0.200.95             21    100      0 i
R85#



Monitor Operation - Load Balancing

You can check all external link utilization and associated Traffic Classes from the local Master Controller.

R83#sh domain one master exits 

  BR address: 10.8.5.5 | Name: Tunnel200 | type: external | Path: INET | 
      Egress capacity: 1000 Kbps | Egress BW: 660 Kbps | Ideal:649 Kbps | over: 11 Kbps | Egress Utilization: 66 %
      DSCP: default[0]-Number of Traffic Classes[3]

  BR address: 10.8.4.4 | Name: Tunnel100 | type: external | Path: MPLS | 
      Egress capacity: 1000 Kbps | Egress BW: 538 Kbps | Ideal:649 Kbps | under: 111 Kbps | Egress Utilization: 53 %
      DSCP: default[0]-Number of Traffic Classes[1]
      DSCP: af31[26]-Number of Traffic Classes[4]
      DSCP: ef[46]-Number of Traffic Classes[4]
  
--------------------------------------------------------------------------------
R83#

Notes:

  • Check the egress utilization on all external links - utilization across the external interfaces should stay within the default 20% load-balancing range of each other.
  • All performance-sensitive traffic classes are on the preferred path, which is in line with the defined policies. Only best-effort traffic classes are load balanced. See the worked example after these notes.
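
As a worked example with the figures above: PfR computes an ideal egress bandwidth per exit and reports how far each exit is over or under it, as the "Ideal"/"over"/"under" fields show. The values below are copied from the "show domain one master exits" output:

# Worked example using the figures from "show domain one master exits" above.

exits = {  # exit name: (egress_bw_kbps, ideal_kbps), copied from the output
    "Tunnel200/INET": (660, 649),
    "Tunnel100/MPLS": (538, 649),
}

for name, (bw, ideal) in exits.items():
    delta = bw - ideal
    state = "over" if delta > 0 else "under"
    print(f"{name}: {state} by {abs(delta)} Kbps ({bw}/{ideal} Kbps)")

# -> Tunnel200/INET: over by 11 Kbps (660/649 Kbps)
#    Tunnel100/MPLS: under by 111 Kbps (538/649 Kbps)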



Key Takeaways

The IWAN Intelligent Path Control pillar is based upon Performance Routing (PfR), which:

  • Maximizes WAN bandwidth utilization
  • Protects applications from performance degradation
  • Enables the Internet as a viable WAN transport
  • Provides multi-site coordination to simplify network-wide provisioning
  • Provides an application-based, policy-driven framework that is tightly integrated with existing AVC components
  • Offers a smart and scalable multi-site solution to enforce application SLAs while optimizing network resource utilization


PfRv3 is the third-generation, multi-site-aware bandwidth and path control/optimization solution for WAN- and cloud-based applications. It is available now on:

  • ASR1k, 4451-X and CSR1000v with IOS-XE 3.13
  • ISR-G2 with 15.4(3)M


