PfRv3 Technology Overview
- PfRv3 Home page
- PfRv3 Technology Overview
- PfRv3 Solution Guides home page
- PfR Home page (original PfRv1 and PfRv2)
Intelligent WAN (IWAN)
The Cisco Intelligent WAN (IWAN) solution provides design and implementation guidance for organizations looking to deploy a transport independent WAN with intelligent path control, application optimization, and secure connectivity to the Internet and branch locations while reducing the operating cost of the WAN. IWAN takes full advantage of premium WAN and cost-effective Internet services to increase bandwidth capacity without compromising performance, reliability, or security of collaboration or cloud-based applications.
For more information, refer to IWAN Overview
Performance Routing is the Intelligent Path Control pillar and a key component of IWAN.
Moving to Performance Routing version 3 (PfRv3)
Performance Routing has been available for more than 10 years, from the original Optimized Edge Routing (OER) to Performance Routing phase 2 that introduced Application Routing based on real-time performance metrics.
PfR is now evolving to the next generation with phase 3, also called PfRv3.
This major release has a number of key improvements:
- Domain Based centralized policy provisioning
- Passive monitoring based on the Unified Performance Monitor already used in Application Visibility.
- DSCP or Application based policies with Cisco deep packet inspection engine NBAR2.
- Smart Probing: measures performance in the absence of real user traffic and supports automatic discovery.
PfRv3 was designed with configuration simplification and enhanced scalability from the ground up.
Intelligent Path Control - PfRv3 Principles
Intelligent Path Control using Cisco Performance Routing (PfR) improves application delivery and WAN efficiency. PfR enables intelligence of Cisco IOS routers to improve application performance and availability. PfR allows customers to protect critical applications from fluctuating WAN performance while intelligently load balancing traffic over all WAN paths. PfR monitors network performance and selects the best path for each application based upon advanced criteria such as reachability, delay, jitter and loss. PfR can evenly distribute traffic to maintain equivalent link utilization levels using an advanced load balancing technique - even over links with differing bandwidth capacities. IWAN Intelligent Path Control is key to providing a business-class WAN over Internet transports.
- PfR monitors network performance and routes applications based on application performance policies
- PfR load balances traffic based upon link utilization levels to efficiently utilize all available WAN bandwidth
Device Setup and Role
PfR consists of two major Cisco IOS components: a Master Controller (MC) and a Border Router (BR). The Master Controller is the policy decision point at which policies are defined and applied to the various traffic classes that traverse the Border Routers. The Master Controller can be configured to learn and control traffic classes on the network.
- Border Routers (BRs) are in the data forwarding path. Border Routers collect data from their Performance Monitor cache and smart probe results, provide a degree of aggregation of this information, and influence the packet forwarding path as directed by the Master Controller to manage user traffic.
- Master Controller (MC) is the policy decision maker. At a large site, such as a data center or campus, the Master Controller is a standalone chassis. For smaller branch locations, the Master Controller is typically collocated (configured) on the same platform as the Border Router. As a general rule, large locations manage more network prefixes and applications than a branch deployment, thus consuming more CPU and memory resources for the Master Controller function. It is therefore good design practice to dedicate a chassis to the Master Controller at large sites.
The branch typically manages fewer active network prefixes and applications. And due to the costs associated with dedicating a chassis at each branch, the network manager can co-locate the master controller and border router on the same router platform. CPU and memory utilization should be monitored on platforms operating as master controllers, and if utilization is high the network manager should consider a master controller platform with a higher capacity CPU and memory. The master controller communicates with the border routers over an authenticated TCP socket, but has no requirement for populating its own IP routing table with anything more than a route to reach the border routers.
Because PfR is an intelligent path selection technology, there must be at least two external interfaces under the control of PfR and at least one internal interface. There must be at least one border router configured. If only one border router is configured, then both external interfaces are attached to the single border router. If more than one border router is configured, then the two or more external interfaces are configured across these border routers. External links, or exit points, are therefore owned by the border router; they may be logical (tunnel interfaces) or physical links (serial, Ethernet, etc.).
There are four different roles a device can play in PfRv3 configuration:
- Hub Master Controller: the master controller at the hub site, which is either a data center or a headquarters. This is the device where all policies are configured. It also acts as master controller for that site and makes the optimization decisions.
- Hub Border Router: a border router at the hub site. This is where the WAN interfaces terminate, and PfRv3 is enabled on these interfaces. There can be one or more WAN interfaces on the same device, and one or more Hub BRs. On the Hub Border Routers, PfRv3 must be configured with:
- The address of the local MC
- The path name on external interfaces
- Branch Master Controller: the master controller at the branch site. There is no policy configuration on this device; it receives policy from the Hub MC. This device acts as master controller for that site and makes the optimization decisions. The configuration includes the IP address of the Hub MC.
- Branch Border Router: a border router at the branch site. There is no configuration other than enabling PfR Border Router functionality on the device. The WAN interfaces that terminate on the device are detected automatically.
Improvements over PfRv2: What's different
Enterprise Domain: all sites belong to an Enterprise Domain and are connected through peering. The peering mechanism is used for service exchange between sites that belong to the same domain. It was introduced in PfRv2 as a feature called Target Discovery (TD) but has been greatly enhanced to become the core of PfRv3. Peering allows service exchange and coordination at the WAN edge, and is also used for automatic network discovery and single-touch provisioning.
Application centric: the PfRv3 solution is more focused on applications. It provides a simple way to provision policies based on applications (classification based on Cisco's deep packet inspection engine, NBAR2). It provides visibility into applications by integrating with Unified Monitoring (Performance Monitor). Application visibility includes bandwidth, performance, correlation to QoS queues, and more.
Simple Provisioning: PfRv3 offers simplified policies with pre-existing templates to choose from. The policy configuration resides in a central location and is distributed to all sites via peering. This not only simplifies provisioning substantially compared to earlier PfR versions, but also keeps policies consistent across the entire network and easier for the administrator to manage.
Automatic Discovery: enterprise sites are discovered using peering. Each site peers with the hub site, and every other site detects the new site via peering. Prefixes specific to each site are advertised along with a site-id. The site-prefix to site-id mapping is used in monitoring and optimization, as specified in detail in later sections, and also for creating per-site reports. WAN interfaces at each site are discovered using a special probing mechanism, which further reduces provisioning on the branch sites. WAN interface discovery also creates a mapping of each interface to a particular Service Provider; this mapping is used in monitoring and optimization, and can also be used to draw the WAN topology in an NMS GUI.
Scalable Passive Monitoring: PfRv3 uses the Unified Monitor (also called Performance Monitor) to monitor traffic going to and coming from the WAN links. It monitors performance metrics per DSCP rather than per flow or per prefix. When application-based policies are used, the MC uses a mapping table between the application name and the discovered DSCP. This reduces the number of records significantly. PfRv3 also relies on performance data measured on the existing data traffic on all paths whenever it can, reducing the need for synthetic traffic. Furthermore, measurement data is not exported unless there is a violation, which further reduces control traffic and record processing.
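The record reduction from per-DSCP aggregation can be illustrated with a toy Python model (not Cisco code; the flow records and byte counts are purely illustrative):

```python
# Toy illustration (not Cisco code) of why per-DSCP monitoring scales:
# many per-flow records collapse into one record per DSCP value.
from collections import defaultdict

def aggregate_by_dscp(flow_records):
    """Collapse (src, dst, dscp, bytes) records into per-DSCP byte totals."""
    totals = defaultdict(int)
    for _src, _dst, dscp, nbytes in flow_records:
        totals[dscp] += nbytes
    return dict(totals)

flows = [("10.1.10.5", "10.1.11.9", "ef", 1200),
         ("10.1.10.6", "10.1.11.9", "ef", 800),
         ("10.1.10.7", "10.1.12.3", "af31", 500)]
per_dscp = aggregate_by_dscp(flows)  # 3 flow records -> 2 DSCP records
```

With thousands of concurrent flows but only a handful of DSCP values in use, the number of monitored records stays bounded by the number of DSCPs, not the number of flows.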
Smart Probing: PfRv3 uses less synthetic traffic, and uses it in a smart way. A lightweight probing mechanism generates traffic only when there is no user traffic. The generated traffic is RTP, which allows jitter and packet loss to be measured by the regular Performance Monitors. This reduces the need for control messages carrying statistics back to the sender; instead, the PfRv3 Master Controller is informed when there is a violation. Unlike earlier PfR versions, probes are set up once rather than repeatedly set up and torn down, and even though probes are set up once, probe traffic is not sent unless it is needed.
Scaling: constant, software-assisted probing on all exits was one of the scaling issues for large PfR deployments. PfRv3 uses the platform hardware wherever possible to generate probes on the Border Routers. Apart from synthetic probes, PfRv3 also uses the existing traffic for measurement; only when there is no traffic does PfRv3 use its own probes to measure important metrics such as delay and jitter. PfRv3 should scale to thousands of branches. To achieve this scalability, PfRv3 uses Scalable Passive Monitoring and Smart Probes.
VRF Support: PfRv3 is VRF aware. Instances of Master Controllers work under a VRF. The VRF use case that PfRv3 supports is an enterprise segmenting its network into different logical networks. For example, an enterprise network might be divided into Engineering (Red), Sales (Green), and Guest (Blue) VRFs. PfRv3 instances are created for each VRF: on each device (MC and BR) there is one instance of PfRv3 per VRF. In this example, there are three PfRv3 MC configurations and processes, one for Red, one for Green, and one for Blue.
Enterprise Domain Provisioning
- Every MC and BR subscribes to the peering service to access domain services
- Hub site propagates policy to all branch sites.
- Hub site propagates monitors to all branch sites.
Enterprise HQ - Define the Hub Master Controller
The Hub Master Controller defines the Domain Controller and is the central location for provisioning.
Enterprise HQ - Define the Path Names
Path names are allocated on each external interface on the hub site. These names are distributed to all sites in the domain for automatic discovery.
Branch MCs Join the Domain
Point each Branch MC to the PfR Domain Hub. Note that there is nothing site specific in this configuration; the same configuration can be copied to all branch sites.
Domain policies are configured on the Hub MC. These policies are then distributed to branch MCs using the peering infrastructure. All sites that are in the same domain will share the same set of policies. Policies can be based on DSCP or on Application names. For the latter, PfRv3 enables NBAR2 and automatically classifies applications based on the Protocol Pack located on the Border Routers.
Policies can be based on pre-existing templates available with IOS, or can be customized. One can then manually define the various thresholds for delay, loss, and jitter.
Unified Monitors (Performance Monitors) are automatically configured and activated in the background by PfRv3. There is no need for manual configuration. Performance Monitor definitions are pre-defined on the Hub MC and distributed to branch BRs using the domain infrastructure.
Three different Performance Monitors are automatically enabled on all Border Routers in the domain:
- Performance Monitor 1: to learn site prefixes (applied on external interfaces on egress)
- Performance Monitor 2: to monitor bandwidth on egress (applied on external interfaces on egress)
- Performance Monitor 3: to monitor performance on ingress (applied on external interfaces on ingress)
WAN Topology Discovery
The discovery phase includes the following steps:
- Branch WAN interface discovery
- Site Prefixes database
- Channel creation
Smart Probes are used to help with discovery and also for measurement when there is no user traffic. These probes are generated from the data plane. Smart Probe traffic is RTP and is measured by Unified Monitoring just like other data traffic. The probes (RTP packets) are sent over the channels created toward the sites discovered via the prefix database. Without actual traffic, a BR sends 10 probes spaced 20 ms apart in the first 500 ms and another 10 probes in the next 500 ms, achieving 20 pps on channels without traffic. With actual traffic, a lower frequency is used: probes are sent every third of the monitor interval, i.e., every 10 seconds by default.
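The two probe cadences can be sketched as a toy Python model (not Cisco code; it only captures the timing arithmetic described above):

```python
# Toy model (not Cisco code) of the smart-probe cadence: roughly 20 pps
# (bursts of 10 probes per 500 ms) on an idle channel, and one probe
# burst every monitor-interval/3 when real traffic is present.
def probe_interval_s(has_user_traffic: bool,
                     monitor_interval_s: float = 30.0) -> float:
    """Seconds between probe transmissions on a channel."""
    if has_user_traffic:
        return monitor_interval_s / 3.0   # every 10 s by default
    return 1.0 / 20.0                     # 20 probes per second

idle = probe_interval_s(False)    # 0.05 s between probes
active = probe_interval_s(True)   # 10.0 s between probes
```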
Branch WAN Interface Discovery
WAN discovery is essential for discovering the right interface for a service provider and then activating monitoring to collect statistics on the branch sites. Path names are configured on the hub and regional-hub sites only. All remaining branch sites automatically discover their WAN interfaces with the use of Smart Probes.
The Hub Border Routers start sending smart probes to the remote sites over the existing channels.
When a branch site receives a smart probe:
- If the probe was meant for that site, the BR extracts the path name, DSCP, and timestamp value from the packet and then drops the probe packet.
- The BR sends a message to the local Master Controller to convey that a new egress interface has been discovered.
- The Master Controller (MC) discovers the path name the interface is connected to.
- The Master Controller (MC) updates its database and then sends a signal to the Border Routers to add the interface and activate the Performance Monitor instances (PMI).
At this point, a new exit is added and corresponding Channels are added containing the unique combination of DSCP value received, site id and exit.
Every time a packet is received on a channel, the receive timestamp is updated and every time a probe is sent out, the transmit timestamp is updated. If a branch site hasn’t received a probe on a given channel for a given period of time (the receive timestamp exceeds the time to determine reachable status), the channel is tagged as un-reachable. The time taken to declare a channel unreachable is twice the time difference between probe packets (2 X 500 ms).
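The reachability check above can be sketched as a toy Python model (not Cisco code; timestamps are simple floats, and the 500 ms probe gap is taken from the text):

```python
# Toy model (not Cisco code) of the unreachable check described above:
# a channel is declared unreachable when nothing has been received for
# twice the gap between probe packets (2 x 500 ms).
PROBE_GAP_S = 0.5  # probes arrive every 500 ms on an idle channel

def is_unreachable(now_s: float, last_rx_s: float,
                   probe_gap_s: float = PROBE_GAP_S) -> bool:
    """Return True if the receive timestamp is too old."""
    return (now_s - last_rx_s) > 2 * probe_gap_s

recent = is_unreachable(10.0, 9.7)   # 0.3 s of silence -> still reachable
stale = is_unreachable(10.0, 8.5)    # 1.5 s of silence -> unreachable
```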
If all the channels become unreachable on the branch site, the external interfaces are deleted, as are the Performance Monitor instances. If at any time another probe is received on the interface, the discovery process repeats.
Example of output for branch site10 which has two external interfaces connected to a primary path MPLS and a secondary path INET:
R10#sh domain one master status

 *** Domain MC Status ***

 Master VRF: Global
  Instance Type:       Branch
  Instance id:         0
  Operational status:  Up
  Configured status:   Up
  Loopback IP Address: 10.2.10.10
[SNIP]
  Borders:
    IP address: 10.2.10.10
    Connection status: CONNECTED (Last Updated 02:18:00 ago )
    Interfaces configured:
      Name: Tunnel100 | type: external | Service Provider: MPLS | Status: UP
          Number of default Channels: 1
      Name: Tunnel200 | type: external | Service Provider: INET | Status: UP
          Number of default Channels: 1
    Tunnel if: Tunnel0
--------------------------------------------------------------------------------
R10#
Site Prefixes database
Site prefixes are the inside prefixes of each site. The site prefix database is central to the site concept in PfRv3 and resides on both the MCs and the BRs.
- The site prefix database located at MC learns/manages the site prefixes and their origins from both local egress flow and advertisements from remote peers.
- The site prefix database located at BR learns/manages the site prefixes and their origins only from the advertisements from remote peers.
Learning local inside prefixes
Site prefixes are learned by monitoring traffic moving in the egress direction on the WAN interfaces. The site-prefix Performance Monitor is activated for the site-prefix learning period (30 seconds). This monitor only has src-prefix and src-mask as key fields. The values are collected by the BRs, and the records are exported to the local MC, which updates the site-prefix database. The site-prefix records and the site-prefix timer values are distributed to the BRs via the domain peering infrastructure. The BR manages the timer and activates the Performance Monitor (Performance Monitor 1 on egress).
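The BR-side collection and MC-side database update can be sketched as a toy Python model (not Cisco code; the record layout and site-id are illustrative):

```python
# Toy model (not Cisco code) of site-prefix learning: the BR reduces
# egress flows to unique (src-prefix, src-mask) records and exports them
# to the local MC, which maps each prefix to the local site-id.
def collect_src_prefixes(egress_flows):
    """BR side: keep only the key fields src-prefix and src-mask."""
    return {(f["src_prefix"], f["src_mask"]) for f in egress_flows}

def update_site_prefix_db(db, site_id, records):
    """MC side: record each learned prefix against the site-id."""
    for prefix, mask in records:
        db[(prefix, mask)] = site_id
    return db

flows = [{"src_prefix": "10.1.10.0", "src_mask": 24},
         {"src_prefix": "10.1.10.0", "src_mask": 24}]  # duplicate flow
db = update_site_prefix_db({}, "10.2.10.10", collect_src_prefixes(flows))
```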
Learning Remote Site Prefixes
In order to learn from advertisements via the peering infrastructure from remote peers, every MC and BR subscribes to the peering service for the sub-service of site prefix. MCs publish and receive site prefixes. BRs only receive site prefixes. MC publishes the list of site prefixes learned from local egress flows by encoding the site prefixes and their origins into a message. This message can be received by all the other MCs and BRs that subscribe to the peering service. The message is then decoded and added to the site prefix databases at those MCs and BRs.
By default, MCs and BRs age out all the site prefixes at a frequency of 24 hours.
Example of output for hub MC:
R83#sh domain one master site-prefix
  Change will be published between 5-60 seconds
  Next Publish 00:24:50 later
  Prefix DB Origin: 10.8.3.3
  Prefix Flag: S-From SAF; L-Learned; T-Top Level; C-Configured;

Site-id              Site-prefix          Last Updated      Flag
--------------------------------------------------------------------------------
10.2.10.10           10.1.10.0/24         01:30:15 ago      S,
10.2.11.11           10.1.11.0/24         01:29:44 ago      S,
10.2.12.12           10.1.12.0/24         01:29:44 ago      S,
10.2.13.13           10.1.13.0/24         01:29:44 ago      S,
10.2.10.10           10.2.10.10/32        01:30:15 ago      S,
10.2.11.11           10.2.11.11/32        01:29:44 ago      S,
10.2.12.12           10.2.12.12/32        01:29:44 ago      S,
10.2.13.13           10.2.13.13/32        01:29:44 ago      S,
10.8.3.3             10.8.3.3/32          1d21h ago         L,
10.8.3.3             10.8.0.0/16          1d21h ago         C,
10.9.3.3             10.9.3.3/32          01:35:13 ago      S,
10.8.3.3             10.9.0.0/16          1d21h ago         C,
255.255.255.255      *10.0.0.0/8          1d21h ago         T,
--------------------------------------------------------------------------------
R83#
The smart probe system introduces the concept of a Channel. Each channel is a unique combination of:
- Peer site-id
- Path name (Service Provider)
- DSCP
Channels are added every time a new DSCP, a new interface, or a new site is added to the prefix database. Probes are then sent over each of these channels, their transmission driven by a timer in the smart-probe subsystem.
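The uniqueness of the channel key can be sketched as a toy Python model (not Cisco code; the site-ids and path names are illustrative):

```python
# Toy model (not Cisco code): a channel is keyed by the unique triple
# (peer site-id, path name, DSCP), so re-adding an existing triple must
# not create a second channel.
def add_channel(channels: set, site_id: str, path: str, dscp: str) -> set:
    channels.add((site_id, path, dscp))
    return channels

channels: set = set()
add_channel(channels, "10.2.11.11", "MPLS", "ef")
add_channel(channels, "10.2.11.11", "INET", "ef")  # new path -> new channel
add_channel(channels, "10.2.11.11", "MPLS", "ef")  # duplicate -> no change
```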
PfRv3 considers a channel reachable as long as the site receives a packet on that channel:
- When there is no traffic on the Channel, Smart Probes are the only way of detecting unreachability. If no probe is received within one second, PfRv3 detects unreachability.
- When there is traffic on the Channel, if PfRv3 does not see any packet for more than a second on a channel, PfRv3 detects unreachability.
There is always a default channel, which is the channel with DSCP 0, for all sites. Default channels are created from the hub to the branch sites even when there is no traffic; this assists interface discovery on the branch. However, an interface can also be discovered on a non-default channel.
Example of output for a specific channel corresponding to site11, DSCP ef and path name INET:
R10#sh domain one master channels
Legend: * (Value obtained from Network delay:)

Channel Id: 14  Dst Site-Id: 10.2.11.11  Link Name: INET  DSCP: ef  TCs: 0
  Channel Created: 00:04:02 ago
  Provisional State: Initiated and open
  Operational state: Available
  Interface Id: 12
  Estimated Channel Egress Bandwidth: 20 Kbps
  Immitigable Events Summary:
   Total Performance Count: 0, Total BW Count: 0
  ODE Stats Bucket Number: 1
   Last Updated  : 00:00:03 ago
   Packet Count  : 321
   Byte Count    : 22244
   One Way Delay : 253 msec*
   Loss Rate Pkts: 0.0 %
   Loss Rate Byte: 0.0 %
   Jitter Mean   : 19 usec
   Unreachable   : FALSE
[SNIP]
Learning Traffic Classes
PfRv3 (like all previous versions of PfR) manages aggregations of flows called Traffic Classes. A Traffic Class is an aggregation of flows going to the same destination prefix, with the same DSCP and application name (if application-based policies are used).
Traffic Classes are learned by monitoring traffic moving in the egress direction on the WAN interfaces, based on Performance Monitor Instance #2. Each Border Router:
- Automatically activates the egress aggregate monitor (Performance Monitor Instance #2)
- Manages the Performance Monitor timer
- Collects Traffic Classes
- Exports the records to the local Master Controller
Local Master Controller:
- Updates the Traffic Class database with the new received records
- Maps the destination prefix to a site-id using its site prefix database.
- Maps to primary and backup channels for performance monitoring.
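The destination-prefix to site-id mapping is effectively a longest-prefix match against the site-prefix database. A toy Python model (not Cisco code) using the standard library:

```python
# Toy model (not Cisco code) of mapping a Traffic Class's destination
# prefix to a site-id with a longest-prefix match over the site-prefix
# database, using the standard library ipaddress module.
import ipaddress

def site_for_destination(site_prefix_db, dst_ip):
    """Return the site-id of the most specific matching site prefix."""
    addr = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, site_id in site_prefix_db.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, site_id)
    return best[1] if best else None

# Prefixes modeled on the site-prefix output shown earlier.
db = {"10.1.10.0/24": "10.2.10.10", "10.0.0.0/8": "255.255.255.255"}
```

A destination inside 10.1.10.0/24 resolves to site 10.2.10.10; anything else under 10.0.0.0/8 falls back to the top-level entry.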
Traffic Classes are divided into two groups:
- Performance Traffic Classes (TCs): all Traffic Classes with performance metrics defined (delay, loss, jitter).
- Non-performance Traffic Classes: the default Traffic Classes, i.e., TCs that do not match any of the match statements. They have no performance metrics defined.
One important note is that Load Balancing only affects non-performance Traffic Classes.
PfRv3 performance measurement is based on the Unified Monitor infrastructure (Performance Monitor instances). PfRv3 monitoring sub-system interacts with Unified Monitor to achieve:
- Threshold Crossing Alerts
- Out-of-policy reporting and resolution
- Learning of site prefixes and applications
- Traffic Class performance
Smart Probing helps to collect performance when there is no traffic for a specific channel.
Performance Monitor (Unified Monitoring)
- Passive Monitoring based on actual traffic
- Bandwidth: egress monitor (#2)
- Performance: ingress monitor (#3)
Benefits of Smart Probing
- Helps gather performance metrics even when there is no actual user traffic
- Generates traffic only when there is no user traffic
- Reduces the amount of synthetic traffic
Performance measurement in PfRv3 always involves a pair of sites: the source site where the traffic originates and the destination site where the traffic goes. Bandwidth is measured on the source-site BRs, and performance metrics are gathered on the destination-site BRs.
Egress Monitoring Details
Egress aggregate monitoring is activated automatically on the external interfaces (Performance Monitor Instance #2). The purpose of this monitor is to learn and monitor Traffic Classes.
Border Routers collect Traffic Class bandwidth (bytes and packets) and export the records to the local Master Controller.
The exporter's destination is set to the local MC's source-interface address by default. If an external collector is configured at the hub site, an additional exporter is configured and added to the egress aggregate monitor.
Below is an example of the Performance Monitor Instance #2:
R10#sh domain one border pmi
[SNIP]
Egress policy DOMAIN-Policy-Egress-0-3:
  Egress policy activated on: Tunnel200 Tunnel100
-------------------------------------------------------------------------
  PMI[Egress-aggregate]-FLOW MONITOR[MON-Egress-aggregate-0-0-2]
    monitor-interval:30
    Trigger Nbar:No
    minimum-mask-length:28
    key-list:
      ipv4 destination prefix
      ipv4 destination mask
      pfr site destination prefix ipv4
      pfr site destination prefix mask ipv4
      ip dscp
      interface output
      timestamp absolute monitoring-interval start
    Non-key-list:
      counter bytes long
      counter packets long
      ip protocol
    DSCP-list:N/A
    Class:DOMAIN-Class-Egress-ANY-0-2
    Exporter-list:
      10.2.10.10
Ingress Monitoring Details
Ingress aggregate monitoring is activated automatically on the external interfaces (Performance Monitor Instance #3). It is used for measuring the performance of the traffic and for triggering alerts when a threshold is crossed. Thresholds are determined on a per-DSCP basis: one class-map is created for each DSCP, and reacts (threshold crossing alerts) are provisioned under each class.
PfRv3 policies are applied either to applications or to DSCPs. However, performance is measured per DSCP, because Service Providers differentiate traffic based on DSCP, not on application. Hence, a mapping from applications to DSCPs is generated on the hub MC and distributed to all sites. The match and collect fields used by Performance Monitor, along with the exporter configuration, are pre-defined on the hub MC. Performance thresholds are obtained from the hub MC's PfRv3 policy. The specification is then distributed to all BRs via the peering infrastructure.
Below is an example of the Performance Monitor Instance #3:
R10#sh domain one border pmi
[SNIP]
Ingress policy DOMAIN-Policy-Ingress-0-2:
  Ingress policy activated on: Tunnel200 Tunnel100
-------------------------------------------------------------------------
  PMI[Ingress-per-DSCP]-FLOW MONITOR[MON-Ingress-per-DSCP-0-0-0]
    monitor-interval:30
    key-list:
      pfr site source id ipv4
      pfr site destination id ipv4
      ip dscp
      interface input
      policy performance-monitor classification hierarchy
    Non-key-list:
      transport packets lost rate
      transport bytes lost rate
      pfr one-way-delay
      network delay average
      transport rtp jitter inter arrival mean
      counter bytes long
      counter packets long
      timestamp absolute monitoring-interval start
    DSCP-list:N/A
    Exporter-list:None
[SNIP]
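The application-to-DSCP mapping described above can be sketched as a toy Python model (not Cisco code; the application names and threshold values are purely illustrative):

```python
# Toy model (not Cisco code) of the application-to-DSCP mapping: policies
# may be written per application, but thresholds are evaluated per DSCP,
# so the application name is first resolved to its observed DSCP.
app_to_dscp = {"telepresence-media": "af41", "voice-app": "ef"}
thresholds_per_dscp = {"ef": {"delay_ms": 150}, "af41": {"delay_ms": 200}}

def thresholds_for_app(app_name):
    """Resolve an application name to its DSCP, then to its thresholds."""
    dscp = app_to_dscp.get(app_name)
    return thresholds_per_dscp.get(dscp) if dscp else None
```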
Smart Probing is used to help collect performance metrics when there is no actual user traffic. One example is when a preferred path is used for voice traffic: all user voice traffic flows over the preferred path, leaving the secondary path without any voice traffic. In that case, the source-site BRs generate smart probes on the corresponding channel over the secondary path. Probe traffic is RTP and is measured by Unified Monitoring just like other data traffic.
Without actual traffic, the BR sends 10 probes spaced 20 ms apart in the first 500 ms and another 10 probes in the next 500 ms, achieving 20 pps on channels without traffic. With actual traffic, a lower frequency is used: probes are sent every third of the monitor-interval, i.e., every 10 seconds by default.
Threshold Crossing Alerts (TCA)
TCAs are notifications of traffic out-of-policy events. TCAs are generated by Unified Monitoring (Performance Monitor) on the destination border router that receives flows across the various service providers. The destination border router passively monitors the ingress flows specified by the local MC it peers with. The actions on a TCA are taken by the source MC.
TCA notifications are generated from the Performance Monitors attached to the BRs and from Smart Probing.
TCA from Performance Monitor
The destination BR receives performance TCA notifications from Performance Monitor (Performance Monitor Instance #3), which monitors ingress traffic statistics and raises a TCA whenever a threshold crossing event occurs. The BR then forwards the performance TCA notifications over multiple paths, for reliable delivery, to the source MCs whose traffic crossed the thresholds, so the source MCs receive the notifications successfully. The source MCs translate the TCA notifications, which contain performance statistics, into out-of-policy (OOP) events.
TCA from Smart Probes
The unreachable TCA is generated by Smart Probing per path name, source site-id, and DSCP. The notification is sent to the source MC via UDP and to the local MC via TCP. Periodic notifications are sent to the source MC until the channel becomes reachable. PfRv3 attempts to piggyback the unreachable TCA onto other performance TCAs; if there are no performance TCAs, the unreachable TCA is sent on its own.
TCA reception on Source MC
The source MC receives the TCA notifications from the destination BR and:
- Extracts the DSCP value and path name carried in the TCA
- Stores them under the corresponding channel, based on DSCP and destination site-id
- Starts a timer for the On-Demand Export to arrive
- Moves the affected Traffic Classes to an alternate path
- If the TC is based on an application ID:
- The application ID to DSCP mapping is used to determine the DSCP of the TC
- The DSCP value is cached in the TC itself to avoid a further lookup
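The TCA-handling steps above can be sketched as a toy Python model (not Cisco code; the channel key and field names are illustrative):

```python
# Toy model (not Cisco code) of the TCA-handling steps listed above:
# extract the DSCP and path name, store the event under the matching
# channel, and move the affected Traffic Classes to their backup channel.
def handle_tca(channels, traffic_classes, tca):
    key = (tca["dst_site"], tca["path"], tca["dscp"])
    channels[key]["oop"] = True              # store under the channel
    moved = []
    for tc in traffic_classes:
        if tc["channel"] == key:
            tc["channel"] = tc["backup_channel"]  # move to alternate path
            moved.append(tc["id"])
    return moved

channels = {("10.2.10.10", "MPLS", "ef"): {"oop": False}}
tcs = [{"id": 19, "channel": ("10.2.10.10", "MPLS", "ef"),
        "backup_channel": ("10.2.10.10", "INET", "ef")}]
moved = handle_tca(channels, tcs,
                   {"dst_site": "10.2.10.10", "path": "MPLS", "dscp": "ef"})
```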
On Demand Export (ODE)
Performance measurement is done on the remote end, therefore under normal conditions, a Master Controller will not have the performance metrics for a channel. The following output illustrates a channel that did not experience any performance issue. Therefore the MC has no performance metrics:
R83#sh domain one master channels
Channel Id: 85  Dst Site-Id: 10.2.10.10  Link Name: MPLS  DSCP: ef  TCs: 1
  Channel Created: 00:06:03 ago
  Provisional State: Initiated and open
  Operational state: Available
  Interface Id: 11
  Estimated Channel Egress Bandwidth: 24 Kbps
  Immitigable Events Summary:
   Total Performance Count: 0, Total BW Count: 0
  TCA Statistics:
   Received:0 ; Processed:0 ; Unreach_rcvd:0
[SNIP]
When a TCA occurs, the source BR triggers an on-demand export of the performance data associated with the DSCP and source site. The source-site MC uses this data to make a decision. Performance metrics are then available on the source-site MC, which keeps the last two buckets of ODE metrics.
R83#sh domain one master channels
Channel Id: 85  Dst Site-Id: 10.2.10.10  Link Name: MPLS  DSCP: ef  TCs: 1
  Channel Created: 00:10:03 ago
  Provisional State: Initiated and open
  Operational state: Available
  Interface Id: 11
  Estimated Channel Egress Bandwidth: 24 Kbps
  Immitigable Events Summary:
   Total Performance Count: 0, Total BW Count: 0
  ODE Stats Bucket Number: 1
   Last Updated  : 00:00:00 ago
   Packet Count  : 19
   Byte Count    : 1340
   One Way Delay : 254 msec*
   Loss Rate Pkts: 0.0 %
   Loss Rate Byte: 0.0 %
   Jitter Mean   : 4222 usec
   Unreachable   : FALSE
  ODE Stats Bucket Number: 2
   Last Updated  : 00:00:01 ago
   Packet Count  : 19
   Byte Count    : 1340
   One Way Delay : 255 msec*
   Loss Rate Pkts: 0.0 %
   Loss Rate Byte: 0.0 %
   Jitter Mean   : 3666 usec
   Unreachable   : FALSE
  TCA Statistics:
   Received:69 ; Processed:68 ; Unreach_rcvd:0
  Latest TCA Bucket
   Last Updated  : 00:00:01 ago
   One Way Delay : 255 msec*
   Loss Rate Pkts: NA
   Loss Rate Byte: NA
   Jitter Mean   : NA
   Unreachability: FALSE
[SNIP]
PfRv3 supports two monitoring timer values. The default is 30 seconds, but a Quick Monitor can be added for critical applications when failover time is critical. Quick Monitor is activated with the following commands on the hub MC:
domain one
 vrf default
  master hub
   source-interface Loopback0
   monitor-interval 2 dscp af31
   monitor-interval 2 dscp cs4
   monitor-interval 2 dscp af41
   monitor-interval 2 dscp ef
Performance Routing v3 Zero SLA
PfRv3 leverages Smart Probes to help gather performance metrics on a secondary path where there may be no actual user traffic for a specific DSCP. That is especially true when there is a preferred path for business-critical applications and voice or video traffic. PfRv3 sends Smart Probes for each DSCP and destination site, i.e., per channel.
The Zero SLA support feature enables PfRv3 to reduce probing bandwidth on various ISP links, such as 3G, 4G, and LTE. When the Zero SLA (0-SLA) feature is configured on an ISP link, only the channel with the DSCP (Differentiated Services Code Point) value 0 is probed. For all other DSCPs, channels are created only if there is traffic, but no probing is performed. Performance metrics are extrapolated from channel DSCP 0.
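The extrapolation behavior can be sketched as a toy Python model (not Cisco code; the metric values and path names are illustrative):

```python
# Toy model (not Cisco code) of Zero SLA: on a zero-sla path only the
# DSCP 0 channel is probed; other channels on that path reuse the
# DSCP 0 measurements (extrapolation).
def channel_metrics(metrics, path, dscp, zero_sla_paths):
    if path in zero_sla_paths and dscp != 0:
        return metrics.get((path, 0))  # extrapolate from the DSCP 0 channel
    return metrics.get((path, dscp))

metrics = {("INET", 0): {"delay_ms": 40}, ("MPLS", 46): {"delay_ms": 25}}
inet_ef = channel_metrics(metrics, "INET", 46, {"INET"})  # extrapolated
mpls_ef = channel_metrics(metrics, "MPLS", 46, {"INET"})  # measured
```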
The configuration is performed on the Hub Border Routers, over the external interfaces. This is an additional parameter for the path name command:
```
interface Tunnel 200
 domain one path INET zero-sla
```
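The extrapolation behavior described above can be sketched in a few lines of Python. This is an illustration only, not Cisco code; all names and metric values are made up:

```python
# Illustrative sketch (not Cisco code) of Zero SLA: on a zero-sla path
# only the DSCP 0 channel is probed, and every other channel reuses
# (extrapolates) the metrics measured on DSCP 0.

# Hypothetical metrics measured by Smart Probes on the DSCP 0 channel.
probed = {"dscp0": {"delay_ms": 255, "loss_pct": 0.0, "jitter_us": 3666}}

def channel_metrics(dscp: str, zero_sla: bool) -> dict:
    if zero_sla and dscp != "dscp0":
        # No probes are sent on this channel: extrapolate from DSCP 0.
        return probed["dscp0"]
    return probed.get(dscp, probed["dscp0"])

print(channel_metrics("ef", zero_sla=True))  # same figures as DSCP 0
```

The point of the sketch is simply that on a zero-sla path only one channel per destination site consumes probing bandwidth, which is what makes the feature attractive on metered 3G/4G/LTE links.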
PfRv3 Route Control
Unlike previous versions of PfR, PfRv3 does not make any changes to the routing table. PfRv3 route control is a feature in the datapath:
PfRv3 Route Control provides the following features:
- Activated on all interfaces except external interfaces.
- The feature operates in the input (ingress) direction.
- Maintains a single traffic-class database; a traffic class is identified by a combination of destination prefix (prefix and prefix length), NBAR application ID, or DSCP. Each traffic-class entry contains an output channel ID, an interface, and a next-hop IP address.
- For each packet, a lookup is performed to obtain the egress channel, output interface, and next-hop information, which is then used to forward the packet. If no entry is found, normal routing takes over.
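The per-packet lookup above can be sketched in Python. This is a minimal illustration, not Cisco code: the traffic-class database, its keys, and all addresses are hypothetical, loosely modeled on the show outputs in this article:

```python
# Illustrative sketch (not Cisco code) of PfRv3 datapath route control:
# traffic classes keyed by (destination prefix, DSCP, application id)
# map to a channel / interface / next hop; a miss falls back to routing.
import ipaddress

# Hypothetical traffic-class database entry.
tc_db = {
    (ipaddress.ip_network("10.1.11.0/24"), "ef", None):
        {"channel": 46, "interface": "Tunnel100", "nexthop": "10.0.100.11"},
}

def forward(dst_ip: str, dscp: str, app_id=None):
    for (prefix, tc_dscp, tc_app), entry in tc_db.items():
        if (ipaddress.ip_address(dst_ip) in prefix
                and dscp == tc_dscp and app_id == tc_app):
            return entry          # PfR-controlled: use channel's exit
    return "normal-routing"       # no traffic-class entry: RIB/FIB decides

print(forward("10.1.11.5", "ef"))  # controlled: Tunnel100 via 10.0.100.11
print(forward("10.9.9.9", "ef"))   # unknown destination: normal routing
```

The key design point the sketch captures is the fallback: because PfRv3 never rewrites the routing table, any traffic it does not control is simply forwarded by the existing routing protocols.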
Next hop information is available on the site Master Controller with the Traffic Class summary show command:
```
MC#sh domain one master traffic-classes summary

APP - APPLICATION, TC-ID - TRAFFIC-CLASS-ID, APP-ID - APPLICATION-ID
SP - SERVICE PROVIDER, PC = PRIMARY CHANNEL ID,
BC - BACKUP CHANNEL ID, BR - BORDER, EXIT - WAN INTERFACE
UC - UNCONTROLLED, PE - PICK-EXIT, CN - CONTROLLED, UK - UNKNOWN

Dst-Site-Pfx   Dst-Site-Id  APP  DSCP     TC-ID  APP-ID  State  SP    PC/BC  BR/EXIT
10.1.11.0/24   10.2.11.11   N/A  default  3      N/A     CN     INET  9/NA   10.8.5.5/Tunnel200
10.1.13.0/24   10.2.13.13   N/A  default  5      N/A     CN     INET  11/12  10.8.5.5/Tunnel200
10.1.10.0/24   10.2.10.10   N/A  default  2      N/A     CN     INET  7/8    10.8.5.5/Tunnel200
10.1.13.0/24   10.2.13.13   N/A  ef       18     N/A     CN     MPLS  42/39  10.8.4.4/Tunnel100
10.1.12.0/24   10.2.12.12   N/A  ef       17     N/A     CN     MPLS  40/41  10.8.4.4/Tunnel100
10.1.11.0/24   10.2.11.11   N/A  ef       20     N/A     CN     MPLS  46/45  10.8.4.4/Tunnel100
10.1.10.0/24   10.2.10.10   N/A  ef       19     N/A     CN     MPLS  48/47  10.8.4.4/Tunnel100
10.1.13.0/24   10.2.13.13   N/A  af31     16     N/A     CN     MPLS  37/38  10.8.4.4/Tunnel100
10.1.12.0/24   10.2.12.12   N/A  af31     14     N/A     CN     MPLS  36/35  10.8.4.4/Tunnel100
10.1.10.0/24   10.2.10.10   N/A  af31     15     N/A     CN     MPLS  44/43  10.8.4.4/Tunnel100
10.1.11.0/24   10.2.11.11   N/A  af31     13     N/A     CN     MPLS  34/33  10.8.4.4/Tunnel100
10.1.12.0/24   10.2.12.12   N/A  default  7      N/A     CN     MPLS  6/NA   10.8.4.4/Tunnel100

Total Traffic Classes: 12 Site: 12 Internet: 0
MC#
```
This output gives the Border Router and the external interface used for each Traffic Class.
It is also very useful to be able to get next-hop information on a Border Router, especially when the MC and BRs run on separate chassis. Next-hop information is available with the Traffic Class show command on the Border Router.
Example 1 on BR1:
```
BR1#sh domain one border traffic-classes
[SNIP]
-------------------------------------------------------------------------------------
Src-Site-Prefix: ANY   Dst-Site-Prefix: 10.1.11.0/24
DSCP: ef  Traffic class id: 20
TC Learned: 1d21h ago
Present State: CONTROLLED
Destination Site ID: 10.2.11.11
If_index: 11
Primary chan id: 46
Primary chan Presence: LOCAL CHANNEL
Primary interface: Tunnel100
Primary Nexthop: 10.0.100.11 (BGP)
Backup chan id: 45
Backup chan Presence: NEIGHBOR_CHANNEL via border 10.8.5.5
Backup interface: Tunnel0
-------------------------------------------------------------------------------------
```
- Primary chan Presence is set to LOCAL CHANNEL, which means this BR is forwarding the Traffic Class out one of its own local external interfaces.
- The external interface used is Tunnel100.
- The next hop is 10.0.100.11, and the parent route information comes from BGP.
Example 2 on BR1:
```
BR1#sh domain one border traffic-classes
[SNIP]
-------------------------------------------------------------------------------------
Src-Site-Prefix: ANY   Dst-Site-Prefix: 10.1.13.0/24
DSCP: default  Traffic class id: 5
TC Learned: 1d23h ago
Present State: CONTROLLED
Destination Site ID: 10.2.13.13
If_index: 12
Primary chan id: 11
Primary chan Presence: NEIGHBOR_CHANNEL via border 10.8.5.5
Primary interface: Tunnel0
Backup chan id: 12
Backup chan Presence: LOCAL CHANNEL
Backup interface: Tunnel100
-------------------------------------------------------------------------------------
```
- Primary chan Presence is set to NEIGHBOR_CHANNEL, which means the Traffic Class is actually forwarded by the other BR (10.8.5.5).
- Packets are forwarded over the automatic mGRE tunnel (Tunnel0).
Two methods can be used to verify that PfR has initiated changes in the network:
- PfR show commands - These can be used to verify that network changes have occurred and that traffic classes are in policy. The output of the traffic-class commands includes the current exit interface, current performance metrics, egress and ingress interface bandwidth, the reason for the last route change, and path information sourced from a specified border router. More information is available on the PfRv3 Solution Guides home page.
- NetFlow version 9 exports - Master Controllers and Border Routers export information to a NetFlow v9 collector, and a network management application can build reports from this information. A recommended solution for reporting is LiveAction 4.1. See PfRv3 Reporting for more information.
The IWAN Intelligent Path Control pillar is based upon Performance Routing (PfR), which:
- Maximizes WAN bandwidth utilization
- Protects applications from performance degradation
- Enables the Internet as a viable WAN transport
- Provides multisite coordination to simplify network-wide provisioning.
- Offers an application-based, policy-driven framework tightly integrated with existing AVC components.
- Delivers a smart and scalable multi-site solution that enforces application SLAs while optimizing network resource utilization.
PfRv3 is the third-generation, multi-site-aware bandwidth and path control/optimization solution for WAN- and cloud-based applications.
- Available on the ASR 1000 Series, ISR-4000-X, and CSR 1000V (MC only) with IOS-XE 3.13
- Available on ISR-G2 with IOS 15.4(3)M