AVC-Export:Monitoring
Application Name
The NBAR Application Name provides the L7-level information for a particular flow, e.g. HTTP, FTP, SIP, etc. An ID exported by ART indicates which application the flow belongs to. The Application ID is divided into two parts: Engine-ID:Selector-ID.
The first 8 bits identify the engine that classified the flow, for example IANA-L4 or IANA-L3. The remaining 24 bits identify the application, for example 80 (HTTP). The CLI for this field is:
flow record type mace <pa-record> collect application name
It is exported against FNF field ID 95 and is supported in both export formats, NetFlow v9 and IPFIX. With this CLI, only the Application ID is exported; since the Application ID is just a number, the collector may not be able to interpret it. To resolve this, there is an option to export the mapping table between Application IDs and application names. This option is configured under the flow exporter. Here is the CLI:
flow exporter <my-exporter> option application-table
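As an illustration of the Engine-ID/Selector-ID split described above, here is a minimal collector-side sketch in Python (the input value is hypothetical; 0x03000050 is the HTTP example used later in this document):

# Minimal sketch: split a 32-bit NBAR Application ID (FNF field 95)
# into its 8-bit engine ID and 24-bit selector ID.
def split_application_id(app_id: int) -> tuple[int, int]:
    engine_id = (app_id >> 24) & 0xFF       # first 8 bits: classification engine
    selector_id = app_id & 0x00FFFFFF       # remaining 24 bits: application selector
    return engine_id, selector_id

engine, selector = split_application_id(0x03000050)
print(engine, selector)   # 3 80 -> engine IANA-L4, selector HTTP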
If you want to get more information regarding the various attributes of a particular application, you can configure the following option under the flow exporter:
flow exporter <my-exporter> option application-attribute
The option command results in periodic export of Network Based Application Recognition (NBAR) application attributes to the collector. The following application attributes are sent to the collector per protocol:
Attribute | Description |
Category | Provides first-level categorization for each application. |
Sub-Category | Provides second-level categorization for each application. |
Application-Group | Groups applications that belong to the same networking application. |
P2P-Technology | Specifies whether the application is based on peer-to-peer technology or not. |
Tunnel-Technology | Specifies whether the application tunnels the traffic of other protocols or not. |
Encrypted | Specifies whether the application is an encrypted networking protocol or not. |
The optional timeout on the option command can alter the frequency at which the reports are sent.
The following command can be used to display the option tables exported to the collector for application name mapping and attributes:
R9#show flow exporter templates
Flow Exporter MYEXPORTER:
[SNIP]
  Client: Option options application-name
  Exporter Format: IPFIX (Version 10)
  Template ID    : 261
  Source ID      : 0
  Record Size    : 83
  Template layout
  _____________________________________________________________________________
  |            Field              |  ID   | Ent.ID | Offset | Size |
  -----------------------------------------------------------------------------
  | APPLICATION ID                | 95    |        |   0    |  4   |
  | application name              | 96    |        |   4    |  24  |
  | application description       | 94    |        |   28   |  55  |
  -----------------------------------------------------------------------------

  Client: Option options application-attributes
  Exporter Format: IPFIX (Version 10)
  Template ID    : 262
  Source ID      : 0
  Record Size    : 130
  Template layout
  _____________________________________________________________________________
  |            Field              |  ID   | Ent.ID | Offset | Size |
  -----------------------------------------------------------------------------
  | APPLICATION ID                | 95    |        |   0    |  4   |
  | application category name     | 12232 | 9      |   4    |  32  |
  | application sub category name | 12233 | 9      |   36   |  32  |
  | application group name        | 12234 | 9      |   68   |  32  |
  | p2p technology                | 288   |        |   100  |  10  |
  | tunnel technology             | 289   |        |   110  |  10  |
  | encrypted technology          | 290   |        |   120  |  10  |
  -----------------------------------------------------------------------------
[SNIP]
The following show commands can be used to display NBAR option tables:
- display all the application ID to name mappings
R9#show flow exporter option application table
 Engine: prot (IANA_L3_STANDARD, ID: 1)
 appID     Name       Description
 -----     ----       -----------
 1:8       egp        Exterior Gateway Protocol
 1:47      gre        General Routing Encapsulation
 1:1       icmp       Internet Control Message Protocol
 1:88      eigrp      Enhanced Interior Gateway Routing Protocol
[SNIP]
- display the NBAR field extraction format which concatenates App ID|Sub-classification ID|Value
R9#show ip nbar parameter extraction
 Protocol     Parameter       ID
 --------     ---------       --
 smtp         sender          4666370
 smtp         server          4666369
 pop3         server          2176001
 sip          source          4273154
 sip          destination     4273153
 nntp         group-name      1848321
 http         referer         209924
 http         user-agent      209923
 http         host            209922
 http         url             209921
R9#
PA Metrics
PA provides the basic metrics for both TCP and UDP, and for both IPv4 and IPv6. Some of the metrics are exported dynamically as delta values for the interval; these include the client/server bytes and packets metrics. The PA client/server bytes/packets metrics are Layer 3 measurements and, for a TCP flow, are counted up to the second FIN. The rest of the metrics are relatively static and remain the same across export intervals.
PA keeps exporting the measurements as long as the flow stays active. As a consequence, the collector might occasionally observe zero values for dynamic metrics such as the client/server bytes/packets. For UDP flows, TCP-related metrics such as the ART metrics are zero. Note also that the Input/Output interface metrics correspond to the interfaces through which the flow enters and leaves the box.
All the PA metrics can be exported through either the NetFlow v9 or the IPFIX protocol. The PA metrics are summarized below.
Field Name | Export ID | CLI | Description |
Application ID | 95 | collect application name | Exports the Application ID field (from NBAR2) to the reporting tool. |
Client Bytes | 1 | collect counter client bytes | Total bytes sent by the initiator of the connection. For a TCP flow, counted up to the second FIN. |
Client Packets | 2 | collect counter client packets | Total packets sent by the initiator of the connection. For a TCP flow, counted up to the second FIN. |
Interface Input | 10 | collect interface input | Interface through which the flow enters the box. |
Interface Output | 14 | collect interface output | Interface through which the flow exits the box. |
Server Bytes | 23 | collect counter server bytes | Total bytes sent by the responder of the connection. For a TCP flow, counted up to the second FIN. |
Server Packets | 24 | collect counter server packets | Total packets sent by the responder of the connection. For a TCP flow, counted up to the second FIN. |
Datalink Mac Source Address Input | 56 | collect datalink mac source address input | MAC address of the source device on the input side. |
IPv4 DSCP | 195 | collect ipv4 dscp | IPv4 DSCP value. |
IPv6 DSCP | 195 | collect ipv6 dscp | IPv6 DSCP value. |
ART Metrics
Field Name | Export ID | CLI | Description |
Client Network Time [sum/min/max] | | | The round trip time between the SYN-ACK and the ACK, also called Client Network Delay (CND). CND = T8 - T5 |
Server Network Time [sum/min/max] | | | The round trip time between the SYN and the SYN-ACK, also called Server Network Delay (SND). SND = T5 - T2 |
Network Time [sum/min/max] | | | The round trip time that is the sum of CND and SND. Also called Network Delay (ND). |
Server Response Time [sum/min/max] | | | The time taken by an application to respond to a request. Also called Application Delay (AD) or Application Response Time. |
Response Time [sum/min/max] | | | The amount of time between the client request and the first server response. The client request can span multiple packets; the time of the last received client packet is used. |
Total Response Time [sum/min/max] | | | The total time from the moment the client sends the request until the first response packet from the server is delivered to the client. Also known as Total Delay (TD). |
Total Transaction Time [sum/min/max] | | | The amount of time between the client request and the final response packet from the server. Measured and exported on receiving either a new request from the client (which indicates the end of the current transaction) or the first FIN packet. |
ART Client Bytes / Packets | | | Byte and packet counts for all client packets. |
ART Server Bytes / Packets | | | Byte and packet counts for all server packets. |
ART Count New Connections | | | Number of TCP sessions established (3-way handshake). Also called number of connections (sessions). |
ART Count Responses | | | Number of request-response pairs received within the monitoring interval. |
Responses Histogram Buckets (7-bucket histogram) | | | Number of responses by response time, in a 7-bucket histogram. |
ART Count Late Responses | | | Number of responses received after the maximum response time. The current timeout threshold is 1 second. Also called number of late responses (timeouts). |
ART Count Transactions | | | Total number of transactions for all TCP connections. |
ART Count Retransmissions | | | Packet count of possible retransmitted packets with the same sequence number as the last received packet. This metric covers client retransmissions only. |
ART All Metrics | | | Single CLI to collect all the ART-related metrics in MACE. This CLI replaces all the ART-related collect statements in a flow record. |
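As a quick illustration of how the CND, SND, and ND definitions above relate to the TCP handshake, the following sketch computes the three delays from the times at which the SYN, SYN-ACK, and ACK are observed (the timestamps are hypothetical values in milliseconds):

# Sketch: derive Server Network Delay (SND), Client Network Delay (CND)
# and Network Delay (ND) from TCP handshake timestamps, per the
# definitions in the ART metrics table above.
t_syn = 0.0        # SYN observed
t_syn_ack = 12.5   # SYN-ACK observed
t_ack = 20.0       # ACK observed

snd = t_syn_ack - t_syn    # SND: round trip between SYN and SYN-ACK
cnd = t_ack - t_syn_ack    # CND: round trip between SYN-ACK and ACK
nd = snd + cnd             # ND: sum of CND and SND

print(f"SND={snd} ms, CND={cnd} ms, ND={nd} ms")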
Top Domain, URL Hit Count Report
The new NBAR-related fields that PA supports from 15.2(4)M2 onwards are:
- HTTP Host
- URI and URI Statistics
If the user configures one or more of the above fields, the PA control plane explicitly activates NBAR. The HTTP fields can only be exported through the IPFIX protocol (NetFlow Version 9 does not support variable-length fields).
Field Name | Export ID | CLI | Description |
application http host | 45003 | collect application http host | Host name |
application http uri statistics | 42125 | collect application http uri statistics | URI statistics |
art count new connections | | | Number of TCP sessions established (3-way handshake). Also called number of connections (sessions). |
Host
The HTTP Host field is exported with NetFlow export ID 45003. It is encoded as:
 0                    31 32                 47 48
 +---------------------+--------------------+----------------
 | NBAR Application ID | Sub-Application ID | Value (host)
 +---------------------+--------------------+----------------
Collectors can identify the extracted field name and type based on the Application ID and Sub-Application ID embedded in the field (a small parsing sketch follows the list below).
- For HTTP Host, the Application ID is generally 0x03000050 (see the previous chapter on Application ID for how to obtain this value). The first byte (03) specifies the engine ID (IANA-L4); the next 3 bytes (000050) are the selector ID (decimal 80, i.e. HTTP).
- The Sub-Application ID for host is 0x0002. It can be obtained from show ip nbar parameter extraction and the sub-application-table option template; take only the last two bytes, e.g. 0x3402 = HTTP Host.
- Value is the host string.
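A minimal parsing sketch for this layout, assuming the raw bytes of field 45003 have already been extracted from the IPFIX record (the sample bytes below are hypothetical):

import struct

# Sketch: decode the HTTP Host field (export ID 45003) laid out as
# 4-byte NBAR Application ID | 2-byte Sub-Application ID | host string.
def parse_http_host(field_bytes: bytes) -> dict:
    app_id, sub_app_id = struct.unpack_from("!IH", field_bytes, 0)
    host = field_bytes[6:].rstrip(b"\x00").decode("ascii", errors="replace")
    return {"application_id": hex(app_id), "sub_application_id": hex(sub_app_id), "host": host}

# Hypothetical field content: HTTP (0x03000050), Sub-Application ID 0x0002, host www.cisco.com.
example = bytes.fromhex("03000050") + bytes.fromhex("0002") + b"www.cisco.com"
print(parse_http_host(example))   # {'application_id': '0x3000050', 'sub_application_id': '0x2', 'host': 'www.cisco.com'}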
PA exports one host-name for each L7 flow. The CLI for host name is:
flow record type mace <pa-record> collect application http host
URI and Count
PA collects and exports the URI and hit-count in the format "uri:count::uri:count...". The colon (:) and double-colon (::) delimiters are shown here only to illustrate the format; the actual delimiter is NULL (\0). The URI count is always represented in binary format with a fixed length of 2 bytes. The collector has to parse the URIs on the basis of the NULL (\0) delimiters; the URI count is read as a 2-byte binary number, and there is no '\0' delimiter after the count. A special PA-specific FNF export field (42125) is used to export the list of URIs and the corresponding hit-counts. The encoding is as follows:
{URI\0countURI\0count}
Please note that the URI being collected and exported is limited to the first '/'. For example, if the URL is http://www.cisco.com/router/isr/g2, the URI collected is 'router'.
Example: the following flows are collected on the router (10.1.1.1 to 10.2.2.2, destination port 80, protocol TCP):
- www.cisco.com/US - 2 flows
- www.cisco.com/WORLD - 1 flow
The data would then be exported as: [10.1.1.1, 10.2.2.2, 80, TCP, www.cisco.com, US\02WORLD\01]
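The following sketch parses the NULL-delimited URI / 2-byte count pairs described above, assuming the raw bytes of field 42125 are already available and that the count is in network byte order (an assumption; the byte order is not stated here):

import struct

# Sketch: parse the URI statistics field (export ID 42125), encoded as
# repeated {URI, '\0', 2-byte binary count} records with no delimiter
# after the count.
def parse_uri_statistics(data: bytes) -> list[tuple[str, int]]:
    results = []
    pos = 0
    while pos < len(data):
        end = data.index(b"\x00", pos)                      # URI is NULL-terminated
        uri = data[pos:end].decode("ascii", errors="replace")
        (count,) = struct.unpack_from("!H", data, end + 1)  # 2-byte hit count
        results.append((uri, count))
        pos = end + 3                                       # skip URI, NULL, and count
    return results

# Hypothetical payload matching the example above: US hit twice, WORLD hit once.
payload = b"US\x00" + struct.pack("!H", 2) + b"WORLD\x00" + struct.pack("!H", 1)
print(parse_uri_statistics(payload))   # [('US', 2), ('WORLD', 1)]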
The CLI to collect uri and its hit-count (statistics) is:
flow record type mace <pa-record> collect application http uri statistics
QoS Class-ID, Queue Drops and Queue Hierarchy
The new QoS-related fields that PA supports from 15.2(4)M2 onwards are:
- The QoS classification hierarchy
- The QoS queue index
- The QoS queue drops
Field Name | Export ID | CLI | Description |
QoS Policy Classification Hierarchy | 41000 | collect policy qos classification hierarchy | Reports the application's class-of-service hierarchy. |
QoS Queue Drops | 42129 | collect policy qos queue drops | Number of drops in the queue. |
Timestamp absolute monitoring-interval | 65501 | No CLI is needed. | Timestamp when the monitoring interval expires. Added automatically with 'collect policy qos queue drops'. |
QoS Policy Classification Hierarchy
To get the QoS queue for a particular flow, PA will export the hierarchy of the class that the flow matched on. This hierarchy will be exported in the flow record as a list of IDs: {Policy ID, Class ID 1, Class ID 2, Class ID 3, Class ID 4, Class ID 5}. Each of these IDs is a 4-byte integer representing a C3PL policy-map or class-map, with any missing or unnecessary fields being 0. Total length of the field is 24 bytes. The ID to name mapping will be exported as an option template under a flow exporter.
FNF export ID 41000 would be used for this metric. It is supported by netflow-v9 as well as IPFIX. Here is the CLI:
flow record type mace <pa-record> collect policy qos classification hierarchy
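As a sketch of how a collector might decode the 24-byte hierarchy field, the following unpacks it into the six 4-byte IDs and drops the unused zero entries (the byte order and the sample ID values are assumptions for illustration):

import struct

# Sketch: unpack the 24-byte QoS classification hierarchy (export ID 41000)
# into {Policy ID, Class ID 1..5}; unused positions are encoded as 0.
def parse_qos_hierarchy(field_bytes: bytes) -> dict:
    ids = struct.unpack("!6I", field_bytes)   # six 4-byte integers
    return {"policy_id": ids[0], "class_ids": [i for i in ids[1:] if i != 0]}

# Hypothetical IDs standing in for P1, C1, C11 in the example below.
example = struct.pack("!6I", 1001, 2001, 2011, 0, 0, 0)
print(parse_qos_hierarchy(example))   # {'policy_id': 1001, 'class_ids': [2001, 2011]}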
Class hierarchy will be collected in configurations where PA and QoS are applied on the same L3 interface or in configurations where PA is applied on a dialer interface. Example:
class-map match-all C1
 match any
class-map match-all C11
 match ip dscp ef
class-map match-all C12
 match ip dscp cs2
!
policy-map Child-P11
 class C11
  bandwidth remaining percent 10
 class C12
  bandwidth remaining percent 70
 class class-default
  bandwidth remaining percent 20
!
policy-map Parent-P1
 class C1
  shape average 16000000
  service-policy Child-P11
!
interface e0/0
 service-policy output Parent-P1
!
PA will export the QoS class hierarchy information as follows:
Flow-ID | Class Hierarchy (41000) | Queue id (42128) |
Flow-1 | P1, C1, C11, 0, 0, 0 | 1 |
Flow-2 | P1, C1, C11, 0, 0, 0 | 1 |
Flow-3 | P1, C1, C12, 0, 0, 0 | 2 |
Two option templates are used to export the class and policy information: one template for the class-ID to class-name mapping, and a second for the policy-ID to policy-name mapping. The sample CLI is as follows:
flow exporter <my-exporter>
 option c3pl-class-table
 option c3pl-policy-table
The following command can be used in exec mode to display the option tables exported to the collector for class and policy mapping:
show flow exporter <my-exporter> templates
Example:
- Class-map Option Template
router#show flow exporter MYEXPORTER templates
[SNIP]
Flow Exporter MYEXPORTER:
  Client: Option classmap option table
  Exporter Format: NetFlow Version 9
  Template ID    : 261
  Source ID      : 0
  Record Size    : 304
  Template layout
  _____________________________________________________________________
  |          Field           |  Type  | Offset | Size |
  ---------------------------------------------------------------------
  | v9-scope system          | 1      |   0    |  4   |
  | c3pl class cce-id        | 41001  |   4    |  4   |
  | c3pl class name          | 41002  |   8    |  40  |
  | c3pl class type          | 41003  |   48   | 256  |
  ---------------------------------------------------------------------
[SNIP]
- Policy-map Option Template
router#show flow exporter MYEXPORTER templates
[SNIP]
Flow Exporter MYEXPORTER:
  Client: Option policymap option table
  Exporter Format: NetFlow Version 9
  Template ID    : 262
  Source ID      : 0
  Record Size    : 304
  Template layout
  _____________________________________________________________________
  |          Field           |  Type  | Offset | Size |
  ---------------------------------------------------------------------
  | v9-scope system          | 1      |   0    |  4   |
  | c3pl policy cce-id       | 41004  |   4    |  4   |
  | c3pl policy name         | 41005  |   8    |  40  |
  | c3pl policy type         | 41006  |   48   | 256  |
  ---------------------------------------------------------------------
[SNIP]
QoS Queue Drops
The queue statistics that are exported will be the drops seen due to the queuing action configured under a class-map. These drops will be collected per export interval from the queuing feature and will be exported in a separate option template table. This option template table will identify the drops seen for a particular queue id, and the flow record will identify the id of the queue that a flow was queued in. As we’re exporting the drops at a queue level, the drops seen under a queue will be a summation of the drops experienced by multiple flows that matched on the same class-map hierarchy.
FNF export ID 42129 would be used for this metric. It is supported by netflow-v9 as well as IPFIX. Here is the CLI:
flow record type mace <pa-record> collect policy qos queue drops
As a result of the above command, two sets of exports take place.
The first is the data exported as part of each flow entry: the queue-id and timestamp are exported per flow entry. Here is an example:
Flow-ID | Queue id (42128) | Timestamp (65501) |
Flow-1 | 1 | 50000 |
Flow-2 | 2 | 50000 |
The second is the data exported as part of an option template. It is exported once when the PA timer expires. Queue-id, timestamp, and queue-drops are included in the template. Here is an example:
Queue-id (42128) | Timestamp (65501) | Packet Drops (42129) |
1 | 50000 | 100 |
2 | 50000 | 20 |
From the above two tables, the user can match on queue-id and timestamp to determine the queue drops for the queue to which a particular flow belongs.
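As a sketch of this lookup, a collector could join the per-flow records with the option-table records on the (queue-id, timestamp) pair; the records below simply mirror the two example tables above, and the field names are illustrative:

# Sketch: attribute queue drops to flows by joining per-flow records and
# the queue-drops option table on (queue-id, timestamp).
flow_records = [
    {"flow": "Flow-1", "queue_id": 1, "timestamp": 50000},
    {"flow": "Flow-2", "queue_id": 2, "timestamp": 50000},
]
queue_drop_records = [
    {"queue_id": 1, "timestamp": 50000, "drops": 100},
    {"queue_id": 2, "timestamp": 50000, "drops": 20},
]

drops_by_key = {(r["queue_id"], r["timestamp"]): r["drops"] for r in queue_drop_records}

for flow in flow_records:
    key = (flow["queue_id"], flow["timestamp"])
    print(flow["flow"], "-> queue drops:", drops_by_key.get(key, 0))
# Flow-1 -> queue drops: 100
# Flow-2 -> queue drops: 20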
In configurations where there are no explicit queues specified under a class-map, the drop count collected for this class-map will be the drop count from the default queue. This behavior follows the QoS behavior of directing flows without a queuing action to the default queue. As such, if there are multiple classes without a queuing action, flows that match any of these class-maps will all see the same drop count (drops seen in the default queue).