ASA5580-40 TCP Throughput Performance Single Context 4 Interfaces Configuration Example

From DocWiki


Latest revision as of 21:34, 18 November 2011

Test Details
Goal of Test

The purpose of this test is to determine the maximum throughput the ASA can process using HTTP traffic. This traffic model more closely resembles real-world traffic.

To generate the TCP traffic, four Spirent Avalanche 2900 chassis were used (one ten-gigabit interface each). In the diagram, the numbers 155, 156, 157, and 158 identify the individual 2900 chassis. To produce bidirectional traffic through the ASA, one client port is placed on the outside pulling a 512 KB object from one server port on the inside, and one client port is placed on the inside pulling a 512 KB object from one server port on the outside. The Avalanche tool is configured with 10672 clients and 16 servers, which works out to 667 clients each pointing to one of the 16 servers. Each client walks an action list of 10 GETs to the server's address. With HTTP 1.1 persistence, this results in 10 transactions per TCP connection. For each GET, the server responds with a 512 KB object. Screenshots of the test tool setup are shown below.
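The client/server arithmetic above can be cross-checked with a few lines (a sketch in Python; all figures are taken from the test description):

```python
# Cross-check of the load figures described above (all values from the text).
total_clients = 10672     # Avalanche simulated clients
servers = 16              # one server per Avalanche server port
gets_per_client = 10      # action list: 10 GETs per client
object_size_kb = 512      # object size returned per GET, in KB

clients_per_server = total_clients // servers
print(clients_per_server)                  # 667 clients pointing at each server
print(gets_per_client)                     # 10 transactions per persistent TCP connection
print(gets_per_client * object_size_kb)    # 5120 KB pulled per client connection
```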


Data to Record

1. show cpu
2. show conn count
3. show io-bridge
4. Capture results from the test tool


Estimated Time Needed: 60 minutes

Topology

TCP Throughput 4ports.jpg


Procedures

DESCRIPTION

1. On the client side configure (Avalanche Clients):

a. 1500 SimUsers for the load on each Avalanche 2900

b. 16 subnets with 667 hosts each (10.100 to 12.254), each pointing to one server on the reflector; assign one subnet to each port.

c. 10 GETs on the action profile

2. On the server side, configure:

a. One server per port

b. 512k Object size

3. Bidirectional traffic is used (clients from the inside to the outside and vice versa).
4. While traffic is at steady state take a screen shot of the live Client Stats.
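The subnet-to-server fan-out in steps 1 and 2 can be sketched as follows. This is a hypothetical illustration: the helper name and all addresses below are invented for clarity and do not reflect the actual test-bed addressing.

```python
# Hypothetical sketch of the client-to-server association scheme:
# 16 client subnets of 667 hosts each, each subnet pinned to one server port.
# Subnet and server addresses below are illustrative only.
def build_associations(num_subnets=16, hosts_per_subnet=667):
    associations = []
    for s in range(num_subnets):
        subnet = f"10.{100 + s}.0.0/16"    # hypothetical client subnet
        server = f"192.168.1.{s + 1}"      # hypothetical server address
        associations.append((subnet, hosts_per_subnet, server))
    return associations

assoc = build_associations()
print(len(assoc))                       # 16 associations, one per port pairing
print(sum(h for _, h, _ in assoc))      # 10672 simulated clients in total
```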

Configurations

ares# sh run
: Saved
:
ASA Version 8.1(1)
!
hostname ares
enable password 8Ry2YjIyt7RRXU24 encrypted
passwd 2KFQnbNIdI.2KYOU encrypted
names
!
interface Management0/0
 shutdown
 no nameif
 no security-level
 no ip address
 management-only
!
interface Management0/1
 shutdown
 no nameif
 no security-level
 no ip address
 management-only
!
interface GigabitEthernet3/0
 shutdown
 no nameif   
 no security-level
 no ip address
!
interface GigabitEthernet3/1
 shutdown
 no nameif
 no security-level
 no ip address
!
interface GigabitEthernet3/2
 shutdown
 no nameif
 no security-level
 no ip address
!
interface GigabitEthernet3/3
 shutdown
 no nameif
 no security-level
 no ip address
!
interface TenGigabitEthernet5/0
 nameif outside_gi_1
 security-level 0
 ip address 10.22.0.1 255.255.0.0
!
interface TenGigabitEthernet5/1
 nameif inside_gi_1
 security-level 100
 ip address 10.32.0.1 255.255.0.0
!
interface TenGigabitEthernet7/0
 nameif inside_gi_2
 security-level 100
 ip address 10.20.0.1 255.255.0.0
!
interface TenGigabitEthernet7/1
 nameif outside_gi_2
 security-level 0
 ip address 10.30.0.1 255.255.0.0
!
ftp mode passive
access-list in extended permit ip any any
access-list out extended permit ip any any
pager lines 24
logging enable
logging buffered warnings
mtu outside_gi_1 1500
mtu inside_gi_1 1500
mtu inside_gi_2 1500
mtu outside_gi_2 1500
no failover
icmp unreachable rate-limit 1 burst-size 1
icmp permit any echo inside_gi_1
icmp permit any echo-reply inside_gi_1
icmp permit any echo outside_gi_1
icmp permit any echo-reply outside_gi_1
icmp permit any echo inside_gi_2
icmp permit any echo-reply inside_gi_2
icmp permit any echo outside_gi_2
icmp permit any echo-reply outside_gi_2
asdm image disk0:/asdm-611.bin
no asdm history enable
arp timeout 14400
access-group out in interface inside_gi_1
access-group out in interface outside_gi_1
access-group out in interface inside_gi_2
access-group out in interface outside_gi_2
 
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
dynamic-access-policy-record DfltAccessPolicy
no snmp-server location
no snmp-server contact
snmp-server enable traps snmp authentication linkup linkdown coldstart
telnet timeout 5
ssh timeout 5
console timeout 0
no threat-detection basic-threat
no threat-detection statistics access-list
!
class-map inspection_default
 match default-inspection-traffic
!
!
policy-map type inspect dns preset_dns_map
 parameters
  message-length maximum 512
policy-map global_policy
 class inspection_default
  inspect dns preset_dns_map
  inspect ftp
  inspect h323 h225
  inspect h323 ras
  inspect rsh
  inspect rtsp
  inspect esmtp
  inspect sqlnet
  inspect skinny 
  inspect sunrpc
  inspect xdmcp
  inspect sip 
  inspect netbios
  inspect tftp
!
prompt hostname context
Cryptochecksum:03cbf5e0557d3c2abac442316f5900b1
: end

Results

19,554.443 Mbps incoming to the clients and 299.43 Mbps outgoing from the clients.
A total of 19,853.873 Mbps of HTTP throughput was achieved.

Remember that half the clients were connected on the inside of the ASA and half the clients on the outside, so throughput was roughly 9.7 Gbps in each direction. TCP_Throughput_4ports_Results1.JPG
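The reported totals can be verified directly (a quick check in Python, using only the figures reported above):

```python
# Sanity-checking the reported throughput figures.
incoming_mbps = 19554.443   # into the clients (object data)
outgoing_mbps = 299.43      # out of the clients (requests and overhead)

total_mbps = incoming_mbps + outgoing_mbps
print(round(total_mbps, 3))           # 19853.873 Mbps total HTTP throughput
# Half the clients sat on each side of the ASA, so the object data
# splits roughly evenly per direction:
print(round(incoming_mbps / 2, 1))    # 9777.2 Mbps, i.e. roughly 9.7 Gbps each way
```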
 
ares# sh cpu
CPU utilization for 5 seconds = 79%; 1 minute: 62%; 5 minutes: 21%
ares# sh conn count
1044 in use, 1049 most used
ares# sh conn count
1044 in use, 1049 most used
ares# sh io-bridge
I/O Bridge-0 slot usage
  Slot 00: 0 pps, 0 bps
  Slot 01: Ignored
  Slot 02: Ignored
  Slot 03: 0 pps, 0 bps
  Slot 04: Ignored
  Slot 05: 2264848 pps, 19678943960 bps
  Slot 06: Ignored
 
I/O Bridge-1 slot usage
  Slot 07: 2252144 pps, 19602570400 bps
  Slot 08: Ignored
 
Load distribution - Packets-per-second (10 seconds)
  I/O Bridge 00:  50%|*************************
  I/O Bridge 01:  50%|*************************
 
Load distribution - Bits-per-second (10 seconds)
  I/O Bridge 00:  50%|*************************
  I/O Bridge 01:  50%|*************************
 
Legend:
  bps - bits per second
  pps - packets per second 
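The show io-bridge counters also let you derive the average packet size the ASA was forwarding (a rough calculation; the pps/bps values are taken from the output above):

```python
# Average packet size implied by the io-bridge counters above.
slots = {
    "Slot 05": (2264848, 19678943960),   # (pps, bps) from I/O Bridge-0
    "Slot 07": (2252144, 19602570400),   # (pps, bps) from I/O Bridge-1
}
for slot, (pps, bps) in slots.items():
    avg_bytes = bps / pps / 8            # bits per packet -> bytes per packet
    print(slot, round(avg_bytes))        # roughly 1086-1088 bytes per packet
# Well under the 1500-byte MTU, consistent with a mix of full-size data
# segments and small request/ACK packets.
```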


Screenshots

Test Tool Setup
Spirent Avalanche Network configuration
Client Network Tab: TCP Throughput Setup 1.jpg


Server Network Tab: TCP Throughput Setup 2.jpg


Spirent Avalanche Client Configuration

Client Associations: TCP Throughput Setup 3.jpg

Client Action List: TCP Throughput Setup 4.jpg

Spirent Avalanche Server Configuration

Server Association: TCP Throughput Setup 5.jpg


Server Transactions: TCP Throughput Setup 6.jpg


Avalanche Load Specifications: TCP Throughput Setup 7.jpg
