UC Virtualization Storage System Design Requirements


Go to: Before You Buy or Deploy - Considerations for Design and Procurement




General Guidelines for SAN/NAS

  • Adapters for storage access must follow the supported hardware rules (see the supported hardware pages on this wiki).
  • Cisco UC apps use a 4 kilobyte block size to determine bandwidth needs.
  • Design your deployment in accordance with the UCS High Availability guidelines (see http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns944/white_paper_c11-553711.html).
  • 10GbE networks for NFS, FCoE, or iSCSI storage access should be configured to use Cisco Platinum class QoS for the storage traffic.
  • Ethernet ports for LAN access and Ethernet ports for storage access may be separate or shared. Separate ports may be desired for redundancy purposes. It is the customer's responsibility to ensure that external LAN and storage access networks meet UC app latency, performance, and capacity requirements.
  • In the absence of UCS 6100/6200, normal QoS (L3 and L2 marking) can be used starting from the first upstream switch to the storage array.
  • With UCS 6100/6200:
    • FC or FCoE: no additional requirements. Automatically handled by Fabric Interconnect switch.
    • iSCSI or NFS: Follow these best practices:
      • Use an L2 CoS between the chassis and the upstream switch.
      • For the storage traffic, a Platinum class QoS with CoS=5 and no drop (the Fibre Channel equivalent) is recommended.
      • L3 DSCP is optional between the chassis and the first upstream switch.
      • From the first upstream switch to the storage array, use the normal QoS (L3 and L2 marking). Note that iSCSI or NFS traffic is typically assigned a separate VLAN.
      • Ensure that the iSCSI or NFS traffic is prioritized to provide the right IOPS. For a configuration example, see the FlexPod Secure Multi-Tenant (SMT) documentation (http://www.imaginevirtuallyanything.com/us/).
  • The storage array vendor may have additional best practices as well.
  • If disk oversubscription or storage thin provisioning is used, note that UC apps are designed to use 100% of their allocated vDisk, either for UC features (such as the Unity Connection message store or Contact Center reporting databases) or for critical operations (such as spikes during upgrades, backups, or statistics writes). While thin provisioning does not itself introduce a performance penalty, running out of physical disk space when the app needs it can have the following harmful effects (a minimal headroom check is sketched after this list):
    • degraded UC app performance, UC app crashes, and/or corrupted vDisk contents
    • lockup of all UC VMs on the same LUN in a SAN
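
Below is a minimal sketch, in Python, of the headroom check implied above: confirm that a datastore can back every UC vDisk at its full allocated size before relying on thin provisioning. The sizes in the example are illustrative assumptions, not Cisco figures.

  # Minimal sketch: verify that a datastore can satisfy full (thick) allocation
  # of every UC vDisk on it, since UC apps may legitimately consume 100% of
  # their allocated space during upgrades, backups, or statistics writes.

  def check_thin_provisioning_headroom(datastore_capacity_gb, vdisk_allocations_gb):
      """Return True if the datastore can back every vDisk at full size."""
      fully_provisioned_gb = sum(vdisk_allocations_gb)
      return fully_provisioned_gb <= datastore_capacity_gb

  # Example (assumed sizes): a 720 GB LUN backing four UC VMs with 110 GB vDisks
  ok = check_thin_provisioning_headroom(720, [110, 110, 110, 110])
  print("Safe to thin provision" if ok else "Risk: vDisks can outgrow the LUN")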


SAN/NAS Link Provisioning and High Availability

Consider the following example to determine the number of physical Fibre Channel (FC) or 10 Gigabit Ethernet links required between your storage array (such as the EMC CLARiiON CX4 series or NetApp FAS 3000 series) and your SAN switch (for example, Nexus or MDS Series SAN switches), and between your SAN switch and the UCS Fabric Interconnect switch. This example is presented to give a general idea of the design considerations involved; contact your storage vendor to determine the exact requirements.

Assume that the storage array has a total capacity of 28,000 Input/Output Operations Per Second (IOPS). Enterprise-grade SAN storage arrays have at least two service processors (SPs) or controllers for redundancy and load balancing, which means 14,000 IOPS per controller or service processor. With the capacity of 28,000 IOPS, and assuming a 4 KB block size, we can calculate the throughput per storage array controller as follows:

  • 14,000 I/O per second * (4,000-byte block size * 8) bits = 448,000,000 bits per second
  • 448,000,000 / 1024 = 437,500 kbits per second
  • 437,500 / 1024 = ~427 Mbits per second

Allowing for additional overhead, one controller can support a throughput rate of roughly 600 Mbps. Based on this calculation, it is clear that a single 4 Gbps FC interface is enough to handle the entire capacity of one storage array. Therefore, Cisco recommends provisioning four FC interfaces between the storage array and the storage switch, as shown in the following image, to provide high availability.

[Image: FC link provisioning between the storage array, SAN switch, and UCS Fabric Interconnect (254783.jpg)]
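
The following is a worked version of the calculation above, in Python. The 1024-based divisors mirror the arithmetic in the text, and the 1.4 overhead factor is an assumption chosen only to land near the roughly 600 Mbps figure quoted above.

  # Per-controller throughput from IOPS and a 4 KB block size, as in the text.

  IOPS_PER_CONTROLLER = 28_000 // 2      # two service processors share the load
  BLOCK_SIZE_BYTES = 4_000               # 4 KB block, as used in the text

  bits_per_second = IOPS_PER_CONTROLLER * BLOCK_SIZE_BYTES * 8   # 448,000,000
  mbits_per_second = bits_per_second / 1024 / 1024               # ~427 Mbps
  with_overhead = mbits_per_second * 1.4                         # assumed margin, ~600 Mbps

  print(f"Raw: {mbits_per_second:.0f} Mbps, with overhead: ~{with_overhead:.0f} Mbps")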

Note: Cisco provides storage networking and switching products that are based on industry standards and that work with storage array providers such as EMC, NetApp, and so forth. Virtualized Unified Communications is supported on any storage access and storage array products that are supported by Cisco UCS and VMware. For more details on storage networking, see http://www.cisco.com/en/US/netsol/ns747/networking_solutions_sub_program_home.html.

Best Practices for Storage Array LUNs for Unified Communications Applications

There are various ways to create partitions or Logical Unit Numbers (LUNs) in the storage array to meet the IOPS requirements of Cisco Unified Communications applications (see IOPS and other Performance Requirements below).

The best practices mentioned below are meant only to provide guidelines. Data Center storage administrators should carefully consider these best practices and adjust them based on their specific data center network, latency, and high availability requirements.

The storage array Hard Disk Drive (HDD) must be a Fibre Channel (FC) class HDD. These hard drives vary in size. The currently most popular HDD (spindle) sizes are:

  • 450 GB, 15K revolutions per minute (RPM) FC HDD
  • 300 GB, 15K RPM FC HDD

Both types of HDD provide approximately 180 IOPS. Regardless of the hard drive size used, it is important to try to balance IOPS load and disk space usage.

LUN size must be less than 2 terabytes (TB) for the virtual machine file system to recognize it. For Cisco Unified Communications virtual applications, the recommendation is to create a LUN size of between 500 GB and 1.5 TB, depending on the size of the disk and RAID group type used. Also as a best practice, select the LUN size so that the number of Unified Communications virtual machines per LUN is between 4 and 8. Do not allocate more than eight virtual machines (VMs) per LUN or datastore. The total size of all Virtual Machines (where total size = VM disk + RAM copy) must not exceed 90% of the capacity of a datastore.
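
A minimal sketch of these sizing rules in Python follows. The VM footprints in the example are illustrative assumptions, and the 1.5 TB bound is expressed as 1536 GB.

  # Check a proposed LUN against the guidance above: 500 GB-1.5 TB LUN size
  # (always under 2 TB for VMFS), 4-8 UC VMs per LUN, and total VM footprint
  # (vDisk + RAM copy) at or below 90% of datastore capacity.

  def validate_lun(lun_size_gb, vm_footprints_gb):
      """vm_footprints_gb: per-VM (vDisk size + vRAM copy) in GB."""
      errors = []
      if not (500 <= lun_size_gb <= 1536):
          errors.append("LUN size outside the recommended 500 GB-1.5 TB range")
      if not (4 <= len(vm_footprints_gb) <= 8):
          errors.append("Recommended 4-8 UC VMs per LUN")
      if sum(vm_footprints_gb) > 0.9 * lun_size_gb:
          errors.append("Total VM size (disk + RAM) exceeds 90% of datastore")
      return errors

  # Example (assumed sizes): 720 GB LUN, five VMs of 110 GB disk + 6 GB RAM each
  print(validate_lun(720, [116] * 5) or "LUN layout follows best practices")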

LUN filesystem type must be VMFS. Raw Device Mapping (RDM) is not supported.

The following example illustrates these best practices for UC:

For example, assume RAID 5 (4+1) is selected for a storage array containing five 450 GB, 15K RPM drives (HDDs) in a single RAID group. This creates a total RAID 5 array size of approximately 1.4 TB of usable space. This is lower than the total aggregate disk drive storage space provided by the five 450 GB drives (2.25 TB). This is to be expected, because some of the drive space is used for array creation and almost an entire drive's worth of capacity is consumed by RAID 5 parity.

Next, assume two LUNs of approximately 720 GB each are created to store Unified Communications application virtual machines. For this example, one or two LUNs per RAID group could be created based on need. Creating three or more LUNs per RAID group would violate the previously mentioned recommendation of a LUN size between 500 GB and 1.5 TB.

A RAID group with a RAID 1+0 scheme would also be valid for this example; in some cases it could provide better IOPS performance and high availability than a RAID 5 scheme.

The above example of storage array design should be altered based on your specific Unified Communications application IOPS requirements.
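
The arithmetic of this example can be sketched as follows, in Python. The 20% formatting allowance is an assumption chosen to land near the ~1.4 TB usable figure above; the 180 IOPS per spindle comes from the HDD guidance earlier in this section.

  # RAID 5 (4+1) capacity and IOPS budget for the example above. Usable space
  # is roughly raw capacity minus one drive's worth of parity, less an assumed
  # formatting overhead.

  DRIVES = 5
  DRIVE_SIZE_GB = 450
  IOPS_PER_SPINDLE = 180                     # per the HDD guidance above

  raw_gb = DRIVES * DRIVE_SIZE_GB            # 2250 GB aggregate
  after_parity_gb = (DRIVES - 1) * DRIVE_SIZE_GB   # 1800 GB after RAID 5 parity
  usable_gb = after_parity_gb * 0.80         # assumed overhead -> ~1440 GB (~1.4 TB)

  luns = 2
  print(f"Raw: {raw_gb} GB, usable after parity and formatting: ~{usable_gb:.0f} GB")
  print(f"-> {luns} LUNs of ~{usable_gb / luns:.0f} GB each")
  print(f"RAID group IOPS budget: ~{DRIVES * IOPS_PER_SPINDLE} IOPS")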

Below is a graphic of an example configuration following these best practice guidelines; note that other designs are also possible.


[Image: Docwiki SAN best practices.png — example configuration following these best practices]



IOPS and other Performance Requirements

This page illustrates IOPS under various conditions for Unified Communications applications. This area is under construction. Check back frequently for updates.

Storage performance must support the sum of the IOPS of all UC VM OVAs. Note that addressing IOPS requirements may require higher disk/spindle counts, which may result in excess storage capacity.

IOPS utilization should be monitored for each application to ensure that the aggregate IOPS does not exceed the capacity of the array. Prolonged buffering of IOPS against an array may result in degraded system performance and delayed availability of reporting data.
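
A minimal sketch of this aggregate check follows, assuming placeholder per-VM IOPS figures; use the values published for each application's OVA.

  # Sum each UC VM's OVA IOPS figure and compare against the array's rated
  # capacity, per the guidance above.

  ARRAY_CAPACITY_IOPS = 28_000           # from the array vendor's rating

  vm_iops = {
      "cucm-pub": 200,                   # placeholder values, not Cisco figures
      "cucm-sub1": 150,
      "unity-conn": 180,
  }

  total = sum(vm_iops.values())
  utilization = total / ARRAY_CAPACITY_IOPS
  print(f"Aggregate: {total} IOPS ({utilization:.1%} of array capacity)")
  if utilization > 1.0:
      print("Warning: sustained IOPS buffering will degrade performance")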

Unified Communications Manager

Please see Virtualization for Cisco Unified Communications Manager (CUCM).

Cisco Emergency Responder

Please see Virtualization for Cisco Emergency Responder.

Cisco Intercompany Media Engine

Please see Virtualization for Cisco Intercompany Media Engine.

Cisco Unity Connection

Please see Virtualization for Cisco Unity Connection.

TelePresence Applications

For Cisco TelePresence Manager and Cisco TelePresence Multipoint Switch, 100 IOPS is a good typical value to plan around. Otherwise, use the values for Cisco Unified Communications Manager above (they will be conservatively high).

Cisco TelePresence Video Communication Server (Cisco VCS)

Please see Virtualization for Cisco TelePresence Video Communications Server.


Cisco TelePresence Conductor

Please see Virtualization for Cisco TelePresence Conductor.

Unified Presence

Please see Virtualization for Cisco Unified Presence.

Cisco Unified Attendant Consoles

Please see Virtualization for Cisco Unified Attendant Consoles.


Cisco Customer Collaboration/Contact Center

Cisco Contact Center Enterprise

The SAN must be able to handle the following Unified CCE application disk I/O characteristics.

The data below is based on CCE 8.5(3)+ and 9.0(1)+ running on Windows Server 2008 R2 Enterprise.

Unified CCE Component | IOPS (Peak / Avg / 95th Pct.) | Disk Read KBytes/sec (Peak / Avg / 95th Pct.) | Disk Write KBytes/sec (Peak / Avg / 95th Pct.) | Operating Conditions
Router | 27 / 14 / 19 | 619 / 11 / 12 | 436 / 246 / 313 | 12,000 agents; 100 cps; ECC: 5 scalars @ 40 bytes each; 200 reporting users at max query load
Logger | 3662 / 917 / 1793 | 12821 / 735 / 11571 | 38043 / 2777 / 11832 | (as above)
HDS | 1017 / 321 / 693 | 2748 / 250 / 2368 | 19351 / 1441 / 3879 | (as above)
Router | 118 / 11 / 13 | 1681 / 14 / 5 | 971 / 66 / 172 | 8,000 agents; 60 cps; ECC: 5 scalars @ 40 bytes each; 200 reporting users at max query load
Logger | 1614 / 502 / 1065 | 6137 / 341 / 4993 | 30379 / 558 / 5911 | (as above)
HDS | 1395 / 165 / 541 | 1995 / 186 / 1776 | 12525 / 1299 / 2965 | (as above)
Rogger | 1598 / 250 / 292 | 2227 / 1837 / 1984 | 13243 / 867 / 2308 | 4,000 agents; 30 cps; ECC: 5 scalars @ 40 bytes each


The data below is based on CCE 8.0(2) running on Windows Server 2003 R2.

Unified CCE Component | IOPS (Peak / Avg / 95th Pct.) | Disk Read KBytes/sec (Peak / Avg / 95th Pct.) | Disk Write KBytes/sec (Peak / Avg / 95th Pct.) | Operating Conditions
Router | 20 / 8 / 10 | 520 / 30 / 180 | 400 / 60 / 150 | 8,000 agents; 60 cps; ECC: 5 scalars @ 35 bytes each; no reporting
Logger | 1,000 / 600 / 700 | 4,000 / 600 / 2,500 | 12,000 / 3,000 / 7,000 | (as above)
HDS | 1,600 / 1,000 / 1,100 | 600 / 70 / 400 | 6,000 / 2,000 / 3,800 | (as above)
Agent PG | 125 / 40 / 70 | 300 / 5 / 20 | 2,000 / 1,200 / 1,500 | 2,000 agents; 15 cps
HDS | 3,900 / 2,500 / 3,800 | 75,000 / 30,000 / 50,000 | 9,500 / 2,200 / 5,800 | 8,000 agents; 60 cps; 200 reporting users at max query load; ECC: 5 scalars @ 35 bytes each
ROGGER | 610 / 360 / 425 | 2,700 / 400 / 1,600 | 7,500 / 2,150 / 4,300 | 4,000 agents; 30 cps; ECC: 5 scalars @ 35 bytes each


The data below is based on CCE 10.0(1) running on Windows Server 2008 R2 Enterprise.

Unified CCE Component | IOPS (Peak / Avg / 95th Pct.) | Disk Read KBytes/sec (Peak / Avg / 95th Pct.) | Disk Write KBytes/sec (Peak / Avg / 95th Pct.) | Operating Conditions
Router | 9 / 4 / 6 | 1 / 0 / 0 | 134 / 95 / 111 | 12,000 agents; 100 cps; ECC: 5 scalars @ 40 bytes each; 200 reporting users at max query load
Logger | 2365 / 1582 / 2076 | 19603 / 10846 / 16169 | 56969 / 31443 / 47395 | (as above)
HDS | 1622 / 806 / 1239 | 81429 / 13386 / 35178 | 29406 / 8101 / 22747 | (as above)
DDS | 2440 / 1460 / 2052 | 19201 / 9281 / 14654 | 59932 / 30624 / 49995 | (as above)
HDS | 1561 / 365 / 957 | 61240 / 963 / 5374 | 12792 / 1068 / 5593 | 4,000 agents; 30 cps; ECC: 5 scalars @ 40 bytes each; 200 reporting users at max query load
ROGGER | 1400 / 64 / 116 | 14226 / 204 / 203 | 24469 / 1339 / 3918 | (as above)
Agent PG | 106 / 75 / 90 | 256 / 9 / 54 | 5548 / 3464 / 4384 | 2,000 agents; 15 cps
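
As a hedged example of using these tables, the following Python snippet sums the 95th percentile IOPS of the CCE 10.0(1) components for the 12,000-agent profile. Actual component placement varies per design, so this only shows the arithmetic.

  # 95th percentile IOPS for the 12,000-agent CCE 10.0(1) profile, from the
  # table above. Include only the components hosted on the array being sized.

  iops_95th = {"Router": 6, "Logger": 2076, "HDS": 1239, "DDS": 2052}

  required = sum(iops_95th.values())
  print(f"Array must sustain at least {required} IOPS for these components")
  # -> 5373 IOPS, before adding other VMs sharing the same array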


Cisco Contact Center Express/IPIVR IOPS

See Virtualization for Cisco Unified Contact Center Express.


Workforce Optimization (WFO) IOPS

See Virtualization for Cisco Unified Work Force Optimization Suite for Cisco Unified Contact Center Express.

Cisco MediaSense

See Virtualization for Cisco MediaSense#IOPS_and_Storage_System_Performance_Requirements.


Cisco Finesse 

See Virtualization for Cisco Finesse#IOPS_and_Storage_System_Performance_Requirements.



Virtualization for Unified Email Interaction Manager - Web Interaction Manager

EIM-WIM Component | IOPS (Peak / 95th Pct. / Avg.) | Disk Read KBytes/sec (Peak / 95th Pct. / Avg.) | Disk Write KBytes/sec (Peak / 95th Pct. / Avg.)
Application Server | 11.7 / 3.21 / 1.55 | 51 / 1 / 0.726 | 103 / 19 / 8.32
File Server | 43.65 / 25.23 / 14.36 | 189 / 2 / 2 | 1769 / 1025.9 / 446
Database Server | 736.9 / 552 / 263.31 | 35450 / 23184 / 3903 | 5737 / 2872 / 1625
Messaging Server | 9.15 / 2.71 / 1.47 | 57 / 1 / 0.7 | 88 / 15.1 / 8
Services Server | 11.15 / 3.155 / 1.53 | 55 / 1 / 0.73 | 1.89 / 17 / 9
Web Server | 11.2 / 3.36 / 1.43 | 55 / 1 / 0.8 | 173 / 21 / 10.24
Cisco Media Blender | 12.25 / 7.45 / 5.51 | 47 / 0 / 0.516 | 384 / 289 / 177.47

Cisco Unified Intelligence Center

See Virtualization for Cisco Unified Intelligence Center.


Cisco Unified Customer Voice Portal

See Virtualization for Cisco Unified Customer Voice Portal.


Back to: Unified Communications in a Virtualized Environment

