Shared Storage Considerations


Go to: Before You Buy or Deploy - Considerations for Design and Procurement

Go to: QoS Design Considerations for Virtual UC with UCS


Compatible / Supported Storage Arrays (SAN, NAS)

This section assumes you have already followed either the Tested Reference Configurations (TRC) or the Specification-Based Hardware Support policy for UC apps and compute hardware.

Storage arrays used with UC virtualization must align with the following:

  • There is no requirement to dedicate arrays or storage groups to UC (vs. non-UC), or to one UC app vs. other UC apps.

Supported Storage for UCS Tested Reference Configurations

Tested Reference Configurations (TRC) support only DAS (with specified RAID configurations), FC SAN, and boot from FC SAN (diskless):

  • FC SAN is mandatory for all UCS B200 Tested Reference Configurations
  • FC SAN is optional for UCS C210 Tested Reference Configurations - see the Supported Hardware page for which ones support FC SAN.
  • UCS C200 Tested Reference Configurations are DAS only.

Supported Storage for Specs-based Support

Specification-Based Hardware Support supports DAS (with custom RAID configurations), NAS, SAN and boot from SAN (diskless) using NFS, FC, iSCSI and FCoE transport options.

  • There is no UC-specific requirement for NFS version. Use what VMware and the server vendor recommend for the vSphere ESXi version required by UC.

For UC on UCS C-series Specs-based

  • SAN General: Design your deployment in accordance with the UCS High Availability guidelines.
  • FC:
  • FCoE, iSCSI, or NFS:
    • UC requires provisioning a dedicated 10 Gigabit Ethernet port on the server to the upstream switch, using Cisco Platinum class QoS for the storage traffic. The port for storage access must be separate from the port used for LAN access. Whether the storage access port is on the same adapter as the LAN access port is up to the customer. For example, it may be desirable to have LAN and storage access on separate adapters for redundancy purposes, but UC does not require this.
    • Normal QoS (L3 and L2 marking) can be used from the first upstream switch to the storage array.
  • The storage array vendor may have additional best practices as well.

For UC on UCS B-series Specs-based

  • SAN General: Design your deployment in accordance with the UCS High Availability guidelines.
  • FC or FCoE: No additional requirements for UC VM-to-storage traffic; the UCS 6100 Fabric Interconnect handles this by default.
  • iSCSI or NFS: Follow these best practices (summarized as data in the sketch after this list):
    • Use an L2 CoS between the chassis and the upstream switch.
    • For the storage traffic, a Platinum class QoS (CoS = 5, no drop, the Fibre Channel equivalent) is recommended.
    • L3 DSCP is optional between the chassis and the first upstream switch.
    • From the first upstream switch to the storage array, use normal QoS (L3 and L2 marking). Note that iSCSI or NFS traffic is typically assigned a separate VLAN.
    • Ensure that the traffic is prioritized to provide the right IOPS. For a configuration example, see the FlexPod Secure Multi-Tenant (SMT) documentation.
  • The storage array vendor may have additional best practices as well.
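
The marking recommendations above can be captured as simple data for planning or review. Below is a minimal illustrative sketch in Python; all key and field names are invented for this example, and it is not UCS or switch configuration syntax.

  # Illustrative summary of the iSCSI/NFS storage QoS guidance above.
  # Key and field names are invented for this sketch; this is not
  # device configuration syntax.
  STORAGE_QOS = {
      "chassis_to_first_upstream_switch": {
          "qos_class": "Platinum",
          "l2_cos": 5,            # no drop (Fibre Channel equivalent)
          "no_drop": True,
          "l3_dscp": None,        # L3 DSCP is optional on this hop
      },
      "first_upstream_switch_to_array": {
          "qos": "normal (L3 and L2 marking)",
          "separate_vlan": True,  # iSCSI/NFS traffic typically on its own VLAN
      },
  }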

Capacity and Performance Requirements of Storage Solution

Storage capacity must support the sum of UC VM OVA vDisks, plus overhead for VMware and RAID on the array.

Storage performance must support the sum of UC VM OVA IOPS, documented on the IO Operations Per Second (IOPS) page (this may result in storage configurations with excess storage capacity, because extra disks/spindles are added to reach a higher maximum IOPS). IOPS utilization should be monitored for each application to ensure that the aggregate IOPS does not exceed the capacity of the array. Prolonged buffering of IOPS against an array may result in degraded system performance and delayed availability of reporting data.
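
As a rough illustration of this sizing rule, the following sketch sums per-VM IOPS figures and compares the aggregate to an array's rated capacity. The VM names and IOPS values are hypothetical placeholders; use the figures from the IO Operations Per Second (IOPS) page for your actual applications.

  # Hypothetical sketch: check aggregate UC VM IOPS against the array rating.
  # Per-VM values below are placeholders, not official Cisco figures.
  UC_VM_IOPS = {
      "cucm-pub": 1200,    # placeholder
      "cucm-sub": 1100,    # placeholder
      "unity-conn": 1500,  # placeholder
  }
  ARRAY_RATED_IOPS = 28000  # example array rating used later on this page

  aggregate = sum(UC_VM_IOPS.values())
  headroom = ARRAY_RATED_IOPS - aggregate
  print(f"Aggregate UC IOPS: {aggregate} (headroom: {headroom})")
  if headroom < 0:
      print("Warning: aggregate IOPS exceeds the array's rated capacity.")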

Cisco UC uses a 4 kilobyte block size to determine bandwidth needs.

Disk oversubscription is not supported. Storage thin provisioning is not supported, whether at the VM level or at the array level, and it would not result in storage capacity savings in any case. While thin provisioning does not impose a VM performance penalty, many UC VMs are designed to fully utilize their allocated vDisk storage for critical operations (such as upgrades or backups) or application-specific functions (such as the Cisco Unity Connection message store). Also, running out of capacity at a critical moment could lock up all UC VMs on the same LUN.

EMC-based storage solutions may use EMC PowerPath/VE multi-pathing (see the EMC documentation for details).

SAN/NAS Link Provisioning and High Availability

Consider the following example to determine the number of physical Fibre Channel (FC) or 10 Gigabit Ethernet links required between your storage array (such as the EMC CLARiiON CX4 series or NetApp FAS 3000 series) and SAN switch (for example, Nexus or MDS series SAN switches), and between your SAN switch and the UCS Fabric Interconnect switch. This example is presented to give a general idea of the design considerations involved; contact your storage vendor to determine the exact requirements.

Assume that the storage array has a total capacity of 28,000 input/output operations per second (IOPS). Enterprise-grade SAN storage arrays have at least two service processors (SPs) or controllers for redundancy and load balancing, which means 14,000 IOPS per controller or service processor. With a capacity of 28,000 IOPS and a 4 KByte block size, the throughput per storage array controller can be calculated as follows:

  • 14,000 I/O per second * (4096 bytes per block * 8) bits = 458,752,000 bits per second
  • 458,752,000 / 1024 = 448,000 kbits per second
  • 448,000 / 1024 = 437.5 Mbits per second

Adding more overhead, one controller can support a throughput rate of roughly 600 Mbps. Based on this calculation, it is clear that a single 4 Gbps FC interface is enough to handle the entire capacity of one storage array. Cisco nevertheless recommends putting four FC interfaces (two per service processor) between the storage array and the storage switch to provide high availability.
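
The per-controller arithmetic above can be reproduced in a few lines of code. This is only the same calculation expressed programmatically, using the 4 KByte block size and the per-controller IOPS from this example.

  # Reproduce the per-controller throughput calculation from the example.
  BLOCK_SIZE_BYTES = 4096      # 4 KByte block size used by Cisco UC
  IOPS_PER_CONTROLLER = 14000  # 28,000 IOPS split across two service processors

  bits_per_second = IOPS_PER_CONTROLLER * BLOCK_SIZE_BYTES * 8
  mbits_per_second = bits_per_second / 1024 / 1024
  print(f"{mbits_per_second:.1f} Mbits per second per controller")  # 437.5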


Note: Cisco provides storage networking and switching products that are based on industry standards and that work with storage array providers such as EMC, NetApp, and so forth. Virtualized Unified Communications is supported on any storage access and storage array products that are supported by Cisco UCS and VMware. For more details, see Cisco's storage networking documentation.

Best Practices for Storage Array LUNs for Unified Communications Applications

There are various ways to create partitions or Logical Unit Numbers (LUNs) in the storage array to meet the IOPS requirements for Cisco Unified Communications applications (see IO Operations Per Second (IOPS)).

The best practices mentioned below are meant only to provide guidelines. Data Center storage administrators should carefully consider these best practices and adjust them based on their specific data center network, latency, and high availability requirements.

The storage array hard disk drives (HDDs) must be Fibre Channel (FC) class HDDs. These drives vary in size; the most popular HDD (spindle) sizes are currently:

  • 450 GB, 15K revolutions per minute (RPM) FC HDD
  • 300 GB, 15K RPM FC HDD

Both types of HDD provide approximately 180 IOPS. Regardless of the hard drive size used, it is important to try to balance IOPS load and disk space usage.
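
To make the balancing point concrete, the sketch below computes the spindle count a workload needs on each axis (IOPS and capacity); the larger of the two wins. The 180 IOPS per spindle figure comes from this page; the workload numbers are assumed inputs, and RAID parity/write overhead is ignored for simplicity.

  import math

  # Sketch: spindle count is driven by whichever axis needs more disks.
  IOPS_PER_SPINDLE = 180  # approximate figure for 15K RPM FC HDDs (see above)

  def spindles_needed(target_iops, target_gb, spindle_gb):
      by_iops = math.ceil(target_iops / IOPS_PER_SPINDLE)
      by_capacity = math.ceil(target_gb / spindle_gb)
      return max(by_iops, by_capacity)  # ignores RAID overhead for simplicity

  # Assumed workload: 2,000 IOPS and 1,200 GB on 450 GB spindles.
  print(spindles_needed(2000, 1200, 450))  # IOPS dominates: 12 spindles, not 3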

LUN size must be less than 2 terabytes (TB) for the virtual machine file system (VMFS) to recognize it. For Cisco Unified Communications virtual applications, the recommendation is to create LUNs of between 500 GB and 1.5 TB, depending on the size of the disks and the RAID group type used. Also as a best practice, select the LUN size so that the number of Unified Communications virtual machines per LUN is between 4 and 8; do not allocate more than eight virtual machines (VMs) per LUN or datastore. The total size of all virtual machines (where total size = VM disk + RAM copy) must not exceed 90% of the capacity of a datastore.
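
The placement rules in the preceding paragraph lend themselves to a quick check. The following sketch validates a planned LUN against the 4-to-8 VMs-per-LUN guideline and the 90% capacity ceiling; the VM sizes in the usage example are hypothetical.

  # Sketch: validate a planned LUN/datastore against the guidelines above.
  # Total VM size = vDisk + RAM copy; the VM figures below are hypothetical.
  def check_datastore(lun_gb, vms):
      """vms: list of (vdisk_gb, ram_gb) tuples for VMs placed on this LUN."""
      issues = []
      if not 4 <= len(vms) <= 8:
          issues.append(f"{len(vms)} VMs on LUN; guideline is 4 to 8")
      total_gb = sum(vdisk + ram for vdisk, ram in vms)
      if total_gb > 0.9 * lun_gb:
          issues.append(f"{total_gb} GB exceeds 90% of the {lun_gb} GB LUN")
      return issues or ["OK"]

  # Example: a 720 GB LUN holding five VMs of 110 GB vDisk + 6 GB RAM each.
  print(check_datastore(720, [(110, 6)] * 5))  # ['OK']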

LUN filesystem type must be VMFS. Raw Device Mapping (RDM) is not supported.

The following example illustrates these best practices for UC:

Assume RAID 5 (4+1) is selected for a storage array containing five 450 GB, 15K RPM drives (HDDs) in a single RAID group. This creates a total RAID 5 array size of approximately 1.4 TB of usable space, which is lower than the 2.25 TB aggregate raw capacity of the five 450 GB drives. This is to be expected: some drive space is used for array creation, and roughly one drive's worth of capacity is consumed by RAID 5 parity.

Next, assume two LUNs of approximately 720 GB each are created to store Unified Communications application virtual machines. For this example, between one and three LUNs per RAID group could be created, based on need. Creating more than three LUNs per RAID group would violate the previously mentioned recommendation of a LUN size between 500 GB and 1.5 TB.
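
The RAID group arithmetic in this example can be sketched as follows. The formatting-overhead factor below is an assumption chosen to roughly match the 1.4 TB usable figure quoted above; it is not a vendor specification.

  # Sketch of the RAID 5 (4+1) example above. The overhead factor is an
  # assumption tuned to approximate the ~1.4 TB usable figure, not a
  # vendor-published number.
  DRIVES = 5
  DRIVE_GB = 450
  FORMAT_OVERHEAD = 0.80  # assumed array creation/formatting overhead

  raw_gb = DRIVES * DRIVE_GB             # 2,250 GB aggregate raw capacity
  data_gb = (DRIVES - 1) * DRIVE_GB      # RAID 5 parity costs one drive's worth
  usable_gb = data_gb * FORMAT_OVERHEAD  # roughly 1,440 GB, i.e. ~1.4 TB

  for luns in (1, 2, 3):
      print(f"{luns} LUN(s) of {usable_gb / luns:.0f} GB each")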

A RAID group with a RAID 1+0 scheme would also be valid for this example; in some cases it could provide better IOPS performance and higher availability than a RAID 5 scheme.

The above example of storage array design should be altered based on your specific Unified Communications application IOPS requirements.

Below is a graphic of an example configuration following these best-practice guidelines; note that other designs are possible.

[Image: Docwiki SAN best practices.png]

Back to: Unified Communications in a Virtualized Environment
