Shared Storage Considerations
Revision as of 21:01, 7 June 2011
Supported Storage Solutions (SAN, NAS)
UCS Tested Reference Configurations support only DAS and/or Fibre Channel SAN:
- FC SAN is mandatory for all UCS B200 Tested Reference Configurations
- FC SAN is optional for UCS C210 Tested Reference Configurations - see the Supported Hardware page for the configurations that support FC SAN.
- No shared storage is supported for UCS C200 Tested Reference Configuration - DAS only.
Other storage options (such as iSCSI or FCoE SAN, NFS NAS, diskless servers, DAS/RAID variance from Tested Reference Configurations, etc.) are supported by following the Specs-based support policy here.
For UC on UCS C-series deployments:
- FC: You must provision a dedicated HBA (see Tested Reference Configurations (TRC) page for more information).
- FCoE, iSCSI, or NFS: Ethernet-based transports for storage access require provisioning a dedicated 10GbE adapter on the server, using Cisco "Platinum Class QoS."
- The storage array vendor may have additional best practices as well.
For UC on UCS B-series deployments:
- FC or FCoE: No additional requirements for UC VM-to-storage traffic. The UCS 6100 switch handles this requirement by default.
- iSCSI or NFS: Ensure that the traffic is prioritized to provide the right IOPS. For a configuration example, see the FlexPod Secure Multi-Tenant (SMT) documentation (http://www.imaginevirtuallyanything.com/us/).
- SAN general: Design your deployment in accordance with the UCS High Availability guidelines (see http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns944/white_paper_c11-553711.html).
- The storage array vendor may have additional best practices as well.
Compatible / Supported Storage Arrays (SAN, NAS)
This section assumes you have already followed either the Tested Reference Configurations or the Specs-based VMware Support Policy for UC apps and compute hardware.
Storage arrays used with UC virtualization must align with the following:
- The storage solution must be supported by the Cisco Unified Computing System. For example, visit:
- Cisco UCS storage certification list
- http://www.cisco.com/en/US/docs/unified_computing/ucs/interoperability/matrix/hw_sw_interop_matrix_seriesB_111.pdf for Cisco UCS B-Series servers or
- http://www.cisco.com/en/US/docs/unified_computing/ucs/interoperability/matrix/hw_sw_interop_matrix_seriesC_101.pdf for Cisco UCS C-Series servers.
- The storage solution must be supported by VMware ESXi / vSphere 4. For example, refer to the "SAN/Storage" tab at http://www.vmware.com/resources/compatibility/search.php
- The storage solution must meet Capacity (GB) and Performance (IOPS) requirements of UC VMs described later on this page.
- Provided the above requirements are met, any storage array vendor or product is supported for use with UC.
- In general, all other details, such as configuration of RAID or disk technology options on the storage array (such as choice of SAS, SATA or FC disks) are up to you. "Tier 1 Storage" is generally recommended for UC deployments.
Capacity and Performance Requirements of Storage Solution
Storage capacity must support the sum of UC VM OVA vDisks, plus overhead for VMware and RAID on the array.
Storage performance must support the sum of UC VM OVA IOPS, documented on the IO Operations Per Second (IOPS) page. Monitor IOPS utilization for each application to ensure that the aggregate IOPS does not exceed the capacity of the array. Prolonged buffering of IOPS against an array may result in degraded system performance and delayed availability of reporting data.
Cisco UC uses a 4 kilobyte block size to determine bandwidth needs.
Storage thin provisioning is not supported (neither at the VM level nor at the array level). While use of thin provisioning does not impose a VM performance penalty, many UC VMs are designed to fully utilize their allocated vDisk storage for critical operations (such as upgrades or backups) or application-specific functions (such as the Cisco Unity Connection message store).
EMC-based storage solutions may use EMC PowerPath/VE multi-pathing (for details, see http://www.emc.com/products/family/powerpath-family.htm).
SAN/NAS Link Provisioning and High Availability
Consider the following example to determine the number of physical Fibre Channel (FC) or 10 Gigabit Ethernet links required between your storage array (such as the EMC Clariion CX4 series or NetApp FAS 3000 Series) and your SAN switch (for example, Nexus or MDS Series SAN switches), and between your SAN switch and the UCS Fabric Interconnect Switch. This example is presented to give a general idea of the design considerations involved; contact your storage vendor to determine the exact requirements.
Assume that the storage array has a total capacity of 28,000 Input/output Operations Per Second (IOPS). Enterprise grade SAN Storage Arrays have at least two service processors (SPs) or controllers for redundancy and load balancing. That means 14,000 IOPS per controller or service processor. With the capacity of 28,000 IOPS, and assuming a 4 KByte block size, we can calculate the throughput per storage array controller as follows:
- 14,000 I/O per second * (4000 Byte block size * 8) bits = 448,000,000 bits per second
- 448,000,000/1024 = 437,500 Kbits per second
- 437,500/1024 = ~428 Mbits per second
Allowing for additional overhead, one controller can support a throughput rate of roughly 600 Mbps. Based on this calculation, a single 4 Gbps FC interface is enough to handle the entire capacity of one storage array. Nevertheless, Cisco recommends provisioning four FC interfaces between the storage array and the storage switch, as shown in the following image, to provide high availability.
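The throughput arithmetic above can be sketched as follows. The figures are taken from the example in the text, not from measured vendor data:

```python
# Sketch of the per-controller throughput math from the example above.
BLOCK_BYTES = 4000    # 4 KByte block size, as used in the example
ARRAY_IOPS = 28_000   # total IOPS capacity of the example array
CONTROLLERS = 2       # two service processors (SPs) for redundancy

iops_per_controller = ARRAY_IOPS // CONTROLLERS           # 14,000 IOPS
bits_per_second = iops_per_controller * BLOCK_BYTES * 8   # 448,000,000 bit/s
mbits_per_second = bits_per_second / 1024 / 1024          # ~427 Mbit/s

print(f"{mbits_per_second:.0f} Mbit/s per controller")
```

With overhead added, this lands near the 600 Mbps working figure used in the text, well within a 4 Gbps FC link.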
Note: Cisco provides storage networking and switching products that are based on industry standards and that work with storage array providers such as EMC, NetApp, and so forth. Virtualized Unified Communications is supported on any storage access and storage array products that are supported by Cisco UCS and VMware. For more details on storage networking, see http://www.cisco.com/en/US/netsol/ns747/networking_solutions_sub_program_home.html.
Best Practices for Storage Array LUNs for Unified Communications Applications
There are various ways to create partitions or Logical Unit Numbers (LUNs) in the storage array to meet the IOPS requirement for Cisco Unified Communications applications (see IO Operations Per Second (IOPS)).
The best practices mentioned below are meant only to provide guidelines. Data Center storage administrators should carefully consider these best practices and adjust them based on their specific data center network, latency, and high availability requirements.
The storage array Hard Disk Drive (HDD) must be a Fibre Channel (FC) class HDD. These hard drives could vary in size. The current most popular HDD (spindle) sizes are:
- 450 GB, 15K revolutions per minute (RPM) FC HDD
- 300 GB, 15K RPM FC HDD
Both types of HDD provide approximately 180 IOPS. Regardless of the hard drive size used, it is important to try to balance IOPS load and disk space usage.
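As a rough illustration of that balancing act, a hypothetical helper can estimate how many spindles a given aggregate IOPS load needs, assuming the ~180 IOPS per 15K RPM FC drive stated above and ignoring RAID write penalties (which vary by RAID level):

```python
import math

# Assumption from the text: a 15K RPM FC drive sustains ~180 IOPS.
IOPS_PER_SPINDLE = 180

def spindles_needed(required_iops: int) -> int:
    """Estimate drive count for an aggregate IOPS load (pre-RAID overhead)."""
    return math.ceil(required_iops / IOPS_PER_SPINDLE)

# Hypothetical UC cluster needing 1,500 aggregate IOPS:
print(spindles_needed(1500))  # 9 drives before RAID overhead
```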
LUN size must be less than 2 terabytes (TB) for the virtual machine file system to recognize it. For Cisco Unified Communications virtual applications, the recommendation is to create a LUN size of between 500 GB and 1.5 TB, depending on the size of the disk and RAID group type used. Also as a best practice, select the LUN size so that the number of Unified Communications virtual machines per LUN is between 4 and 8. Do not allocate more than eight virtual machines (VMs) per LUN or datastore. The total size of all Virtual Machines (where total size = VM disk + RAM copy) must not exceed 90% of the capacity of a datastore.
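The sizing rules above can be sketched as a simple check. The function and figures below are illustrative assumptions, not Cisco tooling; it treats the 4-to-8 VMs-per-LUN recommendation and the 90% capacity ceiling as pass/fail conditions:

```python
# Hypothetical validation of the LUN/datastore best practices above:
# 4-8 UC VMs per LUN, and total VM footprint (vDisk + RAM copy)
# no more than 90% of the datastore capacity.
def datastore_ok(lun_gb: float, vm_footprints_gb: list) -> bool:
    if not 4 <= len(vm_footprints_gb) <= 8:
        return False
    return sum(vm_footprints_gb) <= 0.9 * lun_gb

# Six VMs of 110 GB each (e.g. 104 GB vDisk + 6 GB RAM copy) on a 720 GB LUN:
print(datastore_ok(720, [110] * 6))  # False: 660 GB exceeds the 648 GB ceiling
```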
Storage for UC VMs must be VMFS. Raw Device Mapping (RDM) is not supported.
The following example illustrates the considerations for designing storage arrays for Unified Communications applications:
For example, assume RAID5 (4+1) is selected for a storage array containing five 450 GB, 15K RPM drives (HDDs) in a single RAID group. This creates a total RAID5 array size of approximately 1.4 TB of usable space. This is lower than the total aggregate disk drive storage space provided by the five 450 GB drives (2.25 TB). This is to be expected, because some drive space is consumed by array formatting and roughly one drive's worth of capacity is used for RAID5 parity.
Next, assume two LUNs of approximately 720 GB each are created to store Unified Communications application virtual machines. For this example, between one and three LUNs per RAID group could be created based on need. Creating more than three LUNs per RAID group would violate the previously mentioned recommendation of a LUN size of between 500 GB and 1.5 TB.
A RAID group with RAID 1+0 scheme would also be valid for this example and in fact in some cases could provide better IOPS performance and high availability when compared to a RAID 5 scheme.
The above example of storage array design should be adjusted based on your specific Unified Communications application IOPS requirements.
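The arithmetic in this worked example can be sketched as follows. The ~1.4 TB usable figure is taken from the text; actual formatting overhead varies by array vendor:

```python
# Sketch of the RAID5 (4+1) capacity arithmetic from the example above.
DRIVE_GB = 450
DRIVES = 5

raw_gb = DRIVE_GB * DRIVES               # 2250 GB raw across five drives
raid5_data_gb = DRIVE_GB * (DRIVES - 1)  # 1800 GB after one drive of parity
usable_gb = 1400                         # ~1.4 TB after array formatting (per the text)

luns = 2
lun_gb = usable_gb / luns                # two LUNs of ~700 GB each

print(raw_gb, raid5_data_gb, lun_gb)
```

Splitting the usable space into two LUNs keeps each within the 500 GB to 1.5 TB recommendation; three LUNs of ~470 GB each would fall just below it.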
Below is a graphic of an example configuration that follows these best-practice guidelines; note that other designs are possible.
Back to: Unified Communications in a Virtualized Environment