Unified Communications Virtualization Sizing Guidelines

From DocWiki

Revision as of 13:06, 18 October 2012 by Karrande (Talk | contribs)





Introduction

This article provides specifics and examples to aid in sizing Unified Communications applications for the UCS B-series and C-series servers.



Application Co-residency Support Policy

Cisco UC virtualization supports application co-residency only under the specific conditions described below and as clarified in TAC Technote Document ID: 113520.


This policy covers only the rules for physical/virtual hardware sizing, the co-resident application mix, and the maximum VM count per physical server. All other UC virtualization rules still apply (e.g., supported VMware vSphere ESXi versions or hardware options). Co-residency rules apply equally to all hardware options.


Note: UC app VM performance is guaranteed only when installed on a UC on UCS Tested Reference Configuration, and only if all other conditions in this policy are followed.

"Application co-residency" in this UC support policy is defined as VMs sharing the same physical server and the same virtualization software host:

  • E.g. VMs running on the same VMware vSphere ESXi host on the same physical rack-mount server, such as Cisco UCS C-Series.
  • E.g. VMs running on the same VMware vSphere ESXi host on the same physical blade server in the same blade server chassis, such as Cisco UCS B-Series.
  • "Co-resident application mix" in this UC support policy refers to the set of VMs sharing a physical server and a virtualization software host.
  • VMs running on different virtualization hosts and different physical servers are not co-resident.
    • E.g. VMs running on two different Cisco UCS C-Series rack-mount servers are not co-resident.
    • E.g. VMs running on two different Cisco UCS B-Series blade servers in the same UCS 5100 blade server chassis are not co-resident.

Co-residency defined


Virtual Machines (VMs) are categorized as follows for purposes of this UC support policy:

  • Cisco UC app VMs (or simply UC app VMs): a VM for one of the Cisco UC apps at Unified Communications Virtualization Supported Applications.
  • Cisco non-UC app VMs (or simply non-UC VMs): a VM for a Cisco application not listed at Unified Communications Virtualization Supported Applications, such as the VM for Cisco Nexus 1000V's VSM.
  • 3rd-party application VMs (or simply 3rd-party app VMs): a VM for a non-Cisco application, such as VMware vCenter, 3rd-party Cisco Technology Developer Program applications, non-Cisco-provided TFTP/SFTP/DNS/DHCP servers, Directories, Groupware, File/print, CRM, customer home-grown applications, etc.


Note: Cisco does not support non-UC or 3rd-party application VMs running on "Cisco UC Virtualization Hypervisor" or "Cisco UC Virtualization Foundation" (as described at Unified Communications VMware Requirements). If you want to deploy non-UC / 3rd-party applications, you must deploy on VMware vSphere Standard, Advanced, Enterprise or Enterprise Plus Edition.

Each Cisco UC app supports one of the following four types of co-residency:


  1. None: Co-residency is not supported. The UC app only supports a single instance of itself in a single VM on the virtualization host / physical server. No co-residency with ANY other VM is allowed, whether Cisco UC app VM, Cisco non-UC VM, or 3rd-party application VM.
  2. Limited: The co-resident application mix is restricted to specified VM combinations only. Click on the "Limited" entry in the tables below to see which VM combinations are allowed. Co-residency with any VMs outside these combinations - including other Cisco VMs - is not supported (these applications must be placed on a separate physical server). The deployment must also follow the General Rules for Co-residency and Physical/Virtual Hardware Sizing listed below.
  3. UC with UC only: The co-resident application mix is restricted to VMs for UC apps listed at Unified Communications Virtualization Supported Applications. Co-residency with Cisco non-UC VMs and/or 3rd-party application VMs is not supported; those VMs must be placed on a separate physical server. The deployment must also follow the General Rules for Co-residency and Physical/Virtual Hardware Sizing rules below.
  4. Full: The co-resident application mix may contain UC app VMs with Cisco non-UC VMs with 3rd-party application VMs. The deployment must follow the General Rules for Co-residency and Physical/Virtual Hardware Sizing rules below. The deployment must also follow the Special Rules for non-UC and 3rd-party Co-residency below.


Types of Co-residency
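For illustration, the four policy types above can be sketched as a simple validation function. This is a hypothetical helper, not a Cisco tool; the policy strings and VM categories mirror the definitions above, and "Limited" combinations remain app-specific:

```python
# Sketch of the four co-residency policy types as a validation check.
# Illustrative only; names mirror the policy types defined above.

UC, NON_UC, THIRD_PARTY = "uc", "non-uc", "3rd-party"

def mix_allowed(vms):
    """vms: list of (category, policy) tuples for VMs sharing one host.

    Returns False if any VM's co-residency policy forbids the mix.
    "Limited" combinations are app-specific, so this sketch returns
    None for them (manual verification against the app's tables).
    """
    if len(vms) == 1:
        return True  # a single VM is never co-resident
    for category, policy in vms:
        if policy == "None":
            return False  # no co-residency with ANY other VM
        if policy == "UC with UC only" and any(c != UC for c, _ in vms):
            return False  # non-UC / 3rd-party VMs must move off-host
        if policy == "Limited":
            return None   # check the app's allowed VM combinations
    return True  # "Full" (or compatible) mixes pass this sketch

# A "UC with UC only" app cannot share a host with a 3rd-party VM:
print(mix_allowed([(UC, "UC with UC only"), (THIRD_PARTY, "Full")]))  # False
```

A real deployment check would also apply the General Rules below (CPU, memory, storage, network); this sketch covers only the policy-type logic.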


General Rules for Co-residency and Physical/Virtual Hardware Sizing

See the tables after the rules for the co-residency policy of each UC app.

Note: Virtualization and co-residency support vary by UC app version, so be sure to double-check inter-UC-app version compatibility; see Cisco Unified Communications System Documentation.

"Matching" Support Policies

All co-resident applications must "match" in the following areas:

  • Same "server" support for compute/network/storage hardware (see Unified Communications Virtualization Supported Applications)
    • E.g. if you want to host co-resident apps on UCS C260 M2 TRC#1, all co-resident apps must have a hardware support policy that permits this.
    • E.g. if you want to deploy instead as UC on UCS Specs-based with a diskless UCS C260 M2 and a SAN/NAS storage array, all co-resident apps must support this.
    • You must pick a hardware option that all the co-resident apps can support. For example, some UC apps do not support Specification-Based Hardware Support (Specs-based) for UC on UCS or HP/IBM, and some UC apps do not support certain Tested Reference Configurations such as UC on UCS C200 M2 TRC#1 (as opposed to UC on UCS C200 M2 specs-based).
  • Same support for virtualization software product and version.
    • E.g. one app supports vSphere 5.0, the other app only supports vSphere 4.1. vSphere 5.0 may not be used for this co-resident application mix.
  • All apps must support a co-residency policy that permits the desired co-resident application mix.
    • E.g. one app has a "Full" policy, another app has "UC with UC" policy. Co-resident non-UC or 3rd-party app VMs are not allowed.
    • E.g. one app has a "UC with UC" policy, another app has "Limited" policy. Even though all apps will be UC, the desired combination may not be allowed by the "Limited" app.
    • E.g. one app has "None" policy. No other apps can be co-resident with this app regardless of their policies.
  • If support policies of a given co-resident app mix do not match, then the "least common denominator" is required.
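The "least common denominator" rule amounts to intersecting each app's supported options. A minimal sketch, with hypothetical app names and option sets for illustration:

```python
# Sketch of the "matching" rule: the deployable options for a
# co-resident mix are the intersection of what every app supports.
# App names and version sets here are illustrative only.

def common_options(per_app_support):
    """Intersect each app's supported options across all co-resident apps."""
    sets = [set(opts) for opts in per_app_support.values()]
    return set.intersection(*sets)

vsphere_support = {
    "app_a": {"4.1", "5.0"},  # hypothetical: app A supports both versions
    "app_b": {"4.1"},         # hypothetical: app B supports only 4.1
}
print(common_options(vsphere_support))  # {'4.1'} -- 5.0 may not be used
```

The same intersection applies to hardware options and co-residency policies: if the result is empty, the apps cannot be co-resident as planned.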


Virtual Machine Templates

All UC applications must use a supported virtual machine OVA template from Unified Communications Virtualization Downloads (including OVA/OVF Templates).

No Hardware Oversubscription

All VMs require a one-to-one mapping between virtual hardware and physical hardware. See specifics below.

CPU
  • Must map 1 VM vCPU to 1 physical CPU core.
    • For example, if you have a host with 12 total physical cores, then you can deploy any combination of virtual machines where the total number of vCPU on those virtual machines adds up to 12.
    • The requirement is based on physical cores, not logical cores.
      • Logical cores may exceed physical cores if CPU hyperthreading is used. See UC Virtualization Supported Hardware for recommendations on hyperthreading and other BIOS settings. The screenshot below shows physical cores vs. logical cores (as viewed from either VMware vCenter or the vSphere Client) for a UCS C220 M3S server with CPU hyperthreading DISABLED. If hyperthreading is ENABLED, you will see 16 logical cores despite only 8 physical cores, but UC sizing rules are still limited by the 8 physical cores.
Physical Cores vs. Logical Cores in VMware management screens
    • The requirement is based on physical cores on CPU architectures that Cisco has verified have equivalent performance (click here for details). E.g. for UC sizing purposes, one core on E5-2600 at 2.5+ GHz is equivalent to one core on E7-2800 at 2.4+ GHz, which are both equivalent to one core on 5600 at 2.53+ GHz.
  • Cisco Unity VMs also require VMware CPU Affinity.
  • If there is at least one live Unity Connection VM on the physical server, then one CPU core per physical server must be left unused (it is actually used by the ESXi scheduler).
    • For example, if you have a host with 12 total physical cores and one or more of the VMs on that host will be Unity Connection, then you can deploy any combination of virtual machines where the total number of vCPU on those virtual machines adds up to 11, with the 12th core left unused. This is regardless of how many Unity Connection VMs are on that host.
Leaving 1 core unused for Cisco Unity Connection
  • CPU reservations on the VMs are not required. Use of CPU reservations in lieu of one-to-one CPU core mappings is not supported.
    • Even if some of the virtual machines have a reservation, the above one-to-one vCPU to physical core rule still applies – it overrides the reservation.
    • For example, if you have a host with a total of 4 physical cores, and you want to run the CUCM 2500 user OVA (which has an 800 MHz reservation and requires 1 vCPU) along with other virtual machines, you must still deploy the VMs with a one-to-one mapping of vCPU to physical core. If you do not follow this rule, your deployment is unsupported.
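The CPU rules above reduce to a simple core count. A minimal sketch (an illustrative helper, not a Cisco sizing tool), including the extra unused core required whenever a Unity Connection VM is present:

```python
# Sketch of the 1 vCPU : 1 physical core rule. Reservations cannot
# substitute for this check; it is a hard count of physical cores.
# Illustrative helper only.

def cpu_fits(physical_cores, vm_vcpus, has_unity_connection=False):
    """vm_vcpus: list of vCPU counts, one per VM on the host.

    If any live Unity Connection VM is on the host, one physical core
    must stay unused (regardless of how many Unity Connection VMs).
    """
    usable = physical_cores - (1 if has_unity_connection else 0)
    return sum(vm_vcpus) <= usable

# 12 physical cores, no Unity Connection: up to 12 vCPUs total fit.
print(cpu_fits(12, [4, 4, 4]))                             # True
# With a Unity Connection VM present, only 11 vCPUs may be allocated.
print(cpu_fits(12, [4, 4, 4], has_unity_connection=True))  # False
```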


Memory/RAM
  • Must map 1 GB of VM vRAM to 1 GB of physical RAM. Memory oversubscription is not supported for Cisco UC VMs.
  • The sum of virtual machines' vRAM may not exceed the total physical memory on the physical server.
  • An additional 2 GB of physical RAM must be provisioned for VMware ESXi itself (this covers ESXi overhead to run VMs; for more details see "Understanding Memory Overhead" on vmware.com).
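The memory rule is a straightforward sum. A minimal sketch under the rules above (illustrative only):

```python
# Sketch of the memory sizing rule: total vRAM must fit in physical RAM
# after setting aside 2 GB for ESXi itself. Illustrative only.

ESXI_OVERHEAD_GB = 2  # per the rule above

def ram_fits(physical_ram_gb, vm_vram_gb):
    """vm_vram_gb: list of configured vRAM sizes (GB), one per VM."""
    return sum(vm_vram_gb) + ESXI_OVERHEAD_GB <= physical_ram_gb

# 48 GB host: 46 GB of total vRAM fits, 47 GB does not.
print(ram_fits(48, [16, 16, 14]))  # True  (46 + 2 = 48)
print(ram_fits(48, [16, 16, 15]))  # False (47 + 2 = 49)
```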



Storage

The following apply to supported DAS, SAN and NAS storage:

  • Must map 1 GB of VM vDisk to 1 GB of physical storage.
    • Storage thin provisioning is not recommended (whether at VM layer or storage array layer).
    • Any other form of storage oversubscription is not supported.
    • The sum of virtual machines' vDisks may not exceed the physical server's logical volume capacity (i.e., capacity net of overhead for the VM itself, VMFS in vSphere, and the physical RAID configuration).
    • Cisco recommends 10% buffer on top of vDisk values to handle overhead within the VM (such as swap files which are the size of the VM's vRAM). See Shared Storage Considerations for more details.
  • The DAS, NAS or SAN storage solution must also supply enough performance to handle the total load of the VMs.
  • If the above capacity or performance requirements are not met, the storage system is overloaded and must be "fixed" by either moving virtual machines to alternate storage, or improving storage hardware.
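The capacity side of the storage rules can be sketched as follows, including the recommended 10% buffer (illustrative only; IOPS and latency must be validated separately):

```python
# Sketch of the storage sizing rule: 1 GB vDisk : 1 GB physical storage,
# plus the recommended 10% buffer for in-VM overhead such as the swap
# file (sized to the VM's vRAM). Illustrative only.

BUFFER = 1.10  # Cisco-recommended 10% headroom on top of vDisk totals

def storage_fits(usable_volume_gb, vm_vdisk_gb):
    """usable_volume_gb: logical volume capacity net of VMFS/RAID overhead."""
    return sum(vm_vdisk_gb) * BUFFER <= usable_volume_gb

# 1000 GB usable volume: 900 GB of vDisks fits with buffer; 1000 GB does not.
print(storage_fits(1000, [300, 300, 300]))  # True
print(storage_fits(1000, [350, 350, 300])) # False
```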


Network/LAN
  • The aggregate networking load of the co-resident virtual machines must be met with the physical networking interface(s) on the host.
  • See the UC application design guides (http://www.cisco.com/go/srnd) to size network utilization by UC app VMs. In general, most UC app VMs will not saturate a 1GbE link. Deployments leveraging non-FC-storage (iSCSI, NFS or Unified Fabric/FCoE including UCS B-Series FEX) must account for network traffic from both VM LAN access and VM storage access.
  • For other network hardware best practices, see QoS Design Considerations for Virtual UC with UCS.
  • If the above capacity or performance requirements are not met, the networking hardware is congested and must be "fixed" by either moving virtual machines to a host with different network access, or provisioning more physical network interfaces.


Maximum VM Count per Physical Server
  • For hardware other than UCS C200 M2 TRC#1, you may mix and match Cisco UC app VM size and quantity as long as you follow all of the sizing rules described above. The maximum supported number of virtual machines per physical server depends on several factors:
  • "Capacity" of physical server hardware vs. the quantity and resource usage of VM OVA templates.
    • E.g. using the above physical/virtual sizing rules for CPU, a physical server with 8 total physical cores can only host 4 of the "CUCM 7.5K user OVAs" since those are 2 vCPU each. If the physical server instead had 20 total physical cores, it could host 10 of these VMs (assuming memory, network and storage hardware are also sufficient using the UC sizing rules immediately below).
    • All UC on UCS Tested Reference Configurations are sized for co-residency except for UCS C210 M1 Tested Reference Configuration #1 (which is only sized to host a single CUCM VM of 7500 user capacity). Note UCS C200 M2 Tested Reference Configuration #1 has special restrictions on choice of UC VM, and its allowed VMs are at lower capacity per VM than for other Tested Reference Configurations.
    • UC on UCS Specs-based and HP/IBM Specs-based deployments allow hardware options that may support a higher or lower max VM count than a UC on UCS Tested Reference Configuration. E.g. UCS C210 M2 TRC#1 is a dual-4-core CPU, but UCS C210 M2 specs-based could be configured with dual-6-core (for possibly more VMs) or a single 4-core (for possibly a single VM).
  • Note the max VM count may also be further restricted by UC apps that only support "Limited" co-residency as described in the tables after the rules.

Maximum VM count per physical server
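The CPU-bound arithmetic from the example above (8 cores hosting four 2-vCPU OVAs, 20 cores hosting ten) can be sketched as follows. Illustrative only; memory, storage and network capacity must also be checked per the rules above:

```python
# Sketch of the max-VM-count arithmetic: how many copies of a given OVA
# fit on a host, bounded by physical cores. Illustrative only -- the
# real maximum is also bounded by memory, storage and network capacity.

def max_vm_count(physical_cores, vcpu_per_vm):
    return physical_cores // vcpu_per_vm

# The "CUCM 7.5K user OVA" uses 2 vCPUs per VM:
print(max_vm_count(8, 2))   # 4 VMs on an 8-core server
print(max_vm_count(20, 2))  # 10 VMs on a 20-core server
```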


Special Requirements for UCS C200 M2 TRC#1 Hardware
Note: This section applies only to UC on UCS C200 M2 Tested Reference Configuration #1, which uses the Intel E5506 / 2.13 GHz CPU. A UCS C200 M2 configured with a faster CPU (via Specification-Based Hardware Support) does not need to follow the rules in this section.
  • For UCS C200 M2 TRC#1, there are additional TRC-specific restrictions since it uses a slower CPU than other TRCs or specs-based configurations (i.e., E5506 / 2.13 GHz instead of a CPU with 2.53 GHz speed or higher). A C200 M2 configured with a different CPU allowed by UC on UCS Specs-based does not have these TRC-specific restrictions. Follow these rules for UCS C200 M2 TRC#1 with the E5506 / 2.13 GHz CPU:



Special Rules for non-UC and 3rd-party Co-residency

See the tables after these rules for the co-residency policy of each Cisco UC app.

Non-UC VMs and 3rd-party app VMs that will be co-resident with Cisco UC app VMs are required to align with all of the following:



"Matching" Support Policies
All co-resident VMs must follow the "Matching" Support Policies rule in General Rules for Co-residency and Physical/Virtual Hardware Sizing. www.cisco.com/go/uc-virtualized does not describe policies for Cisco non-UC apps or 3rd-party apps.


Virtual Machine Templates
Cisco non-UC VMs and 3rd-party app VMs must have their own definition of supported VM OVA templates (or specs from which one can be created), similar to what Cisco UC app VMs require in General Rules for Co-residency and Physical/Virtual Hardware Sizing. http://www.cisco.com/go/uc-virtualized does not describe VM templates for Cisco non-UC apps or 3rd-party apps.


CPU
All co-resident VMs - including non-UC VMs and 3rd-party app VMs - must follow the No Hardware Oversubscription rules for CPU in General Rules for Co-residency and Physical/Virtual Hardware Sizing.


Memory/RAM
  • To enforce "no memory oversubscription", each co-resident VM - whether UC, non-UC or 3rd-party - must have a reservation covering all of the VM's vRAM. For example, if a virtual machine is configured with 4 GB of vRAM, then that virtual machine must also have a reservation of 4 GB of vRAM.
  • Otherwise all co-resident VMs - including non-UC VMs and 3rd-party app VMs - must follow the No Hardware Oversubscription rules for Memory/RAM in General Rules for Co-residency and Physical/Virtual Hardware Sizing. The 2 GB for VMware vSphere is in addition to the sum of the vRAM reservations for the VMs.
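The full-reservation requirement for mixed hosts can be sketched as a simple per-VM check (illustrative only):

```python
# Sketch of the full-reservation rule for mixed (UC / non-UC / 3rd-party)
# hosts: every co-resident VM must reserve ALL of its configured vRAM.
# Illustrative helper only.

def reservations_valid(vms):
    """vms: list of (configured_vram_gb, reserved_vram_gb) per VM."""
    return all(reserved >= configured for configured, reserved in vms)

# A 4 GB VM must carry a 4 GB reservation:
print(reservations_valid([(4, 4), (8, 8)]))  # True
print(reservations_valid([(4, 2), (8, 8)]))  # False -- 4 GB VM reserves only 2 GB
```

Note this is in addition to (not instead of) the no-oversubscription sum: total reserved vRAM plus the 2 GB for ESXi must still fit in physical RAM.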


Storage
  • Non-UC VMs and 3rd-party app VMs must define their storage capacity requirements (ideally in an OVA template) and storage performance requirements. These requirements are not captured at http://www.cisco.com/go/uc-virtualized.
  • All co-resident VMs - including non-UC VMs and 3rd-party app VMs - must follow the No Hardware Oversubscription rules for Storage in General Rules for Co-residency and Physical/Virtual Hardware Sizing, including provisioning sufficient disk space, IOPS and low latency to handle the total VM load.
  • If DAS storage is to be used with non-UC / 3rd-party app VMs, it is highly recommended that pre-deployment testing be conducted, where all VMs are pushed to their highest level of IOPS generation. This is because DAS environments are generally more capacity/performance-constrained, are more dependent on adapter caches in RAID controllers, and because Cisco's DAS testing is done only for UC apps on UCS Tested Reference Configurations.


Network/LAN



Table of Co-residency Support Policy by Cisco UC Application

Call Processing and System Management Applications

UC Application Co-residency Support
Unified Communications Manager (1)
  • 8.6(2)+: Full
  • 8.0(2) to 8.6(1): UC with UC only
Unified Communications Manager Business Edition 6000 Limited
Cisco Emergency Responder UC with UC only
Session Manager Edition
  • 8.6(2)+: Full
  • 8.5(1) to 8.6(1): UC with UC only
Intercompany Media Engine Full
Unified Attendant Consoles Full
UC Management Suite (OM, SM, SSM, PM) Full


(1) Applicable for publishers, subscribers, standalone TFTP and standalone multicast MOH nodes.

Messaging and Presence Applications

UC Application Co-residency Support
Cisco Unity Connection
  • 8.6(2)+: Full
  • 8.0(2) to 8.6(1): UC with UC only
Cisco Unity Full
Note: Cisco Unity requires CPU Affinity, which may not be desirable for other applications co-resident with Unity.
Cisco Unified Presence
  • 8.6(1)+: Full
  • 8.0(2) to 8.5: UC with UC only


Contact Center Applications

UC Application Co-residency Support
Unified Contact Center Express / IP IVR
  • 8.5(x)+,9.0(x)+: Full
  • 8.0(x): UC with UC only
Cisco Unified Workforce Optimization (WFO) components (WFM, QM, AQM, CR, etc.)
  • 8.5(2)+: Full
  • 8.0(2) to 8.5(1): UC with UC only
Unified Contact Center Enterprise components and deployment models
  • 8.0(2+), 8.5(x): Limited
  • 9.0(1+): UC with UC only
Unified Intelligence Center
  • 8.0(3+): UC with UC only
  • 8.5+, 9.0+: Full
Unified Contact Center Management Portal 
  • 8.0+: UC with UC only
  • 8.5+, 9.0+: Full
Unified Customer Voice Portal (all components)
  • UC with UC only 
Cisco MediaSense
  • 8.5(X): Not supported
  • 9.0(1+): Full
Cisco SocialMiner
  • Full 
Cisco Finesse
  • Full
Cisco Unified Email Interaction Manager and Web Interaction Manager
  • UC with UC only

TelePresence Applications

UC Application Co-residency Support
Cisco TelePresence Manager

1.9.0:

  • Up to 4 CTS-Manager instances can be installed on a single server, for service provider deployments with no more than 50 endpoints under management per instance of CTS-Manager.
  • 1 CTS-Manager and 1 CTMS can be installed on a single UCS server.
Cisco TelePresence Multipoint Switch
1.9.0:
  • 2 CTMS instances can be installed on a single UCS server.
  • 1 CTMS and 1 CTS-Manager can be installed on a single UCS server.



Cisco TelePresence Video Communication Server (Cisco VCS)
The Cisco VCS can co-reside with applications (any other VMs occupying the same host) subject to the following conditions:
  • no oversubscription of CPU: 1:1 allocation of vCPU to physical cores must be used (2 cores required per VCS VM)
  • no oversubscription of RAM: 1:1 allocation of vRAM to physical memory
  • sharing disk storage subsystem is supported subject to correct performance (latency, bandwidth) characteristics


Cisco TelePresence Conductor
The Cisco TelePresence Conductor can co-reside with applications (any other VMs occupying the same host) subject to the following conditions:
  • no oversubscription of CPU: 1:1 allocation of vCPU to physical cores must be used (2 cores required per Cisco TelePresence Conductor VM)
  • no oversubscription of RAM: 1:1 allocation of vRAM to physical memory
  • sharing disk storage subsystem is supported subject to correct performance (latency, bandwidth) characteristics


Redundancy and Failover Considerations

Application-layer considerations (such as Unified CM Cluster over WAN or Unified CCE Remote Redundancy) are the same for virtualized (UC on UCS) or non-virtualized (MCS 7800) deployments.

However, since there is no longer a 1:1 relationship between hardware and application instances, "placement logic" must be taken into account to minimize the impact of hardware unavailability or unreachability:

  • Avoid placing a primary VM and a backup VM on the same server, chassis or site
  • For failover groups, avoid placing all actives on the same server, chassis or site
  • Avoid placing all VMs of the same role on the same server, chassis or site
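The placement guidance above can be sketched as an anti-affinity check over the three failure domains. The data structures and field names here are illustrative, not part of any Cisco tool:

```python
# Sketch of the placement logic: a primary VM and its backup should not
# share a server, chassis or site. Field names are illustrative only.

def bad_placements(vm_locations, pairs):
    """vm_locations: {vm: (server, chassis, site)}; pairs: (primary, backup)."""
    problems = []
    for primary, backup in pairs:
        for level, a, b in zip(("server", "chassis", "site"),
                               vm_locations[primary], vm_locations[backup]):
            if a == b:
                problems.append((primary, backup, level))
    return problems

locations = {
    "cucm-pub":  ("blade1", "chassis1", "HQ"),
    "cucm-sub1": ("blade2", "chassis1", "HQ"),  # shares chassis and site
}
# Flags the shared chassis and shared site for this primary/backup pair:
print(bad_placements(locations, [("cucm-pub", "cucm-sub1")]))
```

In practice, sharing a site is often unavoidable; the point of the check is to surface each shared failure domain so the trade-off is deliberate.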


Network, QoS and Shared Storage Design Considerations

See QoS Design Considerations for Virtual UC with UCS and Shared Storage Considerations.


Sizing Examples

This section shows a sample system configuration based on following the High-level Checklist for Design and Implementation for the following set of customer requirements:

  • General Requirements
    • Three sites - Headquarters (HQ) and two Branches (A and B)
    • CUCM and Applications located at each site
    • Up to 30,000 lines per site
    • 100+ sites
    • Transparent use of PSTN if IP WAN unavailable
  • Headquarters (HQ) Requirements
    • 12K Phones, use Cisco TFTP server
    • 10K Messaging users
    • 10K users equipped with Cisco Unified Personal Communicator (CUPC)
    • Contact Center with 240 agents and 10 supervisors
  • Branch A Requirements
    • 2K Phones, use Cisco TFTP server
    • 2K Messaging users
    • 2K users equipped with Cisco Unified Personal Communicator (CUPC)
    • Contact Center with 145 agents and 5 supervisors
  • Branch B Requirements
    • 500 Phones, use Cisco TFTP server
    • 500 Messaging users
    • 500 users equipped with Cisco Unified Personal Communicator (CUPC)
    • Contact Center with 45 agents and 5 supervisors


After going through the design process, the following servers were selected to host the virtualized UC applications:

  • Six Cisco UCS B200 Blade Servers for HQ (running in a UCS 5100 Chassis connected to UCS 6100 Fabric Interconnect Switches), using TRC#1 (UCS-B200M2-VCS1).
  • Three Cisco UCS C210 General-Purpose Rack-Mount Servers for Branch A, using TRC#1 (UCS-C210M2-VCD2)
  • Two Cisco UCS C200 General-Purpose Rack-Mount Servers for Branch B, using TRC#1 (UCS-C200M2-VCD2)
  • Note this example does not include non-UC applications (such as Cisco Nexus 1000V or Cisco Network Registrar) or 3rd-party applications such as customer-provided DNS / DHCP / TFTP servers, directories, email, groupware or other business applications. These applications need to run on separate physical servers and are not allowed to be co-resident with UC at this time. See the Co-residency section on this page for more details.

UCVirtualSizingEx1.jpg


See below for details on the server layout and application/VM placement at each site. Note that Branch B uses UCS C200 M2 TRC#1, which restricts the VM OVAs that can be used.

HQ server detail:

UCVirtualSizingEx2.jpg

Branch A server detail:

UCVirtualSizingEx3.jpg

Branch B server detail:

UCVirtualSizingEx4.jpg

Sizing and Ordering Tools

The suite of tools listed below can assist you with the sizing, configuring and quoting of Cisco Unified Communications solutions on the Unified Computing System.

Cisco Solution Expert

Cisco Solution Expert assists Cisco field and Cisco Unified Communications specialized channel partners in designing and quoting UC on UCS solutions using the Cisco Unified Workspace Licensing or the traditional design model. Solution Expert delivers a Bill of Materials for the Unified Communications software and the UCS B-Series Blade Servers and VMware ordered as Collaboration SKUs.

Netformx DesignXpert

Netformx DesignXpert is a third party application used to design and quote the Cisco Unified Computing System B-series. DesignXpert has two advisor modules that can be used to quote a Unified Communications solution with the Unified Computing System:

  • UC Advisor – a designing and quoting solution used to quote Unified Communications software. The UCS B-Series Blade Servers and VMware ordered as Collaboration SKUs can be quoted when ordering separately from the Unified Computing System. Other UCS B-Series components must be configured via UCS Advisor below.
  • UCS Advisor - a design and quoting solution for all UCS B-series components including Blade Servers ordered as Collaboration or Data Center SKUs, UCS 5100 Blade Server Chassis, UCS 2100 Fabric Extender and UCS 6100 Fabric Interconnect Switch.

Cisco Unified Communications Sizing Tool

Cisco Unified Communications Sizing Tool delivers hardware sizing for complex Enterprise Unified Communications solutions, including Cisco Unified Contact Center Enterprise. The Sizing Tool delivers the virtual machine requirements for Unified Communications applications on the Unified Computing System platform.

Cisco Configuration Tool

Cisco Configuration Tool is part of the suite of Internet Commerce Tools for managing online ordering of Cisco products. It enables you to configure products and view lead times and prices for each selection. The Cisco Configuration Tool, also known as the Dynamic Configuration Tool, is used to configure the Unified Communications products and the UCS B-Series and VMware SKUs ordered as Collaboration SKUs.

Ordering Guides

Ordering Guides for Unified Communications System 8.x releases are available for Cisco sales, partners, and customers.



Back to: Unified Communications in a Virtualized Environment
