Virtualization for Unified CCE




Updates to this Page

The following is a list of significant updates to this page:

  • February 14, 2012: Updated the UCS Network Configuration section for UCS C-Series network design.
  • December 09, 2011: Added PG limited coresidency with UCM/CUP/IPIVR capability.
  • November 15, 2011: Added a sample deployment for 12,000 agents.
  • June 28, 2011:
    • Updated the list of Unified CCE components that have not been qualified and are not supported in virtualization.
    • Updated Section 5.1 ESXi 4.1 Software Requirements to add a new link to Disable LRO.
    • Updated Section 6 Unified CCE Component Capacities and VM Configuration Requirements to add a new link to Downloading OVA Templates for UC Applications.
    • Removed Section 7.2 ESXi 4.1 Network Settings.
    • Changed the title Creating Virtual Machines from OVA VM Templates to Cisco Unified CCE-Specific Information for OVA Templates.
    • Removed Section 12.1 Downloading OVA Templates and added new links to Unified Communications Virtualization Downloads and Unified CCE OVA Templates.
  • June 1, 2011:
    • Added Section 5.1 ESXi 4.1 Software Requirements.
    • Added Section 7.2 ESXi 4.1 Network Settings.
    • Removed references to ESXi 4.0.
    • Updated the UCS Network Configuration section.
  • May 9, 2011: Updated the Unified CCE Component Co-Residency and Sample Deployments section.
  • February 17, 2011:
    • Updated the Creating Virtual Machines from OVA VM Templates section.
    • Removed the OVA list from this page and added a link.
    • Added the Scalability Assumptions section.
    • Updated the CUIC RAM (GB) numbers in the sample CCE deployments.
    • Updated the sample deployment tables and highlighted optional items.
  • December 22, 2010: Updated the pointer links for the Bill of Materials, the CCMP virtualization page, and the Hybrid Deployment section.
  • December 20, 2010: Updated the Component Capacities section, the VM configuration requirements table, the Hybrid Deployment Options section, the steps for installing/migrating Unified CCE components, the Support for Virtualization on the ESXi/UCS Platform section, and the Hardware Requirements section.
  • December 14, 2010: Added Avaya ACD PG, TDM ACD PG, and Unified Contact Center Gateway to the Unified CCE Component Capacities and VM Configuration Requirements section. Updated the Steps for Installing/Migrating Unified CCE Components on Virtual Machines section. Added the Hybrid Deployment Options section.
  • December 10, 2010: Added Unified CVP, Unified IC, Contact Center Management Portal (CCMP), and CAD to the sample deployments section.
  • October 1, 2010: Updated this page, UCCE on UCS Deployment Certification Requirements and Ordering Information, and UCS Network Configuration for UCCE to reflect support of UCS C-Series hardware for UCCE.
  • September 28, 2010: Added CVP to the list of supported components/deployments.
  • September 21, 2010: Added IPIVR to the list of supported components/deployments and as an optional component in the sample deployment tables at Sample CCE Deployments.

Information for Partners about Unified CCE on UCS Deployment Certification and Ordering

Partners who plan to sell Unified Contact Center Enterprise deployments on UCS products should read the DocWiki page UCCE on UCS Deployment Certification Requirements and Ordering Information.

This page contains essential information for partners about the following:

  • Partner Certification Requirements
  • UCS Server Ordering Information
  • Important Notes on Cisco UCS Service and Support


Unified CCE 8.x Support for Virtualization on the ESXi/UCS Platform

Starting with Release 8.0(2), virtualization of the following deployments and Unified CCE components on Cisco Unified Computing Systems (UCS) B200 Series and C210 Series hardware is supported:

  • Router
  • Logger
  • Agent PG
  • MR PG
  • VRU PG
  • Unified Contact Center Gateway
  • Avaya ACD PG (Also supported on virtualized ESXi server on MCS-7845-I3-CCE2)
  • TDM ACD PG (Also supported on virtualized ESXi server on MCS-7845-I3-CCE2)
  • Cisco Agent Desktop (CAD) Server
  • Administration and Data Server with one of the following roles:
    • Administration Server and Real-time Data Server (AW)
    • Configuration-only Administration Server (AW-CONFIG)
    • Administration Server and Real-time and Historical Data Server (AW-HDS)
    • Administration Server, Real-time and Historical Data Server, and Detail Data Server (AW-HDS-DDS)
    • Historical Data Server and Detail Data Server (HDS-DDS)
  • Administration Client
  • Outbound Option with SIP Dialer (collocate SIP Dialer and MR PG with Agent PG in the same VM guest. Generic PG can also be collocated with the Agent PG in the same VM guest. Published agent capacity formula with Outbound Option applies.)
  • Support Tools
  • Rogger (a Router and a Logger in the same VM)
  • The Unified Communications Manager (UCM) Clustering Over the WAN deployment model with Unified CCE is supported; see the section Support for UCM Clustering Over the WAN with Unified CCE on UCS Hardware for important information.
  • Unified IP-IVR is supported with the Unified CCE on UCS B-Series solution, and on UCS C-Series with the UCS-C210-VCD2 model only. Refer to the IPIVR product-specific pages for details.
  • CVP is supported with the CCE on UCS solution. Refer to the Virtualization for Unified CVP wiki page for details.
  • Contact Center Management Portal (CCMP). See the Virtualization for CCMP with Unified CCE on UCS Hardware wiki page for details.
  • Cisco Unified Intelligence Center (Unified IC). Refer to the Cisco Unified Intelligence Center wiki page for details.
  • Cisco E-mail Interaction Manager (EIM)/Web Interaction Manager (WIM). See the Virtualization for EIM-WIM wiki page for details.


The following deployments and Unified CCE components have not been qualified and are not supported in virtualization:

  • Progger (a Router, a Logger, and a Peripheral Gateway); this all-in-one deployment configuration is not scalable in a virtualization environment. Instead, use the Rogger or Router/Logger VM deployment configuration.
  • Unified CCH
  • Unified ICMH
  • Outbound Option with SCCP Dialer
  • WebView Server
  • Expert Advisor
  • Remote Silent Monitoring (RSM)
  • Span-based Silent Monitoring on UCS B-Series chassis
  • Cisco Unified CRM Connector
  • IPsec. UCS does not support off-board IPsec processing; therefore, IPsec is not supported in virtualization.


The following VMware features are not supported with Unified CCE:

  • VMware Physical to Virtual migration
  • VMware snapshots
  • VMware Consolidated Backup
  • VMware High Availability (HA)
  • VMware Site Recovery Manager
  • VMware vCenter Update Manager
  • VMware vCenter Converter


Hardware Requirements for Unified CCE Virtualized Systems

Requirements for Cisco Unified CCE systems using UCS B200 or C210 hardware are located on the Unified Computing System Hardware page. For UCS C210, Unified CCE supports only the UCS-C210-VCD2 model, with a specific HDS Virtual Machine (VM) coresidence/population rule. See Sample CCE Deployments.

Unified CCE supports MCS-7845-I3-CCE2 with virtualization. For a list of supported virtualized components on MCS servers, see the appropriate Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted. Unified CCE does not support UCS C200.

VMware and Application Software Requirements

The following software requirements apply specifically to Unified Contact Center Enterprise:


ESXi 4.1 Software Requirements

When Cisco Unified CCE is running on ESXi 4.1, you must do the following:

  • Install or upgrade VMware Tools for ESXi 4.1 on each of the VMs, and use all of the VMware Tools default settings. For more information, see the section VMware Tools.
  • Disable Large Receive Offload (LRO). For details, see the section Disable LRO; a scripted sketch follows this list.
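
The following is a minimal scripted sketch of the LRO change, assuming shell access to the ESXi host, a Python interpreter, and that these advanced option names match your ESXi 4.1 build; verify them against the Disable LRO reference above. The equivalent esxcfg-advcfg commands can also be run directly.

 # Minimal sketch: disable Large Receive Offload (LRO) on an ESXi 4.1 host.
 # Option names are the commonly documented vmxnet LRO settings; verify
 # against the "Disable LRO" reference for your build.
 import subprocess

 LRO_OPTIONS = [
     "/Net/VmxnetSwLROSL",
     "/Net/Vmxnet3SwLRO",
     "/Net/Vmxnet3HwLRO",
     "/Net/Vmxnet2SwLRO",
     "/Net/Vmxnet2HwLRO",
 ]

 for opt in LRO_OPTIONS:
     # esxcfg-advcfg -s 0 <option> sets the advanced option to 0 (disabled)
     subprocess.check_call(["esxcfg-advcfg", "-s", "0", opt])

 print("LRO options set to 0; reboot the host for the change to take effect.")

The host typically must be rebooted before these options take effect.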


Unified CCE Component Capacities and VM Configuration Requirements

For supported Unified CCE component capacities and VM computing resource requirements, see the List of Unified CCE OVA Templates.

Note: You must use the OVA VM templates to create the Unified CCE component VMs.



For instructions on how to obtain the OVA templates, see Downloading OVA Templates for UC Applications.

Unified CCE Scalability Impacts

The capacity sizing information is based on the operating conditions published in the Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND), Release 8.x, Chapter 10, Operating Conditions, and in the Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted, Release 8.x, Section 5. Both documents are available at Cisco.com.

The following features reduce the scalability of certain components below the agent counts given in the respective OVA capacity sizing information.

  • CTI OS Security: CTI OS Server capacity decreases by 25% when CTI OS Security is enabled (see the sketch after this list).
  • Mobile Agents: Refer to the SRND, Chapter 10, Sizing Information for Unified CCE Components and Servers table for sizing guidance with Mobile Agents.
  • Outbound Option: Refer to the SRND, Chapter 10, Sizing Information for Unified CCE Components and Servers table for sizing guidance with Outbound Option.
  • Agent Greeting: Refer to the SRND, Chapter 10, Sizing Information for Unified CCE Components and Servers table for sizing guidance with the Agent Greeting feature enabled.
  • Extended Call Context (ECC) usage greater than the level noted in the Operating Conditions has a performance and scalability impact on critical components of the Unified CCE solution. As noted in the SRND, the capacity impact varies with the ECC configuration; guidance must therefore be provided on a case-by-case basis.
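
The CTI OS Security reduction is simple arithmetic; the following sketch illustrates it with a hypothetical base capacity (take real base numbers from the OVA capacity sizing information and the SRND):

 # Illustrative arithmetic only: the published 25% CTI OS Server capacity
 # reduction when CTI OS Security is enabled. The base value below is a
 # hypothetical placeholder, not a published figure.
 def cti_os_effective_capacity(base_agents, security_enabled):
     """Return the supported agent count after the CTI OS Security reduction."""
     return int(base_agents * 0.75) if security_enabled else base_agents

 print(cti_os_effective_capacity(2000, security_enabled=True))  # -> 1500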

UCS Network Configuration

For network design and configuration guidelines, see UCCE on UCS B-Series Network Configuration and UCCE on UCS C-Series Network Configuration.


Support for UCM Clustering Over the WAN with Unified CCE on UCS Hardware

You can deploy the Unified Communications Manager (UCM) Clustering Over the WAN deployment model with Unified CCE on UCS hardware.

When you implement this deployment model, be sure to follow the best practices outlined in the section "IPT: Clustering Over the WAN" in the Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND).

In addition, note the following expectations for UCS hardware points of failure:

  • In Cisco testing of communication-path single points of failure on the Unified CCE UCS B-Series High Availability (HA) deployment, system call handling was degraded for up to 45 seconds while the system recovered from the fault, depending upon the subsystem faulted. Single points of failure do not cause the built-in ICM software failover to occur. Single points of failure include, but are not limited to, a single fabric interconnect failure, a single fabric extender failure, and single link failures.
  • Multiple points of failure on the Unified CCE UCS HA deployment can cause catastrophic failure, such as ICM software failovers and interruption of service. If multiple points of failure occur, replace the failed redundant components and links immediately.

B-Series Considerations

When deploying Clustering Over the WAN with B-Series hardware, the reference design requires the Cisco UCS M81KR Virtual Interface Card (VIC).

New B Series deployments using Clustering Over the WAN must use a Nexus 7000/5000 Series vPC infrastructure, or a Cisco Catalyst 6500 Series Virtual Switching Supervisor Engine 720-10G with VSS.

See the configuration guidelines in UCCE on UCS B-Series Network Configuration.

C-Series Considerations

If deploying Clustering Over the WAN with C-Series hardware, do not trunk the public and private networks. You must use separate physical interfaces on the C-Series servers to create the public and private connections. See the configuration guidelines in UCCE on UCS C-Series Network Configuration.


Notes for Deploying Unified CCE Applications on UCS B Series Hardware with SAN

In a Storage Area Network (SAN) architecture, storage consists of arrays of Redundant Arrays of Independent Disks (RAID). A Logical Unit Number (LUN), which represents a device identifier, can be created on a RAID array. A LUN can occupy all or part of a single RAID array, or span multiple RAID arrays.

In a virtualized environment, datastores are created on LUNs. Virtual Machines (VMs) are installed on the SAN datastore.

Keep the following considerations in mind when deploying UCCE applications on UCS B-Series hardware with SAN:

  • This deployment must comply with the conditions listed in Section 3.1.6 of the appropriate Hardware & System Software Specification (Bill of Materials) for Cisco Unified Contact Center Enterprise. In particular, SAN disk arrays must be configured as RAID 5 or RAID 10.
    • Note: RAID 6/ADG is also supported as an extension of RAID 5.
  • Historical Data Server (HDS) requires a 2 MB datastore block size to accommodate the 500 GB OVA disk size, which exceeds the 256 GB maximum file size supported by the default 1 MB datastore block size (as of ESXi 4.0 U1; this may change in later versions). The HDS block size is configured in vSphere at datastore creation.
  • To help keep your system running most efficiently, schedule automatic database purging to run when your system is least busy.
  • The SAN design and configuration must meet the following VMware ESXi disk performance guidelines:
    • Disk Command Latency: 15 ms or less. Latencies of 15 ms or greater indicate a possibly over-utilized, misbehaving, or misconfigured disk array.
    • Kernel Disk Command Latency: very small in comparison to the Physical Device Command Latency, and close to zero. A high Kernel Command Latency indicates heavy queuing in the ESXi kernel.
  • The SAN design and configuration must keep the following Windows performance counters on UCCE VMs within limits:
    • Average Disk Queue Length must remain less than 1.5 times the total number of disks in the array.
    • %Disk Time must remain less than 60%.
  • Any given SAN array must be designed with an IOPS capacity exceeding the sum of the IOPS required for all resident UC applications. Unified CCE applications should be sized for the 95th percentile IOPS values published in this wiki. For other UC applications, follow their respective IOPS requirements and guidelines.
  • vSphere raises an alarm when free space on any datastore falls below 20%. Provision at least 20% free space overhead; a minimum of 10% overhead is required.
  • Deploy 4 to 8 VMs per LUN/datastore as long as IOPS and space requirements can be met; the supported range is 1 to 10. (A sketch of these checks follows this list.)
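
The following sketch turns the disk guidelines above into simple checks; the sampled counter values are hypothetical placeholders for data you would collect with Windows Performance Monitor or the vSphere client:

 # Sketch of the SAN health checks described above.
 def check_disk_counters(avg_disk_queue, pct_disk_time, disks_in_array,
                         disk_cmd_latency_ms):
     """Return a list of threshold violations per the guidelines above."""
     problems = []
     if avg_disk_queue >= 1.5 * disks_in_array:
         problems.append("Average Disk Queue Length >= 1.5 * disks in array")
     if pct_disk_time >= 60.0:
         problems.append("%Disk Time >= 60%")
     if disk_cmd_latency_ms > 15.0:
         problems.append("Disk Command Latency > 15 ms (check the disk array)")
     return problems

 # Hypothetical sample: an 8-disk array that is currently healthy.
 print(check_disk_counters(avg_disk_queue=4.0, pct_disk_time=35.0,
                           disks_in_array=8, disk_cmd_latency_ms=9.0))  # -> []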


See below for an example of a SAN configuration for a 2000-agent Rogger deployment. This example corresponds to the 2000-agent Sample CCE Deployment for UCS B-Series described in Unified CCE Component Coresidency and Sample Deployments.

Example of SAN Configuration for Unified CCE ROGGER Deployment up to 2000 Agents

The following SAN configuration was a tested design, though it is generalized here for illustration. It is not the only way to provision SAN arrays, LUNs, and datastores for UC applications. However, you must adhere to the guidance given earlier in this section.

Rogger Side A

RoggerSideA.jpg

Rogger Side B

RoggerSideB.jpg

Steps for Installing/Migrating Unified CCE Components on Virtual Machines

Follow the steps and references below to install the Unified CCE components on virtual machines. You can use these instructions to install or upgrade systems running Unified CCE 8.0(2) and later. You can also use these instructions to migrate virtualized systems from Unified CCE 7.5(x) to Unified CCE 8.0(2) or later, including the Avaya PG and other selected TDM PGs that were supported on Unified CCE 7.5(x). Not all TDM PGs supported in Unified CCE 7.5(x) are supported in Unified CCE 8.0(x); for more information, see the appropriate Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted.

  1. Acquire the supported servers for Unified CCE 8.0(2) or later release.
  2. Install, set up, and configure the servers.
  3. Configure the network. See reference at UCS Network Configuration.
    Note: Configuring the network for the MCS servers is the same as configuring the network for the UCS C-Series servers.
  4. If VMware VirtualCenter is used for virtualization management, install or update to VMware vCenter Server 4.0 or later.
  5. Install and boot VMware ESXi. See the Cisco UCS B-Series Blade Servers VMware Installation Guide or the Cisco UCS C-Series Servers VMware Installation Guide. On C-Series servers or MCS servers, you must configure the ESXi datastore block size for the Administration & Data Server. See Configuring the ESXi Data Store Block Size for Administration and Data Server for instructions.
  6. Create the Unified CCE virtual machines from the OVA templates. See reference at Creating Virtual Machines from OVA VM Templates. This is a requirement for all components running Unified CCE 8.0(2) and later; in Unified CCE 7.5(x), it is not a requirement.
  7. Install VMware Tools on the virtual machines, using the same version as the ESXi software.
  8. Install Windows OS and SQL Server (for Logger and HDS components) on the created virtual machines.
    Note: Use Microsoft Windows Server 2003 Standard Edition and Microsoft SQL Server 2005 Standard Edition for the virtual machine guests. See related information in the links below.
  9. Install or migrate the Unified CCE Software components on the configured virtual machines, using Fresh Install or Tech Refresh Upgrade, as described in Installing Unified CCE components on virtual machines and Migrating Unified CCE components.

Unified CCE Component VM Coresidency and Sample Deployments

You can have one or more Unified CCE VMs coresident on the same ESXi server (for example, B200M2 blade or C210M2 rack mount server).  However, you must follow the rules described below:

  • You can run any number and combination of coresident Unified CCE virtual machines on an ESXi server as long as the sum of all virtual machine CPU and memory resource allocations does not overcommit the available ESXi server computing resources (see the sketch after this list).
  • You must not overcommit CPU on an ESXi server that runs Unified CCE realtime application components. The total number of vCPUs across all virtual machines on an ESXi host must not exceed the total number of CPUs available on the server. On the Cisco UCS B-200 and C-210, the total number of CPUs available is 8.
  • You must not overcommit memory on an ESXi host running UC realtime applications. Allocate a minimum of 2 GB of memory for the ESXi kernel. For example, if an ESXi server on B-200 hardware has 36 GB of memory, after you allocate 2 GB for the ESXi kernel you have 34 GB available for the virtual machines; the total memory allocated to all virtual machines on that server must not exceed 34 GB.
  • VM coresidency with Unified Communications and third-party applications (for example, WFM) is not supported unless it is described in the following subsection.
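
The following sketch expresses these no-overcommit rules as a check, using the 8-CPU / 36 GB B-200 figures from the rules above; the VM list is hypothetical:

 # Coresidency sketch: no vCPU or memory overcommit for realtime CCE VMs.
 ESX_PCPUS = 8            # B-200/C-210 example from the rules above
 ESX_TOTAL_RAM_GB = 36
 ESX_KERNEL_RAM_GB = 2    # minimum reserved for the ESXi kernel

 vms = [
     {"name": "Rogger A", "vcpu": 4, "ram_gb": 4},
     {"name": "Agent PG A", "vcpu": 2, "ram_gb": 4},
     {"name": "Domain Controller A", "vcpu": 1, "ram_gb": 2},
 ]

 total_vcpu = sum(vm["vcpu"] for vm in vms)
 total_ram = sum(vm["ram_gb"] for vm in vms)
 ram_available = ESX_TOTAL_RAM_GB - ESX_KERNEL_RAM_GB   # 34 GB here

 assert total_vcpu <= ESX_PCPUS, "vCPU overcommit is not allowed"
 assert total_ram <= ram_available, "memory overcommit is not allowed"
 print("OK:", total_vcpu, "of", ESX_PCPUS, "vCPUs;",
       total_ram, "of", ram_available, "GB RAM")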


The following application tiers govern which Unified CCE components can be coresident on the same ESXi server. Unified Communications Applications cannot be collocated with Contact Center Tier 1 Applications, and Third Party Applications can only be collocated with Contact Center Tier 3 Applications.

Contact Center Tier 1 Applications: Router, Logger, Peripheral Gateway, ADS-HDS
Contact Center Tier 2 Applications: CVP Call + VXML Server, CVP Reporting Server, CUIC, CCMP
Contact Center Tier 3 Applications: ADS/AW (any non-HDS), Admin Client, Support Tools, Windows AD DC, CVP Ops/OAMP Server, CVP Media Server, SocialMiner
Unified Communications Applications: Communications Manager, Contact Center Express, IPIVR, CUP, Unity, Unity Connection, MediaSense
Note: EXCEPTIONS to the above VM coresidency rules:
  • On a C-Series server, the HDS cannot co-reside with a Router, Logger, or a PG.
  • PG (in CCE solutions up to 1000 CTIOS agents or 500 CAD agents) VMs can be co-resident with UCM/CUP/IPIVR VMs on the same ESXi host/server.


Note: If Cisco Support determines that a third-party application deployed coresident with a Contact Center Tier 3 application causes that application to fail in performance or function, the customer must address the issue by moving the applications to other servers as necessary to alleviate the failure.

For coresidency restrictions specific to individual Unified Communications applications that run on VMs, see the Unified Communications Virtualization Sizing Guidelines DocWiki page.

The following section shows sample CCE deployments on UCS that comply with the coresidency rules described above.

Sample CCE Deployments

Notes

  1. The ESXi Servers listed in these tables can be deployed on either a B-Series or C-Series hardware platform.
  2. Although the sample deployments in these tables reflect the C-Series restriction that the HDS cannot coreside with a Router, Logger, or a PG, this restriction is not present on a B-Series hardware platform.
  3. For deployments where Historical Data Servers (HDSs) are coresident, two RAID 5 groups (one for each HDS) are recommended.
  4. Any deployment with more than 2,000 agents requires at least two chassis.
  5. It may be preferable to place your domain controller on bare metal rather than in the UCS B-Series chassis itself. After a power failure, the vCenter login credentials depend on Active Directory, creating a potential chicken-and-egg problem if the domain controller is down as well.
  6. ACE (for CUIC) and CUSP (for CVP) components are not supported virtualized on UCS; deploy these components on separate hardware. Review the product SRND for more details.
  7. The 12,000-agent deployment is supported from Release 8.5(3) onward and requires Windows Server 2008 R2 Standard/Enterprise Edition and SQL Server 2005 Enterprise Edition. Refer to the BOM for more details.

ROGGER Example 1

ROGGER (up to 450 CTIOS Agents, or 297 CAD Agents) with 150 IPIVR or 150 CVP ports (N+N), optional 50 CUIC reporting users, as examples
Chassis X (B series)/ Rack of C series rack mount Servers Chassis Y (B series)/ Rack of C series rack mount Servers
ESXi Server Component #vCPU RAM (GB) ESXi Server Component #vCPU RAM (GB)
ESXi Server A-1 Rogger A 4 4 ESXi Server B-1 Rogger B 4 4
Agent PG A (generic PG w/ optional VRU), CTIOS/CAD, optional MR PG 1 2 Agent PG B (generic PG w/ optional VRU), CTIOS/CAD, optional MR PG 1 2
Domain Controller A 1 2 Domain Controller B 1 2
Support Tools 1 2 CVP Op Svr 2 2
ESXi Server A-2 AW-HDS-DDS 1 4 4 ESXi Server B-2 AW-HDS-DDS 2 4 4
CUIC 1 4 6 CUIC 2 4 6
ESXi Server A-3 UCM Subscriber 1 2 6 ESXi Server B-3 UCM Subscriber 2 2 6
UCM Publisher 2 6
IPIVR 1 or CUP Srv 1 2 4 IPIVR 2 or CUP Srv 2 2 4
ESXi Server A-4 CVP Call+VXML Srv 1 4 4 ESXi Server B-4 CVP Call+VXML Srv 2 4 4
CVP Rpt Srv 1 4 4 CVP Media Server 2 2
Legend
Not shaded Required
Shaded Optional

ROGGER Example 2

ROGGER (up to 2,000 CTIOS Agents, or up to 1,000 CAD Agents) with 600 IPIVR or 900 CVP ports (N+N), optional 200 CUIC reporting users, CCMP 1,500 users, as examples
Chassis X (B series)/ Rack of C series rack mount Servers Chassis Y (B series)/ Rack of C series rack mount Servers
ESXi Server Component #vCPU RAM (GB) ESXi Server Component #vCPU RAM (GB)
ESXi Server A-1 Rogger A 4 4 ESXi Server B-1 Rogger B 4 4
Agent PG A (generic PG w/ optional VRU, CTIOS/CAD, optional MR PG, SIP Dialer) 2 4 Agent PG B (generic PG w/ optional VRU, CTIOS/CAD, optional MR PG, SIP Dialer) 2 4
Domain Controller A 1 2 Domain Controller B 1 2
Support Tools 1 2
ESXi Server A-2 AW-HDS-DDS 1 4 4 ESXi Server B-2 AW-HDS-DDS 2 4 4
AW-HDS-DDS 3  4 4 AW-HDS-DDS 4  4 4
ESXi Server A-3 UCM Subscriber 1 2 6 ESXi Server B-3 UCM Subscriber 3 2 6
UCM Subscriber 2 2 6 UCM Subscriber 4 2 6
UCM Publisher 2 6 CVP Op Srv 2 2
IPIVR 1 or CUP Srv 1 2 4 IPIVR 2 or CUP Srv 2 2 4
ESXi Server A-4 CVP Call+VXML Srv 1 4 4 ESXi Server B-4 CVP Call+VXML Srv 2 4 4
CVP Rpt Srv 1 4 4 CVP Rpt Srv 2 4 4
ESXi Server A-5 CUIC 1 4 6 ESXi Server B-5 CCMP (all in one) 4 4
CVP Media Server 2 2 CUIC 2 4 6
Legend
Not shaded Required
Shaded Optional

ROGGER Example 3

ROGGER (up to 4,000 CTIOS Agents, or up to 2,000 CAD Agents) with 1,200 IPIVR or 1,800 CVP ports (N+N), optional 200 CUIC reporting users, CCMP 1,500 users, as examples
Chassis X (B series)/ Rack of C series rack mount Servers Chassis Y (B series)/ Rack of C series rack mount Servers
ESXi Server Component #vCPU RAM (GB) ESXi Server Component #vCPU RAM (GB)
ESXi Server A-1 Rogger A 4 4 ESXi Server B-1 Rogger B 4 4
Agent PG 1A (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4 Agent PG 1B (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4
ESXi Server A-2 VRU PG A 2 2 ESXi Server B-2 VRU PG B 2 2
Support Tools 1 2 CVP Op Srv 2 2
Domain Controller A  1 2 Domain Controller B  1 2
ESXi Server A-3 Agent PG 2A (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4 ESXi Server B-3 Agent PG 2B (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4
CVP Rpt Srv 1 4 4 CVP Rpt Srv 2 4 4
ESXi Server A-4 AW-HDS-DDS 1 4 4 ESXi Server B-4 AW-HDS-DDS 2 4 4
AW-HDS-DDS 3  4 4 AW-HDS-DDS 4  4 4
ESXi Server A-5 UCM Subscriber 1 2 6 ESXi Server B-5 UCM Subscriber 2 2 6
UCM Subscriber 3 2 6 UCM Subscriber 4 2 6
UCM Publisher 2 6


IPIVR 1 2 4 IPIVR 3 2 4
ESXi Server A-6 UCM Subscriber 5 2 6 ESXi Server B-6 UCM Subscriber 6 2 6
UCM Subscriber 7 2 6 UCM Subscriber 8 2 6
IPIVR 2  2 4 IPIVR 4  2 4
CUP Server 1  2 4 CUP Server 2  2 4
ESXi Server A-7 CVP Call+VXML Srv 1 4 4 ESXi Server B-7 CVP Call+VXML Srv 2 4 4
CVP Call+VXML Srv 3 4 4 CVP Call+VXML Srv 4 4 4
ESXi Server A-8 CVP Media Server 2 2 ESXi Server B-8 CUIC 2 4 6
CUIC 1 4 6 CCMP (all in one) 4 4

Legend
Not shaded Required
Shaded Optional

Router/Logger Example 1

Router/Logger (up to 8,000 CTIOS Agents, or up to 4,000 CAD Agents) with 3,600 CVP ports (N+N), optional 400 CUIC reporting users, CCMP 8,000 users, as examples
Chassis X (B series)/ Rack of C series rack mount Servers Chassis Y (B series)/ Rack of C series rack mount Servers
ESXi Server Component #vCPU RAM (GB) ESXi Server Component #vCPU RAM (GB)
ESXi Server A-1 Router A 2 4 ESXi Server B-1 Router B 2 4
Agent PG 1A (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4 Agent PG 1B (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4
Agent PG 3A (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4 Agent PG 3B (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4
Domain Controller A  1 2 Domain Controller B  1 2
Support Tools  1 2      
ESXi Server A-2 Logger A 4 4 ESXi Server B-2 Logger B 4 4
Agent PG 2A (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4 Agent PG 2B (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4
Agent PG 4A (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4 Agent PG 4B (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4
ESXi Server A-3 HDS-DDS 1 4 4 ESXi Server B-3 HDS-DDS 2 4 4
AW-HDS 1 4 4 AW-HDS 2 4 4
ESXi Server A-4 AW-HDS 3  4 4 ESXi Server B-4 AW-HDS 4  4 4
AW-HDS 5  4 4 AW-HDS 6  4 4
ESXi Server A-5 UCM 1 Subscriber 1 2 6 ESXi Server B-5 UCM 1 Subscriber 2 2 6
UCM 2 Subscriber 1 2 6 UCM 2 Subscriber 2 2 6
UCM 1 Subscriber 3 2 6 UCM 1 Subscriber 4 2 6
UCM 2 Subscriber 3 2 6 UCM 2 Subscriber 4 2 6
ESXi Server A-6 UCM 1 Subscriber 5 2 6 ESXi Server B-6 UCM 1 Subscriber 6 2 6
UCM 2 Subscriber 5 2 6 UCM 2 Subscriber 6 2 6
UCM 1 Subscriber 7 2 6 UCM 1 Subscriber 8 2 6
UCM 2 Subscriber 7 2 6 UCM 2 Subscriber 8 2 6
ESXi Server A-7 UCM Publisher 1 2 6 ESXi Server B-7 CVP Op Srv 2 2
CUP Server 1 2 4 CUP Server 2 2 4
ESXi Server A-8 UCM Publisher 2 2 6 ESXi Server B-8 CVP Rpt Srv 2 4 4
CVP Rpt Srv 1 4 4


ESXi Server A-9 CVP Call+VXML Srv 1 4 4 ESXi Server B-9 CVP Call+VXML Srv 2 4 4
CVP Call+VXML Srv 3 4 4 CVP Call+VXML Srv 4 4 4
ESXi Server A-10 CVP Call+VXML Srv 5 4 4 ESXi Server B-10 CVP Call+VXML Srv 6 4 4
CVP Call+VXML Srv 7 4 4 CVP Call+VXML Srv 8 4 4
ESXi Server A-11 CVP Media Server A 2 2 ESXi Server B-11 CVP Media Server B 2 2
VRU PG A 2 2 VRU PG B 2 2
ESXi Server A-12 CUIC 1 4 6 ESXi Server B-12 CUIC 2 4 6
CUIC 3 4 6 CUIC 4 4 6
ESXi Server A-13 CCMP DB 8 4 ESXi Server B-13 CCMP Web/App Svr 4 4
Legend
Not shaded Required
Shaded Optional


Router/Logger Example 2


Router/Logger (up to 12,000 CTIOS Agents) with 3,600 CVP ports (N+N), optional 400 CUIC reporting users, CCMP 8,000 users, as examples

Chassis X (B series)/ Rack of C series rack mount Servers Chassis Y (B series)/ Rack of C series rack mount Servers
ESXi Server Component #vCPU RAM (GB) ESXi Server Component #vCPU RAM (GB)
ESXi Server A-1 Logger A 4 8 ESXi Server B-1 Logger B 4 8
Router A 4 8 Router B 4 8
ESXi Server A-2 Agent PG 1A (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4 ESXi Server B-2 Agent PG 1B (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4
Agent PG 3A (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4 Agent PG 3B (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4
Agent PG 5A (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4 Agent PG 5B (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4
ESXi Server A-3 Agent PG 2A (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4 ESXi Server B-3 Agent PG 2B (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4
Agent PG 4A (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4 Agent PG 4B (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4
Agent PG 6A (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4 Agent PG 6B (CTIOS/CAD, optional MR PG, SIP Dialer) 2 4
ESXi Server A-4 HDS-DDS 1 4 8 ESXi Server B-4 HDS-DDS 2 4 8
AW-HDS 1 4 8 AW-HDS 2 4 8
AW-HDS 3 4 8 AW-HDS 4 4 8
AW-HDS 5 4 8 AW-HDS 6 4 8
ESXi Server A-5 UCM 1 Subscriber 1 2 6 ESXi Server B-5 UCM 1 Subscriber 2 2 6
UCM 2 Subscriber 1 2 6 UCM 2 Subscriber 2 2 6
UCM 3 Subscriber 1 2 6 UCM 3 Subscriber 2 2 6
ESXi Server A-6 UCM 1 Subscriber 3 2 6 ESXi Server B-6 UCM 1 Subscriber 4 2 6
UCM 2 Subscriber 3 2 6 UCM 2 Subscriber 4 2 6
UCM 3 Subscriber 3 2 6 UCM 3 Subscriber 4 2 6
ESXi Server A-7 UCM 1 Subscriber 5 2 6 ESXi Server B-7 UCM 1 Subscriber 6 2 6
UCM 2 Subscriber 5 2 6 UCM 2 Subscriber 6 2 6
UCM 3 Subscriber 5 2 6 UCM 3 Subscriber 6 2 6
ESXi Server A-8 UCM 1 Subscriber 7 2 6 ESXi Server B-8 UCM 1 Subscriber 8 2 6
UCM 2 Subscriber 7 2 6 UCM 2 Subscriber 8 2 6
UCM 3 Subscriber 7 2 6 UCM 3 Subscriber 8 2 6
ESXi Server A-9 Domain Controller 1 2 ESXi Server B-9 Domain Controller 1 2
UCM Publisher 1 2 6 CVP Op Srv 2 2
CUP Server 1  2 4 CUP Server 2  2 4
Administration Server - AW 1 2 Administration Server - AW 1 2
ESXi Server A-10 UCM Publisher 2 2 6 ESXi Server B-10 CVP Rpt Srv 2 4 4
CVP Rpt Srv 1 4 4 UCM Publisher 3 2 6
ESXi Server A-11 CVP Call+VXML Srv 1 4 4 ESXi Server B-11 CVP Call+VXML Srv 2 4 4
CVP Call+VXML Srv 3 4 4 CVP Call+VXML Srv 4 4 4
ESXi Server A-12 CVP Call+VXML Srv 5 4 4 ESXi Server B-12 CVP Call+VXML Srv 6 4 4
CVP Call+VXML Srv 7 4 4 CVP Call+VXML Srv 8 4 4
ESXi Server A-13 CVP Call+VXML Srv 9 4 4 ESXi Server B-13 CVP Call+VXML Srv 10 4 4
CVP Call+VXML Srv 11 4 4 CVP Call+VXML Srv 12 4 4
ESXi Server A-14 CVP Media Server A 2 2 ESXi Server B-14 CVP Media Server B 2 2
VRU PG A 2 2 VRU PG B 2 2
ESXi Server A-15 CUIC 1  4 6 ESXi Server B-15 CUIC 2  4 6
CUIC 3  4 6 CUIC 4 4 6
ESXi Server A-16 CUIC 5 4 6 ESXi Server B-16 CUIC 6 4 6
CUIC 7 4 6 CUIC 8 4 6
ESXi Server A-17 CCMP DB 8 4 ESXi Server B-17 CCMP Web/App Svr  4 4
Legend
Not shaded Required
Shaded Optional

Hybrid Deployment Options

Some Unified Contact Center deployments are supported in a "hybrid" fashion, whereby certain components must be deployed on bare-metal Media Convergence Servers (MCS) or generic servers, while other components are deployed in virtual machine guests on Unified Computing System (UCS) or MCS servers. The following subsections provide further details on these hybrid deployment options.

Cisco Unified Contact Center Hosted

  • NAM Rogger is deployed on a (bare-metal) quad CPU server as specified in the appropriate Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted.
  • Each customer instance central controller (CICM) connecting to the NAM may be deployed in its own virtual machine as a Rogger or separate Router/Logger pair on UCS hardware. Multiple CICM instances are not supported collocated in one VM. Existing published rules and capacities apply to CICM Rogger and Router/Logger VMs. (Note: CICMs are not supported on bare-metal UCS.)
  • As in Enterprise deployments, each Agent PG is deployed in its own virtual machine. Multi-instance Agent PGs are not supported in a single VM. Existing published rules and capacities apply to PGs in Hosted deployments.

Parent/Child Deployments

  • The parent ICM is deployed on (bare-metal) servers as specified in the appropriate Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted.
  • The Unified Contact Center Enterprise (or Express) child may be deployed virtualized according to existing published VM requirements.
  • The Unified Contact Center Enterprise Gateway PG and System PG are each deployed in their own virtual machine; agent capacity (and the resources allocated to the VM) are the same as for the Unified CCE Agent PG at 2,000-agent capacity. Use the same virtual machine OVA template to create the CCE Gateway or System PG VM.

Cisco Unified CCE-Specific Information for OVA Templates

See the following websites for more information:


Creating Virtual Machines by Deploying the OVA Templates

In the vSphere client, perform the following steps to deploy the virtual machines.

  1. Highlight the host or cluster to which you want to deploy the VM.
  2. Select File > Deploy OVF Template.
  3. Click the Deploy from File radio button and specify the name and location of the file you downloaded in the previous section, or click the Deploy from URL radio button and specify the complete URL in the field. Then click Next.
  4. Verify the details of the template, and click Next.
  5. Give the VM you are about to create a name, choose an inventory location on your host, and click Next.
  6. Choose the datastore on which you want the VM to reside; be sure there is sufficient free space to accommodate the new VM. Then click Next.
  7. Choose a virtual network for the VM, then click Next.
  8. Verify the deployment settings, then click Finish.
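
As an alternative to the interactive steps above, OVA deployment can be scripted with the VMware OVF Tool (ovftool). The sketch below assumes ovftool is installed and on the PATH; the VM name, datastore, network, OVA file name, and vi:// locator are all hypothetical placeholders:

 # Scripted OVA deployment sketch using VMware OVF Tool via subprocess.
 import subprocess

 subprocess.check_call([
     "ovftool",
     "--name=Rogger-A",                # VM name (step 5 above)
     "--datastore=datastore-hds-1",    # target datastore (step 6 above)
     "--network=VM Network",           # virtual network (step 7 above)
     "UCCE_Rogger.ova",                # downloaded OVA template
     "vi://administrator@vcenter.example.com/DC1/host/esxi-a1.example.com",
 ])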

Notes

  • VM CPU affinity is not supported. You may not set CPU affinity for Unified CCE application VMs on vSphere.
  • VM Resource Reservation: resource reservation is not supported for Unified CCE application VMs on vSphere. Leave the VM computing resources at the default reservation setting, which is no resource reservations.
  • You must not change the computing resource configuration of your VM at any time.
  • You must never go below the minimum VM computing resource requirements as defined in the OVA templates.
  • ESXi Server hyperthreading is enabled by default.


Preparing for Windows Installation

In the vSphere client, perform the following steps to prepare for operating system installation.

  1. Right-click the virtual machine you want to edit and select Edit Settings. A Virtual Machine Properties dialog appears.
  2. On the Hardware tab, select CD/DVD Drive 1. Under Device Type, select Datastore ISO File and enter the location of the operating system ISO.
  3. Click OK to save setting changes.
  4. Power up your VM and continue with operating system installation.
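
These steps can also be scripted against the vSphere API. The sketch below uses pyVmomi to attach a datastore ISO to the VM's CD/DVD drive, mirroring steps 1 through 3; the host, credentials, VM name, and ISO path are hypothetical placeholders, and depending on your environment you may need to supply an SSL context to SmartConnect:

 # Sketch: point a VM's CD/DVD drive at a datastore ISO with pyVmomi.
 from pyVim.connect import SmartConnect, Disconnect
 from pyVmomi import vim

 si = SmartConnect(host="vcenter.example.com",
                   user="administrator", pwd="secret")
 try:
     content = si.RetrieveContent()
     view = content.viewManager.CreateContainerView(
         content.rootFolder, [vim.VirtualMachine], True)
     vm = next(v for v in view.view if v.name == "Rogger-A")

     # Locate the existing CD/DVD device (step 2 above).
     cdrom = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualCdrom))

     # Back it with the OS install ISO and connect it at power-on.
     cdrom.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(
         fileName="[datastore1] iso/win2003_std.iso")
     cdrom.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
         startConnected=True, connected=True)

     change = vim.vm.device.VirtualDeviceSpec(
         operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
         device=cdrom)
     vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
 finally:
     Disconnect(si)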


Remote Control of the Virtual Machines

For administrative tasks, you can use either Windows Remote Desktop or the VMware Infrastructure Client for remote control. The contact center supervisor can access the ClientAW VM using Windows Remote Desktop.

Installing VMware Tools

VMware Tools must be installed on each of the VMs, and all of the VMware Tools default settings should be used. Refer to the VMware documentation for instructions on installing or upgrading VMware Tools on a VM with a Windows operating system.

Installing Unified CCE Components on Virtual Machines

Install the Unified CCE components after you create and configure the virtual machine. Installation of the Unified CCE components on a virtual machine is the same as the installation of the components on physical hardware.

Refer to the Unified CCE documentation for the steps to install Unified CCE components. You can install the supported virus scan software, the Cisco Security Agent (CSA), or any other software in the same way as on physical hardware.

Migrating Unified CCE Components to Virtual Machines

Migrate the Unified CCE components from physical hardware or another virtual machine after you create and configure the virtual machine. Migration of these Unified CCE software components to a VM is the same as the migration of the components to new physical hardware and follows existing policies. It requires a Tech Refresh as described in the Upgrade Guide for Cisco Unified ICM/Contact Center Enterprise & Hosted.

Configuring the ESXi Data Store Block Size for Administration and Data Server

This section applies to storing virtual machines on C-210 local storage. The C-210 server comes with default local storage configured as two RAID groups: disks 1-2 are RAID 1, and the remaining disks (3-10) are RAID 5.

Creating the virtual machine for the Unified CCE Administration and Data Server requires a large virtual disk. Before you deploy the OVAs for the following Unified CCE components, follow the steps below to configure the ESXi datastore block size to 2 MB so that it can handle the Administration and Data Server virtual disk size requirement:

  • AW-HDS
  • AW-HDS-DDS
  • HDS-DDS


To configure the ESXi datastore block size to 2 MB:

  1. After you install ESXi on the first disk array group (RAID 1, disks 1 and 2), boot ESXi and use the VMware vSphere Client to connect to the ESXi host.
  2. On the Configuration tab for the host, select Storage in the Hardware box. Select the second disk array group (the RAID 5 configuration); the Datastore Details pane shows that the block size is 1 MB by default.
  3. Right-click this datastore and delete it. You will add the datastore back in the following steps.
  4. Click Add Storage… and select the Disk/LUN.
  5. The datastore that you just deleted is now available to add; select it.
  6. In the configuration for this datastore you can now select the block size. Select 2 MB and finish adding the storage to the ESXi host. This storage is now available for deploying virtual machines that require a large disk size, such as the Administration and Data Servers.
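
The arithmetic behind these steps is the VMFS3 mapping from block size to maximum file size. The 1 MB and 2 MB figures come from this page; the 4 MB and 8 MB figures are the commonly documented VMFS3 limits, so verify them against your ESXi version:

 # Sketch: smallest VMFS3 block size that accommodates a given VMDK size.
 VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}  # block MB -> max GB

 def min_block_size_mb(vmdk_gb):
     """Return the smallest block size (MB) whose file-size cap fits the disk."""
     for block_mb in sorted(VMFS3_MAX_FILE_GB):
         if vmdk_gb <= VMFS3_MAX_FILE_GB[block_mb]:
             return block_mb
     raise ValueError("disk too large for VMFS3")

 print(min_block_size_mb(500))  # 500 GB HDS OVA disk -> 2 MB block size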

Timekeeping Best Practices for Windows

You should follow the best practices outlined in the VMware Knowledge Base article VMware KB: Timekeeping best practices for Windows.

  • ESXi hosts and domain controllers should synchronize the time from the same NTP source.
  • When Unified CCE virtual machines join the domain, they synchronize the time with the domain controller automatically using w32time.
  • Be sure that the Time synchronization between the virtual machine and the host operating system checkbox in the VMware Tools toolbox GUI of the Windows Server 2003 guest operating system remains deselected; this checkbox is deselected by default.
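
To spot-check time synchronization from inside a domain-joined Windows guest, you can query the built-in w32time command line; the sketch below simply wraps it (w32tm /monitor and /resync are standard on Windows Server 2003 and later):

 # Timekeeping spot-check sketch for a domain-joined Windows guest.
 import subprocess

 # Show the offsets reported by the domain controllers / time source.
 print(subprocess.run(["w32tm", "/monitor"],
                      capture_output=True, text=True).stdout)

 # Force an immediate resynchronization with the configured time source.
 subprocess.check_call(["w32tm", "/resync"])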

System Performance Monitoring Using ESXi Counters

  • Make sure that you follow VMware's ESXi best practices and your SAN vendor's best practices for optimal system performance.
  • VMware provides a set of system monitoring tools for the ESXi platform and the VMs. These tools are accessible through the VMware Infrastructure Client or through VirtualCenter.
  • You can use Windows Performance Monitor to monitor the performance of the VMs. Be aware that the CPU counters may not reflect physical CPU usage, since the Windows operating system has no direct access to the physical CPU.
  • You can use the Unified CCE Serviceability Tools and Unified CCE reports to monitor the operation and performance of the Unified CCE system.
  • The ESXi Server and the virtual machines must operate within the limits of the following ESXi performance counters.


You can use the following ESXi counters as performance indicators.

Category Object Measurement Units Description Performance Indication and Threshold
CPU
  • ESXi Server
  • VM


CPU Usage (Average) Percent CPU Usage Average in percentage for:
  • ESXi server
  • Virtual machine


Less than 60%.
CPU
  • ESXi Server Processor#
  • VM_vCPU#


CPU Usage 0 - 7 (Average) Percent CPU Usage Average for:
  • ESXi server for processors 0 to 7
  • Virtual machine vCPUs


Less than 60%
CPU VM CPU Ready mSec The time a virtual machine or other process waits in the queue in a ready-to-run state before it can be scheduled on a CPU. Less than 150 mSec. If it exceeds 150 mSec during a system failover, investigate why the machine is so busy.
Memory
  • ESXi Server
  • VM


Memory Usage (Average) Percent Memory Usage = Active/ Granted * 100 Less than 80%
Memory
  • ESXi Server
  • VM


Memory Active (Average) KB Memory that is actively used or referenced by the guest OS and its applications. When it exceeds the amount of memory on the host, the server starts swapping. Less than 80% of the Granted memory
Memory
  • ESXi Server
  • VM


Memory Balloon (Average) KB ESXi uses the balloon driver to recover memory from less memory-intensive VMs so that it can be used by VMs with larger active sets of memory. Because memory is not overcommitted, this should be 0 or very low. Note: ESXi performs memory ballooning before memory swapping.
Memory
  • ESXi Server
  • VM


Memory Swap used (Average) KB ESXi Server swap usage (the disk is used for RAM swap). Because memory is not overcommitted, this should be 0 or very low.
Disk
  • ESXi Server
  • VM


Disk Usage (Average) KBps Disk Usage = Disk Read rate + Disk Write rate Ensure that your SAN is configured to handle this amount of disk I/O.
Disk
  • ESXi Server vmhba ID
  • VM vmhba ID


Disk Usage Read rate  KBps Rate of reading data from the disk Ensure that your SAN is configured to handle this amount of disk I/O
Disk
  • ESXi Server vmhba ID
  • VM vmhba ID


Disk Usage Write rate KBps Rate of writing data to the disk Ensure that your SAN is configured to handle this amount of disk I/O
Disk
  • ESXi Server vmhba ID
  • VM vmhba ID


Disk Commands Issued Number Number of disk commands issued on this disk in the period. Ensure that your SAN is configured to handle this amount of disk I/O
Disk
  • ESXi Server vmhba ID
  • VM vmhba ID


Disk Command Aborts Number Number of disk commands aborted on this disk in the period. A disk command aborts when the disk array takes too long to respond to the command (command timeout).

This counter should be zero; a non-zero value indicates a storage performance issue.

Disk
  • ESXi Server vmhba ID
  • VM vmhba ID


Disk Command Latency mSec The average amount of time taken for a command, from the perspective of the guest OS. Disk Command Latency = Kernel Command Latency + Physical Device Command Latency. Latencies of 15 ms or greater indicate a possibly over-utilized, misbehaving, or misconfigured disk array.
Disk
  • ESXi Server vmhba ID
  • VM vmhba ID


Kernel Disk Command Latency mSec The average time spent in the ESXi Server VMkernel per command. Kernel Command Latency should be very small in comparison to the Physical Device Command Latency, and close to zero. It can be high, or even higher than the Physical Device Command Latency, when there is a lot of queuing in the ESXi kernel.
Network
  • ESXi Server
  • VM


Network Usage (Average) KBps Network Usage = Data receive rate + Data transmit rate Less than 30% of the available network bandwidth. For example, less than 300 Mbps on a 1 Gbps network.
Network
  • ESXi Server vmnic ID
  • VM vmnic ID


Network Data Receive Rate KBps The average rate at which data is received on this Ethernet port. Less than 30% of the available network bandwidth. For example, less than 300 Mbps on a 1 Gbps network.
Network
  • ESXi Server vmnic ID
  • VM vmnic ID


Network Data Transmit Rate KBps The average rate at which data is transmitted on this Ethernet port. Less than 30% of the available network bandwidth. For example, less than 300 Mbps on a 1 Gbps network.
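
The following sketch collects the table's thresholds into one check; the sample values are hypothetical placeholders for counters gathered through the vSphere client, VirtualCenter, or esxtop:

 # Sketch: evaluate sampled ESXi counters against the thresholds above.
 def check_esxi_counters(cpu_usage_pct, cpu_ready_ms, mem_usage_pct,
                         mem_balloon_kb, mem_swap_kb, net_usage_pct):
     problems = []
     if cpu_usage_pct >= 60:
         problems.append("CPU usage >= 60%")
     if cpu_ready_ms >= 150:
         problems.append("CPU Ready >= 150 ms")
     if mem_usage_pct >= 80:
         problems.append("Memory usage >= 80%")
     if mem_balloon_kb > 0 or mem_swap_kb > 0:
         problems.append("ballooning/swapping: memory may be overcommitted")
     if net_usage_pct >= 30:
         problems.append("network usage >= 30% of available bandwidth")
     return problems

 # Hypothetical healthy sample.
 print(check_esxi_counters(45, 20, 70, 0, 0, 12))  # -> []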

System Performance Monitoring Using Windows Perfmon Counters

You must comply with the best practices described in the Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND), section System Performance Monitoring, and in Chapter 8, Performance Counters, of the Serviceability Best Practices Guide for Unified ICM/Contact Center Enterprise.



Back to: Unified Communications in a Virtualized Environment
