Virtualization for Unified CCE

<br>

== Updates to this Page  ==

{| class="wikitable"
|-
! Date
! Update
|-
| February 1, 2013
| Added details about UCCE 8.5(4) and ICME on UCS on Windows Server 2008.
|-
| January 30, 2013
| Clarified ESXi software requirements.
|-
| January 29, 2013
| Updated information about supported deployments and components, and ESXi software requirements.
|-
| January 29, 2013
| Added Finesse, and made the CVP Media Server co-loaded on the CVP 9.x VM by default, in the VM Sample Deployments.
|-
| August 15, 2012
| Deleted the reference to ESXi 4.0 in "configure the ESXi data store block size to 2 MB," as it applies to 5.0 also.
|-
| August 13, 2012
| More edits for the 9.x release.
|-
| June 19, 2012
| Overall edits for the 9.x release.
|-
| March 27, 2012
| Added Cisco SmartPlay Solution Packs for UC support.
|-
| February 14, 2012
| Updated the UCS Network Configuration section for UCS C-Series network design.
|-
| December 09, 2011
| Added PG limited coresidency with UCM/CUP/IPIVR capability.
|-
| November 15, 2011
| Added sample deployment for 12,000 agents.
|-
| June 28, 2011
|
|}
== Information for Partners about Unified CCE on UCS Deployment Certification and Ordering  ==
'''It is important that partners who are planning to sell UCS products on Unified Contact Center Enterprise read the DocWiki page''' [http://docwiki.cisco.com/wiki/UCCE_on_UCS_Deployment_Certification_Requirements_and_Ordering_Information UCCE on UCS Deployment Certification Requirements and Ordering Information].
This page contains essential information for partners about the following:

:*Important Notes on Cisco UCS Service and Support
<br>

== Unified Contact Center Enterprise Support for Virtualization on the VMware vSphere Platform  ==

Starting with Release 8.0(2), virtualization of the following deployments and Unified CCE components on Cisco Unified Computing Systems (UCS) B Series and C Series hardware is supported:
:*Router
:*Administration Client
:*Outbound Option with SIP Dialer (collocate the SIP Dialer and MR PG with the Agent PG in the same VM guest. A Generic PG can also be collocated with the Agent PG in the same VM guest. The published agent capacity formula with Outbound Option applies.)
:*Support Tools (Note: no longer supported with CCE 8.5(x) and later)
:*Rogger (a Router and a Logger in the same VM)
:*The Unified Communications Manager (UCM) Clustering Over the WAN deployment model with Unified CCE is supported; see the section [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#Support_for_UCM_Clustering_Over_the_WAN_with_Unified_CCE_on_UCS_Hardware Support for UCM Clustering Over the WAN with Unified CCE on UCS Hardware] for important information.
:*Unified IP-IVR is supported with Unified CCE on the UCS B-Series solution, and on UCS C-Series with the model [http://www.cisco.com/en/US/prod/collateral/voicesw/ps6790/ps5748/ps378/solution_overview_c22-597556.html UCS-C210-VCD2] only. Refer to the IP-IVR product-specific pages for details.
:*CVP is supported with CCE on the UCS solution. Refer to the [[Virtualization for Unified CVP]] wiki page for details.
:*Contact Center Management Portal (CCMP). See the [http://docwiki.cisco.com/wiki/Virtualization_for_CCMP_with_Unified_CCE_on_UCS_Hardware Virtualization for CCMP with Unified CCE on UCS Hardware] wiki page for details.
:*Cisco Unified Intelligence Center (Unified IC). Refer to the [http://docwiki.cisco.com/wiki/Cisco_Unified_Intelligence_Center Cisco Unified Intelligence Center] wiki page for details.
:*Cisco E-mail Interaction Manager (EIM)/Web Interaction Manager (WIM). See the Virtualization for EIM-WIM wiki page for details.
Starting with Release 9.0(3), virtualization of Unified ICME with more than 12,000 agents, and/or the use of a NIC or SIGTRAN (for up to 150 PGs), is also supported on Cisco Unified Computing Systems (UCS) B Series and C Series hardware.

On UCCE Release 8.5(4), an Engineering Special (ES) is required to support ICME on UCS on Windows Server 2008. Contact TAC to obtain the required ES before proceeding with deployment. Additionally, if the deployment is greater than 8,000 agents for ICME on UCS, a manual change to the ICM Router, Logger, and HDS/DDS components is necessary: the virtual machine specifications (vCPU, RAM, and CPU and memory reservations) must be changed to match the UCCE 9.0 OVAs.
<br>The following deployments and Unified CCE components have not been qualified and '''are not supported''' in virtualization:

*Progger (a Router, a Logger, and a Peripheral Gateway in one VM); this all-in-one deployment configuration is not scalable in a virtualization environment. Instead, use the Rogger or Router/Logger VM deployment configuration.
*Unified CCH (multi-instance)
*Unified ICMH (multi-instance)
*Outbound Option with SCCP Dialer
*WebView Server
*Expert Advisor
*Remote Silent Monitoring (RSM)
*SPAN-based Silent Monitoring on the UCS B-Series chassis. For support on UCS C-Series, consult the VMware Community/Knowledge Base regarding the possible use of promiscuous mode to receive the monitored traffic from the SPAN port.
*Cisco Unified CRM Connector
*Agent PG with more than 1 Agent PIM (a 2nd PIM for the UCM CTI RP is allowed as per the SRND)
*Multi-Instance CTIOS
*IPsec. UCS does not support IPsec off-board processing; therefore IPsec is not supported in virtualization.

{{note|The hybrid (virtual and non-virtual servers) deployment model is supported. However, for paired components having side A and side B (e.g., Rogger A and Rogger B), the server hardware must be identical on both sides (e.g., you cannot mix bare-metal MCS on side A of the paired component and virtual UCS on side B). You may, however, deploy a mixture of UCS B and C Series for the separate duplex pair sides, so long as the UCS server and processor generations align (UCS-B200M2-VCS1 is of equal generation to UCS-C210M2-VCD2, for example).

Hybrid support means that non-paired components that are not yet virtualized can continue to run on MCS in UCCE on UCS deployments. See the [http://docwiki.cisco.com/wiki/Virtualization_for_Unified_CCE#Hybrid_Deployment_Options Hybrid Deployment Options] section for more information.}}

<br>
The following VMware features are not supported with Unified CCE:

:*VMware Physical to Virtual migration
:*VMware Snapshots
:*VMware Consolidated Backup
:*VMware High Availability (HA)
:*VMware vCenter Update Manager
:*VMware vCenter Converter

<br>
== Hardware Requirements for Unified CCE Virtualized Systems  ==

Supported hardware platforms for Cisco Unified Contact Center Enterprise solutions are listed on the [http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications Unified Communications Virtualization Supported Applications] page. Hardware specifications for those supported platforms are detailed on the [http://docwiki.cisco.com/wiki/UC_Virtualization_Supported_Hardware UC Virtualization Supported Hardware] page.

=== Cisco SmartPlay Solution Packs for UC/Hardware Bundles Support  ===

[http://www.cisco.com/web/partners/incentives_and_promotions/cisco_smartplay_promo.html Cisco SmartPlay Solution Packs for UC], the pre-configured bundles (value UC bundles) based on the '''UCS B200M2 or C210M2''' that are offered as an ordering alternative to the UC on UCS TRCs above, are '''supported with caveats''':

:*For '''B200M2 and C210M2 Solution Packs (''Value UC Bundles'')''' that have a better specification than the UC on UCS B200M2/C210M2 TRC models (e.g., 6 cores per same CPU family), the UC on UCS spec-based hardware support policy must be followed. These bundles are supported by UCCE/CVP as an exception, provided the UCCE VM co-residency rules are met and no more than two CVP Call Server VMs run on the same server/host.
:*''Other spec-based servers'' under the UC on UCS Spec-based Hardware Policy that have a specification equal to or better than the UC on UCS B200M2/C210M2 TRCs '''may''' be used for UCCE/CVP once validated in the Customer Collaboration DMS/A2Q (Design Mentoring Session/Assessment To Quality) process. This also means a particular desired spec-based server model may '''not be approved''' for use after the server design review in the DMS/A2Q session.

<br>Unified CCE supports the MCS-7845-I3-CCE2 with virtualization. For a list of supported virtualized components on MCS servers, see the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted]. Unified CCE does not support the UCS C200.
== VMware and Application Software Requirements  ==

For supported VMware ESXi versions for Unified Contact Center Enterprise releases, see the ''VMware vSphere ESXi Version Support for Contact Center Applications'' table at [http://docwiki.cisco.com/wiki/Unified_Communications_VMware_Requirements Unified Communications VMware Requirements].

The following additional software requirements apply specifically to Unified Contact Center Enterprise:

:*If you are upgrading ESXi software, see the section [http://docwiki.cisco.com/wiki/Ongoing_Virtualization_Operations_and_Maintenance Upgrade ESXi].
:*The Windows, SQL, and other third-party software requirements for the Unified CCE applications on the ESXi/UCS platform are the same as on a physical server. For more information, see the appropriate release of the [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise &amp; Hosted].

=== ESXi 4.1 and 5.0 Software Requirements  ===

When Cisco Unified CCE is running on ESXi 4.1 or 5.0, you must install or upgrade VMware Tools to match the running ESXi version on each of the VMs, and use all of the VMware Tools default settings. For more information, see the section [http://docwiki.cisco.com/wiki/VMware_Tools VMware Tools]. You must do this every time the ESXi version is upgraded.

Starting with CCE/CVP Release 9.x in a Windows Server 2008 environment, disabling LRO in ESXi 4.1/5.0 (and later) is no longer a requirement.

<br>
= Unified CCE Component Capacities and VM Configuration Requirements  =

For supported Unified CCE component capacities and VM computing resource requirements, see the [http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_%28including_OVA/OVF_Templates%29#Unified_Contact_Center_Enterprise List of Unified CCE OVA Templates].

{{note| You must use the OVA VM templates to create the Unified CCE component VMs.}}

<br>For instructions on how to obtain the OVA templates, see [http://docwiki.cisco.com/wiki/Downloading_OVA_Templates_for_UC_Applications Downloading OVA Templates for UC Applications].
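The OVA template fixes each VM's vCPU count, memory, and disk layout, so deployments are usually driven straight from the OVA file. As an illustration only (not taken from this wiki), the sketch below scripts an OVA deployment with VMware's ovftool command-line utility from Python; the OVA file name, VM name, datastore, and vCenter inventory path are hypothetical placeholders.

<pre>
import subprocess

# All names below are hypothetical -- substitute your own environment details.
OVA = "UCCE_Rogger.ova"  # OVA template obtained per "Downloading OVA Templates" above
TARGET = "vi://administrator@vcenter.example.com/DC1/host/esxi-a1.example.com"

# Basic ovftool usage is: ovftool <source> <target>.
# --name sets the resulting VM name; --datastore picks the target datastore.
subprocess.run(
    ["ovftool", "--name=RoggerA", "--datastore=datastore-a1", OVA, TARGET],
    check=True,  # raise CalledProcessError if ovftool exits non-zero
)
</pre>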
== Unified CCE Scalability Impacts  ==

The [http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_%28including_OVA/OVF_Templates%29#Unified_Contact_Center_Enterprise capacity sizing information] is based on the operating conditions published in the ''Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND)'', Release 8.x/9.x, Chapter 10, ''Operating Conditions'', and in the ''Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted'', Release 8.x/9.x, Section 5. Both documents are available at [http://www.cisco.com/en/US/products/sw/custcosw/ps1844/products_implementation_design_guides_list.html Cisco.com].
The following features reduce the scalability of certain components below the agent count of the respective OVA [http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_%28including_OVA/OVF_Templates%29#Unified_Contact_Center_Enterprise capacity sizing information].

*Outbound Option – Refer to the SRND, Chapter 10, ''Sizing Information for Unified CCE Components and Servers'' table for sizing guidance with Outbound Option.
*Agent Greeting – Refer to the SRND, Chapter 10, ''Sizing Information for Unified CCE Components and Servers'' table for sizing guidance with the Agent Greeting feature enabled.
*Whisper Announcement – (Unified CCE 9.x) Refer to the SRND, Chapter 10, ''Sizing Information for Unified CCE Components and Servers'' table for sizing guidance with the Whisper Announcement feature enabled.
*Precision Queues – (Unified CCE 9.x) Refer to the SRND, Chapter 10, ''Sizing Information for Unified CCE Components and Servers'' table for sizing guidance with precision queues.
*Extended Call Context (ECC) usage greater than the level noted in the Operating Conditions will have a performance and scalability impact on critical components of the Unified CCE solution. As noted in the SRND, the capacity impact varies with the ECC configuration; therefore, guidance must be provided on a case-by-case basis.

<br><br>
= UCS Network Configuration  =

:*'''IMPORTANT:''' For instructions on performing the network configuration needed to deploy Cisco Unified Contact Center Enterprise (UCCE) on UCS servers, see [http://docwiki.cisco.com/wiki/UCS_Network_Configuration_for_UCCE UCS Network Configuration for UCCE].
:*'''QoS must be enabled''' for both the private network connections (between Side A and Side B) and the public network connections (between the Router and the PG) in a Unified CCE setup. Refer to Chapter 12 of the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/products_implementation_design_guides_list.html Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND)] for more details.

== Support for UCM Clustering Over the WAN with Unified CCE on UCS Hardware  ==
You can deploy the Unified Communications Manager (UCM) Clustering Over the WAN deployment model with Unified CCE on UCS hardware.

When you implement this deployment model, be sure to follow the best practices outlined in the section "IPT: Clustering Over the WAN" in the [http://www.cisco.com/en/US/products/sw/custcosw/ps1844/products_implementation_design_guides_list.html Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND)].

In addition, note the following expectations for UCS hardware points of failure:
=== B-Series Considerations  ===

Cisco recommends use of the M81KR Virtual Interface Card (VIC) for Unified CCE deployments, though the M71KR(-E/Q) and M72KR(-E/Q) may also be used, as per reference design 2 detailed on the dedicated networking page linked below. The M51KR, M61KR, and 82598KR are not supported for Contact Center use in UCS B-Series blades.

New B-Series deployments are recommended to use Nexus 5000/7000 Series data center switches with vPC PortChannels. This technology has been shown to provide considerable advantages to Contact Center applications in fault recovery scenarios.

See the configuration guidelines in [http://docwiki.cisco.com/wiki/UCS_Network_Configuration_for_UCCE#UCCE_on_UCS_B-Series_Network_Configuration UCCE on UCS B-Series Network Configuration].
=== C-Series Considerations  ===

If deploying Clustering Over the WAN with C-Series hardware, '''do not''' trunk the public and private networks. You '''must''' use separate physical interfaces off of the C-Series servers to create the public and private connections. See the configuration guidelines in [http://docwiki.cisco.com/wiki/UCS_Network_Configuration_for_UCCE#UCCE_on_UCS_C-Series_Network_Configuration UCCE on UCS C-Series Network Configuration].

<br>
= Notes for Deploying Unified CCE Applications on UCS B Series Hardware with SAN  =

In Storage Area Network (SAN) architecture, storage consists of a series of arrays of Redundant Array of Independent Disks (RAID) groups. A Logical Unit Number (LUN) that represents a device identifier can be created on a RAID array. A LUN can occupy all or part of a single RAID array, or span multiple RAID arrays.

In a virtualized environment, datastores are created on LUNs, and Virtual Machines (VMs) are installed on the SAN datastores.

Keep the following considerations in mind when deploying UCCE applications on UCS B Series hardware with SAN (a sizing sanity-check sketch follows the list):

*This deployment must comply with the conditions listed in Section 3.1.6 of the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware &amp; System Software Specification (Bill of Materials) for Cisco Unified Contact Center Enterprise]. In particular, SAN disk arrays must be configured as RAID 5 or RAID 10.
**Note: RAID 6/ADG is also supported, as an extension of RAID 5.
*The Historical Data Server (HDS) requires a 2 MB datastore block size to accommodate the 500 GB OVA disk size, which exceeds the 256 GB maximum file size supported by the default 1 MB block size (as of ESXi 4.0U1; this may change in later versions). The HDS block size is configured in vSphere at datastore creation.
*To help keep your system running most efficiently, schedule automatic database purging to run when your system is least busy.
*The SAN design and configuration must meet the following VMware ESXi disk performance guidelines:
**Disk Command Latency – should be 15 ms or less. Latencies of 15 ms or greater indicate a possibly over-utilized, misbehaving, or misconfigured disk array.
**Kernel Disk Command Latency – should be very small in comparison to the Physical Device Command Latency, and close to zero. A high Kernel Command Latency indicates a lot of queuing in the ESXi kernel.
*The SAN design and configuration must meet the following Windows performance counters on UCCE VMs:
**AverageDiskQueueLength must remain less than (1.5 ∗ the total number of disks in the array).
**%Disktime must remain less than 60%.
*Any given SAN array must be designed to have an IOPS capacity exceeding the sum of the IOPS required for all resident UC applications. Unified CCE applications should be designed for the 95th percentile IOPS values published in this wiki. For other UC applications, follow their respective IOPS requirements and guidelines.
*vSphere alarms when free space falls below 20% on any datastore. The recommendation is to provision at least 20% free space overhead; 10% overhead is '''required'''.
*Deploying 4 to 8 VMs per LUN/datastore is recommended, as long as IOPS and space requirements are met; the supported range is 1 to 10.

<br>
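Because the block-size, disk-counter, and IOPS rules above are simple arithmetic, they can be sanity-checked before storage is ordered. The following Python sketch is illustrative only (it is not from the original guidelines); all input numbers are assumptions to be replaced with your own design values.

<pre>
def min_block_size_mb(largest_disk_gb):
    """VMFS3 maximum file size grows with block size:
    1 MB -> 256 GB, 2 MB -> 512 GB, 4 MB -> 1 TB, 8 MB -> 2 TB."""
    for block_mb, max_file_gb in [(1, 256), (2, 512), (4, 1024), (8, 2048)]:
        if largest_disk_gb <= max_file_gb:
            return block_mb
    raise ValueError("disk too large for a single VMFS3 file")

def disk_queue_ok(avg_queue_len, disks_in_array):
    """AverageDiskQueueLength must stay below 1.5 * (disks in the array)."""
    return avg_queue_len < 1.5 * disks_in_array

def iops_headroom(array_iops_capacity, vm_95th_percentile_iops):
    """The array's IOPS capacity must exceed the sum over all resident VMs."""
    return array_iops_capacity - sum(vm_95th_percentile_iops)

print(min_block_size_mb(500))                 # 500 GB HDS disk -> 2 (MB block size)
print(disk_queue_ok(10.0, 8))                 # 8-disk RAID 5 array -> True
print(iops_headroom(4000, [1200, 800, 600]))  # 1400 IOPS spare -> OK if positive
</pre>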
[[Image:RoggerSideB.jpg]]

<br>
= Steps for Installing/Migrating Unified CCE Components on Virtual Machines  =

Follow the steps and references below to install the Unified CCE components on virtual machines. You can use these instructions to install or upgrade systems running Unified CCE 8.0(2) and later. You can also use these instructions to migrate virtualized systems from Unified CCE 7.5(x) to Unified CCE 8.0(2) or later, including the Avaya PG and other selected TDM PGs that were supported on Unified CCE 7.5(x). Not all TDM PGs supported in Unified CCE 7.5(x) are supported in Unified CCE 8.0(x)/9.0(x). For more information, see the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted].

#Acquire the supported servers for Unified CCE 8.0(2) or a later release.
#*Cisco UCS servers are specified in the [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#Hardware_Requirements_for_Unified_CCE_Virtualized_Systems Hardware Requirements for Unified CCE Virtualized Systems] section.
#*MCS-7845-I3-CCE2 is specified in the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted].
#*If there are PG VMs running on the older MCS-7845-H2 or MCS-7845-I2 servers, replace those servers with supported servers.
#Install, set up, and configure the servers.
#Configure the network. See reference at [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#UCS_Network_Configuration UCS Network Configuration]. {{note |Configuring the network for the MCS servers is the same as configuring the network for the UCS C-Series servers.}}
#If VMware VirtualCenter is used for virtualization management, install or update to VMware vCenter Server 4.0 or later (5.0 for ESXi 5.0).
#Install and boot VMware ESXi. See the [http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/b/os/vmware/install/bseries-vmware-install.html Cisco UCS B-Series Blade Servers VMware Installation Guide] or the [http://www.cisco.com/en/US/docs/unified_computing/ucs/c/sw/os/vmware/install/vmware_install_c.html Cisco UCS C-Series Servers VMware Installation Guide]. On C-Series servers or MCS servers, you must configure the ESXi datastore block size for the Administration &amp; Data Server. See [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#Configuring_the_ESXi_Data_Store_Block_Size_for_Administration_and_Data_Server Configuring the ESXi Data Store Block Size for Administration and Data Server] for instructions.
#Create the Unified CCE virtual machines from the OVA templates. See reference at [http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#Creating_Virtual_Machines_from_OVA_VM_Templates Creating Virtual Machines from OVA VM Templates]. This is a requirement for all components running Unified CCE 8.0(2) and later; in Unified CCE 7.5(x), it is not a requirement.
#Install VMware Tools on the virtual machines, using the same version as the ESXi software.
#Install the Windows OS and SQL Server (for Logger and HDS components) on the created virtual machines. {{note| Microsoft Windows Server 2008 R2 ''Standard Edition'' and Microsoft SQL Server 2008 R2 ''Standard Edition'' should be used for virtual machine guests. See related information in the links below. If you have a deployment prior to the Unified CCE 9.0 release that uses Windows Server 2003 and SQL Server 2005, plan to continue using them on the older Unified CCE releases.}}
#Install or migrate the Unified CCE software components on the configured virtual machines, using Fresh Install or Tech Refresh Upgrade, as described in [http://docwiki.cisco.com/wiki/Virtualization_for_Unified_CCE#Installing_Unified_CCE_Components_on_Virtual_Machines Installing Unified CCE components on virtual machines] and [http://docwiki.cisco.com/wiki/Virtualization_for_Unified_CCE#Migrating_Unified_CCE_Components_to_Virtual_Machines Migrating Unified CCE components].
<br>

= Unified CCE Component '''VM Co-residency''' and '''Sample Deployments'''  =

You can have one or more Unified CCE VMs co-resident on the same ESXi server; however, you must follow the rules described below. A quick resource-allocation check is sketched after this list.

:*You can have any number of Unified CCE virtual machines, in any combination of co-residency, on an ESXi server as long as the sum of all the virtual machines' CPU and memory resource allocations does not overcommit the available ESXi server computing resources.
:*You must not have CPU overcommitment on an ESXi server that is running Unified CCE realtime application components. The total number of vCPUs among all the virtual machines on an ESXi host must not be greater than the total number of CPU cores available on the ESXi server (not counting Hyper-Threading cores). UCS servers commonly have two physical CPU sockets with 4-10 cores each.
:*You must not have memory overcommitment on an ESXi host running UC realtime applications. You must allocate a minimum of 2 GB of memory for the ESXi kernel. For example, a B200M2 server with 48 GB of memory would allow up to 46 GB for virtual machine allocation; the total memory allocated for all the virtual machines on that ESXi server must not be greater than 46 GB. Note that the ESXi kernel memory allocation can vary with the hardware server platform type, so take care not to over-allocate.
:*VM co-residency with Unified Communications '''and''' third party applications (for example, WFM) is '''not''' supported unless it is specifically indicated in the following subsection.

<br>
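As an illustration only (not part of the original wiki), the sketch below applies the two overcommit rules to a single hypothetical host; the core count, memory size, and VM allocations are assumptions based on the OVA-style values used in the sample deployments below.

<pre>
# Check the CPU and memory overcommit rules for one ESXi host.
ESXI_KERNEL_GB = 2      # minimum memory reserved for the ESXi kernel
HOST_CORES = 8          # physical cores; Hyper-Threading cores are not counted
HOST_MEMORY_GB = 48     # e.g., a B200M2 blade with 48 GB

vms = [                 # (name, vCPUs, RAM in GB) -- hypothetical example VMs
    ("Rogger A", 4, 4),
    ("AW-HDS-DDS 1", 4, 4),
]

total_vcpu = sum(vcpu for _, vcpu, _ in vms)
total_ram_gb = sum(ram for _, _, ram in vms)

assert total_vcpu <= HOST_CORES, "CPU overcommit: move a VM to another host"
assert total_ram_gb <= HOST_MEMORY_GB - ESXI_KERNEL_GB, "memory overcommit"
print("OK:", total_vcpu, "vCPUs and", total_ram_gb, "GB allocated")
</pre>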
The following tables show the supported Unified CCE component co-residencies. A '''diamond''' indicates that co-residency is allowed; an '''asterisk''' with a number denotes limited or conditional support (see the Exceptions and Notes below the table for guidance on those co-residencies).

{| class="wikitable FCK__ShowTableBorders"
|-
! Unified CCE Component Co-residency
! Contact Center Tier 1 Applications
! Contact Center Tier 2 Applications
! Contact Center Tier 3 Applications
! Unified Communications Applications
! Third Party Applications
|-
| '''Contact Center Tier 1 Applications: Logger, Rogger, HDS (any)'''
| align="center" | ♦
| align="center" | ♦
| align="center" | ♦
| align="center" | *1
|
|-
| '''Contact Center Tier 2 Applications: Router, Peripheral Gateway (PG), CVP Call + VXML Server, CVP Reporting Server, CUIC, CCMP, Finesse'''
| align="center" | ♦
| align="center" | ♦
| align="center" | ♦
| align="center" | ♦
| align="center" |
|-
| '''Contact Center Tier 3 Applications: ADS/AW (any non-HDS), Admin Client, Windows AD DC, CVP Ops/OAMP Server, CVP Media Server, SocialMiner'''
| align="center" | ♦
| align="center" | ♦
| align="center" | ♦
| align="center" | ♦
| align="center" | ♦
|-
| '''Unified Communications Applications: Communications Manager, Contact Center Express, IPIVR, CUP, Unity, Unity Connection, MediaSense, and other UC apps per the UC on UCS supported apps page'''
| align="center" | *1
| align="center" | ♦
| align="center" | ♦
| align="center" | ♦
| align="center" | *2
|}

{{note| *1: Unified CCE 9.x CC Tier 1 components can be co-resident with UC Applications.}}

{{note| *2: For co-residency restrictions specific to individual Unified Communications applications, please see the [http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines Application Co-residency and Virtual/Physical Sizing] page.}}

{{note|'''EXCEPTION''' to the above VM co-residency table for UCS C series:}}

:*An HDS (all types) '''cannot''' co-reside with a Logger, Rogger, CVP Reporting Server, or another HDS unless those database applications are deployed on separate DAS (Direct Attached Storage) RAID arrays, so that no more than two database VMs are co-resident on UCS TRCs that come with two disk arrays. The standard RAID array for virtualization is 8 disk drives in RAID 5 or 10; two arrays therefore require 16 drives to allow for co-residency. For example, the UCS-C210M2-VCD2 does not allow a Rogger and an HDS on the same server, as it has only a single 8-disk RAID 5 array. A UCS-C260M2-VCD2 (or C240M3 TRC) has 16 drives in two 8-disk RAID 5 arrays, which allows a Rogger and an HDS to be deployed on that single C-Series server, so long as each application is installed on a separate array.

<br>The following section depicts sample CCE UCS deployments compliant with the co-residency rules described above.

<br>
 +
<br>
-
== Sample CCE Deployments  ==
+
== Sample Unified CCE Deployments  ==
'''Notes'''  
'''Notes'''  
Line 311: Line 377:
#Any deployment &gt; 2k agents requires at least 2 chassis  
#Any deployment &gt; 2k agents requires at least 2 chassis  
#It may be preferable to place your domain controller on bare metal rather than in the UCS B-series chassis itself. When a power failure occurs, the vCenter login credentials are dependent on Active Directory, leading to a potential chicken-and-egg problem if the domain controller is down as well.  
#It may be preferable to place your domain controller on bare metal rather than in the UCS B-series chassis itself. When a power failure occurs, the vCenter login credentials are dependent on Active Directory, leading to a potential chicken-and-egg problem if the domain controller is down as well.  
-
#ACE (for CUIC) and CUSP (for CVP) components are not supported virtualized on UCS; these components are deployed on separate hardware. Please review the product SRND for more details.
+
#ACE (for CUIC) and CUSP (for CVP) components are not supported virtualized on UCS; these components are deployed on separate hardware. Please review the product SRND for more details.
 +
#'''12,000''' agent is supported from 8.5(3) version onwards and '''requires''' Windows 2008 R2&nbsp;'''Standard/Enterprise Edition'''&nbsp;and SQL Server 2005 '''Enterprise Edition'''. Refer to BOM for more details.
 +
#For large multi-cores UCS models (more than 8 cores per ESXi host), you can still use the sample deployments below (which is based on 8 cores per ESXi host)&nbsp;and then collapse them to the actual available cores. For example,&nbsp;VMs compliant to co-resident rules on 2 x C210M2-VCD2 TRC can be collapsed to a single 1 x C260M2 TRC or 1 x C240M3 TRC host. Extra cores on large multi-cores UCS&nbsp;models may not all be utilized by CCE on UCS solution due to storage constraints. You need to observe the no more than two database VMs coresident stated earlier on large multicores UCS TRC that come with two disk arrays. This is subject to be verified in the DMS/A2Q process.
=== ROGGER Example 1  ===
=== ROGGER Example 1  ===
{| class="wikitable" border="1"
{| class="wikitable" border="1"
-
|+ ROGGER (up to 450 CTIOS Agents, or 297 CAD Agents) with 150 IPIVR or 150 CVP ports (N+N), optional 50 CUIC reporting users, as examples  
+
|+ ROGGER 9.x on 8-cores UCS TRCs (up to 450 CTIOS Agents, or 297 CAD Agents) with 150 IPIVR or 150 CVP ports (N+N), optional 50 CUIC reporting users, as examples  
|-
|-
! colspan="4" | Chassis X (B series)/ Rack of C series rack mount Servers  
! colspan="4" | Chassis X (B series)/ Rack of C series rack mount Servers  
! colspan="4" | Chassis Y (B series)/ Rack of C series rack mount Servers
! colspan="4" | Chassis Y (B series)/ Rack of C series rack mount Servers
|-
|-
-
| '''ESXi Server'''  
+
| '''ESXi Server&nbsp;'''  
| '''Component'''  
| '''Component'''  
| '''#vCPU'''  
| '''#vCPU'''  
| '''RAM (GB)'''  
| '''RAM (GB)'''  
-
| '''ESXi Server'''  
+
| '''ESXi Server&nbsp;'''  
| '''Component'''  
| '''Component'''  
| '''#vCPU'''  
| '''#vCPU'''  
| '''RAM (GB)'''
| '''RAM (GB)'''
|-
|-
-
| rowspan="4" | ESXi Server A-1  
+
| rowspan="4" | ESXi Server A-1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
| Rogger A  
| Rogger A  
| 4  
| 4  
| 4  
| 4  
-
| rowspan="4" | ESXi Server B-1 <br>
+
| rowspan="4" | ESXi Server B-1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
| Rogger B  
| Rogger B  
| 4  
| 4  
Line 353: Line 421:
| 2
| 2
|-
|-
-
| style="background: rgb(255,224,200)" | Support Tools<br>
+
| UCM&nbsp;Publisher
-
| style="background: rgb(255,224,200)" | 1
+
| 2
-
| style="background: rgb(255,224,200)" | 2
+
| 6
| CVP Op Svr  
| CVP Op Svr  
| 2<br>
| 2<br>
| 2<br>
| 2<br>
|-
|-
-
| rowspan="2" | ESXi Server A-2  
+
| rowspan="2" | ESXi Server&nbsp; A-2  
| AW-HDS-DDS 1  
| AW-HDS-DDS 1  
| 4  
| 4  
| 4  
| 4  
-
| rowspan="2" | ESXi Server B-2  
+
| rowspan="2" | ESXi Server&nbsp; B-2  
| AW-HDS-DDS 2  
| AW-HDS-DDS 2  
| 4  
| 4  
Line 380: Line 448:
| 2  
| 2  
| 6  
| 6  
-
| rowspan="3" | ESXi Server B-3  
+
| rowspan="3" | ESXi Server&nbsp; B-3  
| UCM Subscriber 2  
| UCM Subscriber 2  
| 2  
| 2  
| 6
| 6
|-
|-
-
| UCM Publisher
+
| Finesse Srv 1&nbsp;
-
| 2
+
| 4
-
| 6
+
| 8&nbsp;
-
| <br>
+
| Finesse Srv 2
-
| <br>
+
| 4
-
| <br>
+
| 8
|- style="background: rgb(255,224,200)"
|- style="background: rgb(255,224,200)"
-
| IPIVR 1 or CUP Srv 1  
+
| IPIVR 1  
| 2  
| 2  
| 4  
| 4  
-
| IPIVR 2 or CUP Srv 2  
+
| IPIVR 2  
| 2  
| 2  
| 4
| 4
|-
|-
| rowspan="2" | ESXi Server A-4  
| rowspan="2" | ESXi Server A-4  
-
| CVP Call+VXML Srv 1  
+
| CVP Call+VXML+Media Srv 1  
| 4  
| 4  
| 4  
| 4  
-
| rowspan="2" | ESXi Server B-4  
+
| rowspan="2" | ESXi Server&nbsp; B-4  
-
| CVP Call+VXML Srv 2  
+
| CVP Call+VXML+Media Srv 2  
| 4  
| 4  
| 4
| 4
Line 411: Line 479:
| 4  
| 4  
| 4  
| 4  
-
| CVP Media Server
+
| CVP&nbsp;Rpt Srv 2
-
| 2
+
| 4&nbsp;
| 2
| 2
|-
! colspan="8" | '''Legend'''
|-
| Not shaded
| colspan="9" | Required
|- style="background: rgb(255,224,200)"
| Shaded
| colspan="9" | Optional
|}
| Shaded  
-
| colspan="7" | Optional
+
| colspan="9" | Optional
|}
|}
Line 427: Line 495:
{| class="wikitable" border="1"
{| class="wikitable" border="1"
-
|+ ROGGER (up to 2,000 CTIOS Agents, or up to 1,000 CAD Agents) with 600 IPIVR or 900 CVP ports (N+N), optional 200 CUIC reporting users, CCMP 1,500 users, as examples  
+
|+ ROGGER 9.x on 8-cores UCS TRCs (up to 2,000 CTIOS Agents, or up to 1,000 CAD Agents) with 600 IPIVR or 900 CVP ports (N+N), optional 200 CUIC reporting users, CCMP 1,500 users, as examples  
|-
|-
! colspan="4" | Chassis X (B series)/ Rack of C series rack mount Servers  
! colspan="4" | Chassis X (B series)/ Rack of C series rack mount Servers  
Line 445: Line 513:
| 4  
| 4  
| 4  
| 4  
-
| rowspan="4" | ESXi Server B-1  
+
| rowspan="4" | ESXi Server&nbsp; B-1  
| Rogger B  
| Rogger B  
| 4  
| 4  
Line 464: Line 532:
| 2
| 2
|- style="background: rgb(255,224,200)"
|- style="background: rgb(255,224,200)"
-
| Support Tools<br>
+
| spare
-
| 1<br>
+
|
-
| 2<br>
+
|
 +
| spare
 +
|  
 +
|  
|-
|-
| rowspan="2" | ESXi Server A-2  
| rowspan="2" | ESXi Server A-2  
Line 472: Line 543:
| 4  
| 4  
| 4  
| 4  
-
| rowspan="2" | ESXi Server B-2  
+
| rowspan="2" | ESXi Server&nbsp; B-2  
| AW-HDS-DDS 2  
| AW-HDS-DDS 2  
| 4  
| 4  
| 4
| 4
|- style="background: rgb(255,224,200)"
|- style="background: rgb(255,224,200)"
-
| AW-HDS-DDS 3&nbsp;  
+
| AW-HDS-DDS 3&nbsp;or Finesse Srv 1&nbsp;  
| 4  
| 4  
 +
| 4 or 8
 +
| AW-HDS-DDS 4&nbsp; or Finesse Srv 2
| 4  
| 4  
-
| AW-HDS-DDS 4&nbsp;
+
| 4 or 8
-
| 4
+
-
| 4
+
|-
|-
| rowspan="4" | ESXi Server A-3 <br>
| rowspan="4" | ESXi Server A-3 <br>
Line 488: Line 559:
| 2  
| 2  
| 6  
| 6  
-
| rowspan="4" | ESXi Server B-3 <br>
+
| rowspan="4" | ESXi Server&nbsp; B-3 <br>
| UCM Subscriber 3  
| UCM Subscriber 3  
| 2  
| 2  
| 2<br>
|- style="background: rgb(255,224,200)"
| IPIVR 1<br>
| 2<br>
| 4<br>
| IPIVR 2<br>
| 2<br>
| 4<br>
|-
| rowspan="2" | ESXi Server A-4
| CVP Call+VXML+Media Srv 1
| 4
| 4
| rowspan="2" | ESXi Server B-4
| CVP Call+VXML+Media Srv 2
| 4
| 4
| style="background: rgb(255,224,200)" | 4<br>
| style="background: rgb(255,224,200)" | 6<br>
| rowspan="2" | ESXi Server B-5 <br>
| style="background: rgb(255,224,200)" | CUIC 2
| style="background: rgb(255,224,200)" | 4
| style="background: rgb(255,224,200)" | 6
|- style="background: rgb(255,224,200)"
| spare
|
|
| CCMP (all in one)
| 4
| 4
|-
! colspan="8" | '''Legend'''
|-
| Not shaded
| colspan="9" | Required
|- style="background: rgb(255,224,200)"
| Shaded
| colspan="9" | Optional
|}

=== ROGGER Example 3  ===

{| style="width: 1057px; height: 574px" class="wikitable" border="1"
|+ ROGGER 9.x on 8-core UCS TRCs (up to 4,000 CTIOS Agents, or up to 2,000 CAD Agents) with 1,200 IPIVR or 1,800 CVP ports (N+N), optional 200 CUIC reporting users, CCMP 1,500 users, as examples
|-
! colspan="4" | Chassis X (B series)/ Rack of C series rack mount Servers
| 2
| 4
| Agent PG 1B (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
| 4
| 2
| 2
|-
|-
-
| style="background: rgb(255,224,200)" | Support Tools<br>
+
| style="background: rgb(255,224,200)" | spare<br>
-
| style="background: rgb(255,224,200)" | 1<br>
+
| style="background: rgb(255,224,200)" | <br>
-
| style="background: rgb(255,224,200)" | 2<br>
+
| style="background: rgb(255,224,200)" |  
| CVP Op Srv  
| CVP Op Srv  
| 2<br>
| 2<br>
Line 617: Line 694:
| 4  
| 4  
| rowspan="2" | ESXi Server B-2 <br><br>
| rowspan="2" | ESXi Server B-2 <br><br>
-
| Agent PG 2B (CTIOS/CAD, optional MR PG, SIP Dialer)  
+
|  
 +
Agent PG 2B  
 +
 
 +
(CTIOS/CAD, optional  
 +
 
 +
MR PG, SIP Dialer)
 +
 
| width="145" | 2  
| width="145" | 2  
| 4
| 4
Line 637: Line 720:
| 4
| 4
|- style="background: rgb(255,224,200)"
|- style="background: rgb(255,224,200)"
-
| AW-HDS-DDS 3&nbsp;
+
|  
 +
AW-HDS-DDS 3 or Finesse Srv1
 +
 
| 4  
| 4  
 +
| 4 or 8
 +
|
 +
AW-HDS-DDS 4 or Finessess Srv 2
 +
| 4  
| 4  
-
| AW-HDS-DDS 4&nbsp;
+
| 4 or 8
-
| 4
+
-
| 4
+
|-
|-
| rowspan="4" | ESXi Server A-5  
| rowspan="4" | ESXi Server A-5  
| 2
| 6
| spare
|
| <br>
|- style="background: rgb(255,224,200)"
| 4
| 4
|- style="background: rgb(255,224,200)"
|- style="background: rgb(255,224,200)"
-
| CUP Server 1&nbsp;  
+
| spare&nbsp;  
| 2  
| 2  
| 4  
| 4  
-
| CUP Server 2&nbsp;
+
| spare
| 2  
| 2  
| 4
| 4
|-
|-
| rowspan="2" | ESXi Server A-7  
| rowspan="2" | ESXi Server A-7  
-
| CVP Call+VXML Srv 1  
+
| CVP Call+VXML+Media Srv 1  
| 4  
| 4  
| 4  
| 4  
| rowspan="2" | ESXi Server B-5  
| rowspan="2" | ESXi Server B-5  
-
| CVP Call+VXML Srv 2  
+
| CVP Call+VXML+Media Srv 2  
| 4  
| 4  
| 4
| 4
|-
|-
-
| CVP Call+VXML Srv 3  
+
| CVP Call+VXML+Media Srv 3  
| 4  
| 4  
| 4  
| 4  
-
| CVP Call+VXML Srv 4  
+
| CVP Call+VXML+ Media&nbsp;Srv 4  
| 4  
| 4  
| 4
| 4
|-
|-
| rowspan="2" | ESXi Server A-7 <br>
| rowspan="2" | ESXi Server A-7 <br>
-
| style="background: rgb(255,224,200)" | CVP Media Server
+
| style="background: rgb(255,224,200)" | CUIC 1
-
| style="background: rgb(255,224,200)" | 2
+
| style="background: rgb(255,224,200)" | 4
-
| style="background: rgb(255,224,200)" | 2
+
| style="background: rgb(255,224,200)" | 6
| rowspan="2" | ESXi Server B-6 <br>
| rowspan="2" | ESXi Server B-6 <br>
| style="background: rgb(255,224,200)" | CUIC 2  
| style="background: rgb(255,224,200)" | CUIC 2  
| style="background: rgb(255,224,200)" | 6
|- style="background: rgb(255,224,200)"
| spare
|
|
| CCMP (all in one)
| 4
|}

=== Router/Logger Example 1  ===

{| style="width: 1057px; height: 574px" class="wikitable" border="1"
|+ Router/Logger 9.x on 8-core UCS TRCs (up to 8,000 CTIOS Agents, or up to 4,000 CAD Agents) with 3,600 CVP ports (N+N), optional 400 CUIC reporting users, CCMP 8,000 users, as examples
|-
! colspan="4" | Chassis X (B series)/ Rack of C series rack mount Servers
| rowspan="5" | ESXi Server A-1
| width="145" | Router A
| 4
| 8
| rowspan="5" | ESXi Server B-1
| width="145" | Router B
| 4
| 8
|-
| width="145" | Agent PG 1A (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
|-
| style="background: rgb(255,224,200)" | spare
| style="background: rgb(255,224,200)" |
| style="background: rgb(255,224,200)" |
| width="145" | spare
| &nbsp;
| &nbsp;
| style="background: rgb(255,224,200)" | 4
|- style="background: rgb(255,224,200)"
| width="145" | AW-HDS 5
| 4
| 4
| width="145" | AW-HDS 6
| 4
| 4
| style="background: rgb(255,224,200)" | 2
|- style="background: rgb(255,224,200)"
| width="145" | spare
| 2
| 4
| width="145" | spare
| 2
| 4
| style="background: rgb(255,224,200)" | 4
| style="background: rgb(255,224,200)" | 4
| width="145" | spare
| <br>
| <br>
|-
| rowspan="2" | ESXi Server A-9
| width="145" | CVP Call+VXML+Media Srv 1
| 4
| 4
| rowspan="2" | ESXi Server B-9
| width="145" | CVP Call+VXML+Media Srv 2
| 4
| 4
|-
| width="145" | CVP Call+VXML+Media Srv 3
| 4
| 4
| width="145" | CVP Call+VXML+Media Srv 4
| 4
| 4
|-
| rowspan="2" | ESXi Server A-10
| width="145" | CVP Call+VXML+Media Srv 5
| 4
| 4
| rowspan="2" | ESXi Server B-10
| width="145" | CVP Call+VXML+Media Srv 6
| 4
| 4
|-
| width="145" | CVP Call+VXML+Media Srv 7
| 4
| 4
| width="145" | CVP Call+VXML+Media Srv 8
| 4
| 4
|-
| rowspan="2" | ESXi Server A-11
| style="background: rgb(255,224,200)" | Finesse Srv 1
| style="background: rgb(255,224,200)" | 4
| style="background: rgb(255,224,200)" | 8
| rowspan="2" | ESXi Server B-11 <br>
| style="background: rgb(255,224,200)" | Finesse Srv 2
| style="background: rgb(255,224,200)" | 4
| style="background: rgb(255,224,200)" | 8
|-
| VRU PG A<br>
| style="background: rgb(255,224,200)" | CCMP DB
| style="background: rgb(255,224,200)" | 8
| style="background: rgb(255,224,200)" | 8
| ESXi Server B-13
| style="background: rgb(255,224,200)" | CCMP Web/App Svr
| colspan="7" | Optional
|}
<br>

=== Router/Logger Example 2  ===

Router/Logger 9.x on 8-core UCS TRCs (up to 12,000 CTIOS Agents) with 3,600 CVP ports (N+N), optional 400 CUIC reporting users, CCMP 8,000 users, as examples

{| style="width: 1057px; height: 574px" class="wikitable" border="1"
|-
| colspan="4" | '''Chassis X (B series) Servers'''
| colspan="4" | '''Chassis Y (B series) Servers'''
|-
| '''ESXi Server'''
| '''Component'''
| '''#vCPU'''
| '''RAM (GB)'''
| '''ESXi Server'''
| '''Component'''
| '''#vCPU'''
| '''RAM (GB)'''
|-
| rowspan="2" | ESXi Server A-1
| Logger A
| 4
| 8
| rowspan="2" | ESXi Server B-1
| Logger B
| 4
| 8
|-
| Router A
| 4
| 8
| Router B
| 4
| 8
|-
| rowspan="3" | ESXi Server A-2
| Agent PG 1A (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
| 4
| rowspan="3" | ESXi Server B-2
| Agent PG 1B (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
| 4
|-
| Agent PG 3A (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
| 4
| Agent PG 3B (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
| 4
|-
| Agent PG 5A (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
| 4
| Agent PG 5B (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
| 4
|-
| rowspan="3" | ESXi Server A-3
| Agent PG 2A (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
| 4
| rowspan="3" | ESXi Server B-3
| Agent PG 2B (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
| 4
|-
| Agent PG 4A (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
| 4
| Agent PG 4B (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
| 4
|-
| Agent PG 6A (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
| 4
| Agent PG 6B (CTIOS/CAD, optional MR PG, SIP Dialer)
| 2
| 4
|-
| rowspan="4" | ESXi Server A-4
| HDS-DDS 1
| 4
| 8
| rowspan="4" | ESXi Server B-4
| HDS-DDS 2
| 4
| 8
|-
| AW-HDS 1
| 4
| 8
| AW-HDS 2
| 4
| 8
|-
| AW-HDS 3
| 4
| 8
| AW-HDS 4
| 4
| 8
|-
| AW-HDS 5
| 4
| 8
| AW-HDS 6
| 4
| 8
|-
| rowspan="3" | ESXi Server A-5
| UCM 1 Subscriber 1
| 2
| 6
| rowspan="3" | ESXi Server B-5
| UCM 1 Subscriber 2
| 2
| 6
|-
| UCM 2 Subscriber 1
| 2
| 6
| UCM 2 Subscriber 2
| 2
| 6
|-
| UCM 3 Subscriber 1
| 2
| 6
| UCM 3 Subscriber 2
| 2
| 6
|-
| rowspan="3" | ESXi Server A-6
| UCM 1 Subscriber 3
| 2
| 6
| rowspan="3" | ESXi Server B-6
| UCM 1 Subscriber 4
| 2
| 6
|-
| UCM 2 Subscriber 3
| 2
| 6
| UCM 2 Subscriber 4
| 2
| 6
|-
| UCM 3 Subscriber 3
| 2
| 6
| UCM 3 Subscriber 4
| 2
| 6
|-
| rowspan="3" | ESXi Server A-7
| UCM 1 Subscriber 5
| 2
| 6
| rowspan="3" | ESXi Server B-7
| UCM 1 Subscriber 6
| 2
| 6
|-
| UCM 2 Subscriber 5
| 2
| 6
| UCM 2 Subscriber 6
| 2
| 6
|-
| UCM 3 Subscriber 5
| 2
| 6
| UCM 3 Subscriber 6
| 2
| 6
|-
| rowspan="3" | ESXi Server A-8
| UCM 1 Subscriber 7
| 2
| 6
| rowspan="3" | ESXi Server B-8
| UCM 1 Subscriber 8
| 2
| 6
|-
| UCM 2 Subscriber 7
| 2
| 6
| UCM 2 Subscriber 8
| 2
| 6
|-
| UCM 3 Subscriber 7
| 2
| 6
| UCM 3 Subscriber 8
| 2
| 6
|-
| rowspan="4" | ESXi Server A-9
| style="background: rgb(255,224,200)" | Domain Controller
| style="background: rgb(255,224,200)" | 1
| style="background: rgb(255,224,200)" | 2
| rowspan="4" | ESXi Server B-9
| style="background: rgb(255,224,200)" | Domain Controller
| style="background: rgb(255,224,200)" | 1
| style="background: rgb(255,224,200)" | 2
|-
| UCM Publisher 1
| 2
| 6
| style="background: rgb(255,224,200)" | CVP Op Srv
| style="background: rgb(255,224,200)" | 2
| style="background: rgb(255,224,200)" | 2
|-
| style="background: rgb(255,224,200)" | Finesse Srv 1
| style="background: rgb(255,224,200)" | 4
| style="background: rgb(255,224,200)" | 8
| style="background: rgb(255,224,200)" | Finesse Srv 2
| style="background: rgb(255,224,200)" | 4
| style="background: rgb(255,224,200)" | 8
|-
| Administration Server - AW
| 1
| 2
| Administration Server - AW
| 1
| 2
|-
| rowspan="2" | ESXi Server A-10
| UCM Publisher 2
| 2
| 6
| rowspan="2" | ESXi Server B-10
| style="background: rgb(255,224,200)" | CVP Rpt Srv 2
| style="background: rgb(255,224,200)" | 4
| style="background: rgb(255,224,200)" | 4
|-
| style="background: rgb(255,224,200)" | CVP Rpt Srv 1
| style="background: rgb(255,224,200)" | 4
| style="background: rgb(255,224,200)" | 4
| UCM Publisher 3
| 2
| 6
|-
| rowspan="2" | ESXi Server A-11
| CVP Call+VXML+Media Srv 1
| 4
| 4
| rowspan="2" | ESXi Server B-11
| CVP Call+VXML+Media Srv 2
| 4
| 4
|-
| CVP Call+VXML+Media Srv 3
| 4
| 4
| CVP Call+VXML+Media Srv 4
| 4
| 4
|-
| rowspan="2" | ESXi Server A-12
| CVP Call+VXML+Media Srv 5
| 4
| 4
| rowspan="2" | ESXi Server B-12
| CVP Call+VXML+Media Srv 6
| 4
| 4
|-
| CVP Call+VXML+Media Srv 7
| 4
| 4
| CVP Call+VXML+Media Srv 8
| 4
| 4
|-
| rowspan="2" | ESXi Server A-13
| CVP Call+VXML+Media Srv 9
| 4
| 4
| rowspan="2" | ESXi Server B-13
| CVP Call+VXML+Media Srv 10
| 4
| 4
|-
| CVP Call+VXML+Media Srv 11
| 4
| 4
| CVP Call+VXML+Media Srv 12
| 4
| 4
|-
| rowspan="2" | ESXi Server A-14
| style="background: rgb(255,224,200)" | Finesse Srv 1
| style="background: rgb(255,224,200)" | 4
| style="background: rgb(255,224,200)" | 8
| rowspan="2" | ESXi Server B-14
| style="background: rgb(255,224,200)" | Finesse Srv 2
| style="background: rgb(255,224,200)" | 4
| style="background: rgb(255,224,200)" | 8
|-
| VRU PG A
| 2
| 2
| VRU PG B
| 2
| 2
|-
| rowspan="2" | ESXi Server A-15
| CUIC 1
| 4
| 6
| rowspan="2" | ESXi Server B-15
| CUIC 2
| 4
| 6
|-
| CUIC 3
| 4
| 6
| CUIC 4
| 4
| 6
|-
| rowspan="2" | ESXi Server A-16
| CUIC 5
| 4
| 6
| rowspan="2" | ESXi Server B-16
| CUIC 6
| 4
| 6
|-
| CUIC 7
| 4
| 6
| CUIC 8
| 4
| 6
|-
| ESXi Server A-17
| style="background: rgb(255,224,200)" | CCMP DB
| style="background: rgb(255,224,200)" | 8
| style="background: rgb(255,224,200)" | 8
| ESXi Server B-17
| style="background: rgb(255,224,200)" | CCMP Web/App Svr
| style="background: rgb(255,224,200)" | 4
| style="background: rgb(255,224,200)" | 4
|-
! colspan="8" | '''Legend'''
|-
| Not shaded
| colspan="7" | Required
|- style="background: rgb(255,224,200)"
| Shaded
| colspan="7" | Optional
|}
= Hybrid Deployment Options  =

== Cisco Unified Contact Center Hosted  ==

*NAM Rogger is deployed on a (bare-metal) quad CPU server as specified in the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted].
*Each customer instance central controller (CICM) connecting to the NAM may be deployed in its own virtual machine as a Rogger or separate Router/Logger pair on UCS hardware. Collocating multiple CICM instances in one VM is not supported. Existing published rules and capacities apply to CICM Rogger and Router/Logger VMs. (Note: CICMs are not supported on bare-metal UCS.)
*As in Enterprise deployments, each Agent PG is deployed in its own virtual machine. Multi-instance Agent PGs are not supported in a single VM. Existing published rules and capacities apply to PGs in Hosted deployments.
== Parent/Child Deployments  ==

*The parent ICM is deployed on (bare-metal) servers as specified in the appropriate [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_technical_reference_list.html Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted].
*The Unified Contact Center Enterprise (or Express) child may be deployed virtualized according to existing published VM requirements.
*The Unified Contact Center Enterprise Gateway PG and System PG are each deployed in its own virtual machine; agent capacity (and the resources allocated to the VM) are the same as for the Unified CCE Agent PG at 2,000-agent capacity. Use the same virtual machine OVA template to create the CCE Gateway or System PG VM.
See the following websites for more information:<br>

*[http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Downloads_(including_OVA/OVF_Templates)#Unified_Contact_Center_Enterprise Unified Communications Virtualization Downloads (including OVA/OVF Templates)]
*[http://www.cisco.com/cisco/software/release.html?mdfid=268439622&release=1.1&relind=AVAILABLE&flowid=5210&softwareid=283914286&rellifecycle=&reltype=latest Unified CCE OVA Templates]
== Notes  ==

:*VM CPU affinity is not supported. You may not set CPU affinity for Unified CCE application VMs on vSphere.
:*VM Resource Reservation - VM resource reservation is '''not supported''' for Unified CCE application VMs on vSphere '''prior to release 9.0'''. The VM computing resources should have a default reservation setting, which is no resource reservations.
:*''Starting with Unified CCE 9.0(1), VM resource reservation is supported, and the computing resources have a default setting when deployed from the OVA for CCE 9.0.'' (A verification sketch follows these notes.)
:*You must not change the computing resource configuration of your VM at any time.
:*You must never go below the minimum VM computing resource requirements as defined in the OVA templates.
:*ESXi Server hyperthreading is enabled by default.

<br>
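For 9.0(1) and later deployments, one way to confirm that the OVA-applied reservations are still in place is VMware PowerCLI. A minimal sketch; the VM name is hypothetical, and property names can vary slightly across PowerCLI versions:
<pre>
Get-VM "Rogger-A" | Get-VMResourceConfiguration |
    Select-Object VM, CpuReservationMhz, MemReservationGB
</pre>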
== Preparing for Windows Installation  ==

= Migrating Unified CCE Components to Virtual Machines  =

Migrate the Unified CCE components from physical hardware or another virtual machine after you create and configure the virtual machine. Migration of these Unified CCE software components to a VM is the same as the migration of the components to new physical hardware and follows existing policies. It requires a Tech Refresh as described in the [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_installation_guides_list.html Upgrade Guide for Cisco Unified ICM/Contact Center Enterprise &amp; Hosted].

= Configuring the ESXi Data Store Block Size for Administration and Data Server  =
<br>Steps to configure the ESXi data store block size to 2MB:

#After you install ESXi on the first disk array group (RAID 1 with disk 1 and disk 2), boot ESXi and use VMware vSphere Client to connect to the ESXi host.
#On the Configuration tab for the host, select Storage in the box labeled Hardware. Select the second disk array group with the RAID-5 configuration; the "Datastore Details" panel shows that the block size is 1MB by default.
#Right-click on this data store and delete it. You will add the data store back, with a 2MB block size, in the following steps (see also the command-line sketch below).
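The same 2MB block size can also be set from the ESXi console when the vSphere Client is unavailable. A minimal sketch, assuming a VMFS3 datastore; the device path and label are placeholders that must be replaced with your actual second-array device (verify carefully first, as this operation is destructive):
<pre>
# Create a VMFS3 datastore with a 2MB block size on the second disk array group
vmkfstools -C vmfs3 -b 2m -S datastore2 /vmfs/devices/disks/naa.60050768019xxxxx:1
</pre>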
:*You can use&nbsp;Unified CCE Serviceability Tools and&nbsp;Unified CCE reports to monitor the operation and performance of the&nbsp;Unified CCE system.
:*The ESXi Server and the virtual machines must operate within the limits of the following ESXi performance counters.

<br>
You can use the following ESXi counters as performance indicators (a batch-capture sketch follows this list).
:*ESXi Server
:*VM

<br>
| CPU Usage (Average)
:*ESXi server
:*Virtual machine

<br>
| Less than 60%.
:*ESXi Server Processor#
:*VM_vCPU#

<br>
| CPU Usage 0 - 7 (Average)
:*ESXi server for processors 0 to 7
:*Virtual machine vCPUs

<br>
| Less than 60%
:*ESXi Server
:*VM

<br>
| Memory Usage (Average)
:*ESXi Server
:*VM

<br>
| Memory Active&nbsp;(Average)
:*ESXi Server
:*VM

<br>
| Memory Balloon (Average)
:*ESXi Server
:*VM

<br>
| Memory Swap used (Average)
:*ESXi Server
:*VM

<br>
| Disk Usage (Average)
:*ESXi Server vmhba ID
:*VM vmhba ID

<br>
| Disk Usage Read&nbsp;rate
:*ESXi Server vmhba ID
:*VM vmhba ID

<br>
| Disk Usage Write rate
:*ESXi Server vmhba ID
:*VM vmhba ID

<br>
| Disk Commands Issued
:*ESXi Server vmhba ID
:*VM vmhba ID

<br>
| Disk Command Aborts
:*ESXi Server vmhba ID
:*VM vmhba ID

<br>
| Disk Command Latency
:*ESXi Server vmhba ID
:*VM vmhba ID

<br>
| Kernel Disk Command Latency
:*ESXi Server
:*VM

<br>
| Network Usage (Average)
:*ESXi Server vmnic ID
:*VM&nbsp;vmnic ID

<br>
| Network Data Receive Rate
:*ESXi Server vmnic ID
:*VM&nbsp;vmnic ID

<br>
| Network Data Transmit Rate
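A practical way to collect these counters over an interval, as a sketch: run esxtop in the ESXi shell (or resxtop from vMA/vCLI against a remote host) in batch mode and review the CSV offline. The 15-second interval and 240 iterations below are illustrative only:
<pre>
# Capture esxtop counters every 15 seconds for roughly an hour
esxtop -b -d 15 -n 240 > /tmp/esxtop-capture.csv
</pre>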
= System Performance Monitoring Using Windows Perfmon Counters  =

You must comply with the best practices described in the [http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/products_implementation_design_guides_list.html Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND)] section System Performance Monitoring, and in Chapter 8, Performance Counters, in the CCE 8.x [http://www.cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/ipcc_enterprise/ipccenterprise8_0_1/configuration/guide/icm80srvg.pdf Serviceability Best Practices Guide for Unified ICM/Contact Center Enterprise] or the CCE 9.x [http://www.cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/ipcc_enterprise/ippcenterprise9_0_1/configuration/guide/ICM-CC_Serviceability_Best_Practices_Guide_for_Release_9.0.pdf Serviceability Best Practices Guide for Cisco Unified ICM/Unified CCE &amp; Unified CCH Release 9.0].

<br>
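As a quick spot check of the Windows counters those guides describe, you can sample them from inside a Unified CCE VM with the built-in typeperf utility. A minimal sketch; the counter set, interval, sample count, and output path are illustrative:
<pre>
typeperf "\Processor(_Total)\% Processor Time" ^
         "\LogicalDisk(_Total)\% Disk Time" ^
         "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
         -si 15 -sc 240 -f CSV -o C:\perflogs\ucce-sample.csv
</pre>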

Unified Contact Center Enterprise Support for Virtualization on the VMware vSphere Platform

Starting with Release 8.0(2), virtualization of the following deployments and Unified CCE components on Cisco Unified Computing Systems (UCS) B Series and C Series hardware is supported:

  • Support Tools (Note: Support Tools is no longer supported with CCE 8.5(x) and higher)
  • Rogger (a Router and a Logger in the same VM)
  • The Unified Communications Manager (UCM) Clustering Over the WAN deployment model with Unified CCE is supported; see the section Support for UCM Clustering Over the WAN with Unified CCE on UCS Hardware for important information.
  • Unified IP-IVR is supported with the Unified CCE on UCS B-Series solution, and on UCS C-Series with the UCS-C210-VCD2 model only. Please refer to the IP-IVR product-specific pages for details.
  • CVP is supported with the CCE on UCS solution. Please refer to the Virtualization for Unified CVP wiki page for details.
  • Contact Center Management Portal (CCMP). See the Virtualization for CCMP with Unified CCE on UCS Hardware wiki page for details.
  • Cisco Unified Intelligence Center (Unified IC). Please refer to the Cisco Unified Intelligence Center wiki page for details.
  • Cisco E-mail Interaction Manager (EIM)/Web Interaction Manager (WIM). See the Virtualization for EIM-WIM wiki page for details.

Starting with Release 9.0(3), virtualization of Unified ICME with more than 12,000 agents, and/or the use of NICs and SIGTRAN (for up to 150 PGs), is also supported on Cisco Unified Computing Systems (UCS) B Series and C Series hardware.

On UCCE Release 8.5(4), an Engineering Special (ES) is required to support ICME on UCS on Windows Server 2008. Please contact TAC to obtain the required ES before proceeding with deployment. Additionally, if the ICME on UCS deployment is greater than 8,000 agents, a manual change to the ICM Router, Logger, and HDS/DDS components is necessary: the virtual machine specifications (vCPU, RAM, and CPU and memory reservations) must be changed to match the UCCE 9.0 OVAs.


The following deployments and Unified CCE components have not been qualified and are not supported in virtualization:

  • Progger (a Router, a Logger, and a Peripheral Gateway); this all-in-one deployment configuration is not scalable in a virtualization environment. Instead, use the Rogger or Router/Logger VM deployment configuration.
  • Unified CCH (multi-instance)
  • Unified ICMH (multi-instance)
  • Outbound Option with SCCP Dialer
  • WebView Server
  • Expert Advisor
  • Remote Silent Monitoring (RSM)
  • SPAN-based Silent Monitoring on the UCS B-Series chassis. For support on UCS C-Series, consult the VMware Community/Knowledge Base regarding the possible use of promiscuous mode to receive the monitored traffic from the SPAN port.
  • Cisco Unified CRM Connector
  • Agent PG with more than 1 Agent PIM (2nd PIM for UCM CTI RP is allowed as per SRND)
  • Multi-Instance CTIOS
  • IPsec. UCS does not support IPsec off-board processing, therefore IPsec is not supported in virtualization.
Note: The hybrid (virtual and non-virtual servers) deployment model is supported. However, for paired components having side A and side B (e.g., Rogger A and Rogger B), the server hardware must be identical on both sides (e.g., you cannot mix bare-metal MCS on side A of the paired component and virtual UCS on side B). You may, however, deploy a mixture of UCS B and C series for the separate duplex pair sides, so long as the UCS server and processor generations align (UCS-B200M2-VCS1 is of equal generation to UCS-C210M2-VCD2, for example). Hybrid support also means that non-paired components that are not yet virtualized can continue to run on MCS in UCCE on UCS deployments. See the Hybrid Deployment Options section for more information.


The following VMware features are not supported with Unified CCE:

  • VMware Physical to Virtual migration
  • VMware Snapshots
  • VMware Consolidated Backup
  • VMware High Availability (HA)
  • VMware Site Recovery Manager
  • VMware vCenter Update Manager
  • VMware vCenter Converter


Hardware Requirements for Unified CCE Virtualized Systems

Supported hardware platforms for Cisco Unified Contact Center Enterprise solutions are located on the Unified Communications Virtualization Supported Applications page. Hardware specifications for those supported platforms are detailed at the UC Virtualization Supported Hardware page.

Cisco SmartPlay Solution Packs for UC/Hardware Bundles Support

Cisco SmartPlay Solution Packs for UC, the pre-configured bundles (value UC bundles) based on UCS B200M2 or C210M2 that provide an alternative ordering path to the UC on UCS TRCs above, are supported with the following caveats:

  • For B200M2 and C210M2 Solution Packs (Value UC Bundles) that have a better specification than the UC on UCS B200M2/C210M2 TRC models (e.g., 6 cores per CPU of the same family), the UC on UCS spec-based hardware support policy must be followed. These bundles are supported by UCCE/CVP as an exception, provided the same UCCE VM co-residency rules are met and no more than two CVP Call Server VMs are placed on the same server/host.
  • Other spec-based servers that conform to the UC on UCS Spec-based Hardware Policy and have a specification equal to or better than the UC on UCS B200M2/C210M2 TRCs may be used for UCCE/CVP once validated in the Customer Collaboration DMS/A2Q (Design Mentoring Session/Assessment To Quality) process. This also means that a particular desired spec-based server model may not be approved for use after the server design review in the DMS/A2Q session.


Unified CCE supports MCS-7845-I3-CCE2 with virtualization. For a list of supported virtualized components on MCS servers, see the appropriate Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted. Unified CCE does not support UCS C200.

VMware and Application Software Requirements

For supported VMware ESXi versions for Unified Contact Center Enterprise releases, see the VMware vSphere ESXi Version Support for Contact Center Applications table at Unified Communications VMware Requirements.

The following additional software requirements apply specifically to Unified Contact Center Enterprise:

ESXi 4.1 and 5.0 Software Requirements

When Cisco Unified CCE is running on ESXi 4.1 or 5.0, you must install or upgrade VMware Tools on each of the VMs to match the running ESXi version, and use all of the VMware Tools default settings. For more information, see the section VMware Tools. You must do this every time the ESXi version is upgraded.

Starting with CCE/CVP Release 9.x in a Windows Server 2008 environment, disabling LRO in ESXi 4.1/5.0 (and later) is no longer required.



Unified CCE Component Capacities and VM Configuration Requirements

For supported Unified CCE component capacities and VM computing resource requirements, see the List of Unified CCE OVA Templates.

Note: You must use the OVA VM templates to create the Unified CCE component VMs.


For instructions on how to obtain the OVA templates, see Downloading OVA Templates for UC Applications.

Unified CCE Scalability Impacts

The capacity sizing information is based on the operating conditions published in the Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND), Release 8.x/9.x, Chapter 10, Operating Conditions, and in the Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted, 8.x/9.x, Section 5. Both documents are available at Cisco.com.

The following features reduce the scalability of certain components below the agent count of the respective OVA capacity sizing information.

  • CTI OS Security - CTI OS Server capacity is impacted when CTI OS Security is enabled; capacity is decreased by 25% (see the worked example after this list).
  • Mobile Agents - Refer to the SRND, Chapter 10, Sizing Information for Unified CCE Components and Servers table for sizing guidance with Mobile Agents.
  • Outbound Option – Refer to the SRND, Chapter 10, Sizing Information for Unified CCE Components and Servers table for sizing guidance with Outbound Option.
  • Agent Greeting – Refer to the SRND, Chapter 10, Sizing Information for Unified CCE Components and Servers table for sizing guidance with the Agent Greeting feature enabled.
  • Whisper Announcement – (Unified CCE 9.x) Refer to the SRND, Chapter 10, Sizing Information for Unified CCE Components and Servers table for sizing guidance with the Whisper Announcement feature enabled.
  • Precision Queues – (Unified CCE 9.x) Refer to the SRND, Chapter 10, Sizing Information for Unified CCE Components and Servers table for sizing guidance with precision queues.
  • Extended Call Context (ECC) usage greater than the level noted in the Operating Conditions will have a performance and scalability impact on critical components of the Unified CCE solution. As noted in the SRND, the capacity impact varies with the ECC configuration; therefore, guidance must be provided on a case-by-case basis.
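As a worked example of the CTI OS Security impact (the agent count is illustrative): a CTI OS Server sized for 2,000 agents supports only 2,000 × 0.75 = 1,500 agents once CTI OS Security is enabled. The other feature impacts above are looked up from the cited SRND tables rather than computed from a fixed percentage.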



UCS Network Configuration

Support for UCM Clustering Over the WAN with Unified CCE on UCS Hardware

You can deploy the Unified Communications Manager (UCM) Clustering Over the WAN deployment model with Unified CCE on UCS hardware.

When you implement this deployment model, be sure to follow the best practices outlined in the section "IPT: Clustering Over the WAN" in the Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND).

In addition, note the following expectations for UCS hardware points of failure:

  • In Cisco testing of communication-path single points of failure on the Unified CCE UCS B-Series High Availability (HA) deployment, system call handling was observed to be degraded for up to 45 seconds while the system recovered from the fault, depending upon the subsystem faulted. Single points of failure will not cause the built-in ICM software failover to occur. Single points of failure include, but are not limited to, a single fabric interconnect failure, a single fabric extender failure, and single link failures.
  • Multiple points of failure on the Unified CCE UCS HA deployment can cause catastrophic failure, such as ICM software failovers and interruption of service. If multiple points of failure occur, replace the failed redundant components and links immediately.

B-Series Considerations

Cisco recommends use of the M81KR Virtual Interface Card (VIC) for Unified CCE deployments, though the M71KR(-E/Q) and M72KR(-E/Q) may also be used as per reference design 2 detailed at the dedicated networking page linked below. M51KR, M61KR and 82598KR are not supported for Contact Center use in UCS B series blades.

New B Series deployments are recommended to use Nexus 5000/7000 series data center switches with vPC PortChannels. This technology has been shown to give Contact Center applications considerable advantages in fault recovery scenarios.

See the configuration guidelines in UCCE on UCS B-Series Network Configuration.

C-Series Considerations

If deploying Clustering Over the WAN with C-Series hardware, do not trunk public and private networks. You must use separate physical interfaces off of the C-Series servers to create the public and private connections. See the configuration guidelines in UCCE on UCS C Series Network Configuration.


Notes for Deploying Unified CCE Applications on UCS B Series Hardware with SAN

In Storage Area Network (SAN) architecture, storage consists of a series of arrays of Redundant Array of Independent Disks (RAIDs). A Logical Unit Number (LUN) that represents a device identifier can be created on a RAID array. A LUN can occupy all or part of a single RAID array, or span multiple RAID arrays.

In a virtualized environment, datastores are created on LUNs. Virtual Machines (VMs) are installed on the SAN datastore.

Keep the following considerations in mind when deploying UCCE applications on UCS B Series hardware with SAN.

  • This deployment must comply with the conditions listed in Section 3.1.6 of the appropriate Hardware & System Software Specification (Bill of Materials) for Cisco Unified Contact Center Enterprise. In particular, SAN disk arrays must be configured as RAID 5 or RAID 10.
    • Note: RAID 6/ADG is also supported as an extension of RAID 5.
  • Historical Data Server (HDS) requires a 2 MB datastore block size to accommodate the 500 GB OVA disk size, which exceeds the 256 GB file size supported by the default 1 MB block size for datastores (in ESXi 4.0U1, this may change in later versions). The HDS block size is configured in vSphere at datastore creation.
  • To help keep your system running most efficiently, schedule automatic database purging to run when your system is least busy.
  • The SAN design and configuration must meet the following VMware ESXi disk performance guidelines:
    • Disk Command Latency – It should be 15 ms or less. Latencies of 15 ms or greater indicate a possibly over-utilized, misbehaving, or misconfigured disk array.
    • Kernel Disk Command Latency – It should be very small in comparison to the Physical Device Command Latency, and it should be close to zero. A high Kernel Command Latency indicates there is a lot of queuing in the ESXi kernel.
  • The SAN design and configuration must keep the following Windows performance counters within limits on UCCE VMs:
    • AverageDiskQueueLength must remain less than (1.5 × the total number of disks in the array).
    • %Disk Time must remain less than 60%.
  • Any given SAN array must be designed to have an IOPS capacity exceeding the sum of the IOPS required for all resident UC applications. Unified CCE applications should be designed for the 95th percentile IOPS values published in this wiki. For other UC applications, please follow their respective IOPS requirements & guidelines.
  • vSphere raises an alarm when free space on any datastore falls below 20%. The recommendation is to provision at least 20% free-space overhead; 10% overhead is the minimum required.
  • Deploy 4-8 VMs per LUN/datastore so long as IOPS and space requirements can be met; the supported range is 1-10. (See the worked example after this list.)
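A worked example of those Windows counter limits (the array size is illustrative): on an 8-disk RAID 5 array, AverageDiskQueueLength must stay below 1.5 × 8 = 12, and %Disk Time must stay below 60%, no matter how many UC VMs share the array.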


See below for an example of SAN configuration for a Rogger 2,000-agent deployment. This example corresponds to the 2,000-agent sample CCE deployment for UCS B-Series described in Unified CCE Component Co-residency and Sample Deployments.

Example of SAN Configuration for Unified CCE ROGGER Deployment up to 2000 Agents

The following SAN configuration was a tested design, though generalized here for illustration. It is not the only possible way in which to provision SAN arrays, LUNs, and datastores to UC applications. However, you must adhere to the guidance given earlier in this section (above).

Rogger Side A

[Image: RoggerSideA.jpg (SAN configuration for Rogger Side A)]

Rogger Side B

[Image: RoggerSideB.jpg (SAN configuration for Rogger Side B)]


Steps for Installing/Migrating Unified CCE Components on Virtual Machines

Follow the steps and references below to install the Unified CCE components on virtual machines. You can use these instructions to install or upgrade systems running with Unified CCE 8.0(2) and later. You can also use these instructions to migrate virtualized systems from Unified CCE 7.5(x) to Unified CCE 8.0(2) or later, including the Avaya PG and other selected TDM PGs that were supported on Unified CCE 7.5(x). Not all TDM PGs supported in Unified CCE 7.5(x) are supported in Unified CCE 8.0(x)/9.0(x). For more information, see the appropriate Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted.

  1. Acquire the supported servers for Unified CCE 8.0(2) or later release.
  2. Install, setup, and configure the servers.
  3. Configure the network. See reference at UCS Network Configuration.
    Note: Configuring the network for the MCS servers is the same as configuring the network for the UCS C-Series servers.
  4. If VMware VirtualCenter is used for virtualization management, install or update to VMware vCenter Server 4.0/5.0 (for ESXi 5.0) or later.
  5. Install and Boot VMWare ESXi. See the Cisco UCS B-Series Blade Servers VMware Installation Guide or the Cisco UCS C-Series Servers VMware Installation Guide. On C-Series servers or MCS servers, you must configure the ESXi datastore block size for the Administration & Data Server. See Configuring the ESXi Data Store Block Size for Administration and Data Server for instructions.
  6. Create the Unified CCE virtual machines from the OVA templates. See reference at Creating Virtual Machines from OVA VM Templates. This is a requirement for all components running Unified CCE 8.0(2) and later; in Unified CCE 7.5(x), this is not a requirement. (See the PowerCLI sketch after this list.)
  7. Install VMware Tools with the ESXi version on the virtual machines. Install the same version of VMware Tools as the ESXi software on the virtual machines.
  8. Install Windows OS and SQL Server (for Logger and HDS components) on the created virtual machines.
    Note: Use Microsoft Windows Server 2008 R2 Standard Edition and Microsoft SQL Server 2008 R2 Standard Edition for virtual machine guests. See related information in the links below. If you have a deployment from before the Unified CCE 9.0 release that uses Windows Server 2003 and SQL Server 2005, plan to continue using them on those older Unified CCE releases.
  9. Install or migrate the Unified CCE Software components on the configured virtual machines, using Fresh Install or Tech Refresh Upgrade, as described in Installing Unified CCE components on virtual machines and Migrating Unified CCE components.
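Step 6 can also be scripted. A minimal sketch using VMware PowerCLI, assuming vCenter is in use; the OVA path, VM name, host, and datastore are illustrative placeholders, not values from this document:
<pre>
# Deploy a Unified CCE VM from its OVA template (names and paths are hypothetical)
Import-VApp -Source "C:\ova\UCCE_Rogger.ova" -Name "Rogger-A" `
    -VMHost (Get-VMHost "esxi-a1.example.com") -Datastore "datastore2"
</pre>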


Unified CCE Component VM Co-residency and Sample Deployments

You can have one or more Unified CCE VMs co-resident on the same ESXi server; however, you must follow the rules described below (a validation sketch follows this list):

  • You can have any number of Unified CCE virtual machines, in any combination of co-residency, on an ESXi server as long as the sum of all the virtual machines' CPU and memory resource allocations does not overcommit the available ESXi server computing resources.
  • You must not have CPU overcommitment on an ESXi server that is running Unified CCE real-time application components. The total number of vCPUs among all the virtual machines on an ESXi host must not be greater than the total number of CPU cores available on the ESXi server (not counting hyperthreading cores). Commonly, UCS servers have two physical socket CPUs with 4-10 cores each.
  • You must not have memory overcommitment on an ESXi host running UC real-time applications. You must allocate a minimum of 2 GB of memory for the ESXi kernel. For example, a B200M2 server with 48 GB of memory would allow up to 46 GB for virtual machine allocation; the total memory allocated for all the virtual machines on that ESXi server must not be greater than 46 GB. Note that the ESXi kernel memory allocation can vary with the hardware server platform type, so take care to ensure you do not overallocate.
  • VM co-residency with Unified Communications and third-party applications (for example, WFM) is not supported unless it is specifically indicated in the following subsection.
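The arithmetic in these rules is easy to automate. A minimal sketch in Python, under the stated assumptions (an 8-core host with 48 GB of RAM and a 2 GB ESXi kernel allowance; the VM names and sizes are illustrative, not a recommended layout):
<pre>
# Validate a proposed co-residency layout against the overcommit rules above.
HOST_CORES = 8        # physical cores; hyperthreading cores are not counted
HOST_RAM_GB = 48      # installed memory on the host
ESXI_KERNEL_GB = 2    # minimum memory reserved for the ESXi kernel

# name: (vCPUs, RAM in GB) -- illustrative values only
vms = {
    "Rogger A": (4, 4),
    "Agent PG 1A": (2, 4),
    "UCM Publisher": (2, 6),
}

total_vcpu = sum(cpu for cpu, _ in vms.values())
total_ram = sum(ram for _, ram in vms.values())

assert total_vcpu <= HOST_CORES, "CPU overcommitted: reduce vCPUs or move a VM"
assert total_ram <= HOST_RAM_GB - ESXI_KERNEL_GB, "Memory overcommitted"
print("Layout fits:", total_vcpu, "vCPUs,", total_ram, "GB RAM")
</pre>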


The following tables show the supported Unified CCE component co-residencies. A diamond indicates that co-residency is allowed; an asterisk with a number denotes limited or conditional support. Check the Exceptions and Notes below the table for guidance on those co-residencies.

Unified CCE Component Co-residency Contact Center Tier 1 Applications Contact Center Tier 2 Applications Contact Center Tier 3 Applications Unified Communications Applications Third Party Applications
Contact Center Tier 1 Applications: Logger, Rogger, HDS (any) *1
Contact Center Tier 2 Applications: Router, Peripheral Gateway (PG), CVP Call + VXML Server, CVP Reporting Server, CUIC, CCMP, Finesse 
Contact Center Tier 3 Applications: ADS/AW (any non-HDS), Admin Client, Windows AD DC, CVP Ops/OAMP Server, CVP Media Server, SocialMiner
Unified Communications Applications: Communications Manager, Contact Center Express, IPIVR, CUP, Unity, Unity Connection, MediaSense, and other UC apps per the UC on UCS supported apps page *1 *2
Note *1: Unified CCE 9.x Contact Center Tier 1 components can be co-resident with UC Applications.
Note *2: For co-residency restrictions specific to individual Unified Communications applications, see the Application Co-residency and Virtual/Physical Sizing page.
Note: EXCEPTION to the above VM co-residency rules for the UCS C-Series:
  • An HDS (of any type) cannot co-reside with a Logger, Rogger, CVP Reporting Server, or another HDS unless those database applications are deployed on separate DAS (Direct Attached Storage) RAID arrays, so that no more than two database VMs are co-resident on UCS TRCs that come with two disk arrays. The standard RAID array for virtualization is 8 disk drives in RAID 5 or RAID 10, so two arrays require 16 drives to allow this co-residency. For example, the UCS-C210M2-VCD2 does not allow a Rogger and an HDS on the same server because it has only a single 8-disk RAID 5 array. A UCS-C260M2-VCD2 (or C240M3 TRC) has 16 drives in two 8-disk RAID 5 arrays, which allows a Rogger and an HDS to be deployed on that single C-Series server, as long as each application is installed on a separate array.


The following section shows sample CCE deployments on UCS that comply with the co-residency rules described above.


Sample Unified CCE Deployments

Notes

  1. The ESXi Servers listed in these tables can be deployed on either a B-Series or C-Series hardware platform.
  2. Although the sample deployments in these tables reflect the C-Series restriction that the HDS cannot co-reside with a Router, Logger, or PG, this restriction does not apply on a B-Series hardware platform.
  3. For deployments where Historical Data Servers (HDSs) are co-resident, two RAID 5 groups (one for each HDS) are recommended.
  4. Any deployment with more than 2,000 agents requires at least two chassis.
  5. It may be preferable to place your domain controller on bare metal rather than in the UCS B-Series chassis itself. After a power failure, the vCenter login depends on Active Directory, which creates a potential circular dependency if the domain controller is also down.
  6. ACE (for CUIC) and CUSP (for CVP) components are not supported virtualized on UCS; these components are deployed on separate hardware. Please review the product SRND for more details.
  7. The 12,000-agent deployment is supported from Release 8.5(3) onward and requires Windows Server 2008 R2 Standard/Enterprise Edition and SQL Server 2005 Enterprise Edition. Refer to the BOM for more details.
  8. For large multi-core UCS models (more than 8 cores per ESXi host), you can still use the sample deployments below (which are based on 8 cores per ESXi host) and collapse them onto the actual available cores. For example, VMs that comply with the co-residency rules on two C210M2-VCD2 TRCs can be collapsed onto a single C260M2 TRC or C240M3 TRC host. Extra cores on large multi-core UCS models may not all be utilized by the CCE on UCS solution because of storage constraints; you must still observe the rule, stated earlier, of no more than two co-resident database VMs on UCS TRCs that come with two disk arrays. This is verified in the DMS/A2Q process.

ROGGER Example 1

ROGGER 9.x on 8-core UCS TRCs (up to 450 CTIOS agents or 297 CAD agents) with 150 IPIVR or 150 CVP ports (N+N) and optional 50 CUIC reporting users, as examples. Components are listed per ESXi server as "component (vCPU, RAM in GB)".

Chassis X (B-Series) or rack of C-Series rack-mount servers:
  ESXi Server A-1: Rogger A (4, 4); Agent PG A, a generic PG with optional VRU, CTIOS/CAD, and optional MR PG (1, 2); Domain Controller A (1, 2); UCM Publisher (2, 6)
  ESXi Server A-2: AW-HDS-DDS 1 (4, 4); CUIC 1 (4, 6)
  ESXi Server A-3: UCM Subscriber 1 (2, 6); Finesse Srv 1 (4, 8); IPIVR 1 (2, 4)
  ESXi Server A-4: CVP Call+VXML+Media Srv 1 (4, 4); CVP Rpt Srv 1 (4, 4)

Chassis Y (B-Series) or rack of C-Series rack-mount servers:
  ESXi Server B-1: Rogger B (4, 4); Agent PG B, a generic PG with optional VRU, CTIOS/CAD, and optional MR PG (1, 2); Domain Controller B (1, 2); CVP Op Svr (2, 2)
  ESXi Server B-2: AW-HDS-DDS 2 (4, 4); CUIC 2 (4, 6)
  ESXi Server B-3: UCM Subscriber 2 (2, 6); Finesse Srv 2 (4, 8); IPIVR 2 (2, 4)
  ESXi Server B-4: CVP Call+VXML+Media Srv 2 (4, 4); CVP Rpt Srv 2 (4, 4)

Legend: Not shaded = Required; Shaded = Optional.

ROGGER Example 2

ROGGER 9.x on 8-core UCS TRCs (up to 2,000 CTIOS agents or up to 1,000 CAD agents) with 600 IPIVR or 900 CVP ports (N+N), optional 200 CUIC reporting users, and CCMP for 1,500 users, as examples. Components are listed per ESXi server as "component (vCPU, RAM in GB)".

Chassis X (B-Series) or rack of C-Series rack-mount servers:
  ESXi Server A-1: Rogger A (4, 4); Agent PG A, a generic PG with optional VRU, CTIOS/CAD, optional MR PG, and SIP Dialer (2, 4); Domain Controller A (1, 2); spare
  ESXi Server A-2: AW-HDS-DDS 1 (4, 4); AW-HDS-DDS 3 or Finesse Srv 1 (4, 4 or 8)
  ESXi Server A-3: UCM Subscriber 1 (2, 6); UCM Subscriber 2 (2, 6); UCM Publisher (2, 6); IPIVR 1 (2, 4)
  ESXi Server A-4: CVP Call+VXML+Media Srv 1 (4, 4); CVP Rpt Srv 1 (4, 4)
  ESXi Server A-5: CUIC 1 (4, 6); spare

Chassis Y (B-Series) or rack of C-Series rack-mount servers:
  ESXi Server B-1: Rogger B (4, 4); Agent PG B, a generic PG with optional VRU, CTIOS/CAD, optional MR PG, and SIP Dialer (2, 4); Domain Controller B (1, 2); spare
  ESXi Server B-2: AW-HDS-DDS 2 (4, 4); AW-HDS-DDS 4 or Finesse Srv 2 (4, 4 or 8)
  ESXi Server B-3: UCM Subscriber 3 (2, 6); UCM Subscriber 4 (2, 6); CVP Op Srv (2, 2); IPIVR 2 (2, 4)
  ESXi Server B-4: CVP Call+VXML+Media Srv 2 (4, 4); CVP Rpt Srv 2 (4, 4)
  ESXi Server B-5: CUIC 2 (4, 6); CCMP, all in one (4, 4)

Legend: Not shaded = Required; Shaded = Optional.

ROGGER Example 3

ROGGER 9.x on 8-core UCS TRCs (up to 4,000 CTIOS agents or up to 2,000 CAD agents) with 1,200 IPIVR or 1,800 CVP ports (N+N), optional 200 CUIC reporting users, and CCMP for 1,500 users, as examples. Components are listed per ESXi server as "component (vCPU, RAM in GB)".

Chassis X (B-Series) or rack of C-Series rack-mount servers:
  ESXi Server A-1: Rogger A (4, 4); Agent PG 1A with CTIOS/CAD, optional MR PG, and SIP Dialer (2, 4)
  ESXi Server A-2: VRU PG A (2, 2); Domain Controller A (1, 2); spare
  ESXi Server A-3: Agent PG 2A with CTIOS/CAD, optional MR PG, and SIP Dialer (2, 4); CVP Rpt Srv 1 (4, 4)
  ESXi Server A-4: AW-HDS-DDS 1 (4, 4); AW-HDS-DDS 3 or Finesse Srv 1 (4, 4 or 8)
  ESXi Server A-5: UCM Subscriber 1 (2, 6); UCM Subscriber 3 (2, 6); UCM Publisher (2, 6); IPIVR 1 (2, 4)
  ESXi Server A-6: UCM Subscriber 5 (2, 6); UCM Subscriber 7 (2, 6); IPIVR 2 (2, 4); spare (2, 4)
  ESXi Server A-7: CVP Call+VXML+Media Srv 1 (4, 4); CVP Call+VXML+Media Srv 3 (4, 4)
  ESXi Server A-8: CUIC 1 (4, 6); spare

Chassis Y (B-Series) or rack of C-Series rack-mount servers:
  ESXi Server B-1: Rogger B (4, 4); Agent PG 1B with CTIOS/CAD, optional MR PG, and SIP Dialer (2, 4)
  ESXi Server B-2: VRU PG B (2, 2); CVP Op Srv (2, 2); Domain Controller B (1, 2)
  ESXi Server B-3: Agent PG 2B with CTIOS/CAD, optional MR PG, and SIP Dialer (2, 4); CVP Rpt Srv 2 (4, 4)
  ESXi Server B-4: AW-HDS-DDS 2 (4, 4); AW-HDS-DDS 4 or Finesse Srv 2 (4, 4 or 8)
  ESXi Server B-5: UCM Subscriber 2 (2, 6); UCM Subscriber 4 (2, 6); IPIVR 3 (2, 4); spare
  ESXi Server B-6: UCM Subscriber 6 (2, 6); UCM Subscriber 8 (2, 6); IPIVR 4 (2, 4); spare (2, 4)
  ESXi Server B-7: CVP Call+VXML+Media Srv 2 (4, 4); CVP Call+VXML+Media Srv 4 (4, 4)
  ESXi Server B-8: CUIC 2 (4, 6); CCMP, all in one (4, 4)

Legend: Not shaded = Required; Shaded = Optional.

Router/Logger Example 1

Router/Logger 9.x on 8-core UCS TRCs (up to 8,000 CTIOS agents or up to 4,000 CAD agents) with 3,600 CVP ports (N+N), optional 400 CUIC reporting users, and CCMP for 8,000 users, as examples. Components are listed per ESXi server as "component (vCPU, RAM in GB)".

Chassis X (B-Series) or rack of C-Series rack-mount servers:
  ESXi Server A-1: Router A; Agent PG 1A with CTIOS/CAD, optional MR PG, and SIP Dialer (2, 4); Agent PG 3A, same options (2, 4); Domain Controller A (1, 2); spare
  ESXi Server A-2: Logger A (4, 4); Agent PG 2A, same options (2, 4); Agent PG 4A, same options (2, 4)
  ESXi Server A-3: HDS-DDS 1 (4, 4); AW-HDS 1 (4, 4)
  ESXi Server A-4: AW-HDS 3 (4, 4); AW-HDS 5 (4, 4)
  ESXi Server A-5: UCM 1 Subscriber 1 (2, 6); UCM 2 Subscriber 1 (2, 6); UCM 1 Subscriber 3 (2, 6); UCM 2 Subscriber 3 (2, 6)
  ESXi Server A-6: UCM 1 Subscriber 5 (2, 6); UCM 2 Subscriber 5 (2, 6); UCM 1 Subscriber 7 (2, 6); UCM 2 Subscriber 7 (2, 6)
  ESXi Server A-7: UCM Publisher 1 (2, 6); spare (2, 4)
  ESXi Server A-8: UCM Publisher 2 (2, 6); CVP Rpt Srv 1 (4, 4)
  ESXi Server A-9: CVP Call+VXML+Media Srv 1 (4, 4); CVP Call+VXML+Media Srv 3 (4, 4)
  ESXi Server A-10: CVP Call+VXML+Media Srv 5 (4, 4); CVP Call+VXML+Media Srv 7 (4, 4)
  ESXi Server A-11: Finesse Srv 1 (4, 8); VRU PG A (2, 2)
  ESXi Server A-12: CUIC 1 (4, 6); CUIC 3 (4, 6)
  ESXi Server A-13: CCMP DB (8, 8)

Chassis Y (B-Series) or rack of C-Series rack-mount servers:
  ESXi Server B-1: Router B; Agent PG 1B with CTIOS/CAD, optional MR PG, and SIP Dialer (2, 4); Agent PG 3B, same options (2, 4); Domain Controller B (1, 2); spare
  ESXi Server B-2: Logger B (4, 4); Agent PG 2B, same options (2, 4); Agent PG 4B, same options (2, 4)
  ESXi Server B-3: HDS-DDS 2 (4, 4); AW-HDS 2 (4, 4)
  ESXi Server B-4: AW-HDS 4 (4, 4); AW-HDS 6 (4, 4)
  ESXi Server B-5: UCM 1 Subscriber 2 (2, 6); UCM 2 Subscriber 2 (2, 6); UCM 1 Subscriber 4 (2, 6); UCM 2 Subscriber 4 (2, 6)
  ESXi Server B-6: UCM 1 Subscriber 6 (2, 6); UCM 2 Subscriber 6 (2, 6); UCM 1 Subscriber 8 (2, 6); UCM 2 Subscriber 8 (2, 6)
  ESXi Server B-7: CVP Op Srv (2, 2); spare (2, 4)
  ESXi Server B-8: CVP Rpt Srv 2 (4, 4); spare
  ESXi Server B-9: CVP Call+VXML+Media Srv 2 (4, 4); CVP Call+VXML+Media Srv 4 (4, 4)
  ESXi Server B-10: CVP Call+VXML+Media Srv 6 (4, 4); CVP Call+VXML+Media Srv 8 (4, 4)
  ESXi Server B-11: Finesse Srv 2 (4, 8); VRU PG B (2, 2)
  ESXi Server B-12: CUIC 2 (4, 6); CUIC 4 (4, 6)
  ESXi Server B-13: CCMP Web/App Svr (4, 4)

Legend: Not shaded = Required; Shaded = Optional.


Router/Logger Example 2


Router/Logger 9.x on 8-core UCS TRCs (up to 12,000 CTIOS agents) with 3,600 CVP ports (N+N), optional 400 CUIC reporting users, and CCMP for 8,000 users, as examples. Components are listed per ESXi server as "component (vCPU, RAM in GB)".

Chassis X (B-Series) servers:
  ESXi Server A-1: Logger A (4, 8); Router A (4, 8)
  ESXi Server A-2: Agent PG 1A with CTIOS/CAD, optional MR PG, and SIP Dialer (2, 4); Agent PG 3A, same options (2, 4); Agent PG 5A, same options (2, 4)
  ESXi Server A-3: Agent PG 2A, same options (2, 4); Agent PG 4A, same options (2, 4); Agent PG 6A, same options (2, 4)
  ESXi Server A-4: HDS-DDS 1 (4, 8); AW-HDS 1 (4, 8); AW-HDS 3 (4, 8); AW-HDS 5 (4, 8)
  ESXi Server A-5: UCM 1 Subscriber 1 (2, 6); UCM 2 Subscriber 1 (2, 6); UCM 3 Subscriber 1 (2, 6)
  ESXi Server A-6: UCM 1 Subscriber 3 (2, 6); UCM 2 Subscriber 3 (2, 6); UCM 3 Subscriber 3 (2, 6)
  ESXi Server A-7: UCM 1 Subscriber 5 (2, 6); UCM 2 Subscriber 5 (2, 6); UCM 3 Subscriber 5 (2, 6)
  ESXi Server A-8: UCM 1 Subscriber 7 (2, 6); UCM 2 Subscriber 7 (2, 6); UCM 3 Subscriber 7 (2, 6)
  ESXi Server A-9: Domain Controller (1, 2); UCM Publisher 1 (2, 6); Finesse Srv 1 (4, 8); Administration Server - AW (1, 2)
  ESXi Server A-10: UCM Publisher 2 (2, 6); CVP Rpt Srv 1 (4, 4)
  ESXi Server A-11: CVP Call+VXML+Media Srv 1 (4, 4); CVP Call+VXML+Media Srv 3 (4, 4)
  ESXi Server A-12: CVP Call+VXML+Media Srv 5 (4, 4); CVP Call+VXML+Media Srv 7 (4, 4)
  ESXi Server A-13: CVP Call+VXML+Media Srv 9 (4, 4); CVP Call+VXML+Media Srv 11 (4, 4)
  ESXi Server A-14: Finesse Srv 1 (4, 8); VRU PG A (2, 2)
  ESXi Server A-15: CUIC 1 (4, 6); CUIC 3 (4, 6)
  ESXi Server A-16: CUIC 5 (4, 6); CUIC 7 (4, 6)
  ESXi Server A-17: CCMP DB (8, 8)

Chassis Y (B-Series) servers:
  ESXi Server B-1: Logger B (4, 8); Router B (4, 8)
  ESXi Server B-2: Agent PG 1B with CTIOS/CAD, optional MR PG, and SIP Dialer (2, 4); Agent PG 3B, same options (2, 4); Agent PG 5B, same options (2, 4)
  ESXi Server B-3: Agent PG 2B, same options (2, 4); Agent PG 4B, same options (2, 4); Agent PG 6B, same options (2, 4)
  ESXi Server B-4: HDS-DDS 2 (4, 8); AW-HDS 2 (4, 8); AW-HDS 4 (4, 8); AW-HDS 6 (4, 8)
  ESXi Server B-5: UCM 1 Subscriber 2 (2, 6); UCM 2 Subscriber 2 (2, 6); UCM 3 Subscriber 2 (2, 6)
  ESXi Server B-6: UCM 1 Subscriber 4 (2, 6); UCM 2 Subscriber 4 (2, 6); UCM 3 Subscriber 4 (2, 6)
  ESXi Server B-7: UCM 1 Subscriber 6 (2, 6); UCM 2 Subscriber 6 (2, 6); UCM 3 Subscriber 6 (2, 6)
  ESXi Server B-8: UCM 1 Subscriber 8 (2, 6); UCM 2 Subscriber 8 (2, 6); UCM 3 Subscriber 8 (2, 6)
  ESXi Server B-9: Domain Controller (1, 2); CVP Op Srv (2, 2); Finesse Srv 2 (4, 8); Administration Server - AW (1, 2)
  ESXi Server B-10: CVP Rpt Srv 2 (4, 4); UCM Publisher 3 (2, 6)
  ESXi Server B-11: CVP Call+VXML+Media Srv 2 (4, 4); CVP Call+VXML+Media Srv 4 (4, 4)
  ESXi Server B-12: CVP Call+VXML+Media Srv 6 (4, 4); CVP Call+VXML+Media Srv 8 (4, 4)
  ESXi Server B-13: CVP Call+VXML+Media Srv 10 (4, 4); CVP Call+VXML+Media Srv 12 (4, 4)
  ESXi Server B-14: Finesse Srv 2 (4, 8); VRU PG B (2, 2)
  ESXi Server B-15: CUIC 2 (4, 6); CUIC 4 (4, 6)
  ESXi Server B-16: CUIC 6 (4, 6); CUIC 8 (4, 6)
  ESXi Server B-17: CCMP Web/App Svr (4, 4)

Legend: Not shaded = Required; Shaded = Optional.

 

 

Hybrid Deployment Options

Some Unified Contact Center deployments are supported in a "hybrid" fashion, whereby certain components must be deployed on bare-metal Media Convergence Servers (MCS) or generic servers, while other components are deployed as virtual machine guests on Unified Computing System (UCS) or MCS servers. The following subsections provide further details on these hybrid deployment options.

Cisco Unified Contact Center Hosted

  • NAM Rogger is deployed on a (bare-metal) quad CPU server as specified in the appropriate Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted.
  • Each customer instance central controller (CICM) connecting to the NAM may be deployed in its own virtual machine as a Rogger or separate Router/Logger pair on UCS hardware. Multiple CICM instances are not supported collocated in one VM. Existing published rules and capacities apply to CICM Rogger and Router/Logger VMs. (Note: CICMs are not supported on bare-metal UCS.)
  • As in Enterprise deployments, each Agent PG is deployed in its own virtual machine. Multi-instance Agent PGs are not supported in a single VM. Existing published rules and capacities apply to PGs in Hosted deployments.

Parent/Child Deployments

  • The parent ICM is deployed on (bare-metal) servers as specified in the appropriate Hardware and System Software Specification (Bill of Materials) for Cisco Unified ICM/Contact Center Enterprise and Hosted.
  • The Unified Contact Center Enterprise (or Express) child may be deployed virtualized according to existing published VM requirements.
  • The Unified Contact Center Enterprise Gateway PG and System PG are each deployed in their own virtual machine; agent capacity (and the resources allocated to the VM) are the same as for the Unified CCE Agent PG at 2,000-agent capacity. Use the same virtual machine OVA template to create the CCE Gateway or System PG VM.

Cisco Unified CCE-Specific Information for OVA Templates

See the following websites for more information:


Creating Virtual Machines by Deploying the OVA Templates

In the vSphere client, perform the following steps to deploy the virtual machines (a scripted alternative follows the steps).

  1. Highlight the host or cluster to which you wish the VM to be deployed.
  2. Select File > Deploy OVF Template.
  3. Click the Deploy from File radio button and specify the name and location of the file that you downloaded in the previous section, or click the Deploy from URL radio button and specify the complete URL in the field. Then click Next.
  4. Verify the details of the template, and click Next.
  5. Give the VM you are about to create a name, and choose an inventory location on your host, then click Next.
  6. Choose the datastore on which you want the VM to reside. Be sure there is sufficient free space to accommodate the new VM. Then click Next.
  7. Choose a virtual network for the VM, then click Next.
  8. Verify the deployment settings, then click Finish.
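
If you prefer to script deployments, you can achieve the same result with VMware's ovftool command-line utility instead of the vSphere client. This hedged Python sketch wraps ovftool; the VM name, datastore, network, OVA file name, and host are placeholders, so verify the option names against the ovftool documentation for your version:

  import subprocess

  cmd = [
      "ovftool",
      "--name=UCCE-Rogger-A",             # VM name (step 5 above)
      "--datastore=datastore-raid5",      # target datastore (step 6)
      "--network=VM Network",             # virtual network mapping (step 7)
      "UCCE_Rogger.ova",                  # the downloaded OVA template file
      "vi://administrator@esxi-host-a1",  # target ESXi host; prompts for a password
  ]
  subprocess.run(cmd, check=True)         # raises CalledProcessError on failure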

Notes

  • VM CPU affinity is not supported. You may not set CPU affinity for Unified CCE application VMs on vSphere.
  • VM Resource Reservation: VM resource reservation is not supported for Unified CCE application VMs on vSphere releases prior to 9.0; keep the default setting, which is no resource reservations.
  • Starting with Unified CCE 9.0(1), VM resource reservation is supported, and the computing resources have default reservation settings when deployed from the OVA for CCE 9.0.
  • You must not change the computing resource configuration of your VM at any time (a verification sketch follows these notes).
  • You must never go below the minimum VM computing resource requirements as defined in the OVA templates.
  • ESXi Server hyperthreading is enabled by default.
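
Because the computing resource configuration must never change, it is worth auditing deployed VMs against the OVA-defined values from time to time. The following Python sketch uses the open-source pyVmomi library (an assumption: pyVmomi is installed, and the vCenter host, credentials, and expected figures shown are placeholders) to report any VM whose vCPU or memory settings have drifted:

  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  # Expected values per the OVA template: name -> (vCPU count, memory in MB).
  expected = {"UCCE-Rogger-A": (4, 4096)}

  ctx = ssl._create_unverified_context()  # lab shortcut; use real certificates in production
  si = SmartConnect(host="vcenter.example.com", user="administrator",
                    pwd="password", sslContext=ctx)
  content = si.RetrieveContent()
  view = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.VirtualMachine], True)
  for vm in view.view:
      want = expected.get(vm.name)
      have = (vm.config.hardware.numCPU, vm.config.hardware.memoryMB)
      if want and have != want:
          print(f"{vm.name}: configured {have}, OVA template specifies {want}")
  Disconnect(si)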


Preparing for Windows Installation

In the vSphere client, perform the following steps to prepare for operating system installation.

  1. Right-click the virtual machine that you want to edit and select Edit Settings. The Virtual Machine Properties dialog appears.
  2. On the Hardware tab, select CD/DVD Drive 1. Under Device Type, select Datastore ISO File and enter the location of the operating system ISO.
  3. Click OK to save setting changes.
  4. Power up your VM and continue with operating system installation.


Remote Control of the Virtual Machines

For administrative tasks, you can use either Windows Remote Desktop or the VMware Infrastructure Client for remote control. The contact center supervisor can access the ClientAW VM using Windows Remote Desktop.

Installing VMware Tools

VMware Tools must be installed on each VM, and all of the VMware Tools default settings should be used. Refer to the VMware documentation for instructions on installing or upgrading VMware Tools on a VM with a Windows operating system.

Installing Unified CCE Components on Virtual Machines

Install the Unified CCE components after you create and configure the virtual machine. Installation of the Unified CCE components on a virtual machine is the same as the installation of the components on physical hardware.

Refer to the Unified CCE documentation for the steps to install Unified CCE components. You can install the supported virus scan software, the Cisco Security Agent (CSA), or any other software in the same way as on physical hardware.

Migrating Unified CCE Components to Virtual Machines

Migrate the Unified CCE components from physical hardware or another virtual machine after you create and configure the virtual machine. Migration of these Unified CCE software components to a VM is the same as the migration of the components to new physical hardware and follows existing policies. It requires a Tech Refresh as described in the Upgrade Guide for Cisco Unified ICM/Contact Center Enterprise & Hosted.

Configuring the ESXi Data Store Block Size for Administration and Data Server

This section applies to storing virtual machines on C210 local storage. The C210 server comes with default local storage configured as two RAID groups: disks 1-2 form a RAID 1 group, and the remaining disks (3-10) form a RAID 5 group.

Creating the virtual machine for the Unified CCE Administration and Data Server requires a large virtual disk. Before you deploy the OVAs for the following Unified CCE components, you must follow the steps described below to configure the ESXi data store block size to 2 MB (a sketch after the steps shows why the block size matters) so that the data store can handle the Administration and Data Server virtual disk size requirement:

  • AW-HDS
  • AW-HDS-DDS
  • HDS-DDS


Steps to configure the ESXi data store block size to 2MB:

  1. After you install ESXi on the first disk array group (RAID 1, disks 1 and 2), boot ESXi and use the VMware vSphere Client to connect to the ESXi host.
  2. On the Configuration tab for the host, select Storage in the Hardware box. Select the second disk array group (the RAID 5 group); the Datastore Details pane shows that the block size is 1 MB by default.
  3. Right-click this data store and delete it. You will add the data store back in the following steps.
  4. Click Add Storage… and select Disk/LUN.
  5. The data store that you just deleted is now available to add; select it.
  6. In the configuration for this data store, you can now select the block size; select 2 MB and finish adding the storage to the ESXi host. This storage is now available for deployment of virtual machines that require a large disk size, such as the Administration and Data Servers.
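
The block size matters because, on VMFS-3, it caps the maximum size of a single virtual disk (VMDK) on the datastore: roughly 256 GB at 1 MB, 512 GB at 2 MB, 1 TB at 4 MB, and 2 TB at 8 MB. A small Python sketch (illustrative only; the 500 GB disk figure is an example, not a published Administration and Data Server requirement) makes the check explicit:

  # VMFS-3 block size (MB) -> approximate maximum single-VMDK size (GB).
  VMFS3_MAX_VMDK_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

  def block_size_ok(block_size_mb: int, required_vmdk_gb: int) -> bool:
      """True if one VMDK of the required size fits on the datastore."""
      return VMFS3_MAX_VMDK_GB[block_size_mb] >= required_vmdk_gb

  print(block_size_ok(1, 500))  # False: too big for the default 1 MB datastore
  print(block_size_ok(2, 500))  # True: fits after reformatting with 2 MB blocks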

Timekeeping Best Practices for Windows

You should follow the best practices outlined in the VMware Knowledge Base article VMware KB: Timekeeping best practices for Windows.

  • ESXi hosts and domain controllers should synchronize the time from the same NTP source.
  • When Unified CCE virtual machines join the domain, they synchronize the time with the domain controller automatically using w32time.
  • Be sure that the "Time synchronization between the virtual machine and the host operating system" option in the VMware Tools toolbox GUI of the Windows Server 2003 guest operating system remains deselected; this checkbox is deselected by default. (A quick check follows this list.)
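
To confirm that a domain-joined guest really is taking time from the domain hierarchy, you can query the Windows Time service. This hedged Python sketch shells out to the built-in w32tm utility (the /query switch is available on Windows Server 2008 and later guests):

  import subprocess

  # 'Source:' in the output should name a domain controller, not VMware Tools.
  status = subprocess.run(["w32tm", "/query", "/status"],
                          capture_output=True, text=True, check=True)
  print(status.stdout)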

System Performance Monitoring Using ESXi Counters

  • Make sure that you follow VMware's ESXi best practices and SAN vendor's best practices for optimal system performance. 
  • VMware provides a set of system monitoring tools for the ESXi platform and the VMs. These tools are accessible through the VMware Infrastructure Client or through VirtualCenter.
  • You can use Windows Performance Monitor to monitor the performance of the VMs. Be aware that the CPU counters may not reflect the physical CPU usage since the Windows Operating System has no direct access to the physical CPU.
  • You can use Unified CCE Serviceability Tools and Unified CCE reports to monitor the operation and performance of the Unified CCE system.
  • The ESXi server and the virtual machines must operate within the limits of the following ESXi performance counters.


You can use the following ESXi counters as performance indicators.

CPU counters:
  • CPU Usage (Average), in percent, for the ESXi server and for each VM: average CPU usage. Threshold: less than 60%.
  • CPU Usage 0-7 (Average), in percent, for ESXi server processors 0 to 7 and for each VM vCPU: average per-processor CPU usage. Threshold: less than 60%.
  • CPU Ready, in milliseconds, for each VM: the time a virtual machine or other process waits in the queue in a ready-to-run state before it can be scheduled on a CPU. Threshold: less than 150 ms; if it exceeds 150 ms during a system failure, investigate and understand why the machine is so busy.

Memory counters:
  • Memory Usage (Average), in percent, for the ESXi server and for each VM: Memory Usage = Active / Granted * 100. Threshold: less than 80%.
  • Memory Active (Average), in KB, for the ESXi server and for each VM: memory that is actively used or referenced by the guest OS and its applications; when it exceeds the amount of memory on the host, the server starts to swap. Threshold: less than 80% of granted memory.
  • Memory Balloon (Average), in KB, for the ESXi server and for each VM: ESXi uses the balloon driver to recover memory from less memory-intensive VMs so that it can be used by VMs with larger active sets of memory; ESXi performs memory ballooning before memory swapping. Because memory is not overcommitted, this should be 0 or very low.
  • Memory Swap Used (Average), in KB, for the ESXi server and for each VM: ESXi server swap usage (disk used for RAM swap). Because memory is not overcommitted, this should be 0 or very low.

Disk counters:
  • Disk Usage (Average), in KBps, for the ESXi server and for each VM: Disk Usage = Disk Read Rate + Disk Write Rate. Ensure that your SAN is configured to handle this amount of disk I/O.
  • Disk Read Rate, in KBps, per ESXi server or VM vmhba ID: rate of reading data from the disk. Ensure that your SAN is configured to handle this amount of disk I/O.
  • Disk Write Rate, in KBps, per ESXi server or VM vmhba ID: rate of writing data to the disk. Ensure that your SAN is configured to handle this amount of disk I/O.
  • Disk Commands Issued, a count, per ESXi server or VM vmhba ID: number of disk commands issued on this disk in the period. Ensure that your SAN is configured to handle this amount of disk I/O.
  • Disk Command Aborts, a count, per ESXi server or VM vmhba ID: number of disk commands aborted on this disk in the period; a disk command aborts when the disk array takes too long to respond (command timeout). This counter should be zero; a non-zero value indicates a storage performance issue.
  • Disk Command Latency, in milliseconds, per ESXi server or VM vmhba ID: the average time a command takes from the perspective of the guest OS; Disk Command Latency = Kernel Command Latency + Physical Device Command Latency. Latencies of 15 ms or greater indicate a possibly overutilized, misbehaving, or misconfigured disk array.
  • Kernel Disk Command Latency, in milliseconds, per ESXi server or VM vmhba ID: the average time spent in the ESXi VMkernel per command. This should be very small in comparison to the Physical Device Command Latency, close to zero; it can be high, even higher than the Physical Device Command Latency, when there is a lot of queuing in the ESXi kernel.

Network counters:
  • Network Usage (Average), in KBps, for the ESXi server and for each VM: Network Usage = Data Receive Rate + Data Transmit Rate. Threshold: less than 30% of the available network bandwidth; for example, less than 300 Mbps on a 1 Gbps network.
  • Network Data Receive Rate, in KBps, per ESXi server or VM vmnic ID: the average rate at which data is received on this Ethernet port. Threshold: less than 30% of the available network bandwidth; for example, less than 300 Mbps on a 1 Gbps network.
  • Network Data Transmit Rate, in KBps, per ESXi server or VM vmnic ID: the average rate at which data is transmitted on this Ethernet port. Threshold: less than 30% of the available network bandwidth; for example, less than 300 Mbps on a 1 Gbps network.
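
If you export these counters (for example, from the vSphere client performance charts or the esxtop utility), a short script can flag values that cross the thresholds above. This Python sketch is illustrative only; the counter names and sample values are invented for the example:

  # Threshold predicates taken from the counter list above.
  THRESHOLDS = {
      "cpu.usage.percent":       lambda v: v < 60,
      "cpu.ready.msec":          lambda v: v < 150,
      "mem.usage.percent":       lambda v: v < 80,
      "mem.balloon.kb":          lambda v: v == 0,
      "mem.swapused.kb":         lambda v: v == 0,
      "disk.command.aborts":     lambda v: v == 0,
      "disk.command.latency.ms": lambda v: v < 15,
  }

  samples = {"cpu.usage.percent": 45, "cpu.ready.msec": 180,
             "mem.balloon.kb": 0, "disk.command.latency.ms": 22}

  for name, value in samples.items():
      if not THRESHOLDS[name](value):
          print(f"{name} = {value} is outside the recommended threshold")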

System Performance Monitoring Using Windows Perfmon Counters

You must comply with the best practices described in the System Performance Monitoring section of the Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND), and in Chapter 8, Performance Counters, of the CCE 8.x Serviceability Best Practices Guide for Unified ICM/Contact Center Enterprise or the CCE 9.x Serviceability Best Practices Guide for Cisco Unified ICM/Unified CCE & Unified CCH Release 9.0.
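
On the Windows side, the built-in typeperf command can sample Perfmon counters from the command line or a script. The counter paths below are standard Windows counters shown as a hedged example; add the ICM/CCE-specific counters named in the Serviceability guide for your release:

  import subprocess

  counters = [
      r"\Processor(_Total)\% Processor Time",
      r"\Memory\Available MBytes",
      r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
  ]
  # Collect 10 samples at the default 1-second interval.
  subprocess.run(["typeperf", *counters, "-sc", "10"], check=True)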



Back to: Unified Communications in a Virtualized Environment
