Unified Contact Center Enterprise


Updates to this Page

The following is a list of significant updates to this page:

  • October 1, 2010: This page, UCCE on UCS Deployment Certification Requirements and Ordering Information, and UCS Network Configuration for UCCE updated to reflect support of UCS C-Series hardware on UCCE.
  • September 28, 2010: CVP added to the list of supported components/deployments in Unified CCE 8.0(2) Support for Virtualization on the ESXi/UCS Platform.
  • September 21, 2010: IPIVR added to the list of supported components/deployments in Unified CCE 8.0(2) Support for Virtualization on the ESXi/UCS Platform, and as an optional component in the sample deployment tables at Sample CCE Deployments.
  • August 17, 2010: Guidelines for deploying UCCE applications on UCS B-Series hardware with SAN are now posted at Notes for Deploying UCCE Applications on UCS B-Series Hardware with SAN. Details of support for the Unified Communications Manager (UCM) Clustering Over the WAN deployment model with Unified CCE on UCS B-Series hardware are now posted at Support for UCM Clustering Over the WAN with UCCE on UCS B-series Hardware.
  • August 3, 2010: The procedure for installing or migrating Unified CCE components on Virtual Machines is now listed at Steps for Installing/Migrating Unified CCE Components on Virtual Machines. Sample CCE deployments for UCS B-Series are now documented at Sample CCE Deployments for UCS B-Series.
  • July 20, 2010: Virtualization of Outbound Option with the SIP Dialer on UCS B-Series hardware is now supported, under the conditions stated in Unified CCE 8.0(2) Support for Virtualization on the ESXi/UCS Platform.

Information for Partners about UCCE on UCS Deployment Certification and Ordering

Partners who plan to sell Unified Contact Center Enterprise on UCS hardware must read the DocWiki page UCCE on UCS Deployment Certification Requirements and Ordering Information.

This page contains essential information for partners about the following:

  • Partner Certification Requirements
  • UCS Server Ordering Information
  • Important Notes on Cisco UCS Service and Support

Unified CCE 8.0(2) Support for Virtualization on the ESXi/UCS Platform

Starting with Release 8.0(2), virtualization of the following deployments and Unified CCE components on Cisco Unified Computing Systems (UCS) B-Series and C-Series hardware is supported:

  • Router
  • Logger
  • Agent PG
  • MR PG
  • VRU PG
  • Administration and Data Server with one of the following roles:
    • Administration Server and Real-time Data Server (AW)
    • Configuration-only Administration Server (AW-CONFIG)
    • Administration Server and Real-time and Historical Data Server (AW-HDS)
    • Administration Server, Real-time and Historical Data Server, and Detail Data Server (AW-HDS-DDS)
    • Historical Data Server and Detail Data Server (HDS-DDS)
  • Administration Client
  • Outbound Option with SIP Dialer (co-locate the SIP Dialer and MR PG with the Agent PG in the same VM guest; a Generic PG can also be co-located with the Agent PG in the same VM guest; the published agent capacity formula with Outbound Option applies)
  • Support Tools
  • Rogger (a Router and a Logger in the same VM)
  • The Unified Communications Manager (UCM) Clustering Over the WAN deployment model with Unified CCE on UCS B-series hardware is supported; see the section Support for UCM Clustering Over the WAN with UCCE on UCS B-series Hardware for important information.
  • IPIVR is supported with CCE on the UCS B-Series solution and on UCS C-Series Reference Configuration 2 (UCS-C210M1-VCD2) only. Refer to the IPIVR product-specific pages for details.
  • CVP is supported with CCE on the UCS B-Series solution. Refer to the CVP product-specific pages for details.
  • For UCS C210 M1, UCCE supports only Reference Configuration 2 (UCS-C210M1-VCD2) with a specific HDS Virtual Machine (VM) co-residence/population rule. See Sample CCE Deployments.

The following deployments and Unified CCE components have not been qualified and are not supported in virtualization:

  • Progger (a Router, a Logger, and a Peripheral Gateway); this all-in-one deployment configuration is not scalable in a virtualization environment. Instead, use the Rogger or Router/Logger VM deployment configuration.
  • Unified CCH
  • Unified ICME
  • Unified ICMH
  • Outbound Option with SCCP Dialer
  • WebView Server
  • Cisco Agent Desktop (CAD) Server/services
  • Cisco Unified Intelligence Center (CUIC)
  • Cisco E-mail Interaction Manager (EIM)/Web Interaction Manager (WIM)
  • Contact Center Management Portal (CCMP)
  • Expert Advisor
  • Remote Silent Monitoring (RSM)
  • Cisco Unified CRM Connector


The following VMware features are not supported with Unified CCE:

  • VMware Physical to Virtual migration
  • VMware snapshots
  • VMware Consolidated Backup
  • VMware High Availability (HA)
  • VMware Site Recovery Manager
  • VMware vCenter Update Manager
  • VMware vCenter Converter


ICM 8.0(2) MR

ICM 8.0(2) contains a fix that modifies the following Windows Server 2003 registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

"ArpCacheMinReferencedLife"=dword:0xffffffff

This setting resolves an issue in the Windows Server 2003 TCP/IP stack implementation of ARP that introduced unacceptable network latency.
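
If you want to confirm that the fix is present on a given Windows guest (for example, on a Router, Logger, or PG VM), reading the registry value is enough. The following Python sketch is purely illustrative and assumes Python 3 is available on the guest; it only reads the value and is not a Cisco-provided tool.

  import winreg

  KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
  VALUE_NAME = "ArpCacheMinReferencedLife"

  # Open the TCP/IP parameters key read-only and report the ARP cache setting.
  with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
      try:
          value, value_type = winreg.QueryValueEx(key, VALUE_NAME)
          print("%s = %#x (registry type %d)" % (VALUE_NAME, value, value_type))
          if value == 0xFFFFFFFF:
              print("The ICM 8.0(2) ARP cache fix is in place.")
          else:
              print("Value differs from the expected 0xffffffff.")
      except FileNotFoundError:
          print("%s is not set; the ICM 8.0(2) fix has not been applied." % VALUE_NAME)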

UCS Hardware Requirements

Requirements for Cisco Unified Contact Center Enterprise systems using UCS B200 M1 hardware are located at http://docwiki.cisco.com/wiki/Unified_Computing_System_Hardware#B-200_M1_Requirements.

Requirements for Cisco Unified Contact Center Enterprise systems using UCS C210 M1 hardware are located at http://docwiki.cisco.com/wiki/Unified_Computing_System_Hardware#C-210_M1_Requirements.

VMware and Application Software Requirements

The following software requirements apply specifically to Unified Contact Center Enterprise:

Unified CCE Component Capacities and VM Configuration Requirements

This table shows the supported Unified CCE components, their capacities, and the VM computing resource requirements.   You must use the OVA virtual machine templates to create the Unified CCE component VMs.

Unified CCE Component | Capacity | vCPU | RAM (GB) | vDisk (GB) | vNIC | Template Name
Router | 8,000 agents | 2 | 4 | 80 | 2 | UCCE_router_8000_v1.0_vmv7.ova
Logger | 8,000 agents | 4 | 4 | 150 | 2 | UCCE_logger_8000_v1.0_vmv7.ova
Agent PG | 2,000 agents | 2 | 4 | 80 | 2 | UCCE_agtpg_2000_v1.0_vmv7.ova
Agent PG | 450 agents | 1 | 2 | 80 | 2 | UCCE_agtpg_450_v1.0_vmv7.ova
MR PG | 2,000 agents, 10 PIMs | 2 | 4 | 80 | 2 | UCCE_agtpg_2000_v1.0_vmv7.ova
MR PG | 1,000 agents, 5 PIMs | 1 | 2 | 80 | 2 | UCCE_agtpg_450_v1.0_vmv7.ova
VRU PG | 9,600 ports, 10 PIMs | 2 | 2 | 80 | 2 | UCCE_vrupg_9600_v1.0_vmv7.ova
VRU PG | 1,200 ports, 4 PIMs | 1 | 2 | 80 | 2 | UCCE_vrupg_1200_v1.0_vmv7.ova
Administration Server - AW | 25 clients | 1 | 2 | 40 | 1 | UCCE_aw_v1.0_vmv7.ova
AW-CONFIG | 50 clients | 1 | 2 | 40 | 1 | UCCE_aw_config_v1.0_vmv7.ova
AW-HDS | 200 reporters | 4 | 4 | 500 | 1 | UCCE_aw_hds_v1.0_vmv7.ova
AW-HDS-DDS | 200 reporters | 4 | 4 | 500 | 1 | UCCE_aw_hds_dds_v1.0_vmv7.ova
HDS-DDS | 200 reporters | 4 | 4 | 500 | 1 | UCCE_hds_dds_v1.0_vmv7.ova
Administration Client (Client AW) | 1 user | 1 | 2 | 40 | 1 | UCCE_clientaw_v1.0_vmv7.ova
Support Tools | n/a | 1 | 2 | 40 | 1 | UCCE_support_tools_v1.0_vmv7.ova
Rogger | 4,000 agents | 4 | 4 | 150 | 2 | UCCE_logger_8000_v1.0_vmv7.ova

NOTE: The Logger template is also the correct template for creating the Rogger component VM.

UCS Network Configuration

Support for UCM Clustering Over the WAN with UCCE on UCS Hardware

You can deploy the Unified Communications Manager (UCM) Clustering Over the WAN deployment model with Unified CCE on UCS hardware.

When you implement this deployment model, be sure to follow the best practices outlined in the section "IPT: Clustering Over the WAN" in the Cisco Unified Contact Center Enterprise Solution Reference Network Design (SRND).

In addition, note the following expectations for UCS hardware points of failure:

  • In communication-path single point of failure testing performed by Cisco on the UCCE UCS B-Series High Availability (HA) deployment, system call handling was observed to be degraded for up to 45 seconds while the system recovered from the fault, depending upon the subsystem faulted. Single points of failure will not cause the built-in ICM software failover to occur. Single points of failure include, but are not limited to, a single fabric interconnect failure, a single fabric extender failure, and single link failures.
  • Multiple points of failure on the UCCE UCS HA deployment can cause catastrophic failure, such as ICM software failovers and interruption of service. If multiple points of failure occur, replace the failed redundant components and links immediately.

B-Series Considerations

When deploying Clustering Over the WAN with B-Series hardware, use of the Cisco UCS M81KR Virtual Interface Card is mandatory.

New B-Series deployments using Clustering Over the WAN must use a Nexus 7000 Series / Nexus 5000 Series vPC infrastructure, or a Cisco Catalyst 6500 Series Virtual Switching Supervisor Engine 720-10G.

C-Series Considerations

When deploying Clustering Over the WAN with C-Series hardware, do not trunk the public and private networks. You must use separate physical interfaces on the C-Series servers to create the public and private connections. See the configuration guidelines in Network Requirements for C-210 M1 Servers.

Notes for Deploying UCCE Applications on UCS B-Series Hardware with SAN

In a Storage Area Network (SAN) architecture, storage consists of arrays of disks configured as Redundant Arrays of Independent Disks (RAID). A Logical Unit Number (LUN), which represents a device identifier, can be created on a RAID array. A LUN can occupy all or part of a single RAID array, or span multiple RAID arrays.

In a virtualized environment, datastores are created on LUNs. Virtual Machines (VMs) are installed on the SAN datastore.

Keep the following considerations in mind when deploying UCCE applications on UCS B-series hardware with SAN.

  • Each Historical Data Server (HDS) requires a dedicated LUN and a datastore with a 2 MB block size. No other application can reside on the same datastore as the HDS. The HDS requires a 2 MB block size to accommodate the 500 GB OVA disk size, which exceeds the 256 GB maximum file size supported by the default 1 MB block size for datastores. The HDS block size is configured in VMware at datastore creation. (A short arithmetic sketch of this rule and the related sizing rules follows this list.)
  • To help keep your system running most efficiently, schedule automatic database purging to run when your system is least busy.
  • Booting ESXi from SAN is not supported; ESXi must be installed on internal storage.
  • For UCS B-Series, Virtual Machines must be stored on and booted from the SAN.
  • The SAN design and configuration must meet the following VMware ESXi disk performance guidelines:
    • Disk Command Latency – It should be 15 mSec or less. Latencies of 15 mSec or greater indicate a possible over-utilized, misbehaving, or mis-configured disk array.
    • Kernel Disk Command Latency – It should be very small in comparison to the Physical Device Command Latency, and it should be close to zero. A high Kernel Command Latency indicates that there is a lot of queuing in the ESXi kernel.
  • The SAN design and configuration must meet the following Windows performance counter thresholds on UCCE VMs:
    • AverageDiskQueueLength must remain less than 1.5 * (the total number of disks in the array).
    • %Disk Time must remain less than 60%.
  • The total size of all Virtual Machines on a disk (total size = VM disk + RAM copy) must not exceed 90% of the capacity of a datastore.
  • Any given SAN array must be designed to have an IOPS capacity exceeding the sum of the IOPS required for all resident UC applications. Unified CCE applications should be designed for the 95th percentile IOPS values published in this wiki. For other UC applications, follow their respective IOPS requirements and guidelines.
  • IOPS utilization should be monitored for each application to ensure that the aggregate IOPS does not exceed the capacity of the array. Prolonged buffering of IOPS against an array may result in degraded system performance and delayed reporting data availability.
  • Unified CCE requires application storage to be on VMFS; Raw Device Mapping (RDM) is not supported.
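
The block size, disk queue, and datastore capacity rules above are simple arithmetic, so they are easy to check before you carve up the SAN. The Python sketch below is only an illustration of those rules; the VMFS-3 block size limits (256 GB of maximum file size per 1 MB of block size) are assumed from VMware documentation, and the example numbers are hypothetical.

  # Sizing checks for a planned HDS datastore, following the guidelines above.
  # VMFS-3 maximum file size by block size (MB -> GB); assumed from VMware documentation.
  VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

  def min_block_size_mb(vdisk_gb):
      """Smallest VMFS-3 block size whose maximum file size fits the virtual disk."""
      for block_mb in sorted(VMFS3_MAX_FILE_GB):
          if vdisk_gb <= VMFS3_MAX_FILE_GB[block_mb]:
              return block_mb
      raise ValueError("virtual disk larger than any VMFS-3 block size supports")

  def datastore_ok(datastore_gb, vm_sizes_gb):
      """90% rule: total of (VM disk + RAM copy) must not exceed 90% of the datastore."""
      return sum(vm_sizes_gb) <= 0.90 * datastore_gb

  def disk_queue_ok(avg_disk_queue_length, disks_in_array):
      """AverageDiskQueueLength must remain less than 1.5 * disks in the array."""
      return avg_disk_queue_length < 1.5 * disks_in_array

  print(min_block_size_mb(500))        # the 500 GB HDS OVA needs a 2 MB block size
  print(datastore_ok(600, [500 + 4]))  # 500 GB vDisk + 4 GB RAM copy on a 600 GB datastore
  print(disk_queue_ok(10, 8))          # observed queue length 10 on an 8-disk array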


See below for an example of a SAN configuration for a 2000-agent Rogger deployment. This example corresponds to the 2000 agent Sample CCE Deployment for UCS B-Series described in: http://docwiki.cisco.com/wiki/Unified_Contact_Center_Enterprise#Unified_CCE_Component_Co-Residency_and_Sample_Deployments.

Example of SAN Configuration for UCCE ROGGER Deployment up to 2000 Agents

The following SAN configuration was a tested design, though it is generalized here for illustration. It is not the only possible way to provision SAN arrays, LUNs, and datastores for UC applications. However, you must adhere to the guidance given earlier in this section.

Rogger Side A

RAID Group | LUN | VMware Datastore | ESXi Server | Virtual Machines
RAID Array A1 | LUN-A1 | DataStore-A1 | B200-A2 | AW-HDS-DDS 1
RAID Array A1 | LUN-A2 | DataStore-A2 | B200-A2 | AW-HDS-DDS 3
RAID Array A2 | LUN-A3 | DataStore-A3 | B200-A1 | Rogger A, Agent PG A, Domain Controller A, Support Tools
RAID Array A2 | LUN-A4 | DataStore-A4 | B200-A3 | UCM Publisher, UCM Sub 1, UCM Sub 3

Rogger Side B

RAID Group | LUN | VMware Datastore | ESXi Server | Virtual Machines
RAID Array B1 | LUN-B1 | DataStore-B1 | B200-B2 | AW-HDS-DDS 2
RAID Array B1 | LUN-B2 | DataStore-B2 | B200-B2 | AW-HDS-DDS 4
RAID Array B2 | LUN-B3 | DataStore-B3 | B200-B1 | Rogger B, Agent PG B, Domain Controller B
RAID Array B2 | LUN-B4 | DataStore-B4 | B200-B3 | UCM Sub 2, UCM Sub 4, CUP Server 1

Steps for Installing/Migrating Unified CCE Components on Virtual Machines

Follow the steps and references below to install or migrate the Unified CCE components on Virtual Machines.

  1. Install, set up, and configure the UCS hardware.
  2. Configure the UCS network. See reference at UCS Network Configuration.
  3. Install and boot VMware ESXi. See the Cisco UCS B-Series Blade Servers VMware Installation Guide or the Cisco UCS C-Series Servers VMware Installation Guide. On C-Series servers, you also need to configure the ESXi datastore block size for the Administration & Data Server. See Configuring the ESXi Data Store Block Size for Administration and Data Server for instructions.
  4. Create the Unified CCE Virtual Machines from the OVA templates. See reference at Creating Virtual Machines from OVA VM Templates.
  5. Install the Windows OS and SQL Server (for Logger and HDS components) on the created Virtual Machines. (Note: Microsoft Windows Server 2003 Standard Edition and Microsoft SQL Server 2005 Standard Edition should be used for virtual machine guests. See related information in the links below.)
  6. Install or migrate the Unified CCE software components on the configured Virtual Machines. See install reference at Installing Unified CCE Components on Virtual Machines. See migration reference at Migrating Unified CCE Components to Virtual Machines.

Unified CCE Component Co-Residency and Sample Deployments

You can have one or more Unified CCE VMs co-resident on the same ESXi server. However, you must follow the rules described below:

  • You can run any number and combination of co-resident Unified CCE virtual machines on an ESXi server as long as the sum of all the virtual machine CPU and memory resource allocations does not overcommit the available ESXi server computing resources.
  • You must not overcommit CPU on an ESXi server that is running Unified CCE real-time application components. The total number of vCPUs among all the virtual machines on an ESXi host must not be greater than the total number of CPUs available on the ESXi server. In the case of the Cisco UCS B-200 M1 and C-210 M1, the total number of CPUs available is 8.
  • You must not overcommit memory on an ESXi host that is running UC real-time applications. You must allocate a minimum of 2 GB of memory for the ESXi kernel. For example, if an ESXi server on B-200 M1 hardware has 36 GB of memory, after you allocate 2 GB for the ESXi kernel, you have 34 GB available for the virtual machines. The total memory allocated for all the virtual machines on an ESXi server must not be greater than 34 GB in this case. (A quick overcommit check is sketched after this list.)
  • VM co-residency with Unified Communications and third-party applications not covered in the following examples is not supported.
  • On a C-Series server, the HDS cannot co-reside with a Router, Logger, or a PG.
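
As a quick illustration of the CPU and memory rules above, the following Python sketch totals the vCPU and RAM figures from the OVA table earlier on this page for a hypothetical host layout and compares them against an 8-core server with 2 GB reserved for the ESXi kernel. Substitute your own planned layout; this is not a Cisco sizing tool.

  # Hypothetical layout for one ESXi host: (VM name, vCPU, RAM in GB) from the OVA table.
  planned_vms = [
      ("Rogger A", 4, 4),
      ("Agent PG A", 2, 4),
      ("Support Tools", 1, 2),
  ]

  HOST_CPUS = 8           # total cores on a B-200 M1 or C-210 M1
  HOST_RAM_GB = 36        # example host memory
  ESXI_KERNEL_RAM_GB = 2  # reserved for the ESXi kernel

  total_vcpu = sum(vcpu for _, vcpu, _ in planned_vms)
  total_ram = sum(ram for _, _, ram in planned_vms)
  ram_budget = HOST_RAM_GB - ESXI_KERNEL_RAM_GB

  print("vCPU: %d of %d -> %s" % (total_vcpu, HOST_CPUS,
        "OK" if total_vcpu <= HOST_CPUS else "CPU overcommitted"))
  print("RAM : %d GB of %d GB -> %s" % (total_ram, ram_budget,
        "OK" if total_ram <= ram_budget else "memory overcommitted"))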

Sample CCE Deployments

Legend

Grey denotes a solution component that is optional, meaning that not all customers will choose that option in their deployment.

Notes

The ESXi Servers listed in these tables can be deployed on either a B-Series or C-Series hardware platform.

Although the sample deployments in these tables reflect the C-Series restriction that the HDS cannot co-reside with a Router, Logger, or a PG, this restriction is not present on a B-Series hardware platform.

For deployments where Historical Data Servers (HDSs) are co-resident, two RAID 5 groups (one for each HDS) are recommended.

ROGGER (up to 2000 Agents)

[Image: Sample2000Agents.jpg]

ROGGER (up to 4000 Agents)

[Image: Sample4000Agents.jpg]

Router/Logger (up to 8000 Agents)

[Image: Sample8000Agents.jpg]

Creating Virtual Machines from OVA VM Templates

Open Virtualization Format (OVF) is an open standard for packaging and distributing virtual appliances.  Files in this format have an extension of .ova. The naming convention for the template is PRODUCT_COMPONENT_USER COUNT_VERSION_VMVER.ova

Download the OVA templates from Cisco.com to a location that the vSphere Client can access.

Downloading OVA Templates

  1. To download a single OVA file, click the Download File button next to that file. To download multiple OVA files, click the Add to Cart button next to each file that you want to download, then click the Download Cart link. A Download Cart page appears.
  2. Click the Proceed with Download button on this page. A Software License Agreement page appears.
  3. Read the Software License Agreement, then click the Agree button.
  4. On the next page, click either the Download Manager link (requires Java) or the Non Java Download Option link. A new browser window appears.
  5. If you selected Download Manager, a Select Location dialog box appears. Specify the location where you want to save the file, and click Open to save the file to your local machine.
  6. If you selected Non Java Download Option, click the Download link in the new browser window. Specify the location and save the file to your local machine.

Creating Virtual Machines by Deploying the OVA Templates

In the vSphere client, perform the following steps to deploy the Virtual machines.

  1. Highlight the host or cluster to which you wish the VM to be deployed.
  2. Select File > Deploy OVF Template.
  3. Click the Deploy from File radio button and specify the name and location of the file you downloaded in the previous section OR click the Deploy from URL radio button and specify the complete URL in the field, then click Next.
  4. Verify the details of the template, and click Next.
  5. Give the VM you are about to create a name, and choose an inventory location on your host, then click Next.
  6. Choose the datastore on which you would like the VM to reside - be sure there is sufficient free space to accommodate the new VM, then click Next.
  7. Choose a virtual network for the VM, then click Next.
  8. Verify the deployment settings, then click Finish.

Notes

  • VM CPU affinity is not supported. You do not need to set CPU affinity for the VMs that are running Unified CCE applications on the VMware ESXi on UCS platform.
  • VM resource reservation is not supported for the VMs that are running Unified CCE applications on the VMware ESXi on UCS platform. The VM computing resources should use the default reservation setting, which is no resource reservation.
  • You cannot change the computing resource configuration of your VM at any time.
  • You can never go below the minimum VM computing resource requirements as defined in the OVA templates.
  • ESXi Server hyperthreading is enabled by default.

Remote Control of the Virtual Machines

For administrative tasks, you can use either Windows Remote Desktop or the VMware Infrastructure Client for remote control. The contact center supervisor can access the ClientAW VM using Windows Remote Desktop.

Installing VMware Tools

VMware Tools must be installed on each of the VMs, and all of the VMware Tools default settings should be used. Refer to the VMware documentation for instructions on installing or upgrading VMware Tools on a VM with a Windows operating system.

Installing Unified CCE Components on Virtual Machines

You can install the Unified CCE components after the configuration of the VMs. Installation of these Unified CCE components on a VM is the same as the installation of these components on physical hardware.

Refer to the Unified CCE documentation for the steps to install Unified CCE components. You can install the supported virus scan software, the Cisco Security Agent (CSA), or any other software in the same way as on physical hardware.

Migrating Unified CCE Components to Virtual Machines

You can migrate the Unified CCE components from physical hardware or another virtual machine after the configuration of the VMs. Migration of these Unified CCE software components to a VM is the same as the migration of these components to new physical hardware and follows existing policies. It requires a Tech Refresh as described in the Upgrade Guide for Cisco Unified ICM/Contact Center Enterprise & Hosted Release 8.0(1).

Configuring the ESXi Data Store Block Size for Administration and Data Server

This section applies to storing Virtual Machines on C-210 M1 local storage. The C-210 server comes with default local storage configured as two RAID groups: disks 1-2 form a RAID 1 group, while the remaining disks (3-10) form a RAID 5 group.

The virtual machine for the UCCE Administration and Data Server requires a large virtual disk. Before you deploy the OVAs for the following UCCE components, you must follow the steps described below to configure the ESXi datastore block size to 2 MB so that it can handle the Administration and Data Server virtual disk size requirement:

  • AW-HDS
  • AW-HDS-DDS
  • HDS-DDS


Steps to configure the ESXi data store block size to 2MB:

  1. After you install ESXi 4.0 on the first disk array group (RAID 1 with disk 1 and disk 2), boot ESXi 4.0 and use the VMware vSphere Client to connect to the ESXi host.
  2. On the Configuration tab for the host, select Storage in the box labeled Hardware. Select the second disk array group with the RAID 5 configuration; under "Datastore Details" you will see that the block size is 1 MB by default.
  3. Right-click this datastore and delete it. You will add the datastore back in the following steps.
  4. Click "Add Storage…" and select the Disk/LUN.
  5. The datastore that you just deleted is now available to add; select it.
  6. In the configuration for this datastore you can now select the block size. Select 2 MB and finish adding the storage to the ESXi host. This storage is now available for deployment of the virtual machines that require a large disk size, such as the Administration and Data Servers.

Performance Requirements

  • CPU usage (average) should not exceed 60% for the ESXi Server and for each of the individual processors, and for each VM.
  • Memory usage (average) should not exceed 80% for the ESXi Server and for each of the VMs.
  • VM snapshots are not supported in production since they have significant impact on system performance. 
  • The SAN must be able to handle the following Unified CCE application disk I/O characteristics.
  • Enable hyperthreading on all ESXi servers.
Unified CCE Component | IOPS (Peak / Avg / 95th Pct) | Disk Read KBytes/sec (Peak / Avg / 95th Pct) | Disk Write KBytes/sec (Peak / Avg / 95th Pct) | Operating Conditions
Router | 20 / 8 / 10 | 520 / 30 / 180 | 400 / 60 / 150 | 8,000 agents; 60 cps; ECC: 5 scalars @ 35 bytes each; no reporting
Logger | 1,000 / 600 / 700 | 4,000 / 600 / 2,500 | 12,000 / 3,000 / 7,000 |
HDS | 1,600 / 1,000 / 1,100 | 600 / 70 / 400 | 6,000 / 2,000 / 3,800 |
Agent PG | 125 / 40 / 70 | 300 / 5 / 20 | 2,000 / 1,200 / 1,500 | 2,000 agents; 15 cps
HDS | 3,900 / 2,500 / 3,800 | 75,000 / 30,000 / 50,000 | 9,500 / 2,200 / 5,800 | 8,000 agents; 60 cps; 200 reporting users at max query load; ECC: 5 scalars @ 35 bytes each
ROGGER | 610 / 360 / 425 | 2,700 / 400 / 1,600 | 7,500 / 2,150 / 4,300 | 4,000 agents; 30 cps; ECC: 5 scalars @ 35 bytes each

Timekeeping Best Practices for Windows

You should follow the best practices outlined in the VMware Knowledge Base article VMware KB: Timekeeping best practices for Windows.

  • ESXi hosts and domain controllers should synchronize the time from the same NTP source.
  • When Unified CCE virtual machines join the domain, they synchronize the time with the domain controller automatically using w32time.
  • Be sure that the "Time synchronization between the virtual machine and the host operating system" checkbox in the VMware Tools toolbox GUI of the Windows Server 2003 guest operating system remains deselected; this checkbox is deselected by default.

System Performance Monitoring Using ESXi Counters

  • Make sure that you follow VMware's ESXi best practices and SAN vendor's best practices for optimal system performance. 
  • VMware provides a set of system monitoring tools for the ESXi platform and the VMs. These tools are accessible through the VMware Infrastructure Client or through VirtualCenter.
  • You can use Windows Performance Monitor to monitor the performance of the VMs. Be aware that the CPU counters may not reflect the physical CPU usage since the Windows Operating System has no direct access to the physical CPU.
  • You can use Unified CCE Serviceability Tools and Unified CCE reports to monitor the operation and performance of the Unified CCE system.
  • The ESXi Server and the virtual machines must operate within the limits of the following ESXi performance counters.

You can use the following ESXi counters as performance indicators.

Category | Object | Measurement | Units | Description | Performance Indication and Threshold
CPU | ESXi Server; VM | CPU Usage (Average) | Percent | CPU usage average in percentage for the ESXi server and for each virtual machine. | Less than 60%.
CPU | ESXi Server Processor#; VM vCPU# | CPU Usage 0-7 (Average) | Percent | CPU usage average for ESXi server processors 0 to 7 and for the virtual machine vCPUs. | Less than 60%.
CPU | VM | CPU Ready | mSec | The time a virtual machine or other process waits in the queue in a ready-to-run state before it can be scheduled on a CPU. | Less than 150 mSec. If it is greater than 150 mSec during a system failure, you should investigate and understand why the machine is so busy.
Memory | ESXi Server; VM | Memory Usage (Average) | Percent | Memory Usage = Active / Granted * 100. | Less than 80%.
Memory | ESXi Server; VM | Memory Active (Average) | KB | Memory that is actively used or being referenced by the guest OS and its applications. When it exceeds the amount of memory on the host, the server starts to swap. | Less than 80% of the granted memory.
Memory | ESXi Server; VM | Memory Balloon (Average) | KB | ESXi uses the balloon driver to recover memory from less memory-intensive VMs so it can be used by VMs with larger active sets of memory. Note: ESXi performs memory ballooning before memory swapping. | Because memory is not overcommitted, this should be 0 or very low.
Memory | ESXi Server; VM | Memory Swap Used (Average) | KB | ESXi Server swap usage; the disk is used for RAM swap. | Because memory is not overcommitted, this should be 0 or very low.
Disk | ESXi Server; VM | Disk Usage (Average) | KBps | Disk Usage = Disk Read rate + Disk Write rate. | Ensure that your SAN is configured to handle this amount of disk I/O.
Disk | ESXi Server vmhba ID; VM vmhba ID | Disk Read Rate | KBps | Rate of reading data from the disk. | Ensure that your SAN is configured to handle this amount of disk I/O.
Disk | ESXi Server vmhba ID; VM vmhba ID | Disk Write Rate | KBps | Rate of writing data to the disk. | Ensure that your SAN is configured to handle this amount of disk I/O.
Disk | ESXi Server vmhba ID; VM vmhba ID | Disk Commands Issued | Number | Number of disk commands issued on this disk in the period. | Ensure that your SAN is configured to handle this amount of disk I/O.
Disk | ESXi Server vmhba ID; VM vmhba ID | Disk Command Aborts | Number | Number of disk commands aborted on this disk in the period. A disk command aborts when the disk array takes too long to respond to the command (command timeout). | This counter should be zero. A non-zero value indicates a storage performance issue.
Disk | ESXi Server vmhba ID; VM vmhba ID | Disk Command Latency | mSec | The average amount of time taken for a command from the perspective of the guest OS. Disk Command Latency = Kernel Command Latency + Physical Device Command Latency. | Latencies of 15 mSec or greater indicate a possible over-utilized, misbehaving, or mis-configured disk array.
Disk | ESXi Server vmhba ID; VM vmhba ID | Kernel Disk Command Latency | mSec | The average time spent in the ESXi Server VMkernel per command. | Kernel Command Latency should be very small in comparison to the Physical Device Command Latency, and it should be close to zero. Kernel Command Latency can be high, or even higher than the Physical Device Command Latency, if there is a lot of queuing in the ESXi kernel.
Network | ESXi Server; VM | Network Usage (Average) | KBps | Network Usage = Data receive rate + Data transmit rate. | Less than 30% of the available network bandwidth. For example, less than 300 Mbps for a 1 Gbps network.
Network | ESXi Server vmnic ID; VM vmnic ID | Network Data Receive Rate | KBps | The average rate at which data is received on this Ethernet port. | Less than 30% of the available network bandwidth. For example, less than 300 Mbps for a 1 Gbps network.
Network | ESXi Server vmnic ID; VM vmnic ID | Network Data Transmit Rate | KBps | The average rate at which data is transmitted on this Ethernet port. | Less than 30% of the available network bandwidth. For example, less than 300 Mbps for a 1 Gbps network.
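
As a rough illustration of how to apply the thresholds in the table above to sampled counter values, the following Python sketch encodes those limits. The sample values are hypothetical, and the sketch does not query ESXi itself; gather the real counters with the VMware Infrastructure Client or vCenter.

  # Compare sampled values against the thresholds from the table above.
  def check(name, value, limit, unit=""):
      status = "OK" if value <= limit else "over threshold"
      print("%s: %s%s (limit %s%s) -> %s" % (name, value, unit, limit, unit, status))

  nic_bandwidth_mbps = 1000  # a 1 Gbps uplink

  check("CPU usage (average)", 42, 60, "%")       # should stay below 60%
  check("Memory usage (average)", 71, 80, "%")    # should stay below 80%
  check("CPU ready", 95, 150, " mSec")            # should stay below 150 mSec
  check("Disk command latency", 9, 15, " mSec")   # 15 mSec or more suggests an overloaded array
  check("Disk command aborts", 0, 0)              # should be zero
  check("Network usage", 180, 0.30 * nic_bandwidth_mbps, " Mbps")  # under 30% of bandwidth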

System Performance Monitoring Using Windows Perfmon Counters

You must comply with the best practices described in the ICM 8.0(x) SRND section System Performance Monitoring, and in Chapter 8 Performance Counters in the Serviceability Best Practices Guide for Unified ICM/Contact Center Enterprise.



Back to Unified Communications Virtualization main page
