UC Virtualization Supported Hardware

From DocWiki

----
= Introduction =

{{ note | Not all UC apps support all hardware options. [[Unified Communications Virtualization Supported Applications|Click here for supported apps matrix]]. }}<br>

This web page describes supported compute, storage and network hardware for virtualization of Cisco Unified Communications, including [http://www.cisco.com/go/uconucs UC on UCS] (Cisco Unified Communications on Cisco Unified Computing System). [[High-level Checklist for Design and Implementation|Click here for a checklist]] to design, quote and procure a virtualized UC solution that follows Cisco's support policy.<br><br>

Cisco uses three different support models:

*'''[[#UC_on_UCS_Tested_Reference_Configurations|UC on UCS Tested Reference Configuration]] (TRC)'''
*'''UC on UCS Specs-based'''
*'''Third-party Server Specs-based'''

<br>

'''"TRC"''' used by itself means "UC on UCS Tested Reference Configuration (TRC)". '''"UC on UCS"''' used by itself refers to both UC on UCS TRC and UC on UCS Specs-based.<br> '''"Specs-based"''' used by itself refers to the common rules of UC on UCS Specs-based and Third-party Server Specs-based.<br><br>

Below is a comparison of the hardware support options. Note that the following are identical regardless of the support model chosen:

*Virtual machine (OVA) definitions<br>
*VMware product, version and feature support<br>
*VMware configuration requirements for UC<br>
*Application/VM co-residency policy (specifically regarding application mix, 3rd-party support, no reservations / oversubscription, virtual/physical sizing rules and max VM count per server).<br>

<br>

{| width="1200" class="wikitable FCK__ShowTableBorders" style=""
|-
! <br>
! UC on UCS TRC
! UC on UCS Specs-based
! Third-Party Server Specs-based
! Other hardware
|-
! Basic Approach
| Configuration-based
| Rules-based
| Rules-based
| Not supported - does not satisfy this page's policy.
|-
! Allowed for which UC apps?
| [[Unified Communications Virtualization Supported Applications|Click here for supported apps matrix]]
| [[Unified Communications Virtualization Supported Applications|Click here for supported apps matrix]]
| [[Unified Communications Virtualization Supported Applications|Click here for supported apps matrix]]
| Not supported
|-
! UC-required Virtualization Software
|
*[[UC Virtualization Supported Hardware#VMware_Requirements|Click here]] for general requirements.
*'''VMware vCenter''' is optional.
*One of the following is mandatory:
**'''Cisco UC Virtualization Foundation'''
**'''VMware vSphere'''
**[[Unified Communications VMWare Requirements|Click here]] for supported versions, editions, features, capacities and purchase options.
|
*[[UC Virtualization Supported Hardware#VMware_Requirements|Click here]] for general requirements.
*'''VMware vCenter''' is mandatory. It is also mandatory to capture Statistics Level 4 for the maximum duration at each level.
*One of the following is mandatory:
**'''Cisco UC Virtualization Foundation'''
**'''VMware vSphere'''
**[[Unified Communications VMWare Requirements|Click here]] for supported versions, editions, features, capacities and purchase options.
|
*[[UC Virtualization Supported Hardware#VMware_Requirements|Click here]] for general requirements.
*'''VMware vCenter''' is mandatory. It is also mandatory to capture Statistics Level 4 for the maximum duration at each level.
*'''VMware vSphere''' is mandatory:
**[[Unified Communications VMWare Requirements|Click here]] for supported versions, features, capacities and purchase options.
| N/A - not supported.
|-
! Allowed Servers
| Select Cisco UCS listed in [[#UC_on_UCS_Tested_Reference_Configurations|Table 1]]. Must follow all TRC rules in this policy.
| Any Cisco UCS that satisfies this page's policy
| Any 3rd-party server model that satisfies this page's policy
| None
|-
! Required Level of Virtualization/Server Experience<br>
| Low/medium<br>
| High<br>
| High<br>
| N/A
|-
! Cisco-tested?
| Joint validation of apps and server hardware by UC and UCS teams.
| Generic server hardware validation by UCS team. Not jointly validated with UC apps by Cisco.<br>
| No server hardware validation by Cisco. Not jointly validated with UC apps by Cisco.<br>
| No Cisco testing (unsupported hardware)
|-
! Server Model, CPU and Component Choices<br>
| Less (customer accepts the tradeoff of less hardware flexibility for more UC predictability).<br>
| More (customer assumes more test/design ownership to get more hardware flexibility).<br>
| More (customer assumes more test/design ownership to get more hardware flexibility).<br>
| None (unsupported hardware)
|-
! Does Cisco TAC support UC apps?<br>
| Yes, when all TRC rules in this policy are followed.
UC apps on a '''C-Series DAS-only TRC''': Supported with Guaranteed performance.<br>UC apps on a '''C-Series FC SAN TRC''' or '''B-Series FC SAN TRC''': Supported with Guaranteed performance provided all [[#Storage|shared storage requirements in this policy are met]].<br>
| Yes, when all Specs-based rules in this policy are followed. Supported with performance Guidance only.<br>
| Yes, when all Specs-based rules in this policy are followed. Supported with performance Guidance only.<br>
| UC apps not supported when deployed on unsupported hardware.
|-
! Does Cisco TAC support the server?<br>
| Yes. If used with UC apps, then all TRC rules in this policy must be followed.<br>
| Yes. If used with UC apps, then all UC on UCS Specs-based rules in this policy must be followed.<br>
| No. Cisco TAC supports products purchased from Cisco with a valid, paid-up maintenance contract.<br>
| No. Cisco TAC supports products purchased from Cisco with a valid, paid-up maintenance contract. Note that UC apps are also not supported when deployed on unsupported hardware.
|-
! Who designs/determines the server's BOM?<br>
| Customer wants Cisco to own
| Customer wants to own, with assistance from Cisco
| Customer wants to own
| N/A
|}
<br>
For more details on Cisco UCS servers in general, see the following:
*[http://www.cisco.com/go/ucs Cisco UCS home page]
*[http://www.cisco.com/go/uconucs Cisco UC on UCS solution home page]

<br>

What does a TRC definition include?
*Definition of server model and local components (CPU, RAM, adapters, local storage) at the orderable part number level.
*Required RAID configuration (e.g. RAID5, RAID10) - including battery backup cache or SuperCap - when the TRC uses DAS storage.
*Guidance on hardware installation and basic setup (e.g. [http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html click here]).
**[http://www.cisco.com/go/ucs Click here for detailed Cisco UCS server documentation] regarding hardware configuration procedures.

[http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/CUCM_BK_CA526319_00_cucm-on-virtualized-servers.html Click here] for basic guidance on TRC hardware setup.
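The per-TRC disk capacities listed in Table 1 follow directly from the RAID level and disk count of each DAS configuration. As a rough sketch of that arithmetic (an illustrative helper, not part of the Cisco policy; the table's figures come out slightly lower because RAID/VMFS overhead is subtracted and binary units are used):

```python
def usable_capacity_gb(disks: int, disk_gb: float, raid_level: str) -> float:
    """Approximate usable DAS volume capacity, before RAID/VMFS overhead.

    RAID1/RAID10 mirror half of the disks; RAID5 spends one disk on parity.
    Illustrative helper only - the name and rounding are not from the policy.
    """
    if raid_level == "RAID5":
        return (disks - 1) * disk_gb
    if raid_level in ("RAID1", "RAID10"):
        return disks / 2 * disk_gb
    raise ValueError(f"unsupported RAID level: {raid_level}")

# C220 M3S TRC#1: one volume of 8x 300 GB disks in RAID5 -> 2100 GB raw,
# which lands near the table's "1.93 TB" once overhead is subtracted.
print(usable_capacity_gb(8, 300, "RAID5"))   # 2100.0
# C220 M3S TRC#2: 4x 500 GB disks in RAID10 -> 1000 GB raw
# ("929.46 GB" in the table after overhead).
print(usable_capacity_gb(4, 500, "RAID10"))  # 1000.0
```

The same arithmetic explains the two-volume TRCs (e.g. C240 M3S TRC#1: two independent RAID5 groups of 8 disks each, hence two volumes of roughly equal size).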
== Table 1 - UC on UCS TRCs ==

{{ note | Partners may find convenience bundle SKUs (hardware-only) for most TRCs at Cisco Build & Price: http://apps.cisco.com/ccw/cpc/offers/uconucs }}

{| width="1200" style="" class="wikitable FCK__ShowTableBorders"
|-
|-
-
! width="130" | Tested Reference Configuration (TRC)
+
! width="130" | "Size"
-
! width="200" | Part Numbers / SKUs / BOM
+
! width="200" | Tested Reference Configuration (TRC) <br>and Part Numbers / SKUs / BOM  
! width="400" | Form Factor, CPU Model and Specs  
! width="400" | Form Factor, CPU Model and Specs  
-
! Capacity Available to VMs<br> (using required UC sizing rules)  
+
! Capacity Available to VMs<br> (using [[Unified Communications Virtualization Sizing Guidelines | required Sizing Rules ]])
|-
|-
-
! colspan="4" | Shipping Configurations
+
| bgcolor="#652D89" align="center" | <span style="color:#FFFFFF"> '''Extra-Extra-Large<br>(2XL)''' </span>
 +
! UCS B440 M2 TRC#1 <br> [[#B440_M2_TRC.231|Click here for BOM]]
 +
|
 +
* Full-width Blade Server
 +
* Quad E7-4870 (10-core / 2.4 GHz)
 +
* 256 GB RAM
 +
* Diskless - VMware + UC apps boot from FC SAN
 +
* Cisco VIC (UCS M81KR)
 +
|
 +
* 40 total physical cores ("Full UC Performance" CPU type)
 +
* 254 GB physical RAM
 +
* Storage capacity dependent on SAN/NAS.
 +
* 2x 10Gb ports for LAN+storage access. 
 +
 
 +
|-
 +
| rowspan="3" bgcolor="#E2CEEF" align="center" | '''Extra-Large<br>(XL)'''
|-
|-
-
! UCS B440 M2 <br>TRC#1
+
! UCS C260 M2 TRC#1 <br> [[#C260_M2_TRC.231|Click here for BOM]]  
-
| [[#B440 M2 TRC#1 | Click here for BOM]]  
+
|  
|  
-
Full-width Blade Server<br>
+
* 2RU Rack-mount Server
-
Quad E7-4870 (10-core / 2.4 GHz) <br>
+
* Dual E7-2870 (10-core / 2.4 GHz)
-
256 GB RAM <br>
+
* 128 GB RAM
-
Diskless - VMware + UC apps boot from FC SAN<br>
+
* VMware + UC apps boot from DAS (2 logical volumes, each 8x 300 GB 10K disks, RAID5)
-
Cisco VIC (UCS M81KR)
+
* Ethernet ports on motherboard + 3rd-party NIC
|  
|  
-
40 total physical cores<br>
+
* 20 total physical cores ("Full UC Performance" CPU type)
-
After ESXi, 254 GB physical RAM<br>
+
* 126 GB physical RAM
-
Storage capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00, customer's SAN + storage array.<br>
+
* 2 volumes, each of 1.93 TB
-
2x 10GbE ports for LAN+storage access. LAN capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00 and customer's network.
+
* 6x 1GbE ports for LAN access (not counting CIMC).
-
|-
+
 
-
! UCS B230 M2 <br>TRC#1<br>  
+
|-  
-
| [[#B230 M2 TRC#1 | Click here for BOM]]  
+
! UCS B230 M2 TRC#1<br> [[#B230_M2_TRC.231|Click here for BOM]]  
|  
|  
-
Half-width Blade Server<br>
+
* Half-width Blade Server
-
Dual E7-2870 (10-core / 2.4 GHz)<br>
+
* Dual E7-2870 (10-core / 2.4 GHz)
-
128 GB RAM<br>
+
* 128 GB RAM
-
Diskless - VMware + UC apps boot from FC SAN<br>
+
* Diskless - VMware + UC apps boot from FC SAN
-
Cisco VIC (UCS M81KR)
+
* Cisco VIC (UCS M81KR)  
|  
|  
-
20 total physical cores<br>
+
* 20 total physical cores ("Full UC Performance" CPU type)
-
After ESXi, 126 GB physical RAM<br>
+
* 126 GB physical RAM
-
Storage capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00, customer's SAN + storage array.<br>
+
* Storage capacity dependent on SAN/NAS.
-
2x 10GbE ports for LAN+storage access. LAN capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00 and customer's network.
+
* 2x 10Gb ports for LAN+storage access.
 +
 
 +
 
 +
|-
 +
| rowspan="3" bgcolor="#89DCFF" align="center" | '''Large<br>(L)'''
|-
|-
-
! UCS B200 M3 <br>TRC#1<br>  
+
! UCS C240 M3S (SFF) TRC#1<br> [[#C240_M3S_.28SFF.29_TRC.231|Click here for BOM]]  
-
| [[#B200 M3 TRC#1 | Click here for BOM]]  
+
|  
|  
-
Half-width Blade Server<br>
+
* 2RU Rack-mount Server
-
Dual E5-2680 (8-core / 2.7 GHz)<br>
+
* Dual E5-2680 (8-core, 2.7 GHz)
-
96 GB RAM<br>
+
* 96 GB RAM
-
Diskless - VMware + UC apps boot from FC SAN<br>
+
* VMware + UC apps boot from DAS (2 logical volumes, each 8x 300GB 15K SFF disks, RAID5)
-
Cisco VIC 1240
+
* Ethernet ports on motherboard + 3rd-party NICs
|  
|  
-
16 total physical cores<br>
+
* 16 total physical cores ("Full UC Performance" CPU type)
-
After ESXi, 94 GB physical RAM<br>
+
* 94 GB physical RAM
-
Storage capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00, customer's SAN + storage array.<br>
+
* Two volumes of 1.93 TB each
-
4x 10GbE ports for LAN+storage access.  LAN capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00 and customer's network.
+
* 12x 1GbE ports for LAN access (not counting CIMC)
 +
 
|-
|-
-
! UCS B200 M2 <br>TRC#1<br>  
+
! UCS B200 M3 TRC#1<br> [[#B200_M3_TRC.231|Click here for BOM]]  
-
| [[#B200 M2 TRC#1 | Click here for BOM]]  
+
|  
|  
-
Half-width Blade Server<br>
+
* Half-width Blade Server
-
Dual E5640 (4-core / 2.66 GHz)<br>
+
* Dual E5-2680 (8-core / 2.7 GHz)
-
48 GB RAM<br>
+
* 96 GB RAM
-
VMware boot from DAS (2 disks RAID1)<br>
+
* Diskless - VMware + UC apps boot from FC SAN
-
UC apps boot from FC SAN<br>
+
* Cisco VIC 1240
-
Cisco VIC (UCS M81KR)
+
|  
|  
-
8 total physical cores<br>
+
* 16 total physical cores ("Full UC Performance" CPU type)
-
After ESXi, 46 GB physical RAM<br>
+
* 94 GB physical RAM
-
Storage capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00, customer's SAN + storage array.<br>
+
* Storage capacity dependent on SAN/NAS
-
2x 10GbE ports for LAN+storage access.  LAN capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00 and customer's network.
+
* 2x/4x 10GbE ports for LAN+storage access (dependent on IOM)
|-
|-
-
! UCS B200 M2 <br>TRC#2<br>  
+
| bgcolor="#B7D333" align="center" |  '''Medium<br>(M)'''
-
| [[#B200 M2 TRC#2 | Click here for BOM]]  
+
! UCS C220 M3S (SFF) TRC#1<br> [[#C220_M3S_.28SFF.29_TRC.231|Click here for BOM]]  
|  
|  
-
Half-width Blade Server<br>
+
* 1RU Rack-mount Server
-
Dual E5640 (4-core / 2.66 GHz)<br>
+
* Dual E5-2643 (4-core, 3.3 GHz)
-
48 GB RAM<br>
+
* 64 GB RAM
-
Diskless - VMware + UC apps boot from FC SAN<br>
+
* VMware + UC apps boot from DAS (8x 300GB 15K SFF disks, RAID5)
-
Cisco VIC (UCS M81KR)
+
* Ethernet ports on motherboard + 3rd-party NICs
 +
 
|  
|  
-
8 total physical cores<br>
+
* 8 total physical cores ("Full UC Performance" CPU type)
-
After ESXi, 46 GB physical RAM<br>
+
* 62 GB physical RAM
-
Storage capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00, customer's SAN + storage array.<br>
+
* 1.93 TB disk space
-
2x 10GbE ports for LAN+storage access.  LAN capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00 and customer's network.
+
* 6x 1GbE ports for LAN access (not counting CIMC)
|-
|-
-
! UCS C260 M2 <br>TRC#1<br>  
+
| bgcolor="#FFFF79" align="center" | '''Small Plus<br>(S+)'''
-
| [[#C260 M2 TRC#1 | Click here for BOM]]  
+
! UCS C220 M3S (SFF) TRC#3<br> [[#C220_M3S_.28SFF.29_TRC.233|Click here for BOM]] <br><br>(also used as '''"High Density (HD)" Server''' for [[Cisco Business Edition 6000]])
|  
|  
-
2RU Rack-mount Server<br>
+
* 1RU Rack-mount Server
-
Dual E7-2870 (10-core / 2.4 GHz)<br>
+
* Dual E5-2665 (8-core, 2.4 GHz)
-
128 GB RAM<br>
+
* 48 GB RAM
-
VMware + UC apps boot from DAS (2 logical volumes, each 8x 300 GB 10K disks, RAID5)<br>
+
* VMware + UC apps boot from DAS (8x 300GB 15K SFF disks, RAID5)
-
Ethernet ports on motherboard + 3rd-party NIC
+
* Ethernet ports on motherboard + 3rd-party NIC
|  
|  
-
20 total physical cores<br>
+
* 16 total physical cores ("Restricted UC Performance" CPU type)
-
After ESXi, 126 GB physical RAM<br>
+
* 46 GB physical RAM
-
After RAID/VMFS overhead, 2x 1.93 TB (not counting VM overhead)<br>
+
* 1.93 TB disk space
-
5x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
+
* 6x 1GbE ports for LAN access (not counting CIMC)
|-
|-
-
! UCS C240 M3S (SFF) <br>TRC#1<br>  
+
| bgcolor="#FFC000" align="center" | '''Small<br>(S)'''
-
| [[#C240 M3S (SFF) TRC#1 | Click here for BOM]]  
+
! UCS C220 M3S (SFF) TRC#2<br> [[#C220_M3S_.28SFF.29_TRC.232|Click here for BOM]] <br><br>(also used as '''"Medium Density (MD)" Server''' for [[Cisco Business Edition 6000]])
|  
|  
-
2RU Rack-mount Server<br>
+
* 1RU Rack-mount Server
-
Dual E5-2680 (8-core, 2.7 GHz)<br>
+
* Dual E5-2609 (4-core, 2.4 GHz)
-
96 GB RAM<br>
+
* 32 GB RAM
-
VMware + UC apps boot from DAS (2 logical volumes, each 8x 300GB 15K SFF disks, RAID5)<br>
+
* VMware + UC apps boot from DAS (4x 500GB 7.2K SFF disks, RAID10)
-
Ethernet ports on motherboard + 3rd-party NICs
+
* Ethernet ports on motherboard  
|  
|  
-
16 total physical cores<br>
+
* 8 total physical cores ("Restricted UC Performance" CPU type)
-
After ESXi, 94 GB physical RAM<br>
+
* 30 GB physical RAM
-
After RAID/VMFS overhead, 2x 1.93 TB (not counting VM overhead)<br>
+
* 929.46 GB
-
9x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
+
* 2x 1GbE ports for LAN access (not counting CIMC)
 +
 
 +
|-
 +
! colspan="4" | Older (End of Sale) Configurations
 +
 
 +
|-
 +
| rowspan="12" bgcolor="#B7D333" align="center" | '''Medium<br>(M)'''
|-
|-
-
! UCS C220 M3S (SFF)<br>TRC#1<br>  
+
! UCS C210 M2 TRC#1<br> [[#C210_M2_TRC.231|Click here for BOM]]  
-
| [[#C220 M3S (SFF) TRC#1 | Click here for BOM]]  
+
|  
|  
-
1RU Rack-mount Server<br>
+
* 2RU Rack-mount Server
-
Dual E5-2643 (4-core, 3.3 GHz)<br>
+
* Dual E5640 (4-core, 2.66 GHz)
-
64 GB RAM<br>
+
* 48 GB RAM
-
VMware + UC apps boot from DAS (8x 300GB 15K SFF disks, RAID5)<br>
+
* VMware boots from DAS (2x 146/300 GB 15K, RAID1)
-
Ethernet ports on motherboard + 3rd-party NIC
+
* UC apps boot from DAS (8x 146/300 GB 15K, RAID5)
 +
* Ethernet ports on motherboard + 3rd-party NIC  
 +
 
|  
|  
-
8 total physical cores<br>
+
* 8 total physical cores ("Full UC Performance" CPU type)
-
After ESXi, 62 GB physical RAM<br>
+
* 46 GB physical RAM
-
After RAID/VMFS overhead, 1.93 TB (not counting VM overhead)<br>
+
* 947 GB
-
5x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
+
* 6x 1GbE ports for LAN access (not counting CIMC)
|-
|-
-
! UCS C220 M3S (SFF)<br>TRC#2<br>  
+
! UCS C210 M2 TRC#2<br> [[#C210_M2_TRC.232|Click here for BOM]]  
-
| [[#C220 M3S (SFF) TRC#2 | Click here for BOM]]
+
|  
|  
-
1RU Rack-mount Server<br>
+
* 2RU Rack-mount Server
-
Dual E5-2609 (4-core, 2.4 GHz)<br>
+
* Dual E5640 (4-core, 2.66 GHz)
-
32 GB RAM<br>
+
* 48 GB RAM
-
VMware + UC apps boot from DAS (4x 500GB 7.2K SFF disks, RAID10)<br>
+
* VMware boots from DAS (2x 146/300 GB 15K, RAID1)
-
Ethernet ports on motherboard
+
* UC apps boot from FC SAN
 +
* Ethernet ports on motherboard + 3rd-party NIC
 +
* FC ports on 3rd-party HBA<br>
|  
|  
-
Restricted VM OVA template choices and co-residency.
+
* 8 total physical cores ("Full UC Performance" CPU type)
-
8 total physical cores<br>
+
* 46 GB physical RAM
-
After ESXi, 30 GB physical RAM<br>
+
* 2x 4Gb FC ports for SAN access.
-
After RAID/VMFS overhead, 929.46 GB (not counting VM overhead)<br>
+
* 6x 1GbE ports for LAN access (not counting CIMC)
-
2x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
+
|-
|-
-
! UCS C210 M2 <br>TRC#1<br>  
+
! UCS C210 M2 TRC#3<br> [[#C210_M2_TRC.233|Click here for BOM]]  
-
| [[#C210 M2 TRC#1 | Click here for BOM]]
+
|  
|  
-
2RU Rack-mount Server<br>
+
* 2RU Rack-mount Server
-
Dual E5640 (4-core, 2.66 GHz)<br>
+
* Dual E5640 (4-core, 2.66 GHz)
-
48 GB RAM<br>
+
* 48 GB RAM
-
VMware boots from DAS (2x 146/300 GB 15K, RAID1)<br>
+
* Diskless - VMware + UC apps boot from FC SAN
-
UC apps boot from DAS (8x 146/300 GB 15K, RAID5)<br>
+
* Ethernet ports on motherboard + 3rd-party NIC  
-
Ethernet ports on motherboard + 3rd-party NIC
+
* FC ports on 3rd-party HBA<br>
|  
|  
-
8 total physical cores<br>
+
* 8 total physical cores ("Full UC Performance" CPU type)
-
After ESXi, 46 GB physical RAM<br>
+
* 46 GB physical RAM
-
After RAID/VMFS overhead, 947 GB (not counting VM overhead)<br>
+
* 2x 4Gb FC ports for SAN access.
-
5x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
+
* 6x 1GbE ports for LAN access (not counting CIMC)
|-
|-
-
! UCS C210 M2 <br>TRC#2<br>  
+
! UCS C210 M1 TRC#1<br> [[#C210_M1_TRC.231|Click here for BOM]]  
-
| [[#C210 M2 TRC#2 | Click here for BOM]]  
+
|  
|  
-
2RU Rack-mount Server<br>
+
* 2RU Rack-mount Server
-
Dual E5640 (4-core, 2.66 GHz)<br>
+
* Dual E5540 (4-core, 2.53 GHz)
-
48 GB RAM<br>
+
* 12 GB RAM
-
VMware boots from DAS (2x 146/300 GB 15K, RAID1)<br>
+
* VMware boots from DAS (2x 146 GB 15K, RAID1)
-
UC apps boot from FC SAN<br>
+
* UC apps boot from DAS (4x 146 GB 15K, RAID5)
-
Ethernet ports on motherboard + 3rd-party NIC<br>
+
* Ethernet ports on motherboard + 3rd-party NIC  
-
FC ports on 3rd-party HBA<br>
+
|  
|  
-
8 total physical cores<br>
+
* NOTE: Application co-residency NOT supported on this TRC. Single VM only.
-
After ESXi, 46 GB physical RAM<br>
+
* 8 total physical cores ("Full UC Performance" CPU type)
-
2x 4Gb FC ports for SAN access.  Storage capacity/IOPS dependent on HBA, customer's SAN + storage array.<br>
+
* 10 GB physical RAM
-
5x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependnt on customer's network.
+
* 6x 1GbE ports for LAN access (not counting CIMC).
|-
|-
-
! UCS C210 M2 <br>TRC#3<br>  
+
! UCS C210 M1 TRC#2<br> [[#C210_M1_TRC.232|Click here for BOM]]  
-
| [[#C210 M2 TRC#3 | Click here for BOM]]
+
|  
|  
-
2RU Rack-mount Server<br>
+
* 2RU Rack-mount Server
-
Dual E5640 (4-core, 2.66 GHz)<br>
+
* Dual E5540 (4-core, 2.53 GHz)
-
48 GB RAM<br>
+
* 36 GB RAM
-
Diskless - VMware + UC apps boot from FC SAN<br>
+
* VMware boots from DAS (2x 146 GB 15K, RAID1)
-
Ethernet ports on motherboard + 3rd-party NIC<br>
+
* UC apps boot from DAS (8x 146 GB 15K, RAID5)
-
FC ports on 3rd-party HBA<br>
+
* Ethernet ports on motherboard + 3rd-party NIC  
|  
|  
-
8 total physical cores<br>
+
* 8 total physical cores ("Full UC Performance" CPU type)
-
After ESXi, 46 GB physical RAM<br>
+
* 34 GB physical RAM
-
2x 4Gb FC ports for SAN access.  Storage capacity/IOPS dependent on HBA, customer's SAN + storage array.<br>
+
* 947 GB disk space
-
5x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependnt on customer's network.
+
* 6x 1GbE ports for LAN access (not counting CIMC).  
|-
|-
-
! UCS C200 M2 <br>TRC#1<br>  
+
! UCS C210 M1 TRC#3<br> [[#C210_M1_TRC.233|Click here for BOM]]  
-
| [[#C200 M2 TRC#1 | Click here for BOM]]
+
|  
|  
-
1RU Rack-mount Server<br>
+
* 2RU Rack-mount Server
-
Dual E5506 (4-core, 2.13 GHz)<br>
+
* Dual E5540 (4-core, 2.53 GHz)
-
24 GB RAM<br>
+
* 36 GB RAM
-
VMware + UC apps boot from DAS (4x 1TB 7.2K disks, RAID10)<br>
+
* VMware boots from DAS (2x 146 GB 15K, RAID1)
-
Ethernet ports on motherboard + 3rd-party NIC<br>
+
* UC apps boot from FC SAN
-
FC ports on 3rd-party HBA<br>
+
* Ethernet ports on motherboard + 3rd-party NIC  
 +
* FC ports on 3rd-party HBA
|  
|  
-
Restricted VM OVA template choices and co-residency.
+
* 8 total physical cores ("Full UC Performance" CPU type)
-
8 total physical cores<br>
+
* 34 GB physical RAM
-
After ESXi, 22 GB physical RAM<br>
+
* 2x 4Gb FC ports for SAN access.
-
After RAID/VMFS overhead, 1.8 TB (not counting VM overhead)<br>
+
* 6x 1GbE ports for LAN access (not counting CIMC).  
-
2x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
+
-
 
+
-
|-
+
-
! colspan="4" | Older (End of Sale) Configurations
+
|-
|-
-
! UCS B200 M1 <br>TRC#1<br>  
+
! UCS C210 M1 TRC#4<br> [[#C210_M1_TRC.234|Click here for BOM]]  
-
| [[#B200 M1 TRC#1 | Click here for BOM]]
+
|  
|  
-
Half-width Blade Server<br>
+
* 2RU Rack-mount Server
-
Dual E5540 (4-core / 2.53 GHz)<br>
+
* Dual E5540 (4-core, 2.53 GHz)
-
36 GB RAM<br>
+
* 36 GB RAM
-
VMware boot from DAS (2 disks RAID1)<br>
+
* Diskless - VMware + UC apps boot from FC SAN
-
UC apps boot from FC SAN<br>
+
* Ethernet ports on motherboard + 3rd-party NIC
-
3rd-party CNA (UCS M71KR-Q)
+
* FC ports on 3rd-party HBA
|  
|  
-
8 total physical cores<br>
+
* 8 total physical cores ("Full UC Performance" CPU type)
-
After ESXi, 34 GB physical RAM<br>
+
* 34 GB physical RAM
-
Storage capacity/IOPS dependent on CNA, UCS 2x00/6x00, customer's SAN + storage array.<br>
+
* 2x 4Gb FC ports for SAN access.
-
2x 10GbE ports for LAN+storage access.  LAN capacity/IOPS dependent on CNA, UCS 2x00/6x00 and customer's network.  
+
* 6x 1GbE ports for LAN access (not counting CIMC).  
|-
|-
-
! UCS B200 M1 <br>TRC#2<br>  
+
! UCS B200 M2 TRC#1<br> [[#B200_M2_TRC.231|Click here for BOM]]  
-
| [[#B200 M1 TRC#2 | Click here for BOM]]
+
|  
|  
-
Half-width Blade Server<br>
+
* Half-width Blade Server
-
Dual E5540 (4-core / 2.53 GHz)<br>
+
* Dual E5640 (4-core / 2.66 GHz)
-
36 GB RAM<br>
+
* 48 GB RAM
-
Diskless - VMware + UC apps boot from FC SAN<br>
+
* VMware boot from DAS (2 disks RAID1)
-
3rd-party CNA (UCS M71KR-Q)  
+
* UC apps boot from FC SAN
 +
* Cisco VIC (UCS M81KR)  
|  
|  
-
8 total physical cores<br>
+
* 8 total physical cores ("Full UC Performance" CPU type)
-
After ESXi, 34 GB physical RAM<br>
+
* 46 GB physical RAM
-
Storage capacity/IOPS dependent on CNA, UCS 2x00/6x00, customer's SAN + storage array.<br>
+
* Storage capacity dependent on SAN/NAS.
-
2x 10GbE ports for LAN+storage access.  LAN capacity/IOPS dependent on CNA, UCS 2x00/6x00 and customer's network.  
+
* 2x 10GbE ports for LAN+storage access.  
|-
|-
-
! UCS C210 M1 <br>TRC#1<br>  
+
! UCS B200 M2 TRC#2<br> [[#B200_M2_TRC.232|Click here for BOM]]  
-
| [[#C210 M1 TRC#1 | Click here for BOM]]
+
|  
|  
-
2RU Rack-mount Server<br>
+
* Half-width Blade Server
-
Dual E5540 (4-core, 2.53 GHz)<br>
+
* Dual E5640 (4-core / 2.66 GHz)
-
12 GB RAM<br>
+
* 48 GB RAM
-
VMware boots from DAS (2x 146 GB 15K, RAID1)<br>
+
* Diskless - VMware + UC apps boot from FC SAN
-
UC apps boot from DAS (4x 146 GB 15K, RAID5)<br>
+
* Cisco VIC (UCS M81KR)  
-
Ethernet ports on motherboard + 3rd-party NIC
+
|  
|  
-
Application co-residency NOT supported on this TRC.  Single VM only.
+
* 8 total physical cores ("Full UC Performance" CPU type)
-
8 total physical cores<br>
+
* 46 GB physical RAM
-
After ESXi, 10 GB physical RAM<br>
+
* Storage capacity dependent on SAN/NAS.
-
5x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
+
* 2x 10GbE ports for LAN+storage access.  
|-
|-
-
! UCS C210 M1 <br>TRC#2<br>  
+
! UCS B200 M1 TRC#1<br> [[#B200_M1_TRC.231|Click here for BOM]]  
-
| [[#C210 M1 TRC#2 | Click here for BOM]]
+
|  
|  
-
2RU Rack-mount Server<br>
+
* Half-width Blade Server
-
Dual E5540 (4-core, 2.53 GHz)<br>
+
* Dual E5540 (4-core / 2.53 GHz)
-
36 GB RAM<br>
+
* 36 GB RAM
-
VMware boots from DAS (2x 146 GB 15K, RAID1)<br>
+
* VMware boot from DAS (2 disks RAID1)
-
UC apps boot from DAS (8x 146 GB 15K, RAID5)<br>
+
* UC apps boot from FC SAN
-
Ethernet ports on motherboard + 3rd-party NIC
+
* 3rd-party CNA (UCS M71KR-Q)
|  
|  
-
8 total physical cores<br>
+
* 8 total physical cores ("Full UC Performance" CPU type)
-
After ESXi, 34 GB physical RAM<br>
+
* 34 GB physical RAM
-
After RAID/VMFS overhead, 947 GB (not counting VM overhead)<br>
+
* Storage capacity dependent on SAN/NAS.
-
5x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
+
* 2x 10GbE ports for LAN+storage access.  
|-
|-
-
! UCS C210 M1 <br>TRC#3<br>  
+
! UCS B200 M1 TRC#2<br> [[#B200_M1_TRC.232|Click here for BOM]]  
-
| [[#C210 M1 TRC#3 | Click here for BOM]]
+
|  
|  
-
2RU Rack-mount Server<br>
+
* Half-width Blade Server
-
Dual E5540 (4-core, 2.53 GHz)<br>
+
* Dual E5540 (4-core / 2.53 GHz)
-
36 GB RAM<br>
+
* 36 GB RAM
-
VMware boots from DAS (2x 146 GB 15K, RAID1)<br>
+
* Diskless - VMware + UC apps boot from FC SAN
-
UC apps boot from FC SAN<br>
+
* 3rd-party CNA (UCS M71KR-Q)
-
Ethernet ports on motherboard + 3rd-party NIC
+
-
FC ports on 3rd-party HBA<br>
+
|  
|  
-
8 total physical cores<br>
+
* 8 total physical cores ("Full UC Performance" CPU type)
-
After ESXi, 34 GB physical RAM<br>
+
* 34 GB physical RAM
-
2x 4Gb FC ports for SAN access.  Storage capacity/IOPS dependent on HBA, customer's SAN + storage array.<br>
+
* Storage capacity dependent on SAN/NAS.
-
5x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
+
* 2x 10GbE ports for LAN+storage access.  
 +
 
|-
|-
-
! UCS C210 M1 <br>TRC#4<br>  
+
| bgcolor="#FFC000" align="center" | '''Small<br>(S)'''
-
| [[#C210 M1 TRC#4 | Click here for BOM]]
+
! UCS C200 M2 TRC#1<br> [[#C200_M2_TRC.231|Click here for BOM]] <br><br>(also used as older '''"Medium Density (MD)" Server''' for [[Cisco Business Edition 6000]])
|  
|  
-
2RU Rack-mount Server<br>
+
* 1RU Rack-mount Server
-
Dual E5540 (4-core, 2.53 GHz)<br>
+
* Dual E5506 (4-core, 2.13 GHz)
-
36 GB RAM<br>
+
* 24 GB RAM
-
Diskless - VMware + UC apps boot from FC SAN<br>
+
* VMware + UC apps boot from DAS (4x 1TB 7.2K disks, RAID10)
-
Ethernet ports on motherboard + 3rd-party NIC
+
* Ethernet ports on motherboard + 3rd-party NIC  
-
FC ports on 3rd-party HBA
+
 
|  
|  
-
8 total physical cores<br>
+
* 8 total physical cores ("Restricted UC Performance" CPU type)
-
After ESXi, 34 GB physical RAM<br>
+
* 22 GB physical RAM
-
2x 4Gb FC ports for SAN access. Storage capacity/IOPS dependent on HBA, customer's SAN + storage array.<br>
+
* 1.8 TB
-
5x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
+
* 2x 1GbE ports for LAN access (not counting CIMC).
 +
 
 +
<br>
Line 476: Line 494:
All UC virtualization deployments must follow UC rules for supported VMware products, versions, editions and features [[Unified Communications VMWare Requirements| as described here]].
All UC virtualization deployments must follow UC rules for supported VMware products, versions, editions and features [[Unified Communications VMWare Requirements| as described here]].
-
{{ note | For '''UC on UCS Specs-based''' and "HP/IBM Specs-based''', use of VMware vCenter is mandatory, and Statistics Level 4 logging is mandatory.  [[Troubleshooting and Performance Monitoring Virtualized Environments#vCenter Settings| Click here]] for how to configure VMware vCenter to capture these logs.  If not configured by default, Cisco TAC may request enabling these settings in order to troubleshoot problems. }}
+
{{ note | For '''UC on UCS Specs-based''' and '''Third-party Server Specs-based''', use of VMware vCenter is mandatory, and Statistics Level 4 logging is mandatory.  [[Troubleshooting and Performance Monitoring Virtualized Environments#vCenter Settings| Click here]] for how to configure VMware vCenter to capture these logs.  If not configured by default, Cisco TAC may request enabling these settings in order to troubleshoot problems. }}
<br>  
<br>  
<br>
<br>
Line 488: Line 506:
*what Intel CPU models does it carry (and are those CPU models allowed for UC virtualization)  
*what Intel CPU models does it carry (and are those CPU models allowed for UC virtualization)  
*can its hardware component options satisfy all other requirements of this policy
*can its hardware component options satisfy all other requirements of this policy
-
 
+
*For additional considerations, see [http://www.cisco.com/en/US/customer/products/ps6884/products_tech_note09186a0080bf23f5.shtml TAC TechNote 115955].
-
<br>
+
-
 
+
-
The server vendor matters less from a "will it work" perspective and more from a joint customer success perspective:
+
-
*Recall that Cisco TAC only supports products purchased from Cisco with a valid, paid-up maintenance contract.<br>
+
-
*Therefore, how to ensure UC customer success when the server is not a Cisco product ''and'' where there is no OEM control over the server?<br>
+
<br> {{note |  
<br> {{note |  
Line 506: Line 519:
! UC on UCS TRC
! UC on UCS TRC
! UC on UCS Specs-based  
! UC on UCS Specs-based  
-
! HP/IBM Specs-based
+
! Third-Party Server Specs-based
! Not supported  
! Not supported  
Line 525: Line 538:
*Otherwise, any Cisco UCS model, generation, form factor&nbsp;(rack, blade) may be used.
*Otherwise, any Cisco UCS model, generation, form factor&nbsp;(rack, blade) may be used.
|
|
-
any HP server or IBM server is supported as long as:  
+
any 3rd-party server model is supported as long as:  
*it is on the [http://www.vmware.com/go/hcl VMware HCL] for [[Unified Communications VMWare Requirements|the version of VMware vSphere ESXi required by UC]].  
*it is on the [http://www.vmware.com/go/hcl VMware HCL] for [[Unified Communications VMWare Requirements|the version of VMware vSphere ESXi required by UC]].  
*it carries a CPU model supported by UC [[UC Virtualization Supported Hardware#Processor|(described later in this policy)]].  
*it carries a CPU model supported by UC [[UC Virtualization Supported Hardware#Processor|(described later in this policy)]].  
*it satisfies all other requirements of this policy<br>  
*it satisfies all other requirements of this policy<br>  
-
*Otherwise, any HP/IBM model, generation, form factor&nbsp;(rack, blade) may be used.
+
*Otherwise, any 3rd-party vendor, model, generation, form factor&nbsp;(rack, blade) may be used.
| rowspan="3" |
| rowspan="3" |
The following are '''NOT supported''':
The following are '''NOT supported''':
-
* Cisco, HP or IBM server models that do not satisfy the rules of this policy.
+
* Cisco or 3rd-party server models that do not satisfy the rules of this policy.
* Cisco 7800 Series Media Convergence Servers (MCS 7800) regardless of CPU model
* Cisco 7800 Series Media Convergence Servers (MCS 7800) regardless of CPU model
* Cisco UCS Express (SRE-V 9xx on ISR router hardware)
* Cisco UCS Express (SRE-V 9xx on ISR router hardware)
* Cisco UCS E-Series Blade Servers (E14x/16x on ISR router hardware)
* Cisco UCS E-Series Blade Servers (E14x/16x on ISR router hardware)
-
* Any other 3rd-party server vendor (such as Dell, Fujitsu, Oracle/Sun, NEC, etc.)
+
* For additional considerations, please see [http://www.cisco.com/en/US/customer/products/ps6884/products_tech_note09186a0080bf23f5.shtml TAC TechNote 115955].
-
* Cisco TAC is not obligated to troubleshoot UC app issues if the apps are deployed on unsupported hardware.
+
|-
|-
Line 579: Line 591:
<br>
<br>
-
= Processors / CPUs =
+
= Processors / CPUs =
-
UC applications require explicit qualification of CPU architectures, due to real-time technical considerations and customer requirements for predictable design rules. Therefore:
+
Cisco Collaboration is a set of mission-critical, "Tier 0" applications, where customer expectations for availability, stability and predictable performance are higher than for traditional business applications. Cisco Collaboration apps are real-time and latency-sensitive, with resource footprints and operational characteristics very different from those of traditional business applications. Therefore note the following:  
-
* not every CPU architecture will be supported
+
 
-
* within a supported CPU architecture, not every CPU model will be supported
+
<br>{{ note |
-
* UC support of new CPU architectures/models may lag the release date from Intel and/or server vendors.
+
* Explicit qualification of new or existing CPU architectures is required. 
-
<br>
+
* Until qualification occurs, new CPU architectures are not allowed or supported, even if they are believed to be "better" than currently supported CPU models.
-
{{ note | Until UC qualification occurs, new CPU models are not supported, even if they are believed to be "better" than currently supported models. }}
+
* Some CPU architectures may never be allowed or supported by Cisco Collaboration (e.g. Intel desktop architecture or Xeon 65xx).  Within an allowed or supported CPU architecture, some CPU models may never be supported (e.g. due to insufficient physical core speed). 
-
<br>
+
* Collaboration application support for new CPU architectures/models may lag the release date from Intel and/or server vendors.
-
Note that processor support varies by UC application - see the [[Unified Communications Virtualization Supported Applications | Supported Applications matrix]] <br>
+
* Collaboration applications require minimum physical core speeds on allowed CPU architectures.  Higher capacity VM configurations require higher minimum physical core speeds.}} <br>  
 +
 
 +
'''CPUs for virtualized Cisco Collaboration must meet these requirements:'''
 +
 
 +
:*Collaboration-defined "physical CPU type", either "Full UC Performance" or "Restricted UC Performance" (described in the first table below).
 +
:*CPU rules for UC on UCS TRC or Specs-based in the second table below.
 +
:*Allowed for the specific Cisco Collaboration app (see links for each app on www.cisco.com/go/uc-virtualized)
 +
 
 +
<br>
 +
 
 +
{{ note | Only certain VM configurations of certain Collaboration apps are allowed to run on a "Restricted UC Performance" CPU.  See the application pages on www.cisco.com/go/uc-virtualized.}}
 +
 
 +
<br>
 +
 
 +
{| cellspacing="1" cellpadding="1" border="1"
 +
|-
 +
| bgcolor="lightgray" align="center" | '''CPU Architecture &amp;&nbsp;Models<br>'''
 +
| bgcolor="lightgray" align="center" | '''Full UC&nbsp;Performance&nbsp;CPUs'''<br>
 +
| bgcolor="lightgray" align="center" | '''Restricted UC&nbsp;Performance CPUs'''<br>May not be supported for your UC app - see links at www.cisco.com/go/uc-virtualized
 +
|-
 +
| width="300" align="center" colspan="3" | '''Shipping CPUs'''<br>
 +
|-
 +
| width="250" | [http://ark.intel.com/products/codename/33175/Westmere-EX Intel Xeon E7-2800, E7-4800 or E7-8800]<br>(Westmere-EX)<br>
 +
| Any model with physical core speed 2.40 GHz or higher.<br>
 +
| Any model with physical core speed 2.00 GHz to 2.39 GHz.<br>
 +
|-
 +
| [http://ark.intel.com/products/codename/29902/ Intel Xeon E5-2600v2]<br>(Ivy Bridge-EP - note E5-16xx not supported)<br>
 +
| Any model with physical core speed 2.50 GHz or higher.<br>
 +
| Any model with physical core speed 2.00 GHz to 2.49 GHz.<br>
 +
|-
 +
| [http://ark.intel.com/products/codename/33170/Sandy-Bridge-EP Intel Xeon E5-2600 or E5-4600]<br>(Sandy Bridge-EP)<br>
 +
| Any model with physical core speed 2.50 GHz or higher.<br>
 +
| Any model with physical core speed 2.00 GHz to 2.49 GHz.<br>
 +
|-
 +
| [http://ark.intel.com/products/codename/33169/Sandy-Bridge-EN Intel Xeon E5-2400]<br>(Sandy Bridge-EN)<br>
 +
| N/A<br>
 +
| Any model with physical core speed 2.00 GHz or higher.<br>
 +
|-
 +
| align="center" colspan="3" | '''Older (end of sale)&nbsp;CPUs'''<br>
 +
|-
 +
| [http://ark.intel.com/products/codename/33164/Nehalem-EX Intel Xeon 7500]<br>(Nehalem-EX)<br>
 +
| Any model with minimum physical core speed of 2.53 GHz or higher.<br>
 +
|
 +
Any model with minimum physical core speed of 2.00 GHz to 2.52 GHz.
 +
 
 +
Note: not supported by the majority of&nbsp;Collaboration apps.&nbsp; Recommend using currently shipping models instead.<br>
 +
 
 +
|-
 +
| [http://ark.intel.com/products/codename/33174/Westmere-EP Intel Xeon 5600]<br>(Westmere-EP)<br>
 +
| Any model with minimum physical core speed of 2.53 GHz or higher.<br>
 +
|
 +
Any model with minimum physical core speed of 2.00 GHz to 2.52 GHz.
 +
 
 +
Note:&nbsp;not supported by the majority of&nbsp;Collaboration apps.&nbsp; Recommend using currently shipping models instead.<br>
 +
 
 +
|-
 +
| align="center" colspan="3" |
 +
For purposes of [[Unified Communications Virtualization Sizing Guidelines|sizing rules and co-residency]], virtualized UC apps see equivalent performance from one physical CPU core on any of the above architectures. E.g. UC apps perform equivalently on 1 physical core of 5600 at 2.53+&nbsp;GHz or 1 physical core of E5-2600 at 2.50+ GHz or 1 physical core of E7-2800 at 2.40+&nbsp;GHz.<br>
 +
 
 +
|}
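The table above can also be read as a simple lookup. The following is a minimal sketch (Python), assuming the architecture names and speed thresholds exactly as listed in the table; the function name and data structure are illustrative, not a Cisco tool:

```python
# Sketch: classify a CPU per the policy table above.
# Thresholds (GHz) are (Full UC Performance minimum, Restricted UC Performance minimum);
# anything not listed, or below the Restricted minimum, is unsupported.

THRESHOLDS = {
    "Westmere-EX":     (2.40, 2.00),  # Xeon E7-2800/4800/8800
    "Ivy Bridge-EP":   (2.50, 2.00),  # Xeon E5-2600v2 (E5-16xx not supported)
    "Sandy Bridge-EP": (2.50, 2.00),  # Xeon E5-2600 or E5-4600
    "Sandy Bridge-EN": (None, 2.00),  # Xeon E5-2400: Restricted only
    "Nehalem-EX":      (2.53, 2.00),  # Xeon 7500 (end of sale)
    "Westmere-EP":     (2.53, 2.00),  # Xeon 5600 (end of sale)
}

def uc_cpu_type(architecture, core_ghz):
    """Return 'Full', 'Restricted', or 'unsupported' per the table above."""
    if architecture not in THRESHOLDS:
        return "unsupported"          # unlisted architectures are not qualified
    full_min, restricted_min = THRESHOLDS[architecture]
    if full_min is not None and core_ghz >= full_min:
        return "Full"
    if core_ghz >= restricted_min:
        return "Restricted"
    return "unsupported"              # below minimum physical core speed
```

For example, a Xeon E5-2600 model at 2.40 GHz classifies as "Restricted", so it is only usable where the app's VM configuration permits Restricted UC Performance CPUs.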
 +
 
 +
<br>
 +
 
 +
<br>  
{| width="1200" style="" class="wikitable FCK__ShowTableBorders"
{| width="1200" style="" class="wikitable FCK__ShowTableBorders"
|-
|-
-
!  
+
! <br>
-
! UC on UCS TRC
+
! UC on UCS TRC  
! UC on UCS Specs-based  
! UC on UCS Specs-based  
-
! HP/IBM Specs-based
+
! Third-party Server Specs-based  
! Not supported  
! Not supported  
-
 
+
<br>
|-
|-
-
! Physical CPU Quantity  
+
! Physical Sockets / CPU Quantity  
-
| width="140" | must exactly match what is listed in [[#UC on UCS Tested Reference Configurations | Table 1]].
+
| width="140" | must exactly match what is listed in [[#UC_on_UCS_Tested_Reference_Configurations|Table 1]].  
-
| colspan="2" | Customer choice (subject to what server model allows).
+
| colspan="2" | Customer choice (subject to what server model allows).  
-
| rowspan="4" width="300" |
+
| width="300" rowspan="4" |  
-
The following CPUs are '''NOT supported''' for UC:
+
The following CPUs are '''NOT supported''' for UC:  
-
* Intel CPUs that are IN one of the supported architectures/families, but do NOT meet minimum physical core speeds, are not supported for UC.
+
 
-
* Unlisted Intel CPU architectures/families (such as Intel Xeon 6500 or Intel Xeon E5-2400) are not supported for UC. An Intel CPU architecture is not supported for UC unless qualified by UC and listed above.
+
*Intel CPUs that are IN one of the supported architectures/families, but do NOT meet minimum physical core speeds, are not supported for UC.  
-
* Other CPU vendors such as AMD are not supported for UC.
+
*Unlisted Intel CPU architectures/families (such as Intel Xeon 6500, E5-16xx, Core-i7, etc.) are not supported for UC. An Intel CPU architecture is not supported for UC unless listed here.  
 +
*Other CPU vendors such as AMD are not supported for UC.
 +
 
<br> Cisco TAC is not obligated to troubleshoot UC app issues when deployed on unsupported hardware.  
<br> Cisco TAC is not obligated to troubleshoot UC app issues when deployed on unsupported hardware.  
 +
<br>
 +
 +
<br>
|-
|-
-
! Physical CPU Vendor and Model
+
! Physical CPU Vendor and CPU model
-
| must exactly match what is listed in [[#UC on UCS Tested Reference Configurations | Table 1]].
+
|  
 +
Must either exactly match the TRC BOM's CPU model in [[#UC_on_UCS_Tested_Reference_Configurations|Table 1]] or use a CPU model that satisfies the following requirements:
 +
 
 +
*Same physical CPU core count as the TRC BOM's CPU model in [[#UC_on_UCS_Tested_Reference_Configurations|Table 1]].  
 +
*Same CPU architecture as the TRC BOM's CPU model in [[#UC_on_UCS_Tested_Reference_Configurations|Table 1]].
 +
*Physical CPU core speed same or higher than that of the TRC BOM's CPU model in [[#UC_on_UCS_Tested_Reference_Configurations|Table 1]].
 +
 
 +
E.g. if the TRC BOM was tested with 2-socket Intel Xeon E5-2680 (Romley/Sandy Bridge-EP, 8-core, 2.7 GHz), then 2 sockets of any 8-core CPU at 2.7 GHz or higher from the E5-26xx/46xx family (Intel&nbsp;Xeon Romley/Sandy Bridge-EP architecture) may be substituted.
 +
 
| colspan="2" |  
| colspan="2" |  
-
* Any [http://ark.intel.com/products/codename/33174/Westmere-EP Intel Xeon 5600 model] with minimum physical core speed of 2.53 GHz
+
See app links in table on http://www.cisco.com/go/uc-virtualized or [[Unified Communications Virtualization Supported Applications|Supported Applications]] for which CPU Types are allowed for a given VM configuration of a UC app. E.g. the Unified Communications Manager 7500 user VM configuration is only allowed on a Full UC Performance CPU. <br>
-
* Any [http://ark.intel.com/products/codename/33164/Nehalem-EX  Intel Xeon 7500 model] with minimum physical core speed of 2.53 GHz
+
 
-
* Any [http://ark.intel.com/products/codename/33175/Westmere-EX Intel Xeon E7-2800, E7-4800 or E7-8800 model] with minimum physical core speed of 2.4 GHz
+
<br>
-
* Any [http://ark.intel.com/products/codename/33170/Sandy-Bridge-EP Intel Xeon E5-2600 or E5-4600 model] with minimum physical core speed of 2.50 GHz
+
-
<br>For purposes of [[Unified Communications Virtualization Sizing Guidelines | sizing rules and co-residency]], virtualized UC apps see equivalent performance from a physical CPU core on any of the above architectures.  
+
|-
|-
! rowspan="2" | Total physical CPU cores  
! rowspan="2" | Total physical CPU cores  
|  
|  
-
Total ''available'' is fixed based on the CPU models in [[#UC on UCS Tested Reference Configurations | Table 1]].
+
Total ''available'' is fixed based on the CPU models in [[#UC_on_UCS_Tested_Reference_Configurations|Table 1]].  
 +
 
| colspan="2" |  
| colspan="2" |  
Total ''available'' depends on the physical server's socket count and the CPU model selected.  
Total ''available'' depends on the physical server's socket count and the CPU model selected.  
 +
<br>
|-
|-
| colspan="3" |  
| colspan="3" |  
-
Total ''required'' is based on:
+
Total ''required'' is based on:  
-
* the [[Unified Communications Virtualization Downloads (including OVA/OVF Templates)|sum of UC virtual machines' vCPUs]]
+
-
* and the [[Unified Communications Virtualization Sizing Guidelines| UC sizing and co-residency rules (click here)]].  <br>
+
-
Per these policies, recall that '''physical CPU cores may not be over-subscribed for UC VMs'''
+
*the [[Unified Communications Virtualization Downloads (including OVA/OVF Templates)|sum of UC virtual machines' vCPUs]]
-
* I.e. '''one physical CPU core must equal one VM vCPU core'''.
+
*and the [[Unified Communications Virtualization Sizing Guidelines|UC sizing and co-residency rules (click here)]]. <br>
-
* Hyper-threading on the CPU should be enabled when available, but the resulting Logical Cores do not change UC app rules. UC rules are based on 1:1 mapping of physical cores to virtual cores, not Logical Cores to virtual cores.
+
 
-
<br> Cisco TAC is not obligated to troubleshoot UC app issues in deployments with insufficient physical processor cores or speed.
+
Per these policies, recall that '''physical CPU cores may not be over-subscribed for UC VMs'''  
 +
 
 +
*I.e. '''one physical CPU core must equal one VM vCPU core'''.  
 +
*Hyper-threading on the CPU should be enabled when available, but the resulting logical cores do not change UC app rules. UC rules are based on a 1:1 mapping of physical cores to vCPUs, not logical cores to vCPUs.
 +
 
 +
<br> Cisco TAC is not obligated to troubleshoot UC app issues in deployments with insufficient physical processor cores or speed.  
|}
|}
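The "no over-subscription" rule above can be sketched as a one-line capacity check. This is illustrative only (Python); real vCPU counts come from the Cisco OVA templates, and co-residency rules impose additional constraints beyond this sum:

```python
# Sketch: one physical CPU core per VM vCPU, no over-subscription.
# Hyper-threading logical cores are deliberately ignored: the policy
# counts physical cores only.

def fits_on_host(physical_cores, vm_vcpus):
    """True if the sum of UC VM vCPUs does not exceed physical cores."""
    return sum(vm_vcpus) <= physical_cores

# Example: an 8-core TRC host with a 4-vCPU VM and two 2-vCPU VMs fits.
assert fits_on_host(8, [4, 2, 2])
# Adding one more 1-vCPU VM would over-subscribe the host - not allowed.
assert not fits_on_host(8, [4, 2, 2, 1])
```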
 +
<br><br>
<br><br>
Line 653: Line 745:
!   
!   
! UC on UCS TRC
! UC on UCS TRC
-
! Specs-based (UCS or HP/IBM)
+
! Specs-based (UCS or 3rd-party Server)
|-
|-
Line 687: Line 779:
*Compatible with the VMware HCL and compatible with the supported server model used  
*Compatible with the VMware HCL and compatible with the supported server model used  
*''kernel disk command latency'' &lt; 4ms (no spikes above) and ''physical device command latency'' &lt; 20 ms (no spikes above).  For NFS NAS, ''guest latency'' < 24 ms (no spikes above)
*''kernel disk command latency'' &lt; 4ms (no spikes above) and ''physical device command latency'' &lt; 20 ms (no spikes above).  For NFS NAS, ''guest latency'' < 24 ms (no spikes above)
-
*[[Unified Communications Virtualization Downloads (including OVA/OVF Templates)|Published '''vDisk''' capacity requirements of UC VMs ]]. Disk space must be available to the VM as needed. If thin provisioned, running out of disk space due to thin provisioning will crash the application and corrupt the virtual disk (which may also prevent restore from backup on the virtual disk).  
+
*[[Unified Communications Virtualization Downloads (including OVA/OVF Templates)|Published '''vDisk''' capacity requirements of UC VMs. ]]  
 +
:* For DAS-only TRCs (including [[Cisco Business Edition 6000]]), thin provisioning (either from VMware or from the storage array) is '''not''' supported. Thick provisioning must be used.
 +
:* For diskless TRCs and any Specs-based server, thin provisioning (either from VMware or from storage array) is allowed with the caveat that disk space must be available to the VM as needed. Running out of disk space due to thin provisioning will crash the application and corrupt the virtual disk (which may also prevent restore from backup on the virtual disk).  
*[[IO Operations Per Second (IOPS)|Published '''IOPS''' performance requirements of UC VMs]] (including excess capacity provisioned to handle IOPS spikes such as during Cisco Unified Communications Manager upgrades).  
*[[IO Operations Per Second (IOPS)|Published '''IOPS''' performance requirements of UC VMs]] (including excess capacity provisioned to handle IOPS spikes such as during Cisco Unified Communications Manager upgrades).  
*Other storage system design requirements ([[UC Virtualization Storage System Design Requirements |click here]]).
*Other storage system design requirements ([[UC Virtualization Storage System Design Requirements |click here]]).
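The latency limits above ("no spikes above") can be checked against observed samples. A minimal sketch (Python) follows; the sample lists are illustrative, and in practice the values would come from esxtop or vCenter performance statistics:

```python
# Sketch: validate observed storage latencies against the UC thresholds above.

KERNEL_MAX_MS = 4      # kernel disk command latency
DEVICE_MAX_MS = 20     # physical device command latency
NFS_GUEST_MAX_MS = 24  # guest latency for NFS NAS

def latencies_ok(samples_ms, limit_ms):
    """True only if no sample spikes at or above the limit."""
    return max(samples_ms) < limit_ms

# Illustrative samples: kernel latency stays under 4 ms, device under 20 ms.
kernel = [1.2, 0.8, 3.1]
device = [9.5, 14.0, 18.9]
assert latencies_ok(kernel, KERNEL_MAX_MS)
assert latencies_ok(device, DEVICE_MAX_MS)
```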
Line 702: Line 796:
! <br>
! <br>
! UC on UCS TRC  
! UC on UCS TRC  
-
! Specs-based (UCS or HP/IBM)
+
! Specs-based (UCS or 3rd-party Server)
|-
|-
! Supported Storage Options  
! Supported Storage Options  
Line 725: Line 819:
! <br>
! <br>
! UC on UCS TRC  
! UC on UCS TRC  
-
! Specs-based (UCS or HP/IBM)
+
! Specs-based (UCS or 3rd-party Server)
|-
|-
! rowspan="2" | Disk Size and Speed  
! rowspan="2" | Disk Size and Speed  
Line 741: Line 835:
*compatible with the VMware HCL and compatible with the server model used  
*compatible with the VMware HCL and compatible with the server model used  
-
*all UC latency, performance and capacity requirements are met
+
*all UC latency, performance and capacity requirements are met.  To ensure optimum UC app performance, '''be sure to use Battery Backup cache or SuperCap on RAID controllers for DAS.'''
|-
|-
Line 767: Line 861:
<br>  
<br>  
-
<br> <br> '''Removable Media'''<br>Booting from USB devices or SD cards is not supported with UC apps at this time.<br>
+
<br> <br> '''Removable Media'''
 +
 
 +
{| width="770" class="wikitable FCK__ShowTableBorders" style=""
 +
|-
 +
! <br>
 +
! UC on UCS TRC
 +
! Specs-based (UCS or 3rd-party Server)
 +
|-
 +
! Boot from USB devices or SD cards  
 +
|
 +
* Not allowed or supported for UC apps.  UC apps must boot from DAS or FC SAN depending on Table 1.
 +
* Not allowed or supported for VMware vSphere ESXi.  Must boot from DAS or FC SAN depending on Table 1.
 +
* Note that all current TRCs are either diskless blades or C-Series servers with DAS/HDD.  SD cards in C-Series TRCs are included for convenience to carry the UCS utilities (such as SCU and HUU) in lieu of a DVD drive.
 +
|
 +
* Not allowed or supported for UC apps.  UC apps must boot from DAS, SAN or NAS per the Specs-based storage requirements.
 +
* Allowed for VMware vSphere ESXi (with the same support demarcations as "boot from FC SAN").
 +
|}
Otherwise, there are no UC-specific requirements or restrictions. The different methods of [[Implementing Virtualization Deployments#Installing_UC_Applications_in_the_VM|installing UC apps into VMs]] can leverage the following distribution types of Cisco UC software:  
Otherwise, there are no UC-specific requirements or restrictions. The different methods of [[Implementing Virtualization Deployments#Installing_UC_Applications_in_the_VM|installing UC apps into VMs]] can leverage the following distribution types of Cisco UC software:  
Line 784: Line 894:
!  
!  
! UC on UCS TRC  
! UC on UCS TRC  
-
! Specs-based (UCS or HP/IBM)
+
! Specs-based (UCS or 3rd-party Server)
|-
|-
! Physical Adapter Hardware (NIC, HBA, VIC, CNA)  
! Physical Adapter Hardware (NIC, HBA, VIC, CNA)  
Line 832: Line 942:
= UC on UCS TRC Bills of Material (BOMs) =
= UC on UCS TRC Bills of Material (BOMs) =
-
{{ note | Do not assume that every UCS bundle part number on [https://apps.cisco.com/QuickCatalog/home.do UCS Quick Catalog] can be used with UC on UCS.  Before quoting one of these bundles, identify the BOM that it ships and see below:
+
{{ note | Bundle SKUs for UC on UCS TRCs are listed in the [http://apps.cisco.com/ccw/cpc/offers/uconucs UC on UCS section of Cisco Commerce Build and Price tool].
-
* If the bundle meets TRC requirements, it may be quoted for UC on UCS TRC.
+
 
 +
Do not assume that other UCS bundle SKUs on [http://apps.cisco.com/ccw/cpc/home.do Cisco Commerce Build and Price] can be used with UC on UCS.  Before quoting one of these bundles, identify the BOM that it ships and see below:
 +
* If the bundle meets TRC requirements on this page, it may be quoted for UC on UCS TRC.
* If the bundle does NOT meet TRC requirements but DOES meet Specs-based requirements, then it may be quoted for UC on UCS Specs-based only.
* If the bundle does NOT meet TRC requirements but DOES meet Specs-based requirements, then it may be quoted for UC on UCS Specs-based only.
* If the bundle does NOT meet TRC requirements and also does NOT meet Specs-based requirements, then it may NOT be quoted for UC on UCS at all without modification.    }} <br>
* If the bundle does NOT meet TRC requirements and also does NOT meet Specs-based requirements, then it may NOT be quoted for UC on UCS at all without modification.    }} <br>
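The three quoting outcomes above reduce to a short decision function. This sketch (Python) is purely illustrative of the policy; the function name and return strings are assumptions, not Cisco tooling:

```python
# Sketch: quoting decision for a UCS bundle SKU, per the note above.

def quoting_options(meets_trc, meets_specs_based):
    """Map a bundle's BOM compliance to how it may be quoted for UC on UCS."""
    if meets_trc:
        return "UC on UCS TRC"
    if meets_specs_based:
        return "UC on UCS Specs-based only"
    return "not quotable for UC on UCS without modification"
```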
Line 1,026: Line 1,138:
=== B200 M3 TRC#1 ===
=== B200 M3 TRC#1 ===
 +
 +
This configuration is also quotable as either UCUCS-EZ-B200M3 (single blade) or UCSB-EZ-UC-B200M3 (multiple blades with chassis and switching).
{| class="prettytable"
{| class="prettytable"
Line 1,112: Line 1,226:
<br>
<br>
-
 
-
=== B200 M2 TRC#1 ===
 
-
 
-
This BOM was also quotable via fixed-configuration bundle UCS-B200M2-VCS1.  Memory and hard drive changes are due to industry technology transitions, not UC app requirements.
 
-
 
-
{| class="prettytable"
 
-
|-
 
-
|'''Quantity'''
 
-
|'''Cisco Part Number'''
 
-
|'''Description'''
 
-
 
-
|-
 
-
|'''1'''
 
-
|'''N20-B6625-1'''
 
-
|UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine
 
-
 
-
|-
 
-
|'''2'''
 
-
|'''A01-X0109'''
 
-
|2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz
 
-
 
-
|-
 
-
|'''12'''
 
-
|
 
-
Either:
 
-
*'''N01-M304GB1
 
-
*'''A02-M304GB2-L
 
-
*'''UCS-MR-1X041RX-A
 
-
|<br>
 
-
*4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
 
-
*4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
 
-
*4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
 
-
 
-
|-
 
-
|'''2'''
 
-
|Either:
 
-
*'''A03-D146GC2
 
-
*'''UCS-HDD300GI2F105'''
 
-
|<br>
 
-
*146GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
 
-
*300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
 
-
 
-
|-
 
-
|'''1'''
 
-
|'''N20-AC0002'''
 
-
|UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb1
 
-
 
-
|-
 
-
|'''2'''
 
-
 
-
|'''N20-BHTS1'''
 
-
 
-
|Auto-Included: CPU Heat Sink for UCS B200 M1 Blade Server
 
-
|}
 
-
 
-
 
-
 
-
<br>
 
-
 
-
=== B200 M2 TRC#2 ===
 
-
Memory and hard drive changes are due to industry transitions, not UC app requirements.
 
-
 
-
{| class="prettytable"
 
-
|-
 
-
|'''Quantity'''
 
-
|'''Cisco Part Number'''
 
-
|'''Description'''
 
-
 
-
|-
 
-
|'''1'''
 
-
|'''N20-B6625-1'''
 
-
|UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine
 
-
 
-
|-
 
-
|'''2'''
 
-
|'''A01-X0109'''
 
-
|2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz
 
-
 
-
|-
 
-
|'''12'''
 
-
|
 
-
Either:
 
-
*'''N01-M304GB1
 
-
*'''A02-M304GB2-L
 
-
*'''UCS-MR-1X041RX-A
 
-
|<br>
 
-
*4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
 
-
*4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
 
-
*4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
 
-
 
-
|-
 
-
|
 
-
|
 
-
| Diskless
 
-
 
-
|-
 
-
|'''1'''
 
-
|'''N20-AC0002'''
 
-
|UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb1
 
-
 
-
|-
 
-
|'''2'''
 
-
 
-
|'''N20-BHTS1'''
 
-
 
-
|Auto-Included: CPU Heat Sink for UCS B200 M1 Blade Server
 
-
|}
 
-
 
-
 
-
 
-
<br>
 
-
 
-
 
-
<br>
 
-
 
-
=== B200 M1 TRC#1 ===
 
-
 
-
This configuration was also quotable as UCS-B200M2-VCS1.
 
-
 
-
{| class="prettytable"
 
-
|-
 
-
|
 
-
'''Quantity'''
 
-
 
-
|
 
-
'''Cisco Part Number'''
 
-
 
-
|
 
-
'''Description'''
 
-
 
-
|-
 
-
|
 
-
'''1'''
 
-
 
-
|
 
-
'''N20-B6620-1'''
 
-
 
-
|
 
-
UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine
 
-
 
-
|-
 
-
|
 
-
'''2'''
 
-
 
-
|
 
-
'''N20-X00002'''
 
-
 
-
|
 
-
2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
 
-
 
-
|-
 
-
|
 
-
'''8'''
 
-
 
-
|
 
-
'''N01-M304GB1'''
 
-
 
-
|
 
-
4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
 
-
 
-
|-
 
-
|
 
-
'''2'''
 
-
 
-
|
 
-
'''A03-D146GA2'''
 
-
 
-
|
 
-
146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted
 
-
 
-
|-
 
-
|
 
-
'''1'''
 
-
 
-
|
 
-
'''N20-AQ0002'''
 
-
 
-
|
 
-
UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb
 
-
 
-
|-
 
-
|
 
-
'''2'''
 
-
 
-
|
 
-
'''N20-BHTS1'''
 
-
 
-
|
 
-
Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server
 
-
 
-
|}
 
-
 
-
 
-
 
-
<br>
 
-
 
-
=== B200 M1 TRC#2 ===
 
-
 
-
{| class="prettytable"
 
-
|-
 
-
|
 
-
'''Quantity'''
 
-
 
-
|
 
-
'''Cisco Part Number'''
 
-
 
-
|
 
-
'''Description'''
 
-
 
-
|-
 
-
|
 
-
'''1'''
 
-
 
-
|
 
-
'''N20-B6620-1'''
 
-
 
-
|
 
-
UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine
 
-
 
-
|-
 
-
|
 
-
'''2'''
 
-
 
-
|
 
-
'''N20-X00002'''
 
-
 
-
|
 
-
2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
 
-
 
-
|-
 
-
|
 
-
'''8'''
 
-
 
-
|
 
-
'''N01-M304GB1'''
 
-
 
-
|
 
-
4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
 
-
 
-
|-
 
-
|
 
-
|
 
-
|Diskless
 
-
 
-
|-
 
-
|
 
-
'''1'''
 
-
 
-
|
 
-
'''N20-AQ0002'''
 
-
 
-
|
 
-
UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb
 
-
 
-
|-
 
-
|
 
-
'''2'''
 
-
 
-
|
 
-
'''N20-BHTS1'''
 
-
 
-
|
 
-
Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server
 
-
 
-
|}
 
-
 
-
 
Line 1,483: Line 1,330:
'''1'''

| One of:<br>
*'''N2XX-AIPCI02'''
*'''UCSC-PCIE-IRJ45'''

|<br>
* Intel Quad port GbE Controller (E1G44ETG1P20)
* Intel i350 Quad Port 1Gb Adapter

|-
{{ note | The C240 M3L (LFF) is only supported under UC on UCS Specs-based. }}

This configuration is also available via bundle UCUCS-EZ-C240M3S. Note that the RAID controller shipped with this bundle depends on date of purchase. <br>

{| class="prettytable"
|-
|'''1
|'''UCSC-SD-16G-C240
|16GB SD Card Module for C240 Servers

|-
|'''1
|'''One of:
*UCS-RAID-9266
*UCS-RAID-9266CV
*UCS-RAID9271CV-8I
|<br>
*MegaRAID 9266-8i + battery backup for C240 and C220
*MegaRAID 9266CV-8i w/TFM + Super Cap
*MegaRAID 9271CV RAID card with 8 internal SAS/SATA ports, S

|-
|'''UCSC-RAIL-2U
|Auto-included: 2U Rail Kit for UCS C-Series servers

|-
=== C220 M3S (SFF) TRC#1 ===

{{ note | This TRC is NOT supported for use with [[Cisco Business Edition 6000]]. }}

{{ note | The C220 M3L (LFF) is only supported under UC on UCS Specs-based. }}

This configuration is also available as bundle UCUCS-EZ-C220M3S. Note that the RAID controller shipped with this bundle depends on date of purchase.
|-
|'''1
|'''One of:
*UCS-RAID-9266
*UCS-RAID-9266CV
*UCS-RAID9271CV-8I
|<br>
*MegaRAID 9266-8i + battery backup for C240 and C220
*MegaRAID 9266CV-8i w/TFM + Super Cap
*MegaRAID 9271CV RAID card with 8 internal SAS/SATA ports, S

|-
|'''1
|'''UCSC-SD-16G-C220
|16GB SD Card Module for C220 Servers

|-
|'''UCSC-PSU-650W
|650W power supply for C-series rack servers

|-
=== C220 M3S (SFF) TRC#2 ===

{{ note |
*This hardware configuration is supported for use as:
** a "Medium Density (MD)" server for [[Cisco Business Edition 6000]] (as auto-included option in a BE6K bundle)
** a "Small TRC" for UC on UCS (as a separately ordered hardware-only bundle: UCSC-C220-M3SBE&#x3D; )
}}<br>

The RAID controller shipped with the above bundles depends on the date purchased.

{{ note | The C220 M3L (LFF) is only supported under UC on UCS Specs-based. }} <br>
|-
|'''1
|'''UCSC-SD-16G-C220
|16GB SD Card Module for C220 Servers

|-
|'''1
|'''One of:
*UCS-RAID-9266
*UCS-RAID-9266CV
*UCS-RAID9271CV-8I
|<br>
* MegaRAID 9266-8i + battery backup for C240 and C220
* MegaRAID 9266CV-8i w/TFM + Super Cap
* MegaRAID 9271CV RAID card with 8 internal SAS/SATA ports, S

|-
|'''UCSC-PSU-650W
|650W power supply for C-series rack servers

|-
|'''UCSC-PCIF-01F
|Auto-included: Full height PCIe filler for C-Series
|}

<br>

=== C220 M3S (SFF) TRC#3 ===

{{ note |
*This hardware configuration is supported for use as either:
** a "High Density (HD)" server for [[Cisco Business Edition 6000]] (as auto-included option in a BE6K bundle)
** a "Small Plus TRC" for UC on UCS (as separately ordered hardware-only a la carte using BOM below)
}}<br>

The RAID controller shipped with the above bundles depends on the date purchased.

{{ note | The C220 M3L (LFF) is only supported under UC on UCS Specs-based. }} <br>

{| class="prettytable"
|-
|'''Quantity'''
|'''Cisco Part Number'''
|'''Description'''

|-
|'''1'''
|'''UCSC-C220-M3S
|UCS C220 M3 SFF w/o CPU, mem, HDD, PCIe, w/ rail kit

|-
|'''2
|'''UCS-CPU-E5-2665
|2.40 GHz E5-2665/115W 8C/20MB Cache/DDR3 1600MHz

|-
|'''6
|'''UCS-MR-1X082RY-A
|8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v

|-
|'''8
|'''UCS-HDD300GI2F105
|300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted

|-
|'''1
|'''One of:
*UCS-RAID-9266
*UCS-RAID-9266CV
*UCS-RAID9271CV-8I
|<br>
* MegaRAID 9266-8i + battery backup for C240 and C220
* MegaRAID 9266CV-8i w/TFM + Super Cap
* MegaRAID 9271CV RAID card with 8 internal SAS/SATA ports, S

|-
|'''
|'''
|DVD drive not offered with C220 M3.

|-
|'''1
|'''UCSC-PCIE-IRJ45
|Intel i350 Quad Port 1Gb Adapter

|-
|'''2
|'''UCSC-PSU-650W
|650W power supply for C-series rack servers

|-
|'''2
|'''UCSC-HS-C220M3
|Auto-included: Heat Sink for UCS C220 M3 Rack Server

|-
|'''1
|'''UCSC-RAIL1
|Auto-included: Rail Kit for C220 servers
|}
<br>

<br>

= End of Sale UC on UCS TRC Bills of Material (BOMs) =

=== B200 M2 TRC#1 ===

This BOM was also quotable via fixed-configuration bundle UCS-B200M2-VCS1. Memory and hard drive changes are due to industry technology transitions, not UC app requirements.

{| class="prettytable"
|-
|'''Quantity'''
|'''Cisco Part Number'''
|'''Description'''

|-
|'''1'''
|'''N20-B6625-1'''
|UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine

|-
|'''2'''
|'''A01-X0109'''
|2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz

|-
|'''12'''
|
Either:
*'''N01-M304GB1
*'''A02-M304GB2-L
*'''UCS-MR-1X041RX-A
|<br>
*4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
*4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
*4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v

|-
|'''2'''
|Either:
*'''A03-D146GC2
*'''UCS-HDD300GI2F105'''
|<br>
*146GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
*300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted

|-
|'''1'''
|'''N20-AC0002'''
|UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb

|-
|'''2'''
|'''N20-BHTS1'''
|Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server
|}

<br>

=== B200 M2 TRC#2 ===

Memory and hard drive changes are due to industry transitions, not UC app requirements.

{| class="prettytable"
|-
|'''Quantity'''
|'''Cisco Part Number'''
|'''Description'''

|-
|'''1'''
|'''N20-B6625-1'''
|UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine

|-
|'''2'''
|'''A01-X0109'''
|2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz

|-
|'''12'''
|
Either:
*'''N01-M304GB1
*'''A02-M304GB2-L
*'''UCS-MR-1X041RX-A
|<br>
*4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
*4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
*4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v

|-
|
|
| Diskless

|-
|'''1'''
|'''N20-AC0002'''
|UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb

|-
|'''2'''
|'''N20-BHTS1'''
|Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server
|}

<br>

<br>

=== B200 M1 TRC#1 ===

This configuration was also quotable as UCS-B200M2-VCS1.

{| class="prettytable"
|-
|'''Quantity'''
|'''Cisco Part Number'''
|'''Description'''

|-
|'''1'''
|'''N20-B6620-1'''
|UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine

|-
|'''2'''
|'''N20-X00002'''
|2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz

|-
|'''8'''
|'''N01-M304GB1'''
|4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs

|-
|'''2'''
|'''A03-D146GA2'''
|146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted

|-
|'''1'''
|'''N20-AQ0002'''
|UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb

|-
|'''2'''
|'''N20-BHTS1'''
|Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server
|}

<br>

=== B200 M1 TRC#2 ===

{| class="prettytable"
|-
|'''Quantity'''
|'''Cisco Part Number'''
|'''Description'''

|-
|'''1'''
|'''N20-B6620-1'''
|UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine

|-
|'''2'''
|'''N20-X00002'''
|2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz

|-
|'''8'''
|'''N01-M304GB1'''
|4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs

|-
|
|
|Diskless

|-
|'''1'''
|'''N20-AQ0002'''
|UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb

|-
|'''2'''
|'''N20-BHTS1'''
|Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server
|}

=== C210 M2 TRC#1 ===
*'''N2XX-ABPCI03
*'''N2XX-ABPCI03-M3
*'''N2XX-AIPCI02
|<br>
*Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
*Broadcom 5709 Quad Port 10/100/1Gb NIC w/TOE iSCSI for M3 Se
*Intel Quad port GbE Controller (E1G44ETG1P20)
|-
*'''N2XX-ABPCI03
*'''N2XX-ABPCI03-M3
*'''N2XX-AIPCI02
|<br>
*Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
*Broadcom 5709 Quad Port 10/100/1Gb NIC w/TOE iSCSI for M3 Se
*Intel Quad port GbE Controller (E1G44ETG1P20)
|-
*'''N2XX-ABPCI03
*'''N2XX-ABPCI03-M3
*'''N2XX-AIPCI02
|<br>
*Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
*Broadcom 5709 Quad Port 10/100/1Gb NIC w/TOE iSCSI for M3 Se
*Intel Quad port GbE Controller (E1G44ETG1P20)
|-
<br>

=== C200 M2 TRC#1 ===

{{ note | This TRC has special rules for allowed VM OVA templates and allowed co-residency.}}
{| class="prettytable"
|-
|'''Quantity'''
|'''Cisco Part Number'''
|'''Description'''

|-
|'''1'''
|'''R200-1120402W'''
|UCS C200 M2 Srvr w/1PSU, DVD w/o CPU, mem, HDD or PCIe card

|-
|'''2'''
|'''A01-X0113'''
|2.13GHz Xeon E5506 80W CPU/4MB cache/DDR3 800MHz

|-
|'''6'''
*4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
*4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v

|-
|'''4'''
|'''R200-D1TC03'''
|Gen 2 1TB SAS 7.2K RPM

|-
|'''1'''
|'''R200-PL004'''
|LSI 6G MegaRAID 9260-4i card (C200 only)

|-
|'''1'''
|Either:
*'''R2XX-LBBU
*'''UCSC-LBBU02'''
|<br>
*Battery Back-up
*Battery back unit for C200 LFF and SFF M2

|-
|'''1
*Rail Kit for the UCS 200, 210, C250 Rack Servers
*Rail Kit for UCS C200, C210 Rack Servers (23.5 to 36")

|-
|'''2'''
|'''R200-BHTS1'''
|Included: CPU heat sink for UCS C200 M1 Rack Server

|-
|'''1'''
|'''R200-PCIBLKF1'''
|Included: PCIe Full Height blanking panel for UCS C-Series Rack Server

|-
|'''1'''
|'''R200-SASCBL-001'''
|Included: Internal SAS Cable for a base UCS C200 M1 Server

|-
|'''1
*650W power supply, w/added 5A Standby for UCS C200 or C210
*650W power supply unit for UCS C200 M1 or C210 M1 Rack Server

|-
|'''1'''
|'''R2XX-PSUBLKP'''
|Included: Power supply unit blanking pnl for UCS 200 M1 or 210 M1
|}
 +
 +
 +
 +
 +

Revision as of 17:57, 27 September 2013

Go to: Guidelines to Edit UC Virtualization Pages



Introduction

Note: Not all UC apps support all hardware options. Click here for supported apps matrix.

This web page describes supported compute, storage and network hardware for Virtualization of Cisco Unified Communications, including UC on UCS (Cisco Unified Communications on Cisco Unified Computing System). Click here for a checklist to design, quote and procure a virtualized UC solution that follows Cisco's support policy.

Cisco uses three different support models:

  • UC on UCS Tested Reference Configuration (TRC)
  • UC on UCS Specs-based
  • Third-party Server Specs-based

"TRC" used by itself means "UC on UCS Tested Reference Configuration (TRC)". "UC on UCS" used by itself refers to both UC on UCS TRC and UC on UCS Specs-based.
"Specs-based" used by itself refers to the common rules of UC on UCS Specs-based and Third-party Server Specs-based. 

Below is a comparison of the hardware support options. Note that the following are identical regardless of the support model chosen:

  • Virtual machine (OVA) definitions
  • VMware product, version and feature support
  • VMware configuration requirements for UC
  • Application/VM Co-residency policy (specifically regarding application mix, 3rd-party support, no reservations / oversubscription, virtual/physical sizing rules and max VM count per server).



Columns: UC on UCS TRC | UC on UCS Specs-based | Third-Party Server Specs-based | Other hardware

Basic Approach: Configuration-based | Rules-based | Rules-based | Not supported - does not satisfy this page's policy.

Allowed for which UC apps?: Click here for supported apps matrix | Click here for supported apps matrix | Click here for supported apps matrix | Not supported

UC-required Virtualization Software:
  • UC on UCS TRC:
    • Click here for general requirements.
    • VMware vCenter is optional.
    • One of the following is mandatory:
      • Cisco UC Virtualization Foundation
      • VMware vSphere
      • Click here for supported versions, editions, features, capacities and purchase options.
  • UC on UCS Specs-based:
    • Click here for general requirements.
    • VMware vCenter is mandatory. Also mandatory to capture Statistics Level 4 for maximum duration at each level.
    • One of the following is mandatory:
      • Cisco UC Virtualization Foundation
      • VMware vSphere
      • Click here for supported versions, editions, features, capacities and purchase options.
  • Third-Party Server Specs-based:
    • Click here for general requirements.
    • VMware vCenter is mandatory. Also mandatory to capture Statistics Level 4 for maximum duration at each level.
    • VMware vSphere is mandatory: click here for supported versions, features, capacities and purchase options.
  • Other hardware: N/A - not supported.

Allowed Servers: Select Cisco UCS listed in Table 1 (must follow all TRC rules in this policy) | Any Cisco UCS that satisfies this page's policy | Any 3rd-party server model that satisfies this page's policy | None

Required Level of Virtualization/Server Experience: Low/medium | High | High | N/A

Cisco-tested?: Joint validation of apps and server hardware by UC and UCS teams. | Generic server hardware validation by UCS team; not jointly validated with UC apps by Cisco. | No server hardware validation by Cisco; not jointly validated with UC apps by Cisco. | No Cisco testing (unsupported hardware)

Server Model, CPU and Component Choices: Less (customer accepts tradeoff of less hardware flexibility for more UC predictability) | More (customer assumes more test/design ownership to get more hardware flexibility) | More (customer assumes more test/design ownership to get more hardware flexibility) | None (unsupported hardware)

Does Cisco TAC support UC apps?: Yes, when all TRC rules in this policy are followed - UC apps on a C-Series DAS-only TRC are supported with Guaranteed performance; UC apps on a C-Series FC SAN TRC or B-Series FC SAN TRC are supported with Guaranteed performance provided all shared storage requirements in this policy are met. | Yes, when all Specs-based rules in this policy are followed; supported with performance Guidance only. | Yes, when all Specs-based rules in this policy are followed; supported with performance Guidance only. | UC apps not supported when deployed on unsupported hardware.

Does Cisco TAC support the server?: Yes - if used with UC apps, then all TRC rules in this policy must be followed. | Yes - if used with UC apps, then all UC on UCS Specs-based rules in this policy must be followed. | No - Cisco TAC supports products purchased from Cisco with a valid, paid-up maintenance contract. | No - Cisco TAC supports products purchased from Cisco with a valid, paid-up maintenance contract; UC apps are also not supported when deployed on unsupported hardware.

Who designs/determines the server's BOM?: Customer wants Cisco to own | Customer wants to own, with assistance from Cisco | Customer wants to own | N/A


For more details on Cisco UCS servers in general, see the following:


UC on UCS Tested Reference Configurations

Note:

What does a TRC definition include?

  • Definition of server model and local components (CPU, RAM, adapters, local storage) at the orderable part number level.
  • Required RAID configuration (e.g. RAID5, RAID10, etc.) - including battery backup cache or SuperCap - when the TRC uses DAS storage
  • Guidance on hardware installation and basic setup (e.g. click here).
  • Design, installation and configuration of external hardware is not included in TRC definition, such as:
    • Network routing and switching (e.g. routers, gateways, MCUs, ethernet/FC/FCoE switches, Cisco Catalyst/Nexus/MDS, etc.)
    • QoS configuration of route/switch network devices
    • Cisco UCS B-Series chassis and switching components (e.g. Cisco UCS 6100/6200, Cisco UCS 2100/2200, Cisco UCS 5100)
    • Storage arrays (such as those from EMC, NetApp or other vendors)
  • Configuration settings, patch recommendations or step by step procedures for VMware software are not included in TRC definition.
  • Infrastructure solutions such as Vblock from Virtual Computing Environment may also be leveraged for configuration details not included in the TRC definition.


Click here for basic guidance on TRC hardware setup.

Table 1 - UC on UCS TRCs

Note: Partners may find convenience bundle SKUs (hardware-only) for most TRCs at Cisco Build & Price: http://apps.cisco.com/ccw/cpc/offers/uconucs
Columns: "Size" | Tested Reference Configuration (TRC) and Part Numbers / SKUs / BOM | Form Factor, CPU Model and Specs | Capacity Available to VMs (using required Sizing Rules)
Extra-Extra-Large
(2XL)
UCS B440 M2 TRC#1
Click here for BOM
  • Full-width Blade Server
  • Quad E7-4870 (10-core / 2.4 GHz)
  • 256 GB RAM
  • Diskless - VMware + UC apps boot from FC SAN
  • Cisco VIC (UCS M81KR)
  • 40 total physical cores ("Full UC Performance" CPU type)
  • 254 GB physical RAM
  • Storage capacity dependent on SAN/NAS.
  • 2x 10Gb ports for LAN+storage access.
Extra-Large
(XL)
UCS C260 M2 TRC#1
Click here for BOM
  • 2RU Rack-mount Server
  • Dual E7-2870 (10-core / 2.4 GHz)
  • 128 GB RAM
  • VMware + UC apps boot from DAS (2 logical volumes, each 8x 300 GB 10K disks, RAID5)
  • Ethernet ports on motherboard + 3rd-party NIC
  • 20 total physical cores ("Full UC Performance" CPU type)
  • 126 GB physical RAM
  • 2 volumes, each of 1.93 TB
  • 6x 1GbE ports for LAN access (not counting CIMC).
UCS B230 M2 TRC#1
Click here for BOM
  • Half-width Blade Server
  • Dual E7-2870 (10-core / 2.4 GHz)
  • 128 GB RAM
  • Diskless - VMware + UC apps boot from FC SAN
  • Cisco VIC (UCS M81KR)
  • 20 total physical cores ("Full UC Performance" CPU type)
  • 126 GB physical RAM
  • Storage capacity dependent on SAN/NAS.
  • 2x 10Gb ports for LAN+storage access.


Large
(L)
UCS C240 M3S (SFF) TRC#1
Click here for BOM
  • 2RU Rack-mount Server
  • Dual E5-2680 (8-core, 2.7 GHz)
  • 96 GB RAM
  • VMware + UC apps boot from DAS (2 logical volumes, each 8x 300GB 15K SFF disks, RAID5)
  • Ethernet ports on motherboard + 3rd-party NICs
  • 16 total physical cores ("Full UC Performance" CPU type)
  • 94 GB physical RAM
  • Two volumes of 1.93 TB each
  • 12x 1GbE ports for LAN access (not counting CIMC)


UCS B200 M3 TRC#1
Click here for BOM
  • Half-width Blade Server
  • Dual E5-2680 (8-core / 2.7 GHz)
  • 96 GB RAM
  • Diskless - VMware + UC apps boot from FC SAN
  • Cisco VIC 1240
  • 16 total physical cores ("Full UC Performance" CPU type)
  • 94 GB physical RAM
  • Storage capacity dependent on SAN/NAS
  • 2x/4x 10GbE ports for LAN+storage access (dependent on IOM)
Medium
(M)
UCS C220 M3S (SFF) TRC#1
Click here for BOM
  • 1RU Rack-mount Server
  • Dual E5-2643 (4-core, 3.3 GHz)
  • 64 GB RAM
  • VMware + UC apps boot from DAS (8x 300GB 15K SFF disks, RAID5)
  • Ethernet ports on motherboard + 3rd-party NICs
  • 8 total physical cores ("Full UC Performance" CPU type)
  • 62 GB physical RAM
  • 1.93 TB disk space
  • 6x 1GbE ports for LAN access (not counting CIMC)
Small Plus
(S+)
UCS C220 M3S (SFF) TRC#3
Click here for BOM

(also used as "High Density (HD)" Server for Cisco Business Edition 6000)
  • 1RU Rack-mount Server
  • Dual E5-2665 (8-core, 2.4 GHz)
  • 48 GB RAM
  • VMware + UC apps boot from DAS (8x 300GB 15K SFF disks, RAID5)
  • Ethernet ports on motherboard + 3rd-party NIC
  • 16 total physical cores ("Restricted UC Performance" CPU type)
  • 46 GB physical RAM
  • 1.93 TB disk space
  • 6x 1GbE ports for LAN access (not counting CIMC)
Small
(S)
UCS C220 M3S (SFF) TRC#2
Click here for BOM

(also used as "Medium Density (MD)" Server for Cisco Business Edition 6000)
  • 1RU Rack-mount Server
  • Dual E5-2609 (4-core, 2.4 GHz)
  • 32 GB RAM
  • VMware + UC apps boot from DAS (4x 500GB 7.2K SFF disks, RAID10)
  • Ethernet ports on motherboard
  • 8 total physical cores ("Restricted UC Performance" CPU type)
  • 30 GB physical RAM
  • 929.46 GB
  • 2x 1GbE ports for LAN access (not counting CIMC)
Older (End of Sale) Configurations
Medium
(M)
UCS C210 M2 TRC#1
Click here for BOM
  • 2RU Rack-mount Server
  • Dual E5640 (4-core, 2.66 GHz)
  • 48 GB RAM
  • VMware boots from DAS (2x 146/300 GB 15K, RAID1)
  • UC apps boot from DAS (8x 146/300 GB 15K, RAID5)
  • Ethernet ports on motherboard + 3rd-party NIC
  • 8 total physical cores ("Full UC Performance" CPU type)
  • 46 GB physical RAM
  • 947 GB
  • 6x 1GbE ports for LAN access (not counting CIMC)
UCS C210 M2 TRC#2
Click here for BOM
  • 2RU Rack-mount Server
  • Dual E5640 (4-core, 2.66 GHz)
  • 48 GB RAM
  • VMware boots from DAS (2x 146/300 GB 15K, RAID1)
  • UC apps boot from FC SAN
  • Ethernet ports on motherboard + 3rd-party NIC
  • FC ports on 3rd-party HBA
  • 8 total physical cores ("Full UC Performance" CPU type)
  • 46 GB physical RAM
  • 2x 4Gb FC ports for SAN access.
  • 6x 1GbE ports for LAN access (not counting CIMC)
UCS C210 M2 TRC#3
Click here for BOM
  • 2RU Rack-mount Server
  • Dual E5640 (4-core, 2.66 GHz)
  • 48 GB RAM
  • Diskless - VMware + UC apps boot from FC SAN
  • Ethernet ports on motherboard + 3rd-party NIC
  • FC ports on 3rd-party HBA
  • 8 total physical cores ("Full UC Performance" CPU type)
  • 46 GB physical RAM
  • 2x 4Gb FC ports for SAN access.
  • 6x 1GbE ports for LAN access (not counting CIMC)
UCS C210 M1 TRC#1
Click here for BOM
  • 2RU Rack-mount Server
  • Dual E5540 (4-core, 2.53 GHz)
  • 12 GB RAM
  • VMware boots from DAS (2x 146 GB 15K, RAID1)
  • UC apps boot from DAS (4x 146 GB 15K, RAID5)
  • Ethernet ports on motherboard + 3rd-party NIC
  • NOTE: Application co-residency NOT supported on this TRC. Single VM only.
  • 8 total physical cores ("Full UC Performance" CPU type)
  • 10 GB physical RAM
  • 6x 1GbE ports for LAN access (not counting CIMC).
UCS C210 M1 TRC#2
Click here for BOM
  • 2RU Rack-mount Server
  • Dual E5540 (4-core, 2.53 GHz)
  • 36 GB RAM
  • VMware boots from DAS (2x 146 GB 15K, RAID1)
  • UC apps boot from DAS (8x 146 GB 15K, RAID5)
  • Ethernet ports on motherboard + 3rd-party NIC
  • 8 total physical cores ("Full UC Performance" CPU type)
  • 34 GB physical RAM
  • 947 GB disk space
  • 6x 1GbE ports for LAN access (not counting CIMC).
UCS C210 M1 TRC#3
Click here for BOM
  • 2RU Rack-mount Server
  • Dual E5540 (4-core, 2.53 GHz)
  • 36 GB RAM
  • VMware boots from DAS (2x 146 GB 15K, RAID1)
  • UC apps boot from FC SAN
  • Ethernet ports on motherboard + 3rd-party NIC
  • FC ports on 3rd-party HBA
  • 8 total physical cores ("Full UC Performance" CPU type)
  • 34 GB physical RAM
  • 2x 4Gb FC ports for SAN access.
  • 6x 1GbE ports for LAN access (not counting CIMC).
UCS C210 M1 TRC#4
Click here for BOM
  • 2RU Rack-mount Server
  • Dual E5540 (4-core, 2.53 GHz)
  • 36 GB RAM
  • Diskless - VMware + UC apps boot from FC SAN
  • Ethernet ports on motherboard + 3rd-party NIC
  • FC ports on 3rd-party HBA
  • 8 total physical cores ("Full UC Performance" CPU type)
  • 34 GB physical RAM
  • 2x 4Gb FC ports for SAN access.
  • 6x 1GbE ports for LAN access (not counting CIMC).
UCS B200 M2 TRC#1
Click here for BOM
  • Half-width Blade Server
  • Dual E5640 (4-core / 2.66 GHz)
  • 48 GB RAM
  • VMware boot from DAS (2 disks RAID1)
  • UC apps boot from FC SAN
  • Cisco VIC (UCS M81KR)
  • 8 total physical cores ("Full UC Performance" CPU type)
  • 46 GB physical RAM
  • Storage capacity dependent on SAN/NAS.
  • 2x 10GbE ports for LAN+storage access.
UCS B200 M2 TRC#2
Click here for BOM
  • Half-width Blade Server
  • Dual E5640 (4-core / 2.66 GHz)
  • 48 GB RAM
  • Diskless - VMware + UC apps boot from FC SAN
  • Cisco VIC (UCS M81KR)
  • 8 total physical cores ("Full UC Performance" CPU type)
  • 46 GB physical RAM
  • Storage capacity dependent on SAN/NAS.
  • 2x 10GbE ports for LAN+storage access.
UCS B200 M1 TRC#1
Click here for BOM
  • Half-width Blade Server
  • Dual E5540 (4-core / 2.53 GHz)
  • 36 GB RAM
  • VMware boot from DAS (2 disks RAID1)
  • UC apps boot from FC SAN
  • 3rd-party CNA (UCS M71KR-Q)
  • 8 total physical cores ("Full UC Performance" CPU type)
  • 34 GB physical RAM
  • Storage capacity dependent on SAN/NAS.
  • 2x 10GbE ports for LAN+storage access.
UCS B200 M1 TRC#2
Click here for BOM
  • Half-width Blade Server
  • Dual E5540 (4-core / 2.53 GHz)
  • 36 GB RAM
  • Diskless - VMware + UC apps boot from FC SAN
  • 3rd-party CNA (UCS M71KR-Q)
  • 8 total physical cores ("Full UC Performance" CPU type)
  • 34 GB physical RAM
  • Storage capacity dependent on SAN/NAS.
  • 2x 10GbE ports for LAN+storage access.


Small
(S)
UCS C200 M2 TRC#1
Click here for BOM

(also used as older "Medium Density (MD)" Server for Cisco Business Edition 6000)
  • 1RU Rack-mount Server
  • Dual E5506 (4-core, 2.13 GHz)
  • 24 GB RAM
  • VMware + UC apps boot from DAS (4x 1TB 7.2K disks, RAID10)
  • Ethernet ports on motherboard + 3rd-party NIC
  • 8 total physical cores ("Restricted UC Performance" CPU type)
  • 22 GB physical RAM
  • 1.8 TB
  • 2x 1GbE ports for LAN access (not counting CIMC).
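The usable-capacity figures in Table 1 follow from the RAID level in each BOM. As a rough sanity check (a sketch, not a Cisco tool — `raid_usable_gib` and the simplifications are mine), RAID5 yields n-1 disks of data capacity and RAID10 yields n/2, with marketing (decimal) GB converted to binary GiB:

```python
def raid_usable_gib(disks: int, disk_gb: int, level: str) -> float:
    """Approximate usable capacity in GiB for a simple RAID set.

    disk_gb is the marketing (decimal) size; the result is binary GiB.
    Filesystem and controller overhead are ignored, so real-world
    figures run slightly lower than this estimate.
    """
    data_disks = {"RAID5": disks - 1, "RAID10": disks // 2, "RAID1": 1}[level]
    return data_disks * disk_gb * 1e9 / 2**30

# C220 M3S TRC#2: 4x 500GB in RAID10 (table lists 929.46 GB)
print(round(raid_usable_gib(4, 500, "RAID10")))   # 931
# C220 M3S TRC#1: 8x 300GB in RAID5 (table lists 1.93 TB)
print(round(raid_usable_gib(8, 300, "RAID5")))    # 1956
```

The small gap between these estimates and the table's figures is the formatted-capacity overhead the sketch ignores.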



VMware Requirements

VMware virtualization software is required for Cisco TAC support.

  • See the Introduction for basic virtualization software requirements, including what is optional and what is mandatory.
  • For Cisco UCS, no UC applications run or install directly on the server hardware; all applications run only as virtual machines. Cisco UC does not support a physical, bare-metal, or nonvirtualized installation on Cisco UCS server hardware.

All UC virtualization deployments must align with the VMware Hardware Compatibility List (HCL).

All UC virtualization deployments must follow UC rules for supported VMware products, versions, editions and features as described here.

Note: For UC on UCS Specs-based and Third-party Server Specs-based, use of VMware vCenter is mandatory, and Statistics Level 4 logging is mandatory. Click here for how to configure VMware vCenter to capture these logs. If not configured by default, Cisco TAC may request enabling these settings in order to troubleshoot problems.



"Can I use this server?"

UC virtualization hardware support is most dependent on the Intel CPU model and the VMware Hardware Compatibility List (HCL).


The server model only matters in the context of:

  • whether or not it is on the VMware HCL
  • which Intel CPU models it carries (and whether those CPU models are allowed for UC virtualization)
  • whether its hardware component options can satisfy all other requirements of this policy
  • For additional considerations, see TAC TechNote 115955.

Note:
  • UC does not support every CPU model
  • A given server model may not carry every (or any) CPU model that UC supports.
  • Therefore your server model choices may be artificially limited by which CPUs the server models carry.
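The eligibility questions above reduce to three checks. A minimal sketch (illustrative only — `allowed_cpus` is a placeholder for this page's actual CPU policy tables, not an authoritative list):

```python
def can_use_server(on_vmware_hcl: bool, cpu_model: str,
                   allowed_cpus: set[str],
                   components_meet_policy: bool) -> bool:
    """Mirror the checklist above: the server is on the VMware HCL,
    its CPU model is allowed for UC virtualization, and its component
    options can satisfy the rest of this policy."""
    return on_vmware_hcl and cpu_model in allowed_cpus and components_meet_policy

# Example with placeholder data: an allowed-CPU host on the HCL passes;
# a host whose CPU model is not in the allowed set fails.
allowed = {"E5-2680", "E5-2665", "E7-2870"}   # illustrative subset only
print(can_use_server(True, "E5-2680", allowed, True))    # True
print(can_use_server(True, "E3-1220", allowed, True))    # False
```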


Columns: UC on UCS TRC | UC on UCS Specs-based | Third-Party Server Specs-based | Not supported


Allowed Servers:
  • Vendors
  • Models / Generations
  • Form Factors

only Cisco Unified Computing System B-Series Blade Servers and C-Series Rack-mount Servers listed in Table 1 are supported.

any Cisco Unified Computing System server is supported as long as:

any 3rd-party server model is supported as long as:

The following are NOT supported:

  • Cisco or 3rd-party server models that do not satisfy the rules of this policy.
  • Cisco 7800 Series Media Convergence Servers (MCS 7800) regardless of CPU model
  • Cisco UCS Express (SRE-V 9xx on ISR router hardware)
  • Cisco UCS E-Series Blade Servers (E14x/16x on ISR router hardware)
  • For additional considerations, please see TAC TechNote 115955.

Server or Component "Embedded Software"

  • BIOS
  • Firmware
  • Drivers

There are no UC-specific requirements.

UC apps will specify the required version of VMware vSphere ESXi. Customers should follow server vendor guidelines for what to use with this VMware version.

For Cisco UCS:

  • UCS Software or UCS Manager Software in UCS 6x00 hardware: use the latest recommended version for the VMware vSphere ESXi version
  • Other B-Series / C-Series BIOS, firmware, drivers: use the latest recommended version for the VMware vSphere ESXi version
  • If the "Intel Virtualization Technology" BIOS option is available, UC recommends enabling it.
  • If the "Hyper-threading" BIOS option is available (and the CPU supports hyper-threading), UC recommends enabling it.
    • Note that the resultant "Logical Cores" do not factor into UC sizing rules for co-residency. UC still requires mapping one physical core to one vCPU (not to one "Logical Core").


Mechanical and Environmental
Note: Energy-saving features that cause reduction in CPU performance or real-time relocation/powering-down of virtual machines (such as CPU throttling or VMware Dynamic Power Management) are not supported.

Otherwise, there are no UC-specific requirements for form factor, rack mounting hardware, cable management hardware, power supplies, fans or cooling systems. Follow server vendor guidelines for these components.

If you use a Cisco UCS bundle SKU, note that the rail kit, cable management and power supply options may not match what is available with non-bundled Cisco UCS.

Redundant power supplies are highly recommended, particularly for UC on UCS.

For Cisco UCS, it is strongly recommended to use the Cisco default rail kit, unless you have different rack types such as telco racks or racks proprietary to another server vendor. Cisco does not sell any other types of rack-mounting hardware; you must purchase such hardware from a third party.




Processors / CPUs

Cisco Collaboration is a set of mission-critical, "Tier 0" applications, where customer expectations for availability, stability and predictable performance are higher than for traditional business applications. Cisco Collaboration apps are real-time and latency-sensitive, with extremely different resource footprints and operational characteristics than traditional business applications. Therefore note the following:


Note:
  • Explicit qualification of new or existing CPU architectures is required.
  • Until qualification occurs, new CPU architectures are not allowed or supported, even if they are believed to be "better" than currently supported CPU models.
  • Some CPU architectures may never be allowed or supported by Cisco Collaboration (e.g. Intel desktop architecture or Xeon 65xx). Within an allowed or supported CPU architecture, some CPU models may never be supported (e.g. due to insufficient physical core speed).
  • Collaboration application support for new CPU architectures/models may lag the release date from Intel and/or server vendors.
  • Collaboration applications require minimum physical core speeds on allowed CPU architectures. Higher capacity VM configurations require higher minimum physical core speeds.

CPUs for virtualized Cisco Collaboration must meet these requirements:

  • Collaboration-defined "physical CPU type", either "Full UC Performance" or "Restricted UC Performance" (described in the first table below).
  • CPU rules for UC on UCS TRC or Specs-based (described in the second table below).
  • Allowed for the specific Cisco Collaboration app (see links for each app on www.cisco.com/go/uc-virtualized)


Note: Only certain VM configurations of certain Collaboration apps are allowed to run on a "Restricted UC Performance" CPU. See the application pages on www.cisco.com/go/uc-virtualized.


CPU Architecture & Models
Full UC Performance CPUs
Restricted UC Performance CPUs
May not be supported for your UC app - see links at www.cisco.com/go/uc-virtualized
Shipping CPUs
Intel Xeon E7-2800, E7-4800 or E7-8800
(Westmere-EX)
Any model with physical core speed 2.40 GHz or higher.
Any model with physical core speed 2.00 GHz to 2.39 GHz.
Intel Xeon E5-2600v2
(Ivy Bridge-EP - note E5-16xx not supported)
Any model with physical core speed 2.50 GHz or higher.
Any model with physical core speed 2.00 GHz to 2.49 GHz.
Intel Xeon E5-2600 or E5-4600
(Sandy Bridge-EP)
Any model with physical core speed 2.50 GHz or higher.
Any model with physical core speed 2.00 GHz to 2.49 GHz.
Intel Xeon E5-2400
(Sandy Bridge-EN)
N/A
Any model with physical core speed 2.00 GHz or higher.
Older (end of sale) CPUs
Intel Xeon 7500
(Nehalem-EX)
Any model with physical core speed 2.53 GHz or higher.

Any model with physical core speed 2.00 GHz to 2.52 GHz.

Note: not supported by the majority of Collaboration apps. Recommend using shipping models instead.

Intel Xeon 5600
(Westmere-EP)
Any model with physical core speed 2.53 GHz or higher.

Any model with physical core speed 2.00 GHz to 2.52 GHz.

Note: not supported by the majority of Collaboration apps. Recommend using shipping models instead.

For purposes of sizing rules and co-residency, virtualized UC apps see equivalent performance from one physical CPU core on any of the above architectures. E.g. UC apps perform equivalently on 1 physical core of Xeon 5600 at 2.53+ GHz, 1 physical core of E5-2600 at 2.50+ GHz, or 1 physical core of E7-2800 at 2.40+ GHz.
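The tiering in the CPU table above can be sketched as a simple lookup. This is a minimal illustration only; the function name and the family labels in the dictionary are assumptions, not a Cisco data format.

```python
# Minimum physical core speed (GHz) for each tier, per the table above.
# "full" of None means the family has no Full UC Performance tier.
CPU_TIERS = {
    "E7-x800 (Westmere-EX)":          {"full": 2.40, "restricted": 2.00},
    "E5-2600v2 (Ivy Bridge-EP)":      {"full": 2.50, "restricted": 2.00},
    "E5-2600/4600 (Sandy Bridge-EP)": {"full": 2.50, "restricted": 2.00},
    "E5-2400 (Sandy Bridge-EN)":      {"full": None, "restricted": 2.00},
    "Xeon 7500 (Nehalem-EX)":         {"full": 2.53, "restricted": 2.00},
    "Xeon 5600 (Westmere-EP)":        {"full": 2.53, "restricted": 2.00},
}

def uc_performance_tier(family, core_ghz):
    """Return 'full', 'restricted', or None (unsupported) for a CPU."""
    tiers = CPU_TIERS.get(family)
    if tiers is None:
        return None  # unlisted architectures are not supported for UC
    if tiers["full"] is not None and core_ghz >= tiers["full"]:
        return "full"
    if core_ghz >= tiers["restricted"]:
        return "restricted"
    return None  # below the minimum physical core speed

print(uc_performance_tier("Xeon 5600 (Westmere-EP)", 2.53))    # full
print(uc_performance_tier("E5-2400 (Sandy Bridge-EN)", 2.20))  # restricted
print(uc_performance_tier("E5-2600v2 (Ivy Bridge-EP)", 1.80))  # None
```

Note how an unlisted family (e.g. a desktop Core i7) returns None regardless of clock speed, matching the policy that unlisted architectures are never supported.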




UC on UCS TRC UC on UCS Specs-based Third-party Server Specs-based Not supported


Physical Sockets / CPU Quantity must exactly match what is listed in Table 1. Customer choice (subject to what server model allows).

The following CPUs are NOT supported for UC:

  • Intel CPUs that are in one of the supported architectures/families but do NOT meet the minimum physical core speed.
  • Unlisted Intel CPU architectures/families (such as Intel Xeon 6500, E5-16xx, Core i7, etc.). An Intel CPU architecture is not supported for UC unless listed here.
  • CPUs from other vendors such as AMD.


Cisco TAC is not obligated to troubleshoot UC app issues when deployed on unsupported hardware.



Physical CPU Vendor and CPU model

Must either exactly match the TRC BOM's CPU model in Table 1 or use a CPU model that satisfies the following requirements:

  • Same physical CPU core count as the TRC BOM's CPU model in Table 1.
  • Same CPU architecture as the TRC BOM's CPU model in Table 1.
  • Physical CPU core speed same or higher than that of the TRC BOM's CPU model in Table 1.

E.g. if the TRC BOM was tested with a 2-socket Intel Xeon E5-2680 (Romley/Sandy Bridge-EP, 8-core, 2.7 GHz), then 2 sockets of any 8-core CPU at 2.7 GHz or higher from the E5-26xx/46xx (Intel Xeon Romley/Sandy Bridge-EP) architecture may be substituted.
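The substitution rule above can be sketched as a three-part check. The `Cpu` structure and function name are illustrative assumptions:

```python
from collections import namedtuple

Cpu = namedtuple("Cpu", "architecture cores ghz")

def valid_trc_substitute(trc_cpu, candidate):
    """Substitute must match architecture and core count of the tested
    TRC CPU, with same-or-higher physical core speed."""
    return (candidate.architecture == trc_cpu.architecture
            and candidate.cores == trc_cpu.cores
            and candidate.ghz >= trc_cpu.ghz)

tested = Cpu("SandyBridge-EP", 8, 2.7)  # e.g. E5-2680 from a TRC BOM
print(valid_trc_substitute(tested, Cpu("SandyBridge-EP", 8, 2.9)))  # True
print(valid_trc_substitute(tested, Cpu("SandyBridge-EP", 6, 2.9)))  # False
```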

See app links in table on http://www.cisco.com/go/uc-virtualized or Supported Applications for which CPU Types are allowed for a given VM configuration of a UC app. E.g. the Unified Communications Manager 7500 user VM configuration is only allowed on a Full UC Performance CPU.


Total physical CPU cores

Total available is fixed based on the CPU models in Table 1.

Total available depends on the physical server's socket count and the CPU model selected.


Total required is based on:

Per these policies, recall that physical CPU cores may not be over-subscribed for UC VMs:

  • I.e. one physical CPU core must equal one VM vCPU.
  • Hyper-threading on the CPU should be enabled when available, but the resulting Logical Cores do not change UC app rules. UC rules are based on a 1:1 mapping of physical cores to vCPUs, not Logical Cores to vCPUs.


Cisco TAC is not obligated to troubleshoot UC app issues in deployments with insufficient physical processor cores or speed.
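The no-oversubscription rule above can be sketched as a simple capacity check. The VM vCPU sizes below are hypothetical examples:

```python
def fits_on_host(physical_cores, vm_vcpus):
    """True if all VMs fit with a strict 1 vCPU : 1 physical core mapping.
    Logical (hyper-threaded) cores are deliberately ignored."""
    return sum(vm_vcpus) <= physical_cores

# A 2-socket, 8-core-per-socket host has 16 physical cores
# (32 logical cores with hyper-threading, which do not count).
physical_cores = 2 * 8
print(fits_on_host(physical_cores, [4, 4, 2, 2, 2]))  # True  (14 <= 16)
print(fits_on_host(physical_cores, [8, 8, 4]))        # False (20 > 16)
```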



Memory / RAM

Note: Virtualization software licenses such as Cisco UC Virtualization Foundation or VMware vSphere limit the amount of total vRAM that can be used (and therefore the amount of physical RAM that can be used for UC VMs, due to UC sizing rules). See Unified Communications VMware Requirements for these limits. In general larger deployments, or deployments with high VM counts, will require very high vRAM totals and will therefore need to use VMware vSphere instead of Cisco UC Virtualization Foundation. If using high-memory-capacity servers, use VMware vSphere to ensure use of all physical memory.
UC on UCS TRC Specs-based (UCS or 3rd-party Server)
Physical RAM Total available is listed in Table 1. Additional memory may be added. Total available depends on the server chosen.

Total required is dependent on the virtual machine quantity/size mix deployed on the hardware:

  • 2GB required for virtualization software (VMware vSphere or Cisco UC Virtualization Foundation)
  • plus the sum of UC virtual machines' vRAM.
  • while following co-residency support policy rules. Per these rules, recall that UC does not support physical memory oversubscription (1 GB of vRAM must equal 1 GB of physical RAM). Cisco TAC is not obligated to troubleshoot UC app issues if the deployment has insufficient physical RAM.
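The RAM arithmetic above can be sketched as follows; the VM vRAM figures are hypothetical examples:

```python
# 2 GB is reserved for the hypervisor (VMware vSphere or Cisco UC
# Virtualization Foundation); UC does not allow memory oversubscription,
# so 1 GB of vRAM must be backed by 1 GB of physical RAM.
HYPERVISOR_RAM_GB = 2

def required_physical_ram_gb(vm_vram_gb):
    return HYPERVISOR_RAM_GB + sum(vm_vram_gb)

vms = [6, 6, 4, 4]  # vRAM (GB) of each UC VM on the host
need = required_physical_ram_gb(vms)
print(need)        # 22
print(need <= 24)  # fits a 24 GB host -> True
```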
Memory Module/DIMM Speed and Population

For what was tested in a TRC, see Table 1.

Follow server vendor guidelines for optimum memory population for the memory capacity required by UC.

  • For Cisco UCS, use the Spec Sheets at the UCS Quick Catalog. E.g. for a UCS B200 M3 with 96GB total RAM, optimal is 4x8GB DIMM + 4x4GB DIMM; using 6x16GB DIMM is not optimal.

Otherwise, there are no UC-specific requirements (primarily because UC does not support memory oversubscription).

  • UC allows any DIMM speed (e.g. 1333 MHz, 1600 MHz, etc.).
  • UC allows any memory hardware module size, density and quantity as long as UC-required RAM capacity is met, and the server vendor supports it.

Storage

To be supported for UC, all storage systems - whether TRC or specs-based - must meet the following requirements:

  • Compatible with the VMware HCL and compatible with the supported server model used
  • kernel disk command latency < 4ms (no spikes above) and physical device command latency < 20 ms (no spikes above). For NFS NAS, guest latency < 24 ms (no spikes above)
  • Published vDisk capacity requirements of UC VMs.
  • For DAS-only TRCs (including Cisco Business Edition 6000), thin provisioning (either from VMware or from the storage array) is not supported. Thick provisioning must be used.
  • For diskless TRCs and any Specs-based server, thin provisioning (either from VMware or from storage array) is allowed with the caveat that disk space must be available to the VM as needed. Running out of disk space due to thin provisioning will crash the application and corrupt the virtual disk (which may also prevent restore from backup on the virtual disk).
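The latency requirements above can be sketched as a check over measured latency samples. The limits come from the bullets above; the sample values are hypothetical:

```python
# UC storage latency limits (ms): kernel disk command latency < 4 ms,
# physical device command latency < 20 ms, NFS NAS guest latency < 24 ms.
LATENCY_LIMITS_MS = {"kernel": 4, "device": 20, "nfs_guest": 24}

def storage_meets_uc_latency(samples_ms, kind):
    """True only if every sample is under the limit ("no spikes above")."""
    return all(s < LATENCY_LIMITS_MS[kind] for s in samples_ms)

print(storage_meets_uc_latency([1.2, 2.8, 3.1], "kernel"))  # True
print(storage_meets_uc_latency([1.2, 5.0, 2.1], "kernel"))  # False (spike)
print(storage_meets_uc_latency([10, 18, 12], "device"))     # True
```

Because the policy disallows spikes, a single out-of-range sample fails the check, even if the average latency is acceptable.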

Note: UC on UCS TRCs using only DAS storage (such as C220 M3S TRC#1) have been pre-designed and tested to meet the above requirements for any UC-with-UC co-residency scenario that will fit on the TRC. Detailed capacity planning is not required unless deploying:
  • non-UC/3rd-party apps
  • VM OVA templates created later than the TRC
  • VM OVA templates with very large vDisks (300GB+).

Note: All of the above requirements must be met for Cisco UC to function properly. Except for UC on UCS TRCs using DAS only, it is the customer's responsibility to design a storage system that meets the above requirements. Cisco TAC is not obligated to troubleshoot UC app issues when customer-provided storage is insufficient, overloaded or otherwise not meeting the above requirements.


See below for supported storage hardware options.


UC on UCS TRC Specs-based (UCS or 3rd-party Server)
Supported Storage Options

TRCs are only defined for:
  • DAS-only with UC-specified configuration (C260 M2, C240 M3S, C220 M3S, C210 M1/M2, C200 M2)
  • FC SAN with VMware local boot from DAS (B200 M1/M2, C210 M1/M2)
  • Diskless / boot from FC SAN (B440 M2, B230 M2, B200 M3, C210 M2)

Specs-based supports:
  • DAS with customer-defined configuration (including local disks, external SAS, etc.)
  • FC, iSCSI, FCoE or InfiniBand SAN
  • Diskless / boot from SAN via the above transport options (only supported with VMware vSphere ESXi 4.1+ and compatible UC app versions)
  • NFS NAS


DAS Support Details


UC on UCS TRC Specs-based (UCS or 3rd-party Server)
Disk Size and Speed

B-Series TRC
may use the disk size/speed listed in Table 1 BOMs, or any other orderable size/speed for the blade server (since local disks are only used to boot VMware).

C-Series TRC
Both must be same or higher than specs listed in Table 1.
E.g. for a TRC tested with 300 GB 10K rpm disks, then:

  • 300GB 15K rpm is supported (faster)
  • 146GB 10K rpm not supported (too small)
  • 7.2K rpm disk of any size not supported (too slow)

DAS is supported with customer-determined disk size, speed, quantity, technology, form factor and RAID configuration as long as:

  • compatible with the VMware HCL and compatible with the server model used
  • all UC latency, performance and capacity requirements are met. To ensure optimum UC app performance, be sure to use Battery Backup cache or SuperCap on RAID controllers for DAS.
TRC BOMs are updated as orderable disk drive options change. E.g. UCS C210 M2 TRC#1 was tested with 146GB 15K rpm disks, but due to 146GB disk EOL, the BOM now specifies 300GB 15K rpm disks (still supported as TRC since both size and speed are "same or higher" than what was tested).
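The C-Series TRC disk substitution rule above ("both must be same or higher") can be sketched as follows; the function name is illustrative:

```python
def valid_disk_substitute(tested_gb, tested_rpm, sub_gb, sub_rpm):
    """True if the substitute disk's size AND speed are both the same or
    higher than the disks listed in the Table 1 TRC BOM."""
    return sub_gb >= tested_gb and sub_rpm >= tested_rpm

# TRC tested with 300 GB 10K rpm disks:
print(valid_disk_substitute(300, 10000, 300, 15000))  # True  (faster)
print(valid_disk_substitute(300, 10000, 146, 10000))  # False (too small)
print(valid_disk_substitute(300, 10000, 600, 7200))   # False (too slow)
```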
Disk Quantity, Technology, Form Factor Must exactly match what is listed in Table 1. E.g. if the TRC was tested with ten 2.5" SAS drives, then that must be used regardless of disk size or speed.
RAID Configuration RAID configuration, including physical-to-logical volume mapping, must exactly match Table 1 and the RAID instructions in the document Installing CUCM on Virtual Servers here.


SAN / NAS Support Details

  • Applies to any TRC or Specs-based configuration connecting to FC, iSCSI, FCoE or NFS storage.
  • No UC requirement to dedicate arrays or storage groups to UC (vs. non-UC), or to one UC app vs. other UC apps.
  • The storage solution must be compatible with the server model used. E.g. for Cisco Unified Computing System: Cisco UCS Interoperability
  • The storage solution must be compatible with the VMware HCL. For example, refer to the “SAN/Storage” tab at http://www.vmware.com/resources/compatibility/search.php
  • No UC requirements on disk size, speed, technology (SAS, SATA, FC disk), form factor or RAID configuration as long as requirements for compatibility, latency, performance and capacity are met. "Tier 1 Storage" is generally recommended for UC deployments. See the UC Virtualization Storage System Design Requirements for an illustration of a best practices storage array configuration for UC.
  • There is no UC-specific requirement for NFS version. Use what VMware and the server vendor recommend for the vSphere ESXi version required by UC.
  • Use of storage network and array "features" (such as thin provisioning or EMC Powerpath) is allowed.
  • Otherwise any shared storage configuration is allowed as long as UC requirements for VMware HCL, server compatibility, latency, capacity and performance are met.




Removable Media


UC on UCS TRC Specs-based (UCS or 3rd-party Server)
Boot from USB devices or SD cards

UC on UCS TRC:
  • Not allowed or supported for UC apps. Must boot them from DAS or FC SAN depending on Table 1.
  • Not allowed or supported for VMware vSphere ESXi. Must boot from DAS or FC SAN depending on Table 1.
  • Note all current TRCs are either diskless blades or C-Series DAS/HDD. SD cards in C-Series TRCs are used for convenience to get the UCS utilities (like SCU and HUU), in lieu of a DVD drive.

Specs-based:
  • Not allowed or supported for UC apps. Must boot them from DAS, SAN or NAS per Specs-based Storage requirements.
  • Allowed for VMware vSphere ESXi (with same support demarcations as with "boot from FC SAN").

Otherwise, there are no UC-specific requirements or restrictions. The different methods of installing UC apps into VMs can leverage the following distribution types of Cisco UC software:

  • Physical delivery of UC apps via ISO image file on DVD.
  • Cisco eDelivery of UC apps via email with link to ISO image file download.


IO Adapters, Controllers and Devices for LAN Access and Storage Access

All adapters used (NIC, HBA, CNA, VIC, etc.) must be on the VMware Hardware Compatibility List for the version of vSphere ESXi required by UC.

UC on UCS TRC Specs-based (UCS or 3rd-party Server)
Physical Adapter Hardware (NIC, HBA, VIC, CNA)
  • UCS B-Series TRC may use either the adapters listed in Table 1 BOMs or substitute with any other supported adapter for the blade server model. Which adapter "should" be used is dependent on deployment, design and UC apps.
  • For UCS C-Series TRC:
    • must exactly match adapter vendor/model/technology (e.g. Intel i350 for 1GbE or QLogic QLE2462 for FC) listed in Table 1 BOMs.
    • Allowed NIC quantity must be same or higher than what is listed in Table 1 BOMs.
    • Allowed HBA/VIC/CNA quantity must exactly match Table 1 BOMs.
    • Any other changes are not allowed for a UC on UCS TRC, but are allowed for UC on UCS Specs-based.
  • Only the following I/O Devices are supported:
    • HBA for storage access
      • Fibre Channel – 2Gbps or faster
      • InfiniBand
    • NIC for LAN and/or shared storage access
      • Ethernet – 1Gbps or faster.  Includes NFS and iSCSI for storage access.
    • Cisco VIC or 3rd-party Converged Network Adapter for LAN and/or storage access
      • FCoE - 10Gbps or faster
    • RAID Controllers for DAS storage access
      • SAS
      • SAS SATA Combo
      • SAS-RAID
      • SAS/SATA-RAID
      • SATA
  • The customer is also responsible for configuring redundant devices on the server (e.g. redundant NIC, HBA, VIC or CNA adapters).
  • There are no UC restrictions on hardware vendors for I/O Devices other than that VMware HCL and the server vendor/model must be compatible with them and support them.


IO Capacity and Performance

In most cases, detailed capacity planning is not required for LAN IO or storage access IO. TRC adapter choices have been made to accommodate the IO of all UC on UCS app co-residency scenarios that will fit on the TRC. For guidance on active vs. standby network ports, see the Cisco UC Design Guide and QoS Design Considerations for Virtual UC with UCS.
It is the customer's responsibility to ensure the external LAN and storage access meet UC app design requirements.

  • LAN access adapters must be able to accommodate the LAN usage of UC VMs (described in UC app design guides).
  • Storage access adapters must be able to accommodate the storage IOPS (described in the Storage section of this policy).

Cisco TAC is not obligated to troubleshoot UC app issues in a deployment with insufficient or overloaded I/O devices.


UC on UCS TRC Bills of Material (BOMs)

Note: Bundle SKUs for UC on UCS TRCs are listed in the UC on UCS section of the Cisco Commerce Build and Price tool.

Do not assume that other UCS bundle SKUs on Cisco Commerce Build and Price can be used with UC on UCS. Before quoting one of these bundles, identify the BOM that it ships and see below:

  • If the bundle meets TRC requirements on this page, it may be quoted for UC on UCS TRC.
  • If the bundle does NOT meet TRC requirements but DOES meet Specs-based requirements, then it may be quoted for UC on UCS Specs-based only.
  • If the bundle does NOT meet TRC requirements and also does NOT meet Specs-based requirements, then it may NOT be quoted for UC on UCS at all without modification.

B440 M2 TRC#1

This BOM was also quotable via fixed-configuration bundle UCS-B440M2-VCDL1.

Quantity

Cisco Part Number

Description

1

B440-BASE-M2UPG

UCS B440 M2 Blade Server w/o CPU, memory, HDD, mezzanine

4

UCS-CPU-E74870

2.4 GHz E7-4870 130W 10C CPU/30M Cache

16

UCS-MR-2X082RX-C

2X8GB DDR3-1333-MHz RDIMM/PC3-10600/dual rank/x2/1.35v

1

N20-AC0002

UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb

32

UCS-MKIT-082RX-C

Auto-included: Mem kit for UCS-MR-2X082RX-C

4

N20-BBLKD

Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers

4

N20-BHTS3

Auto-included: CPU heat sink for UCS B440 Blade Server

1

N20-LBLKU

Auto-included: Blanking panel for B440 M1 battery backup bay


B230 M2 TRC#1

This BOM was also quotable via fixed-configuration bundle UCS-B230M2-VCDL1 (has extra RAM vs. minimum below).

Quantity

Cisco Part Number

Description

1

B230-BASE-M2UPG

UCS B230 M2 Blade Server w/o CPU, memory, SSD, mezzanine

2

UCS-CPU-E72870

2.4 GHz E7-2870 130W 10C/30M Cache

8

UCS-MR-2X082RX-B

2X8GB DDR3-1333-MHz RDIMM/PC3-10600/dual rank/x4/1.35v

1

N20-AC0002

UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb

16

UCS-MKIT-082RX-B

Auto-included: Mem kit for UCS-MR-2X082RX-B

2

N20-BBLKD-7MM

Auto-included: UCS 7MM SSD Blank Filler

2

N20-BHTS6

Auto-included: CPU heat sink for UCS B230 Blade Server



B200 M3 TRC#1

This configuration is also quotable as either UCUCS-EZ-B200M3 (single blade) or UCSB-EZ-UC-B200M3 (multiple blades with chassis and switching).

Quantity

Cisco Part Number

Description

1

UCSB-B200-M3-U

UCS B200 M3 Blade Server w/o CPU, mem, HDD, mLOM/mezz (UPG)

2

UCS-CPU-E5-2680

2.70 GHz E5-2680 130W 8C/20MB Cache/DDR3 1600MHz

8 UCS-MR-1X082RY-A 8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v
8 UCS-MR-1X041RY-A 4GB DDR3-1600-MHz RDIMM/PC3-12800/single rank/1.35v
Diskless

1

UCSB-MLOM-40G-01

VIC 1240 modular LOM for M3 blade servers

2

N20-BBLKD

Auto-included: UCS 2.5 inch HDD blanking panel


2

UCSB-HS-01-EP

Auto-included: Heat Sink for UCS B200 M3 server







C260 M2 TRC#1

This configuration was also quotable as UCS-C260M2-VCD2.

Quantity

Cisco Part Number

Description

1

C260-BASE-2646

UCS C260 M2 Rack Server (w/o CPU, MRB, PSU)

2

UCS-CPU-E72870

2.4 GHz E7-2870 130W 10C/30M Cache

16

C260-MRBD-002

2 DIMM Memory Riser Board For C260

16

UCS-MR-2X041RX-C

2X4GB DDR3-1333-MHz RDIMM/PC3-10600/single rank/x1/1.35v

16

A03-D300GA2

300GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted

2

UCSC-DBKP-08E

8 Drive Backplane W/Expander For C-Series

1

R2XX-PL003

LSI 6G MegaRAID 9261-8i card (RAID 0,1,5,6,10,60) - 512WC

1

UCSC-BBU-11-C260

RAID battery backup for LSI Electr controller for C260

1

One of:
  • N2XX-AIPCI02
  • UCSC-PCIE-IRJ45

  • Intel Quad port GbE Controller (E1G44ETG1P20)
  • Intel i350 Quad Port 1Gb Adapter

2

UCSC-PSU2-1200

1200W 2u Power Supply For UCS

1

UCSC-RAIL-2U

2U Rail Kit for UCS C-Series servers



DVD drive not provided nor supported on this model

1

UCS-SD-16G

16GB SD Card module for UCS Servers

1

UCSX-MLOM-001

Modular LOM For UCS

32

UCS-MKIT-041RX-C

Auto-Included: Mem kit for UCS-MR-2X041RX-C

2

UCSC-HS-01-C260

Auto-Included: CPU HEAT SINK for UCS C260 M2 RACK SERVER

2

UCSC-PCIF-01F

Auto-Included: Full height PCIe filler for C-Series

2

UCSC-PCIF-01H

Auto-Included: Half height PCIe filler for UCS

2

UCSC-RC-P8M-C260

Auto-Included: .79m SAS RAID Cable for C260



C240 M3S (SFF) TRC#1

Note: The C240 M3L (LFF) is only supported under UC on UCS Specs-based.

This configuration is also available via bundle UCUCS-EZ-C240M3S. Note that the RAID controller shipped with this bundle depends on date of purchase.

Quantity Cisco Part Number Description
1 UCSC-C240-M3S UCS C240 M3 SFF w/o CPU, mem, HD, PCIe, w/ rail kit
2 UCS-CPU-E5-2680 2.70 GHz E5-2680 130W 8C/20MB Cache/DDR3 1600MHz
8 UCS-MR-1X082RY-A 8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v
8 UCS-MR-1X041RY-A 4GB DDR3-1600-MHz RDIMM/PC3-12800/single rank/1.35v
16 UCS-HDD300GI2F105 300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
1 UCSC-SD-16G-C240 16GB SD Card Module for C240 Servers
1 One of:
  • UCS-RAID-9266
  • UCS-RAID-9266CV
  • UCS-RAID9271CV-8I

  • MegaRAID 9266-8i + battery backup for C240 and C220
  • MegaRAID 9266CV-8i w/TFM + Super Cap
  • MegaRAID 9271CV RAID card with 8 internal SAS/SATA ports, S
DVD drive not offered with C240 M3.
2 UCSC-PCIE-IRJ45 Intel i350 Quad Port 1Gb Adapter
2 UCSC-PSU2-1200 1200W 2u Power Supply For UCS
2 UCSC-HS-C240M3 Auto-included: Heat Sink for UCS C240 M3 Rack Server
1 UCSC-RAIL-2U Auto-included: 2U Rail Kit for UCS C-Series servers
8 N20-BBLKD Auto-included: UCS 2.5 inch HDD blanking panel
2 UCSC-PCIF-01F Auto-included:Full height PCIe filler for C-Series



C220 M3S (SFF) TRC#1

Note: This TRC is NOT supported for use with Cisco Business Edition 6000.
Note: The C220 M3L (LFF) is only supported under UC on UCS Specs-based.

This configuration is also available as bundle UCUCS-EZ-C220M3S. Note that the RAID controller shipped with this bundle depends on date of purchase.


Quantity Cisco Part Number Description
1 UCSC-C220-M3S UCS C220 M3 SFF w/o CPU, mem, HDD, PCIe, w/ rail kit
2 UCS-CPU-E5-2643 3.30 GHz E5-2643/130W 4C/10MB Cache/DDR3 1600MHz
8 UCS-MR-1X082RY-A 8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v
8 UCS-HDD300GI2F105 300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
1 One of:
  • UCS-RAID-9266
  • UCS-RAID-9266CV
  • UCS-RAID9271CV-8I

  • MegaRAID 9266-8i + battery backup for C240 and C220
  • MegaRAID 9266CV-8i w/TFM + Super Cap
  • MegaRAID 9271CV RAID card with 8 internal SAS/SATA ports, S
1 UCSC-SD-16G-C220 16GB SD Card Module for C220 Servers
DVD drive not offered with C220 M3.
1 UCSC-PCIE-IRJ45 Intel i350 Quad Port 1Gb Adapter
2 UCSC-PSU-650W 650W power supply for C-series rack servers
2 UCSC-HS-C220M3 Auto-included: Heat Sink for UCS C220 M3 Rack Server
1 UCSC-RAIL1 Auto-included: 2U Rail Kit for C220 servers



C220 M3S (SFF) TRC#2

Note:
  • This hardware configuration is supported for use as:
    • a "Medium Density (MD)" server for Cisco Business Edition 6000 (as auto-included option in a BE6K bundle)
    • a "Small TRC" for UC on UCS (as a separately ordered hardware-only bundle: UCSC-C220-M3SBE= )

The RAID controller shipped with the above bundles depends on the date purchased.

Note: The C220 M3L (LFF) is only supported under UC on UCS Specs-based.

Quantity Cisco Part Number Description
1 UCSC-C220-M3S UCS C220 M3 SFF w/o CPU, mem, HDD, PCIe, w/ rail kit
2 UCS-CPU-E5-2609 2.4 GHz E5-2609/80W 4C/10MB Cache/DDR3 1066MHz
4 UCS-MR-1X082RY-A 8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v
4 A03-D500GC3 500GB 6Gb SATA 7.2K RPM SFF hot plug/drive sled mounted
1 UCSC-SD-16G-C220 16GB SD Card Module for C220 Servers
1 One of:
  • UCS-RAID-9266
  • UCS-RAID-9266CV
  • UCS-RAID9271CV-8I

  • MegaRAID 9266-8i + battery backup for C240 and C220
  • MegaRAID 9266CV-8i w/TFM + Super Cap
  • MegaRAID 9271CV RAID card with 8 internal SAS/SATA ports, S
DVD drive not offered with C220 M3.
1 R2XX-RAID10 Enable RAID 10 Setting
1 UCSC-PSU-650W 650W power supply for C-series rack servers
4 N20-BBLKD Auto-included: UCS 2.5 inch HDD blanking panel
2 UCSC-HS-C220M3 Auto-included: Heat Sink for UCS C220 M3 Rack Server
1 UCSC-PSU-BLKP Auto-included: Power supply blanking panel/filler (same as San Mateo)
1 UCSC-RAIL1 Auto-included: 2U Rail Kit for C220 servers
1 UCSC-PCIF-01F Auto-included: Full height PCIe filler for C-Series


C220 M3S (SFF) TRC#3

Note:
  • This hardware configuration is supported for use as either:
    • a "High Density (HD)" server for Cisco Business Edition 6000 (as auto-included option in a BE6K bundle)
    • a "Small Plus TRC" for UC on UCS (as separately ordered hardware-only a la carte using BOM below)

The RAID controller shipped with the above bundles depends on the date purchased.

Note: The C220 M3L (LFF) is only supported under UC on UCS Specs-based.

Quantity Cisco Part Number Description
1 UCSC-C220-M3S UCS C220 M3 SFF w/o CPU, mem, HDD, PCIe, w/ rail kit
2 UCS-CPU-E5-2665 2.40 GHz E5-2665/115W 8C/20MB Cache/DDR3 1600MHz
6 UCS-MR-1X082RY-A 8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v
8 UCS-HDD300GI2F105 300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
1 One of:
  • UCS-RAID-9266
  • UCS-RAID-9266CV
  • UCS-RAID9271CV-8I

  • MegaRAID 9266-8i + battery backup for C240 and C220
  • MegaRAID 9266CV-8i w/TFM + Super Cap
  • MegaRAID 9271CV RAID card with 8 internal SAS/SATA ports, S
DVD drive not offered with C220 M3.
1 UCSC-PCIE-IRJ45 Intel i350 Quad Port 1Gb Adapter
2 UCSC-PSU-650W 650W power supply for C-series rack servers
2 UCSC-HS-C220M3 Auto-included: Heat Sink for UCS C220 M3 Rack Server
1 UCSC-RAIL1 Auto-included: 2U Rail Kit for C220 servers




End of Sale UC on UCS TRC Bills of Material (BOMs)

B200 M2 TRC#1

This BOM was also quotable via fixed-configuration bundle UCS-B200M2-VCS1. Memory and hard drive changes are due to industry technology transitions, not UC app requirements.

Quantity Cisco Part Number Description
1 N20-B6625-1 UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine
2 A01-X0109 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz
12

Either:

  • N01-M304GB1
  • A02-M304GB2-L
  • UCS-MR-1X041RX-A

  • 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
  • 4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
  • 4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
2 Either:
  • A03-D146GC2
  • UCS-HDD300GI2F105

  • 146GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
  • 300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
1 N20-AC0002 UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb
2 N20-BHTS1 Auto-Included: CPU Heat Sink for UCS B200 M1 Blade Server



B200 M2 TRC#2

Memory and hard drive changes are due to industry technology transitions, not UC app requirements.

Quantity Cisco Part Number Description
1 N20-B6625-1 UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine
2 A01-X0109 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz
12

Either:

  • N01-M304GB1
  • A02-M304GB2-L
  • UCS-MR-1X041RX-A

  • 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
  • 4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
  • 4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
Diskless
1 N20-AC0002 UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb
2 N20-BHTS1 Auto-Included: CPU Heat Sink for UCS B200 M1 Blade Server





B200 M1 TRC#1

This configuration was also quotable as UCS-B200M2-VCS1.

Quantity

Cisco Part Number

Description

1

N20-B6620-1

UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine

2

N20-X00002

2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz

8

N01-M304GB1

4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs

2

A03-D146GA2

146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted

1

N20-AQ0002

UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb

2

N20-BHTS1

Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server



B200 M1 TRC#2

Quantity

Cisco Part Number

Description

1

N20-B6620-1

UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine

2

N20-X00002

2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz

8

N01-M304GB1

4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs

Diskless

1

N20-AQ0002

UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb

2

N20-BHTS1

Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server


C210 M2 TRC#1

This configuration was also quotable as UCS-C210M2-VCD2. Memory and hard drive changes are due to industry technology transitions, not UC app requirements.

Quantity Cisco Part Number Description
1 R210-2121605W

UCS C210 M2 Srvr w/1PSU, w/o CPU, mem, HDD, DVD or PCIe card


2 A01-X0109

2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz

12

Either:

  • N01-M304GB1
  • A02-M304GB2-L
  • UCS-MR-1X041RX-A

  • 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
  • 4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
  • 4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
10

Either:

  • A03-D146GC2
  • UCS-HDD300GI2F105

  • 146GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
  • 300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
1 R2XX-PL003 LSI 6G MegaRAID 9261-8i card (RAID 0,1,5,6,10,60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 R210-SASXPAND SAS Pass-Thru Expander (srvr requiring > 8 HDDs) - C210 M1
1 Either:
  • N2XX-ABPCI03
  • N2XX-ABPCI03-M3
  • N2XX-AIPCI02

  • Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
  • Broadcom 5709 Quad Port 10/100/1Gb NIC w/TOE iSCSI for M3 Se
  • Intel Quad port GbE Controller (E1G44ETG1P20)
1 Either:
  • R2X0-PSU2-650W-SB
  • R2X0-PSU2-650W

  • 650W power supply, w/added 5A Standby for UCS C200 or C210
  • 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 Either:
  • R200-1032RAIL
  • R2XX-G31032RAIL

  • Rail Kit for the UCS 200, 210, C250 Rack Servers
  • Rail Kit for UCS C200, C210 Rack Servers (23.5 to 36")
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
6 N20-BBLKD Auto-Included: HDD slot blanking panel for UCS B-Series Blade Servers
3 R200-PCIBLKF1 Auto-Included: PCIe Full Height blanking panel for UCS C-Series Rack Server
2 R210-BHTS1 Auto-Included: CPU heat sink for UCS C210 M1 Rack Server
1 R2X0-PSU2-650W Auto-Included: 650W power supply unit for UCS C200 M1 or C210 M1 Server
1 SASCBLSHORT-003 Auto-Included: 2 Short SAS Cables for UCS C210 Server (for SAS Expander)
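As a quick sanity check when quoting a TRC, the BOM's capacity totals follow directly from the quantities in the table above. A minimal sketch (DIMM and drive sizes taken from the C210 M2 TRC#1 rows; the single RAID 5 group is an illustrative assumption, not a layout mandated by the TRC):

```python
# Capacity totals for the C210 M2 TRC#1 BOM above.
# Quantities and sizes come from the table; the RAID 5 usable-capacity
# line assumes all drives in one RAID 5 group, purely as an example.
dimms = 12        # 4GB RDIMMs (N01-M304GB1 or equivalents)
dimm_gb = 4
disks = 10        # 146GB SAS drives (A03-D146GC2 option)
disk_gb = 146

total_ram_gb = dimms * dimm_gb            # total installed memory
raw_disk_gb = disks * disk_gb             # raw disk capacity
raid5_usable_gb = (disks - 1) * disk_gb   # usable, if one RAID 5 group

print(total_ram_gb, raw_disk_gb, raid5_usable_gb)  # 48 1460 1314
```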



C210 M2 TRC#2

Memory and hard drive changes were due to industry technology transitions, not UC app requirements.

Quantity Cisco Part Number Description
1 R210-2121605W UCS C210 M2 Srvr w/1PSU, w/o CPU, mem, HDD, DVD or PCIe card
2 A01-X0109 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz
12 Either:
  • N01-M304GB1 - 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
  • A02-M304GB2-L - 4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
  • UCS-MR-1X041RX-A - 4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
2 Either:
  • A03-D146GC2 - 146GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
  • UCS-HDD300GI2F105 - 300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
1 R2XX-PL003 LSI 6G MegaRAID 9261-8i card (RAID 0,1,5,6,10,60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 Either:
  • N2XX-ABPCI03 - Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
  • N2XX-ABPCI03-M3 - Broadcom 5709 Quad Port 10/100/1Gb NIC w/TOE iSCSI for M3 Se
  • N2XX-AIPCI02 - Intel Quad port GbE Controller (E1G44ETG1P20)
1 Either:
  • R2X0-PSU2-650W-SB - 650W power supply, w/added 5A Standby for UCS C200 or C210
  • R2X0-PSU2-650W - 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 Either:
  • R200-1032RAIL - Rail Kit for the UCS 200, 210, C250 Rack Servers
  • R2XX-G31032RAIL - Rail Kit for UCS C200, C210 Rack Servers (23.5 to 36")
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
1 N2XX-AQPCI03 QLogic QLE2462, 4Gb dual port Fibre Channel Host Bus Adapter
14 N20-BBLKD Auto-Included: HDD slot blanking panel for UCS B-Series Blade Servers
2 R200-PCIBLKF1 Auto-Included: PCIe Full Height blanking panel for UCS C-Series Rack Server
2 R210-BHTS1 Auto-Included: CPU heat sink for UCS C210 M1 Rack Server
1 R210-SASCBL-002 Auto-Included: Long SAS Cable for C210 (connects to SAS Extender)
1 R210-SASXTDR Auto-Included: SAS Extender (servers requiring </= 8 HDDs) for UCS C210 M1
1 R2X0-PSU2-650W Auto-Included: 650W power supply unit for UCS C200 M1 or C210 M1 Server


C210 M2 TRC#3

Memory and hard drive changes were due to industry technology transitions, not UC app requirements.

Quantity Cisco Part Number Description
1 R210-2121605W UCS C210 M2 Srvr w/1PSU, w/o CPU, mem, HDD, DVD or PCIe card
2 A01-X0109 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz
12 Either:
  • N01-M304GB1 - 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
  • A02-M304GB2-L - 4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
  • UCS-MR-1X041RX-A - 4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
Diskless
1 R2XX-PL003 LSI 6G MegaRAID 9261-8i card (RAID 0,1,5,6,10,60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 Either:
  • N2XX-ABPCI03 - Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
  • N2XX-ABPCI03-M3 - Broadcom 5709 Quad Port 10/100/1Gb NIC w/TOE iSCSI for M3 Se
  • N2XX-AIPCI02 - Intel Quad port GbE Controller (E1G44ETG1P20)
1 Either:
  • R2X0-PSU2-650W-SB - 650W power supply, w/added 5A Standby for UCS C200 or C210
  • R2X0-PSU2-650W - 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 Either:
  • R200-1032RAIL - Rail Kit for the UCS 200, 210, C250 Rack Servers
  • R2XX-G31032RAIL - Rail Kit for UCS C200, C210 Rack Servers (23.5 to 36")
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
1 N2XX-AQPCI03 QLogic QLE2462, 4Gb dual port Fibre Channel Host Bus Adapter
14 N20-BBLKD Auto-Included: HDD slot blanking panel for UCS B-Series Blade Servers
2 R200-PCIBLKF1 Auto-Included: PCIe Full Height blanking panel for UCS C-Series Rack Server
2 R210-BHTS1 Auto-Included: CPU heat sink for UCS C210 M1 Rack Server
1 R210-SASCBL-002 Auto-Included: Long SAS Cable for C210 (connects to SAS Extender)
1 R210-SASXTDR Auto-Included: SAS Extender (servers requiring </= 8 HDDs) for UCS C210 M1
1 R2X0-PSU2-650W Auto-Included: 650W power supply unit for UCS C200 M1 or C210 M1 Server


C210 M1 TRC#1

Note: Application co-residency is not supported on this configuration; single VM only.

This BOM was also quotable as UCS-C210M1-VCD1.

Quantity Cisco Part Number Description
1 R210-2121605 UCS C210 M1 Rack Server w/1 PSU (w/o CPU, memory, HDD, DVD, PCIe cards)
2 N20-X00002 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
6 N01-M302GB1 2GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
6 A03-D146GA2 146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted
1 R2XX-PL003 LSI 6G MegaRAID PCIe Card (RAID 0, 1, 5, 6, 10, 60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 R2X0-PSU2-650W 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 R250-SLDRAIL Rail Kit for the C210 M1 Rack Server
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
10 N20-BBLKD Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers
4 R200-PCIBLKF1 Auto-included: PCIe Full Height blanking panel for UCS C-Series Rack Server
2 R210-BHTS1 Auto-included: CPU heat sink for UCS C210 M1 Rack Server
2 R210-SASCBL-002 Auto-included: Long SAS Cable for C210 (connects to SAS Extender)
1 R210-SASXTDR Auto-included: SAS Extender (servers requiring </= 8 HDDs) for UCS C210 M1


C210 M1 TRC#2

This BOM was also quotable as UCS-C210M1-VCD2.

Quantity Cisco Part Number Description
1 R210-2121605 UCS C210 M1 Rack Server w/1 PSU (w/o CPU, memory, HDD, DVD, PCIe cards)
2 N20-X00002 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
6 N01-M302GB1 2GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
6 N01-M304GB1 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
10 A03-D146GA2 146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted
1 R2XX-PL003 LSI 6G MegaRAID PCIe Card (RAID 0, 1, 5, 6, 10, 60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 N2XX-ABPCI03 Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
1 R2X0-PSU2-650W 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 R250-SLDRAIL Rail Kit for the C210 M1 Rack Server
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
1 R210-SASXPAND SAS Pass-Thru Expander (srvr requiring > 8 HDDs) - C210 M1
1 N20-BBLKD Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers
1 R200-PCIBLKF1 Auto-included: PCIe Full Height blanking panel for UCS C-Series Rack Server
1 R210-BHTS1 Auto-included: CPU heat sink for UCS C210 M1 Rack Server
1 SASCBLSHORT-003 Auto-included: 2 Short SAS Cables for UCS C210 Server (for SAS Expander)



C210 M1 TRC#3

Quantity Cisco Part Number Description
1 R210-2121605 UCS C210 M1 Rack Server w/1 PSU (w/o CPU, memory, HDD, DVD, PCIe cards)
2 N20-X00002 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
6 N01-M302GB1 2GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
6 N01-M304GB1 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
2 A03-D146GA2 146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted
1 R2XX-PL003 LSI 6G MegaRAID PCIe Card (RAID 0, 1, 5, 6, 10, 60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 N2XX-ABPCI03 Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
1 R2X0-PSU2-650W 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 R250-SLDRAIL Rail Kit for the C210 M1 Rack Server
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
1 N2XX-AQPCI03 QLogic QLE2462, 4Gb dual port Fibre Channel Host Bus Adapter
14 N20-BBLKD Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers
2 R200-PCIBLKF1 Auto-included: PCIe Full Height blanking panel for UCS C-Series Rack Server
2 R210-BHTS1 Auto-included: CPU heat sink for UCS C210 M1 Rack Server
2 R210-SASCBL-002 Auto-included: Long SAS Cable for C210 (connects to SAS Extender)
1 SASCBLSHORT-003 Auto-included: 2 Short SAS Cables for UCS C210 Server (for SAS Expander)



C210 M1 TRC#4

Quantity Cisco Part Number Description
1 R210-2121605 UCS C210 M1 Rack Server w/1 PSU (w/o CPU, memory, HDD, DVD, PCIe cards)
2 N20-X00002 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
6 N01-M302GB1 2GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
6 N01-M304GB1 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
1 R2XX-PL003 LSI 6G MegaRAID PCIe Card (RAID 0, 1, 5, 6, 10, 60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 N2XX-ABPCI03 Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
1 R2X0-PSU2-650W 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 R250-SLDRAIL Rail Kit for the C210 M1 Rack Server
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
1 N2XX-AQPCI03 QLogic QLE2462, 4Gb dual port Fibre Channel Host Bus Adapter
14 N20-BBLKD Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers
2 R200-PCIBLKF1 Auto-included: PCIe Full Height blanking panel for UCS C-Series Rack Server
2 R210-BHTS1 Auto-included: CPU heat sink for UCS C210 M1 Rack Server
2 R210-SASCBL-002 Auto-included: Long SAS Cable for C210 (connects to SAS Extender)
1 SASCBLSHORT-003 Auto-included: 2 Short SAS Cables for UCS C210 Server (for SAS Expander)



C200 M2 TRC#1

Note: This TRC has special rules for allowed VM OVA templates and allowed co-residency.

This configuration was also quotable as UCS-C200M2-VCD2.

When quoted as part of Cisco Business Edition 6000, it was also quotable as UCS-C200M2-VCD2BE, UCS-C200M2-BE6K or UCS-C200M2-WL8 (in CMBE6K-UCL or CMBE6K-UWL).

Memory and hard drive changes were due to industry technology transitions, not UC app requirements.

Quantity Cisco Part Number Description
1 R200-1120402W UCS C200 M2 Srvr w/1PSU, DVD w/o CPU, mem, HDD or PCIe card
2 A01-X0113 2.13GHz Xeon E5506 80W CPU/4MB cache/DDR3 800MHz
6 Either:
  • N01-M304GB1 - 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
  • A02-M304GB2-L - 4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
  • UCS-MR-1X041RX-A - 4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
4 R200-D1TC03 Gen 2 1TB SAS 7.2K RPM
1 R200-PL004 LSI 6G MegaRAID 9260-4i card (C200 only)
1 Either:
  • R2XX-LBBU - Battery Back-up
  • UCSC-LBBU02 - Battery back unit for C200 LFF and SFF M2
1 Either:
  • R250-SLDRAIL - Rail Kit for the UCS 200, 210, C250 Rack Servers
  • R200-1032RAIL - Rail Kit for the UCS 200, 210, C250 Rack Servers
  • R2XX-G31032RAIL - Rail Kit for UCS C200, C210 Rack Servers (23.5 to 36")
2 R200-BHTS1 Included: CPU heat sink for UCS C200 M1 Rack Server
1 R200-PCIBLKF1 Included: PCIe Full Height blanking panel for UCS C-Series Rack Server
1 R200-SASCBL-001 Included: Internal SAS Cable for a base UCS C200 M1 Server
1 Either:
  • R2X0-PSU2-650W-SB - 650W power supply, w/added 5A Standby for UCS C200 or C210
  • R2X0-PSU2-650W - 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 R2XX-PSUBLKP Included: Power supply unit blanking pnl for UCS 200 M1 or 210 M1







Back to: Unified Communications in a Virtualized Environment
