UC Virtualization Supported Hardware

*'''[[#UC on UCS Tested Reference Configurations | UC on UCS Tested Reference Configuration]] (TRC)'''
*'''UC on UCS Specs-based '''
*'''Third-party Server Specs-based'''
<br>
'''"TRC"''' used by itself means "UC on UCS Tested Reference Configuration (TRC)".
'''"UC on UCS"''' used by itself refers to both UC on UCS TRC and UC on UCS Specs-based.<br>
'''"Specs-based"''' used by itself refers to the common rules of UC on UCS Specs-based and Third-party Server Specs-based.&nbsp; <br><br>
Below is a comparison of the hardware support options.  Note that the following are identical regardless of the support model chosen:
! UC on UCS TRC
! UC on UCS Specs-based
! Third-Party Server Specs-based
! Other hardware
|-
! Basic Approach

| Select Cisco UCS listed in [[#UC on UCS Tested Reference Configurations | Table 1]].  Must follow all TRC rules in this policy.
| Any Cisco UCS that satisfies this page's policy
| Any 3rd-party server model that satisfies this page's policy
| None
|-
What does a TRC definition include?
*Definition of server model and local components (CPU, RAM, adapters, local storage) at the orderable part number level.
*Required RAID configuration (e.g. RAID5, RAID10) - including battery backup cache or SuperCap - when the TRC uses DAS storage
*Guidance on hardware installation and basic setup (e.g. [http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/virtual/servers.html click here]).
**[http://www.cisco.com/go/ucs Click here for detailed Cisco UCS server documentation] regarding hardware configuration procedures.
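The "After ESXi" and "After RAID/VMFS overhead" capacity figures quoted in the tables below can be roughly reproduced with simple arithmetic. The sketch below is illustrative only: it assumes about 2 GB of RAM reserved for ESXi, drives sized in decimal GB but reported by ESXi in binary GiB, and roughly 5 GiB of VMFS formatting overhead. It is not an official Cisco sizing tool.

```python
def usable_ram_gb(installed_gb, esxi_overhead_gb=2):
    """RAM left for VMs after the hypervisor's share (assumed ~2 GB)."""
    return installed_gb - esxi_overhead_gb

def usable_das_gib(disk_count, disk_gb, raid_level, vmfs_overhead_gib=5):
    """Rough usable DAS capacity after RAID and VMFS overhead.

    Vendor "GB" are decimal; ESXi reports binary GiB, so convert first,
    then subtract the disks lost to RAID redundancy and an assumed
    ~5 GiB of VMFS overhead.
    """
    disk_gib = disk_gb * 1e9 / 2**30              # decimal GB -> binary GiB
    data_disks = {"RAID1": disk_count // 2,       # mirrored pair(s)
                  "RAID10": disk_count // 2,      # striped mirrors
                  "RAID5": disk_count - 1}[raid_level]  # one disk of parity
    return data_disks * disk_gib - vmfs_overhead_gib

# C210 M2 TRC#1: 48 GB RAM, UC apps on 8x 146 GB 15K disks in RAID5
print(usable_ram_gb(48))                       # 46
print(round(usable_das_gib(8, 146, "RAID5")))  # 947
```

With these assumptions, C210 M2 TRC#1 comes out to 46 GB of usable RAM and roughly 947 GB of usable DAS, matching the figures in Table 1.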
Cisco VIC (UCS M81KR)
|
"Extra-extra-large" blade server<br>
40 total physical cores<br>
After ESXi, 254 GB physical RAM<br>

Cisco VIC (UCS M81KR)
|
"Extra-large" blade server<br>
20 total physical cores<br>
After ESXi, 126 GB physical RAM<br>

Cisco VIC 1240
|
"Large" blade server<br>
16 total physical cores<br>
After ESXi, 94 GB physical RAM<br>
Storage capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00, customer's SAN + storage array.<br>
4x 10GbE ports for LAN+storage access.  LAN capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00 and customer's network.
|-
Ethernet ports on motherboard + 3rd-party NIC
|
"Extra-large" server<br>
20 total physical cores<br>
After ESXi, 126 GB physical RAM<br>
After RAID/VMFS overhead, 2x 1.93 TB (not counting VM overhead)<br>
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
|-

Ethernet ports on motherboard + 3rd-party NICs
|
"Large" server<br>
16 total physical cores<br>
After ESXi, 94 GB physical RAM<br>
After RAID/VMFS overhead, 2x 1.93 TB (not counting VM overhead)<br>
12x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
|-

Ethernet ports on motherboard + 3rd-party NIC
|
"Medium" server<br>
8 total physical cores<br>
After ESXi, 62 GB physical RAM<br>
After RAID/VMFS overhead, 1.93 TB (not counting VM overhead)<br>
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
|-
Ethernet ports on motherboard
|
"Small" server<br>
Restricted VM OVA template choices and co-residency.
8 total physical cores<br>

|-
! colspan="4" | Older (End of Sale) Configurations
|-
! UCS B200 M2 <br>TRC#1<br>
| [[#B200 M2 TRC#1 | Click here for BOM]]
|
Half-width Blade Server<br>
Dual E5640 (4-core / 2.66 GHz)<br>
48 GB RAM<br>
VMware boot from DAS (2 disks RAID1)<br>
UC apps boot from FC SAN<br>
Cisco VIC (UCS M81KR)
|
8 total physical cores<br>
After ESXi, 46 GB physical RAM<br>
Storage capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00, customer's SAN + storage array.<br>
2x 10GbE ports for LAN+storage access. LAN capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00 and customer's network.
|-
! UCS B200 M2 <br>TRC#2<br>
| [[#B200 M2 TRC#2 | Click here for BOM]]
|
Half-width Blade Server<br>
Dual E5640 (4-core / 2.66 GHz)<br>
48 GB RAM<br>
Diskless - VMware + UC apps boot from FC SAN<br>
Cisco VIC (UCS M81KR)
|
8 total physical cores<br>
After ESXi, 46 GB physical RAM<br>
Storage capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00, customer's SAN + storage array.<br>
2x 10GbE ports for LAN+storage access. LAN capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00 and customer's network.
|-
Storage capacity/IOPS dependent on CNA, UCS 2x00/6x00, customer's SAN + storage array.<br>
2x 10GbE ports for LAN+storage access.  LAN capacity/IOPS dependent on CNA, UCS 2x00/6x00 and customer's network.

|-
! UCS C210 M2 <br>TRC#1<br>
| [[#C210 M2 TRC#1 | Click here for BOM]]
|
2RU Rack-mount Server<br>
Dual E5640 (4-core, 2.66 GHz)<br>
48 GB RAM<br>
VMware boots from DAS (2x 146/300 GB 15K, RAID1)<br>
UC apps boot from DAS (8x 146/300 GB 15K, RAID5)<br>
Ethernet ports on motherboard + 3rd-party NIC
|
8 total physical cores<br>
After ESXi, 46 GB physical RAM<br>
After RAID/VMFS overhead, 947 GB (not counting VM overhead)<br>
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

|-
! UCS C210 M2 <br>TRC#2<br>
| [[#C210 M2 TRC#2 | Click here for BOM]]
|
2RU Rack-mount Server<br>
Dual E5640 (4-core, 2.66 GHz)<br>
48 GB RAM<br>
VMware boots from DAS (2x 146/300 GB 15K, RAID1)<br>
UC apps boot from FC SAN<br>
Ethernet ports on motherboard + 3rd-party NIC<br>
FC ports on 3rd-party HBA<br>
|
8 total physical cores<br>
After ESXi, 46 GB physical RAM<br>
2x 4Gb FC ports for SAN access.  Storage capacity/IOPS dependent on HBA, customer's SAN + storage array.<br>
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

|-
! UCS C210 M2 <br>TRC#3<br>
| [[#C210 M2 TRC#3 | Click here for BOM]]
|
2RU Rack-mount Server<br>
Dual E5640 (4-core, 2.66 GHz)<br>
48 GB RAM<br>
Diskless - VMware + UC apps boot from FC SAN<br>
Ethernet ports on motherboard + 3rd-party NIC<br>
FC ports on 3rd-party HBA<br>
|
8 total physical cores<br>
After ESXi, 46 GB physical RAM<br>
2x 4Gb FC ports for SAN access.  Storage capacity/IOPS dependent on HBA, customer's SAN + storage array.<br>
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
|-
8 total physical cores<br>
After ESXi, 10 GB physical RAM<br>
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
|-

After ESXi, 34 GB physical RAM<br>
After RAID/VMFS overhead, 947 GB (not counting VM overhead)<br>
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
|-

After ESXi, 34 GB physical RAM<br>
2x 4Gb FC ports for SAN access.  Storage capacity/IOPS dependent on HBA, customer's SAN + storage array.<br>
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
|-

After ESXi, 34 GB physical RAM<br>
2x 4Gb FC ports for SAN access.  Storage capacity/IOPS dependent on HBA, customer's SAN + storage array.<br>
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

|-
! UCS C200 M2 <br>TRC#1<br>
| [[#C200 M2 TRC#1 | Click here for BOM]]
|
1RU Rack-mount Server<br>
Dual E5506 (4-core, 2.13 GHz)<br>
24 GB RAM<br>
VMware + UC apps boot from DAS (4x 1TB 7.2K disks, RAID10)<br>
Ethernet ports on motherboard + 3rd-party NIC<br>
FC ports on 3rd-party HBA<br>
|
Restricted VM OVA template choices and co-residency.
8 total physical cores<br>
After ESXi, 22 GB physical RAM<br>
After RAID/VMFS overhead, 1.8 TB (not counting VM overhead)<br>
2x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.
All UC virtualization deployments must follow UC rules for supported VMware products, versions, editions and features [[Unified Communications VMWare Requirements| as described here]].
{{ note | For '''UC on UCS Specs-based''' and '''Third-party Server Specs-based''', use of VMware vCenter is mandatory, and Statistics Level 4 logging is mandatory.  [[Troubleshooting and Performance Monitoring Virtualized Environments#vCenter Settings| Click here]] for how to configure VMware vCenter to capture these logs.  If not configured by default, Cisco TAC may request enabling these settings in order to troubleshoot problems. }}
<br>
<br>
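The Statistics Level 4 requirement above amounts to raising the statistics collection level on each of vCenter's historical intervals to 4. The sketch below is illustrative only: the interval dictionaries are stand-ins, not real vSphere API objects, and the default values shown are assumptions.

```python
def raise_to_level_4(intervals):
    """Return copies of the interval settings with the statistics
    collection level raised to 4 (never lowering an existing level)."""
    return [dict(i, level=max(i["level"], 4)) for i in intervals]

# Illustrative stand-ins for vCenter's four historical intervals
# (name, collection period in seconds, current statistics level).
defaults = [
    {"name": "Past day",   "period": 300,   "level": 1},
    {"name": "Past week",  "period": 1800,  "level": 1},
    {"name": "Past month", "period": 7200,  "level": 1},
    {"name": "Past year",  "period": 86400, "level": 1},
]

for interval in raise_to_level_4(defaults):
    print(interval["name"], "-> level", interval["level"])
```

In a live deployment the equivalent change is made in the vSphere Client (vCenter Server Settings &gt; Statistics) or programmatically through the vSphere API's PerformanceManager (e.g. with pyVmomi); follow the linked vCenter Settings page for the authoritative procedure.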
*what Intel CPU models does it carry (and are those CPU models allowed for UC virtualization)
*can its hardware component options satisfy all other requirements of this policy
*For additional considerations, see [http://www.cisco.com/en/US/customer/products/ps6884/products_tech_note09186a0080bf23f5.shtml TAC TechNote 115955].

<br> {{note |
! UC on UCS TRC
! UC on UCS Specs-based
! Third-Party Server Specs-based
! Not supported

*Otherwise, any Cisco UCS model, generation, form factor&nbsp;(rack, blade) may be used.
|
Any 3rd-party server model is supported as long as:
*it is on the [http://www.vmware.com/go/hcl VMware HCL] for [[Unified Communications VMWare Requirements|the version of VMware vSphere ESXi required by UC]].
*it carries a CPU model supported by UC [[UC Virtualization Supported Hardware#Processor|(described later in this policy)]].
*it satisfies all other requirements of this policy<br>
*Otherwise, any 3rd-party vendor, model, generation, form factor&nbsp;(rack, blade) may be used.
| rowspan="3" |
The following are '''NOT supported''':
* Cisco or 3rd-party server models that do not satisfy the rules of this policy.
* Cisco 7800 Series Media Convergence Servers (MCS 7800) regardless of CPU model
* Cisco UCS Express (SRE-V 9xx on ISR router hardware)
* Cisco UCS E-Series Blade Servers (E14x/16x on ISR router hardware)
* For additional considerations, please see [http://www.cisco.com/en/US/customer/products/ps6884/products_tech_note09186a0080bf23f5.shtml TAC TechNote 115955].
|-
! UC on UCS TRC
! UC on UCS Specs-based
! Third-party Server Specs-based
! Not supported

| must exactly match what is listed in [[#UC on UCS Tested Reference Configurations | Table 1]].
| colspan="2" |
The following "Full UC Performance" models:
* Any [http://ark.intel.com/products/codename/33174/Westmere-EP Intel Xeon 5600 model] with minimum physical core speed of 2.53 GHz
* Any [http://ark.intel.com/products/codename/33164/Nehalem-EX Intel Xeon 7500 model] with minimum physical core speed of 2.53 GHz
!
! UC on UCS TRC
! Specs-based (UCS or 3rd-party Server)
|-

! <br>
! UC on UCS TRC
! Specs-based (UCS or 3rd-party Server)
|-
! Supported Storage Options

! <br>
! UC on UCS TRC
! Specs-based (UCS or 3rd-party Server)
|-
! rowspan="2" | Disk Size and Speed

*compatible with the VMware HCL and compatible with the server model used
*all UC latency, performance and capacity requirements are met.  To ensure optimum UC app performance, '''be sure to use Battery Backup cache or SuperCap on RAID controllers for DAS.'''
|-

!
! UC on UCS TRC
! Specs-based (UCS or 3rd-party Server)
|-
! Physical Adapter Hardware (NIC, HBA, VIC, CNA)

=== B200 M3 TRC#1 ===

This configuration is also quotable as either UCUCS-EZ-B200M3 (single blade) or UCSB-EZ-UC-B200M3 (multiple blades with chassis and switching).
{| class="prettytable"
<br>
'''1'''
| One of:<br>
*'''N2XX-AIPCI02'''
*'''UCSC-PCIE-IRJ45'''
|<br>
* Intel Quad port GbE Controller (E1G44ETG1P20)
* Intel i350 Quad Port 1Gb Adapter
|-
{{ note | The C240 M3L (LFF) is only supported under UC on UCS Specs-based. }}
This configuration is also available via bundle UCUCS-EZ-C240M3S.
{| class="prettytable"
|-
|'''1
|'''UCSC-SD-16G-C240
|16GB SD Card Module for C240 Servers

|-
|'''1
|'''One of:
*UCS-RAID-9266
*UCS-RAID-9266CV
|<br>
*MegaRAID 9266-8i + battery backup for C240 and C220
*MegaRAID 9266CV-8i w/TFM + Super Cap
|-
|'''UCSC-RAIL-2U
|Auto-included: 2U Rail Kit for UCS C-Series servers
|-

{{ note | The C220 M3L (LFF) is only supported under UC on UCS Specs-based. }}
This configuration is also available as bundle UCUCS-EZ-C220M3S.
|-
|'''1
|'''One of:
*UCS-RAID-9266
*UCS-RAID-9266CV
|<br>
*MegaRAID 9266-8i + battery backup for C240 and C220
*MegaRAID 9266CV-8i w/TFM + Super Cap

|-
|'''1
|'''UCSC-SD-16G-C220
|16GB SD Card Module for C220 Servers
|-

|'''UCSC-PSU-650W
|650W power supply for C-series rack servers
|-
{{ note |
*This TRC is supported for use with:
** Cisco Business Edition 6000 (where it is quoted as an auto-included option in the BE6K bundle, UCSC-C220-M3SBE)
** UC on UCS TRC (where this configuration is available as bundle UCSC-C220-M3SBE&#x3D;)
*In either deployment scenario, there are special rules for allowed apps, allowed VM OVA templates and allowed co-residency.
}}<br>
{{ note | The C220 M3L (LFF) is only supported under UC on UCS Specs-based. }} <br>

|-
|'''1
|'''UCSC-SD-16G-C220
|16GB SD Card Module for C220 Servers

|-
|'''1
|'''One of:
*UCS-RAID-9266
*UCS-RAID-9266CV
(Bundles ship with -9266)
|<br>
* MegaRAID 9266-8i + battery backup for C240 and C220
* MegaRAID 9266CV-8i w/TFM + Super Cap
(Bundles ship with -9266-8i)
|-

|'''UCSC-PSU-650W
|650W power supply for C-series rack servers
|-
Line 1,848: Line 1,604:
<br>

<br>

= End of Sale UC on UCS TRC Bills of Material (BOMs) =

=== B200 M2 TRC#1 ===

This BOM was also quotable via fixed-configuration bundle UCS-B200M2-VCS1. Memory and hard drive changes are due to industry technology transitions, not UC app requirements.

{| class="prettytable"
|-
|'''Quantity'''
|'''Cisco Part Number'''
|'''Description'''
|-
|'''1'''
|'''N20-B6625-1'''
|UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine
|-
|'''2'''
|'''A01-X0109'''
|2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz
|-
|'''12'''
|Either:
*'''N01-M304GB1
*'''A02-M304GB2-L
*'''UCS-MR-1X041RX-A
|<br>
*4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
*4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
*4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
|-
|'''2'''
|Either:
*'''A03-D146GC2
*'''UCS-HDD300GI2F105'''
|<br>
*146GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
*300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
|-
|'''1'''
|'''N20-AC0002'''
|UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb1
|-
|'''2'''
|'''N20-BHTS1'''
|Auto-Included: CPU Heat Sink for UCS B200 M1 Blade Server
|}

<br>
=== B200 M2 TRC#2 ===
Memory and hard drive changes are due to industry transitions and not UC app requirements.

{| class="prettytable"
|-
|'''Quantity'''
|'''Cisco Part Number'''
|'''Description'''
|-
|'''1'''
|'''N20-B6625-1'''
|UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine
|-
|'''2'''
|'''A01-X0109'''
|2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz
|-
|'''12'''
|Either:
*'''N01-M304GB1
*'''A02-M304GB2-L
*'''UCS-MR-1X041RX-A
|<br>
*4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
*4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
*4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
|-
|
|
| Diskless
|-
|'''1'''
|'''N20-AC0002'''
|UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb1
|-
|'''2'''
|'''N20-BHTS1'''
|Auto-Included: CPU Heat Sink for UCS B200 M1 Blade Server
|}

<br>

<br>
=== B200 M1 TRC#1 ===

This configuration was also quotable as UCS-B200M2-VCS1.

{| class="prettytable"
|-
|'''Quantity'''
|'''Cisco Part Number'''
|'''Description'''
|-
|'''1'''
|'''N20-B6620-1'''
|UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine
|-
|'''2'''
|'''N20-X00002'''
|2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
|-
|'''8'''
|'''N01-M304GB1'''
|4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
|-
|'''2'''
|'''A03-D146GA2'''
|146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted
|-
|'''1'''
|'''N20-AQ0002'''
|UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb
|-
|'''2'''
|'''N20-BHTS1'''
|Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server
|}

<br>
=== B200 M1 TRC#2 ===

{| class="prettytable"
|-
|'''Quantity'''
|'''Cisco Part Number'''
|'''Description'''
|-
|'''1'''
|'''N20-B6620-1'''
|UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine
|-
|'''2'''
|'''N20-X00002'''
|2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
|-
|'''8'''
|'''N01-M304GB1'''
|4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
|-
|
|
|Diskless
|-
|'''1'''
|'''N20-AQ0002'''
|UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb
|-
|'''2'''
|'''N20-BHTS1'''
|Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server
|}
=== C210 M2 TRC#1 ===

<br>
=== C200 M2 TRC#1 ===
{{ note | This TRC has special rules for allowed VM OVA templates and allowed co-residency.}}
|'''Quantity'''
|'''Cisco Part Number'''
|'''Description'''
|-
|'''1'''
|'''R200-1120402W'''
|UCS C200 M2 Srvr w/1PSU, DVD w/o CPU, mem, HDD or PCIe card
|-
|'''2'''
|'''A01-X0113'''
|2.13GHz Xeon E5506 80W CPU/4MB cache/DDR3 800MHz
|-
|'''6'''
*4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
*4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
|-
|'''4'''
|'''R200-D1TC03'''
|Gen 2 1TB SAS 7.2K RPM
|-
|'''1'''
|'''R200-PL004'''
|LSI 6G MegaRAID 9260-4i card (C200 only)
|-
|'''1'''
|Either:
*'''R2XX-LBBU
*'''UCSC-LBBU02'''
|<br>
*Battery Back-up
*Battery back-up unit for C200 LFF and SFF M2
|-
|'''1
*Rail Kit for the UCS 200, 210, C250 Rack Servers
*Rail Kit for UCS C200, C210 Rack Servers (23.5 to 36")
|-
|'''2'''
|'''R200-BHTS1'''
|Included: CPU heat sink for UCS C200 M1 Rack Server
|-
|'''1'''
|'''R200-PCIBLKF1'''
|Included: PCIe Full Height blanking panel for UCS C-Series Rack Server
|-
|'''1'''
|'''R200-SASCBL-001'''
|Included: Internal SAS Cable for a base UCS C200 M1 Server
|-
|'''1
*650W power supply, w/added 5A Standby for UCS C200 or C210
*650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
|-
|'''1'''
|'''R2XX-PSUBLKP'''
|Included: Power supply unit blanking panel for UCS 200 M1 or 210 M1
|}

Revision as of 15:55, 22 March 2013




Introduction

Note: Not all UC apps support all hardware options. Click here for supported apps matrix.

This web page describes supported compute, storage and network hardware for Virtualization of Cisco Unified Communications, including UC on UCS (Cisco Unified Communications on Cisco Unified Computing System). Click here for a checklist to design, quote and procure a virtualized UC solution that follows Cisco's support policy.

Cisco uses three different support models:

  • UC on UCS Tested Reference Configuration (TRC)
  • UC on UCS Specs-based
  • Third-party Server Specs-based
"TRC" used by itself means "UC on UCS Tested Reference Configuration (TRC)". "UC on UCS" used by itself refers to both UC on UCS TRC and UC on UCS Specs-based.
"Specs-based" used by itself refers to the common rules of UC on UCS Specs-based and Third-party Server Specs-based. 

Below is a comparison of the hardware support options. Note that the following are identical regardless of the support model chosen:

  • Virtual machine (OVA) definitions
  • VMware product, version and feature support
  • VMware configuration requirements for UC
  • Application/VM Co-residency policy (specifically regarding application mix, 3rd-party support, no reservations / oversubscription, virtual/physical sizing rules and max VM count per server).



Basic Approach
  • UC on UCS TRC: Configuration-based
  • UC on UCS Specs-based: Rules-based
  • Third-Party Server Specs-based: Rules-based
  • Other hardware: Not supported - does not satisfy this page's policy.

Allowed for which UC apps?
  • UC on UCS TRC: Click here for supported apps matrix
  • UC on UCS Specs-based: Click here for supported apps matrix
  • Third-Party Server Specs-based: Click here for supported apps matrix
  • Other hardware: Not supported

UC-required Virtualization Software
  • UC on UCS TRC:
    • Click here for general requirements.
    • VMware vCenter is optional.
    • One of the following is mandatory:
      • Cisco UC Virtualization Foundation
      • VMware vSphere
    • Click here for supported versions, editions, features, capacities and purchase options.
  • UC on UCS Specs-based:
    • Click here for general requirements.
    • VMware vCenter is mandatory. Also mandatory to capture Statistics Level 4 for maximum duration at each level.
    • One of the following is mandatory:
      • Cisco UC Virtualization Foundation
      • VMware vSphere
    • Click here for supported versions, editions, features, capacities and purchase options.
  • Third-Party Server Specs-based:
    • Click here for general requirements.
    • VMware vCenter is mandatory. Also mandatory to capture Statistics Level 4 for maximum duration at each level.
    • VMware vSphere is mandatory. Click here for supported versions, features, capacities and purchase options.
  • Other hardware: N/A - not supported.

Allowed Servers
  • UC on UCS TRC: Select Cisco UCS listed in Table 1. Must follow all TRC rules in this policy.
  • UC on UCS Specs-based: Any Cisco UCS that satisfies this page's policy
  • Third-Party Server Specs-based: Any 3rd-party server model that satisfies this page's policy
  • Other hardware: None

Required Level of Virtualization/Server Experience
  • UC on UCS TRC: Low/medium
  • UC on UCS Specs-based: High
  • Third-Party Server Specs-based: High
  • Other hardware: N/A

Cisco-tested hardware?
  • UC on UCS TRC: Yes, by UC and DC
  • UC on UCS Specs-based: Yes, but DC only
  • Third-Party Server Specs-based: No
  • Other hardware: No

Server Model, CPU and Component Choices
  • UC on UCS TRC: Less (customer accepts tradeoff of less hardware flexibility for more UC predictability).
  • UC on UCS Specs-based: More (customer assumes more test/design ownership to get more hardware flexibility)
  • Third-Party Server Specs-based: More (customer assumes more test/design ownership to get more hardware flexibility)
  • Other hardware: None (unsupported hardware)

Does Cisco TAC support UC apps?
  • UC on UCS TRC: Yes, when all TRC rules in this policy are followed.
    • UC apps on C-Series DAS-only TRC: Supported with Guaranteed performance
    • UC apps on C-Series FC SAN TRC or B-Series FC SAN TRC: Supported with Guaranteed performance provided all shared storage requirements in this policy are met.
  • UC on UCS Specs-based: Yes, when all Specs-based rules in this policy are followed. Supported with performance Guidance only.
  • Third-Party Server Specs-based: Yes, when all Specs-based rules in this policy are followed. Supported with performance Guidance only.
  • Other hardware: UC apps not supported when deployed on unsupported hardware.

Does Cisco TAC support the server?
  • UC on UCS TRC: Yes. If used with UC apps, then all TRC rules in this policy must be followed.
  • UC on UCS Specs-based: Yes. If used with UC apps, then all UC on UCS Specs-based rules in this policy must be followed.
  • Third-Party Server Specs-based: No. Cisco TAC supports products purchased from Cisco with a valid, paid-up maintenance contract.
  • Other hardware: No. Cisco TAC supports products purchased from Cisco with a valid, paid-up maintenance contract. Note that UC apps are also not supported when deployed on unsupported hardware.

Who designs/determines the server's BOM?
  • UC on UCS TRC: Customer wants Cisco to own
  • UC on UCS Specs-based: Customer wants to own, with assistance from Cisco
  • Third-Party Server Specs-based: Customer wants to own
  • Other hardware: N/A


For more details on Cisco UCS servers in general, see the following:


UC on UCS Tested Reference Configurations

Note:

What does a TRC definition include?

  • Definition of server model and local components (CPU, RAM, adapters, local storage) at the orderable part number level.
  • Required RAID configuration (e.g. RAID5, RAID10, etc.) - including battery backup cache or SuperCap - when the TRC uses DAS storage
  • Guidance on hardware installation and basic setup (e.g. click here).
  • Design, installation and configuration of external hardware is not included in TRC definition, such as:
    • Network routing and switching (e.g. routers, gateways, MCUs, ethernet/FC/FCoE switches, Cisco Catalyst/Nexus/MDS, etc.)
    • QoS configuration of route/switch network devices
    • Cisco UCS B-Series chassis and switching components (e.g. Cisco UCS 6100/6200, Cisco UCS 2100/2200, Cisco UCS 5100)
    • Storage arrays (such as those from EMC, NetApp or other vendors)
  • Configuration settings, patch recommendations or step by step procedures for VMware software are not included in TRC definition.
  • Infrastructure solutions such as Vblock from Virtual Computing Environment may also be leveraged for configuration details not included in the TRC definition.


Click here for basic guidance on TRC hardware setup.

Table 1 - UC on UCS TRCs

Table columns: Tested Reference Configuration (TRC) | Part Numbers / SKUs / BOM | Form Factor, CPU Model and Specs | Capacity Available to VMs (using required UC sizing rules)

Shipping Configurations
UCS B440 M2
TRC#1
Click here for BOM

Full-width Blade Server
Quad E7-4870 (10-core / 2.4 GHz)
256 GB RAM
Diskless - VMware + UC apps boot from FC SAN
Cisco VIC (UCS M81KR)

"Extra-extra-large" blade server
40 total physical cores
After ESXi, 254 GB physical RAM
Storage capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00, customer's SAN + storage array.
2x 10GbE ports for LAN+storage access. LAN capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00 and customer's network.

UCS B230 M2
TRC#1
Click here for BOM

Half-width Blade Server
Dual E7-2870 (10-core / 2.4 GHz)
128 GB RAM
Diskless - VMware + UC apps boot from FC SAN
Cisco VIC (UCS M81KR)

"Extra-large" blade server
20 total physical cores
After ESXi, 126 GB physical RAM
Storage capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00, customer's SAN + storage array.
2x 10GbE ports for LAN+storage access. LAN capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00 and customer's network.

UCS B200 M3
TRC#1
Click here for BOM

Half-width Blade Server
Dual E5-2680 (8-core / 2.7 GHz)
96 GB RAM
Diskless - VMware + UC apps boot from FC SAN
Cisco VIC 1240

"Large" blade server
16 total physical cores
After ESXi, 94 GB physical RAM
Storage capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00, customer's SAN + storage array.
4x 10GbE ports for LAN+storage access. LAN capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00 and customer's network.

UCS C260 M2
TRC#1
Click here for BOM

2RU Rack-mount Server
Dual E7-2870 (10-core / 2.4 GHz)
128 GB RAM
VMware + UC apps boot from DAS (2 logical volumes, each 8x 300 GB 10K disks, RAID5)
Ethernet ports on motherboard + 3rd-party NIC

"Extra-large" server
20 total physical cores
After ESXi, 126 GB physical RAM
After RAID/VMFS overhead, 2x 1.93 TB (not counting VM overhead)
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

UCS C240 M3S (SFF)
TRC#1
Click here for BOM

2RU Rack-mount Server
Dual E5-2680 (8-core, 2.7 GHz)
96 GB RAM
VMware + UC apps boot from DAS (2 logical volumes, each 8x 300GB 15K SFF disks, RAID5)
Ethernet ports on motherboard + 3rd-party NICs

"Large" server
16 total physical cores
After ESXi, 94 GB physical RAM
After RAID/VMFS overhead, 2x 1.93 TB (not counting VM overhead)
12x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

UCS C220 M3S (SFF)
TRC#1
Click here for BOM

1RU Rack-mount Server
Dual E5-2643 (4-core, 3.3 GHz)
64 GB RAM
VMware + UC apps boot from DAS (8x 300GB 15K SFF disks, RAID5)
Ethernet ports on motherboard + 3rd-party NIC

"Medium" server
8 total physical cores
After ESXi, 62 GB physical RAM
After RAID/VMFS overhead, 1.93 TB (not counting VM overhead)
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

UCS C220 M3S (SFF)
TRC#2
Click here for BOM

1RU Rack-mount Server
Dual E5-2609 (4-core, 2.4 GHz)
32 GB RAM
VMware + UC apps boot from DAS (4x 500GB 7.2K SFF disks, RAID10)
Ethernet ports on motherboard

"Small" server
Restricted VM OVA template choices and co-residency.
8 total physical cores
After ESXi, 30 GB physical RAM
After RAID/VMFS overhead, 929.46 GB (not counting VM overhead)
2x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

Older (End of Sale) Configurations
UCS B200 M2
TRC#1
Click here for BOM

Half-width Blade Server
Dual E5640 (4-core / 2.66 GHz)
48 GB RAM
VMware boot from DAS (2 disks RAID1)
UC apps boot from FC SAN
Cisco VIC (UCS M81KR)

8 total physical cores
After ESXi, 46 GB physical RAM
Storage capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00, customer's SAN + storage array.
2x 10GbE ports for LAN+storage access. LAN capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00 and customer's network.

UCS B200 M2
TRC#2
Click here for BOM

Half-width Blade Server
Dual E5640 (4-core / 2.66 GHz)
48 GB RAM
Diskless - VMware + UC apps boot from FC SAN
Cisco VIC (UCS M81KR)

8 total physical cores
After ESXi, 46 GB physical RAM
Storage capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00, customer's SAN + storage array.
2x 10GbE ports for LAN+storage access. LAN capacity/IOPS dependent on Cisco VIC, UCS 2x00/6x00 and customer's network.

UCS B200 M1
TRC#1
Click here for BOM

Half-width Blade Server
Dual E5540 (4-core / 2.53 GHz)
36 GB RAM
VMware boot from DAS (2 disks RAID1)
UC apps boot from FC SAN
3rd-party CNA (UCS M71KR-Q)

8 total physical cores
After ESXi, 34 GB physical RAM
Storage capacity/IOPS dependent on CNA, UCS 2x00/6x00, customer's SAN + storage array.
2x 10GbE ports for LAN+storage access. LAN capacity/IOPS dependent on CNA, UCS 2x00/6x00 and customer's network.

UCS B200 M1
TRC#2
Click here for BOM

Half-width Blade Server
Dual E5540 (4-core / 2.53 GHz)
36 GB RAM
Diskless - VMware + UC apps boot from FC SAN
3rd-party CNA (UCS M71KR-Q)

8 total physical cores
After ESXi, 34 GB physical RAM
Storage capacity/IOPS dependent on CNA, UCS 2x00/6x00, customer's SAN + storage array.
2x 10GbE ports for LAN+storage access. LAN capacity/IOPS dependent on CNA, UCS 2x00/6x00 and customer's network.

UCS C210 M2
TRC#1
Click here for BOM

2RU Rack-mount Server
Dual E5640 (4-core, 2.66 GHz)
48 GB RAM
VMware boots from DAS (2x 146/300 GB 15K, RAID1)
UC apps boot from DAS (8x 146/300 GB 15K, RAID5)
Ethernet ports on motherboard + 3rd-party NIC

8 total physical cores
After ESXi, 46 GB physical RAM
After RAID/VMFS overhead, 947 GB (not counting VM overhead)
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

UCS C210 M2
TRC#2
Click here for BOM

2RU Rack-mount Server
Dual E5640 (4-core, 2.66 GHz)
48 GB RAM
VMware boots from DAS (2x 146/300 GB 15K, RAID1)
UC apps boot from FC SAN
Ethernet ports on motherboard + 3rd-party NIC
FC ports on 3rd-party HBA

8 total physical cores
After ESXi, 46 GB physical RAM
2x 4Gb FC ports for SAN access. Storage capacity/IOPS dependent on HBA, customer's SAN + storage array.
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

UCS C210 M2
TRC#3
Click here for BOM

2RU Rack-mount Server
Dual E5640 (4-core, 2.66 GHz)
48 GB RAM
Diskless - VMware + UC apps boot from FC SAN
Ethernet ports on motherboard + 3rd-party NIC
FC ports on 3rd-party HBA

8 total physical cores
After ESXi, 46 GB physical RAM
2x 4Gb FC ports for SAN access. Storage capacity/IOPS dependent on HBA, customer's SAN + storage array.
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

UCS C210 M1
TRC#1
Click here for BOM

2RU Rack-mount Server
Dual E5540 (4-core, 2.53 GHz)
12 GB RAM
VMware boots from DAS (2x 146 GB 15K, RAID1)
UC apps boot from DAS (4x 146 GB 15K, RAID5)
Ethernet ports on motherboard + 3rd-party NIC

Application co-residency NOT supported on this TRC. Single VM only.
8 total physical cores
After ESXi, 10 GB physical RAM
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

UCS C210 M1
TRC#2
Click here for BOM

2RU Rack-mount Server
Dual E5540 (4-core, 2.53 GHz)
36 GB RAM
VMware boots from DAS (2x 146 GB 15K, RAID1)
UC apps boot from DAS (8x 146 GB 15K, RAID5)
Ethernet ports on motherboard + 3rd-party NIC

8 total physical cores
After ESXi, 34 GB physical RAM
After RAID/VMFS overhead, 947 GB (not counting VM overhead)
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

UCS C210 M1
TRC#3
Click here for BOM

2RU Rack-mount Server
Dual E5540 (4-core, 2.53 GHz)
36 GB RAM
VMware boots from DAS (2x 146 GB 15K, RAID1)
UC apps boot from FC SAN
Ethernet ports on motherboard + 3rd-party NIC FC ports on 3rd-party HBA

8 total physical cores
After ESXi, 34 GB physical RAM
2x 4Gb FC ports for SAN access. Storage capacity/IOPS dependent on HBA, customer's SAN + storage array.
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

UCS C210 M1
TRC#4
Click here for BOM

2RU Rack-mount Server
Dual E5540 (4-core, 2.53 GHz)
36 GB RAM
Diskless - VMware + UC apps boot from FC SAN
Ethernet ports on motherboard + 3rd-party NIC FC ports on 3rd-party HBA

8 total physical cores
After ESXi, 34 GB physical RAM
2x 4Gb FC ports for SAN access. Storage capacity/IOPS dependent on HBA, customer's SAN + storage array.
6x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.

UCS C200 M2
TRC#1
Click here for BOM

1RU Rack-mount Server
Dual E5506 (4-core, 2.13 GHz)
24 GB RAM
VMware + UC apps boot from DAS (4x 1TB 7.2K disks, RAID10)
Ethernet ports on motherboard + 3rd-party NIC
FC ports on 3rd-party HBA

Restricted VM OVA template choices and co-residency.
8 total physical cores
After ESXi, 22 GB physical RAM
After RAID/VMFS overhead, 1.8 TB (not counting VM overhead)
2x 1GbE ports for LAN access (not counting CIMC). LAN capacity/IOPS dependent on customer's network.


VMware Requirements

VMware virtualization software is required for Cisco TAC support.

  • See the Introduction for basic virtualization software requirements, including what is optional and what is mandatory.
  • For Cisco UCS, no UC applications run or install directly on the server hardware; all applications run only as virtual machines. Cisco UC does not support a physical, bare-metal, or nonvirtualized installation on Cisco UCS server hardware.

All UC virtualization deployments must align with the VMware Hardware Compatibility List (HCL).

All UC virtualization deployments must follow UC rules for supported VMware products, versions, editions and features as described here.

Note: For UC on UCS Specs-based and Third-party Server Specs-based, use of VMware vCenter is mandatory, and Statistics Level 4 logging is mandatory. Click here for how to configure VMware vCenter to capture these logs. If not configured by default, Cisco TAC may request enabling these settings in order to troubleshoot problems.



"Can I use this server?"

UC virtualization hardware support is most dependent on the Intel CPU model and the VMware Hardware Compatibility List (HCL).


The server model only matters in the context of:

  • whether or not it is on the VMware HCL
  • which Intel CPU models it carries (and whether those CPU models are allowed for UC virtualization)
  • whether its hardware component options can satisfy all other requirements of this policy
  • For additional considerations, see TAC TechNote 115955.

Note:
  • UC does not support every CPU model
  • A given server model may not carry every (or any) CPU model that UC supports.
  • Therefore your server model choices may be artificially limited by which CPUs the server models carry.


Allowed Servers (Vendors, Models / Generations, Form Factors)
  • UC on UCS TRC: only Cisco Unified Computing System B-Series Blade Servers and C-Series Rack-mount Servers listed in Table 1 are supported.
  • UC on UCS Specs-based: any Cisco Unified Computing System server is supported as long as:
  • Third-Party Server Specs-based: any 3rd-party server model is supported as long as:
  • Not supported:
    • Cisco or 3rd-party server models that do not satisfy the rules of this policy.
    • Cisco 7800 Series Media Convergence Servers (MCS 7800) regardless of CPU model
    • Cisco UCS Express (SRE-V 9xx on ISR router hardware)
    • Cisco UCS E-Series Blade Servers (E14x/16x on ISR router hardware)
    • For additional considerations, please see TAC TechNote 115955.

Server or Component "Embedded Software"

  • BIOS
  • Firmware
  • Drivers

There are no UC-specific requirements.

UC apps will specify the required version of VMware vSphere ESXi. Customers should follow server vendor guidelines for what to use with this VMware version.

For Cisco UCS:

  • UCS Software or UCS Manager Software in UCS 6x00 hardware: use the latest recommended version for the VMware vSphere ESXi version
  • Other B-Series / C-Series BIOS, firmware, drivers: use the latest recommended version for the VMware vSphere ESXi version
  • If "Intel Virtualization Technology" BIOS option is available, UC recommends enabling.
  • If "Hyper-threading" BIOS option is available (and the CPU supports hyper-threading), UC recommends enabling.
    • Note that the resultant "Logical Cores" do not factor into UC sizing rules for co-residency. UC still requires mapping one physical core to one vcpu core (not to one "Logical Core").


Mechanical and Environmental
Note Note: Energy-saving features that cause reduction in CPU performance or real-time relocation/powering-down of virtual machines (such as CPU throttling or VMware Dynamic Power Management) are not supported.

Otherwise, there are no UC-specific requirements for form factor, rack mounting hardware, cable management hardware, power supplies, fans or cooling systems. Follow server vendor guidelines for these components.

If you use a Cisco UCS bundle SKU, note that the rail kit, cable management and power supply options may not match what is available with non-bundled Cisco UCS.

Redundant power supplies are highly recommended, particularly for UC on UCS.

For Cisco UCS, it is strongly recommended to use the Cisco default rail kit, unless you have different rack types such as telco racks or racks proprietary to another server vendor. Cisco does not sell any other types of rack-mounting hardware; you must purchase such hardware from a third party.




Processors / CPUs

UC applications require explicit qualification of CPU architectures, due to real-time technical considerations and customer requirements for predictable design rules. Therefore:

  • not every CPU architecture will be supported
  • within a supported CPU architecture, not every CPU model will be supported
  • UC support of new CPU architectures/models may lag the release date from Intel and/or server vendors.


Note: Until UC qualification occurs, new CPU models are not supported, even if they are believed to be "better" than currently supported models.


Note that processor support varies by UC application - see the Supported Applications matrix.

Physical CPU Quantity
  • UC on UCS TRC: must exactly match what is listed in Table 1.
  • Specs-based (UCS or 3rd-party Server): customer choice (subject to what the server model allows).

The following CPUs are NOT supported for UC:

  • Intel CPUs that are IN one of the supported architectures/families, but do NOT meet minimum physical core speeds, are not supported for UC.
  • Unlisted Intel CPU architectures/families (such as Intel Xeon 6500 or Intel Xeon E5-2400) are not supported for UC. An Intel CPU architecture is not supported for UC unless qualified by UC and listed above.
  • Other CPU vendors such as AMD are not supported for UC.

Cisco TAC is not obligated to troubleshoot UC app issues when deployed on unsupported hardware.

Physical CPU Vendor and Model
  • UC on UCS TRC: must exactly match what is listed in Table 1.
  • Specs-based (UCS or 3rd-party Server): one of the following "Full UC Performance" models:


For purposes of sizing rules and co-residency, virtualized UC apps see equivalent performance from a physical CPU core on any of the above architectures.

Total physical CPU cores
  • UC on UCS TRC: total available is fixed based on the CPU models in Table 1.
  • Specs-based (UCS or 3rd-party Server): total available depends on the physical server's socket count and the CPU model selected.


Total required is based on:

Per these policies, recall that physical CPU cores may not be over-subscribed for UC VMs:

  • I.e. one physical CPU core must equal one VM vCPU core.
  • Hyper-threading on the CPU should be enabled when available, but the resulting Logical Cores do not change UC app rules. UC rules are based on 1:1 mapping of physical cores to virtual cores, not Logical Cores to virtual cores.
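The 1:1 core-mapping rule above can be sketched as a quick check. This is a minimal illustration, not a Cisco tool; the function name and the example vCPU counts are assumptions for demonstration only:

```python
# Sketch of the UC core-sizing rule: one physical CPU core per VM vCPU.
# Hyper-threading "logical cores" are deliberately ignored -- only
# physical cores count toward UC co-residency sizing.

def cores_ok(physical_cores: int, vm_vcpus: list[int]) -> bool:
    """Return True if the host has enough physical cores for the VMs."""
    return sum(vm_vcpus) <= physical_cores

# Example: an 8-core "Medium" TRC hosting VMs with 4 + 2 + 2 vCPUs fits;
# adding one more 1-vCPU VM would oversubscribe and is not supported.
print(cores_ok(8, [4, 2, 2]))     # True
print(cores_ok(8, [4, 2, 2, 1]))  # False
```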


Cisco TAC is not obligated to troubleshoot UC app issues in deployments with insufficient physical processor cores or speed.



Memory / RAM

Note: Virtualization software licenses such as Cisco UC Virtualization Foundation or VMware vSphere limit the amount of total vRAM that can be used (and therefore the amount of physical RAM that can be used for UC VMs, due to UC sizing rules). See Unified Communications VMware Requirements for these limits. In general larger deployments, or deployments with high VM counts, will require very high vRAM totals and will therefore need to use VMware vSphere instead of Cisco UC Virtualization Foundation. If using high-memory-capacity servers, use VMware vSphere instead to ensure use of all physical memory.
Physical RAM
  • UC on UCS TRC: total available is listed in Table 1. Additional memory may be added.
  • Specs-based (UCS or 3rd-party Server): total available depends on the server chosen.

Total required is dependent on the virtual machine quantity/size mix deployed on the hardware:

  • 2GB required for virtualization software (VMware vSphere or Cisco UC Virtualization Foundation)
  • plus the sum of UC virtual machines' vRAM.
  • while following co-residency support policy rules. Per these rules, recall that UC does not support physical memory oversubscription (1 GB of vRAM must equal 1 GB of physical RAM).

Cisco TAC is not obligated to troubleshoot UC app issues if the deployment has insufficient physical RAM.
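The RAM totaling above can be sketched as follows. A minimal illustration only; the helper name and the example vRAM sizes are assumptions, and real deployments must also respect the co-residency policy rules:

```python
# Sketch of the UC physical RAM rule: 2 GB for the virtualization
# software (VMware vSphere or Cisco UC Virtualization Foundation) plus
# the sum of all UC VMs' vRAM, with no memory oversubscription
# (1 GB of vRAM must be backed by 1 GB of physical RAM).

HYPERVISOR_GB = 2

def ram_required_gb(vm_vram_gb: list[int]) -> int:
    """Total physical RAM needed for the given VM vRAM sizes."""
    return HYPERVISOR_GB + sum(vm_vram_gb)

# Example: three VMs with 6 + 4 + 4 GB vRAM need 16 GB physical RAM,
# so they fit on a 48 GB TRC but not on a host with only 14 GB free.
print(ram_required_gb([6, 4, 4]))  # 16
```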
Memory Module/DIMM Speed and Population

For what was tested in a TRC, see Table 1.

Follow server vendor guidelines for optimum memory population for the memory capacity required by UC.

  • For Cisco UCS, use the Specs Sheets at UCS Quick Catalog. E.g. for a UCS B200 M3 with 96GB total RAM, optimal is 8x8GB DIMM + 8x4GB DIMM. Using 6x16GB DIMM is not optimal.

Otherwise, there are no UC-specific requirements (primarily because UC does not support memory oversubscription).

  • UC allows any DIMM speed (e.g. 1333 MHz, 1600 MHz, etc.).
  • UC allows any memory hardware module size, density and quantity as long as UC-required RAM capacity is met, and the server vendor supports it.

Storage

To be supported for UC, all storage systems - whether TRC or specs-based - must meet the following requirements:

  • Compatible with the VMware HCL and compatible with the supported server model used
  • kernel disk command latency < 4 ms (no spikes above) and physical device command latency < 20 ms (no spikes above). For NFS NAS, guest latency < 24 ms (no spikes above)
  • Published vDisk capacity requirements of UC VMs. Disk space must be available to the VM as needed. If thin provisioned, running out of disk space will crash the application and corrupt the virtual disk (which may also prevent restore from a backup on the virtual disk).
  • Published IOPS performance requirements of UC VMs (including excess capacity provisioned to handle IOPS spikes, such as during Cisco Unified Communications Manager upgrades).
  • Other storage system design requirements (click here).
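The latency limits in the list above can be expressed as a simple pass/fail check. This is a sketch only; the function name and inputs are illustrative, and collecting the actual latency counters (e.g. from esxtop or vCenter) is left to the operator:

```python
# Sketch of the UC storage latency thresholds from this policy:
#   kernel disk command latency  < 4 ms  (no spikes above)
#   physical device command latency < 20 ms (no spikes above)
#   NFS NAS guest latency < 24 ms (no spikes above)

def storage_latency_ok(kernel_ms, device_ms, nfs_guest_ms=None):
    """True if observed latencies stay under the UC policy limits."""
    if kernel_ms >= 4 or device_ms >= 20:
        return False
    if nfs_guest_ms is not None and nfs_guest_ms >= 24:
        return False
    return True

print(storage_latency_ok(2.5, 12))                   # True
print(storage_latency_ok(5.0, 12))                   # False (kernel spike)
print(storage_latency_ok(2.5, 12, nfs_guest_ms=30))  # False (NFS guest latency)
```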

Note: UC on UCS TRCs using only DAS storage (such as C220 M3S TRC#1) have been pre-designed and tested to meet the above requirements for any UC-with-UC co-residency scenario that will fit on the TRC. Detailed capacity planning is not required unless deploying:
  • non-UC/3rd-party apps
  • VM OVA templates created later than the TRC
  • VM OVA templates with very large vDisks (300GB+).

Note: All of the above requirements must be met for Cisco UC to function properly. Except for UC on UCS TRCs using DAS only, it is the customer's responsibility to design a storage system that meets the above requirements. Cisco TAC is not obligated to troubleshoot UC app issues when customer-provided storage is insufficient, overloaded or otherwise not meeting the above requirements.


See below for supported storage hardware options.


Supported Storage Options

UC on UCS TRC — TRCs are only defined for:
  • DAS-only with UC-specified configuration (C260 M2, C240 M3S, C220 M3S, C210 M1/M2, C200 M2)
  • FC SAN with VMware local boot from DAS (B200 M1/M2, C210 M1/M2)
  • Diskless / boot from FC SAN (B440 M2, B230 M2, B200 M3, C210 M2)

Specs-based (UCS or 3rd-party Server):
  • DAS with customer-defined configuration (including local disks, external SAS, etc.)
  • FC, iSCSI, FCoE or InfiniBand SAN
  • Diskless / boot from SAN via the above transport options (only supported with VMware vSphere ESXi 4.1+ and compatible UC app versions)
  • NFS NAS


DAS Support Details


UC on UCS TRC

Disk Size and Speed
  • B-Series TRC may use the disk size/speed listed in Table 1 BOMs, or any other orderable size/speed for the blade server (since local disks are only used to boot VMware).
  • C-Series TRC: both size and speed must be the same or higher than the specs listed in Table 1. E.g. for a TRC tested with 300 GB 10K rpm disks:
    • 300GB 15K rpm is supported (faster)
    • 146GB 10K rpm is not supported (too small)
    • 7.2K rpm disks of any size are not supported (too slow)
  • TRC BOMs are updated as orderable disk drive options change. E.g. UCS C210 M2 TRC#1 was tested with 146GB 15K rpm disks, but due to 146GB disk EOL, the BOM now specifies 300GB 15K rpm disks (still supported as TRC since both size and speed are "same or higher" than what was tested).

Disk Quantity, Technology, Form Factor
  • Must exactly match what is listed in Table 1. E.g. if the TRC was tested with ten 2.5" SAS drives, then that configuration must be used regardless of disk size or speed.

RAID Configuration
  • The RAID configuration, including physical-to-logical volume mapping, must exactly match Table 1 and the RAID instructions in the document Installing CUCM on Virtual Servers here.

Specs-based (UCS or 3rd-party Server)

DAS is supported with customer-determined disk size, speed, quantity, technology, form factor and RAID configuration as long as:
  • it is compatible with the VMware HCL and with the server model used
  • all UC latency, performance and capacity requirements are met. To ensure optimum UC app performance, use battery-backed cache or SuperCap on RAID controllers for DAS.
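The C-Series TRC disk substitution rule described above ("same or higher" for both size and speed) can be sketched as a simple predicate. The function name and numeric units are illustrative assumptions; only the rule and the worked example come from this policy.

```python
# Sketch of the C-Series TRC disk substitution rule: a replacement disk is
# supported only if both capacity and rotational speed are the same or
# higher than the Table 1 reference disk.

def disk_substitution_ok(ref_gb, ref_rpm, new_gb, new_rpm):
    """Both size and speed must be same-or-higher than the tested TRC spec."""
    return new_gb >= ref_gb and new_rpm >= ref_rpm

# Reference: TRC tested with 300 GB 10K rpm disks (example from this policy).
assert disk_substitution_ok(300, 10000, 300, 15000)       # faster: supported
assert not disk_substitution_ok(300, 10000, 146, 10000)   # too small
assert not disk_substitution_ok(300, 10000, 1000, 7200)   # too slow
```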


SAN / NAS Support Details

  • Applies to any TRC or Specs-based configuration connecting to FC, iSCSI, FCoE or NFS storage.
  • No UC requirement to dedicate arrays or storage groups to UC (vs. non-UC), or to one UC app vs. other UC apps.
  • The storage solution must be compatible with the server model used. E.g. for Cisco Unified Computing System: Cisco UCS Interoperability
  • The storage solution must be compatible with the VMware HCL. For example, refer to the “SAN/Storage” tab at http://www.vmware.com/resources/compatibility/search.php
  • No UC requirements on disk size, speed, technology (SAS, SATA, FC disk), form factor or RAID configuration as long as requirements for compatibility, latency, performance and capacity are met. "Tier 1 Storage" is generally recommended for UC deployments. See the UC Virtualization Storage System Design Requirements for an illustration of a best practices storage array configuration for UC.
  • There is no UC-specific requirement for NFS version. Use what VMware and the server vendor recommend for the vSphere ESXi version required by UC.
  • Use of storage network and array "features" (such as thin provisioning or EMC PowerPath) is allowed.
  • Otherwise any shared storage configuration is allowed as long as UC requirements for VMware HCL, server compatibility, latency, capacity and performance are met.




Removable Media
Booting from USB devices or SD cards is not supported with UC apps at this time.

Otherwise, there are no UC-specific requirements or restrictions. The different methods of installing UC apps into VMs can leverage the following distribution types of Cisco UC software:

  • Physical delivery of UC apps via ISO image file on DVD.
  • Cisco eDelivery of UC apps via email with link to ISO image file download.


IO Adapters, Controllers and Devices for LAN Access and Storage Access

All adapters used (NIC, HBA, CNA, VIC, etc.) must be on the VMware Hardware Compatibility List for the version of vSphere ESXi required by UC.

Physical Adapter Hardware (NIC, HBA, VIC, CNA)

UC on UCS TRC:
  • UCS B-Series TRC may use either the adapters listed in Table 1 BOMs or substitute any other supported adapter for the blade server model. Which adapter should be used depends on the deployment, design and UC apps.
  • For UCS C-Series TRC:
    • Adapters must exactly match the vendor/model/technology (e.g. Intel i350 for 1GbE or QLogic QLE2462 for FC) listed in Table 1 BOMs.
    • NIC quantity must be the same or higher than what is listed in Table 1 BOMs.
    • HBA/VIC/CNA quantity must exactly match Table 1 BOMs.
    • Any other changes are not allowed for a UC on UCS TRC, but are allowed for UC on UCS Specs-based.

Specs-based (UCS or 3rd-party Server):
  • Only the following I/O Devices are supported:
    • HBA for storage access
      • Fibre Channel – 2Gbps or faster
      • InfiniBand
    • NIC for LAN and/or shared storage access
      • Ethernet – 1Gbps or faster. Includes NFS and iSCSI for storage access.
    • Cisco VIC or 3rd-party Converged Network Adapter for LAN and/or storage access
      • FCoE - 10Gbps or faster
    • RAID Controllers for DAS storage access
      • SAS
      • SAS SATA Combo
      • SAS-RAID
      • SAS/SATA-RAID
      • SATA
  • The customer is also responsible for configuring redundant devices on the server (e.g. redundant NIC, HBA, VIC or CNA adapters).
  • There are no UC restrictions on I/O device hardware vendors, other than that the devices must be on the VMware HCL and supported by the server vendor/model.


IO Capacity and Performance

In most cases, detailed capacity planning is not required for LAN IO or storage access IO. TRC adapter choices have been made to accommodate the IO of all UC on UCS app co-residency scenarios that will fit on the TRC. For guidance on active vs. standby network ports, see the Cisco UC Design Guide and QoS Design Considerations for Virtual UC with UCS.
It is the customer's responsibility to ensure the external LAN and storage access meet UC app design requirements.

  • LAN access adapters must be able to accommodate the LAN usage of UC VMs (described in UC app design guides).
  • Storage access adapters must be able to accommodate the storage IOPS (described in the Storage section of this policy).
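As a rough aid for the storage-IOPS sizing responsibility described above, the headroom check can be sketched as follows. The per-VM IOPS figures and the spike multiplier are assumptions to be taken from the published UC app requirements, not values defined by this policy.

```python
# Rough storage-IOPS headroom check for the UC VMs on one host.
# vm_iops: steady-state IOPS per VM (from published UC app requirements).
# spike_factor: extra headroom for events such as CUCM upgrades (assumed value).

def iops_required(vm_iops, spike_factor=2.0):
    """Total IOPS the storage path must sustain, including spike headroom."""
    return sum(vm_iops) * spike_factor

def storage_path_sufficient(vm_iops, path_iops_capacity, spike_factor=2.0):
    """True if the adapters/array can sustain the spiked aggregate load."""
    return iops_required(vm_iops, spike_factor) <= path_iops_capacity
```

For example, two VMs at 100 and 150 steady-state IOPS with a 2x spike factor require a storage path that sustains 500 IOPS.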

Cisco TAC is not obligated to troubleshoot UC app issues in a deployment with insufficient or overloaded I/O devices.


UC on UCS TRC Bills of Material (BOMs)

Note: Do not assume that every UCS bundle part number on the UCS Quick Catalog can be used with UC on UCS. Before quoting one of these bundles, identify the BOM that it ships with and see below:
  • If the bundle meets TRC requirements, it may be quoted for UC on UCS TRC.
  • If the bundle does NOT meet TRC requirements but DOES meet Specs-based requirements, then it may be quoted for UC on UCS Specs-based only.
  • If the bundle does NOT meet TRC requirements and also does NOT meet Specs-based requirements, then it may NOT be quoted for UC on UCS at all without modification.
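The three quoting outcomes above reduce to a simple decision. This sketch assumes the TRC and Specs-based rule evaluations are done elsewhere and only encodes the branching; the function and return strings are illustrative, not Cisco tooling.

```python
# Encodes the bundle-quoting decision above. The two boolean inputs stand in
# for the full TRC and Specs-based rule evaluations described in this policy.

def quoting_option(meets_trc_rules, meets_specs_rules):
    if meets_trc_rules:
        return "UC on UCS TRC"
    if meets_specs_rules:
        return "UC on UCS Specs-based only"
    return "not quotable for UC on UCS without modification"
```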

B440 M2 TRC#1

This BOM was also quotable via fixed-configuration bundle UCS-B440M2-VCDL1.

Quantity Cisco Part Number Description
1 B440-BASE-M2UPG UCS B440 M2 Blade Server w/o CPU, memory, HDD, mezzanine
4 UCS-CPU-E74870 2.4 GHz E7-4870 130W 10C CPU/30M Cache
16 UCS-MR-2X082RX-C 2X8GB DDR3-1333-MHz RDIMM/PC3-10600/dual rank/x2/1.35v
1 N20-AC0002 UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb
32 UCS-MKIT-082RX-C Auto-included: Mem kit for UCS-MR-2X082RX-C
4 N20-BBLKD Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers
4 N20-BHTS3 Auto-included: CPU heat sink for UCS B440 Blade Server
1 N20-LBLKU Auto-included: Blanking panel for B440 M1 battery backup bay


B230 M2 TRC#1

This BOM was also quotable via fixed-configuration bundle UCS-B230M2-VCDL1 (has extra RAM vs. minimum below).

Quantity Cisco Part Number Description
1 B230-BASE-M2UPG UCS B230 M2 Blade Server w/o CPU, memory, SSD, mezzanine
2 UCS-CPU-E72870 2.4 GHz E7-2870 130W 10C/30M Cache
8 UCS-MR-2X082RX-B 2X8GB DDR3-1333-MHz RDIMM/PC3-10600/dual rank/x4/1.35v
1 N20-AC0002 UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb
16 UCS-MKIT-082RX-B Auto-included: Mem kit for UCS-MR-2X082RX-B
2 N20-BBLKD-7MM Auto-included: UCS 7MM SSD Blank Filler
2 N20-BHTS6 Auto-included: CPU heat sink for UCS B230 Blade Server



B200 M3 TRC#1

This configuration is also quotable as either UCUCS-EZ-B200M3 (single blade) or UCSB-EZ-UC-B200M3 (multiple blades with chassis and switching).

Quantity Cisco Part Number Description
1 UCSB-B200-M3-U UCS B200 M3 Blade Server w/o CPU, mem, HDD, mLOM/mezz (UPG)
2 UCS-CPU-E5-2680 2.70 GHz E5-2680 130W 8C/20MB Cache/DDR3 1600MHz
8 UCS-MR-1X082RY-A 8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v
8 UCS-MR-1X041RY-A 4GB DDR3-1600-MHz RDIMM/PC3-12800/single rank/1.35v
Diskless
1 UCSB-MLOM-40G-01 VIC 1240 modular LOM for M3 blade servers
2 N20-BBLKD Auto-included: UCS 2.5 inch HDD blanking panel
2 UCSB-HS-01-EP Auto-included: Heat Sink for UCS B200 M3 server







C260 M2 TRC#1

This configuration was also quotable as UCS-C260M2-VCD2.

Quantity Cisco Part Number Description
1 C260-BASE-2646 UCS C260 M2 Rack Server (w/o CPU, MRB, PSU)
2 UCS-CPU-E72870 2.4 GHz E7-2870 130W 10C/30M Cache
16 C260-MRBD-002 2 DIMM Memory Riser Board For C260
16 UCS-MR-2X041RX-C 2X4GB DDR3-1333-MHz RDIMM/PC3-10600/single rank/x1/1.35v
16 A03-D300GA2 300GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted
2 UCSC-DBKP-08E 8 Drive Backplane W/Expander For C-Series
1 R2XX-PL003 LSI 6G MegaRAID 9261-8i card (RAID 0,1,5,6,10,60) - 512WC
1 UCSC-BBU-11-C260 RAID battery backup for LSI Electr controller for C260
1 One of:
  • N2XX-AIPCI02
  • UCSC-PCIE-IRJ45

  • Intel Quad port GbE Controller (E1G44ETG1P20)
  • Intel i350 Quad Port 1Gb Adapter
2 UCSC-PSU2-1200 1200W 2u Power Supply For UCS
1 UCSC-RAIL-2U 2U Rail Kit for UCS C-Series servers
DVD drive not provided or supported on this model.
1 UCS-SD-16G 16GB SD Card module for UCS Servers
1 UCSX-MLOM-001 Modular LOM For UCS
32 UCS-MKIT-041RX-C Auto-Included: Mem kit for UCS-MR-2X041RX-C
2 UCSC-HS-01-C260 Auto-Included: CPU heat sink for UCS C260 M2 Rack Server
2 UCSC-PCIF-01F Auto-Included: Full height PCIe filler for C-Series
2 UCSC-PCIF-01H Auto-Included: Half height PCIe filler for UCS
2 UCSC-RC-P8M-C260 Auto-Included: .79m SAS RAID Cable for C260



C240 M3S (SFF) TRC#1

Note: The C240 M3L (LFF) is only supported under UC on UCS Specs-based.

This configuration is also available via bundle UCUCS-EZ-C240M3S.

Quantity Cisco Part Number Description
1 UCSC-C240-M3S UCS C240 M3 SFF w/o CPU, mem, HD, PCIe, w/ rail kit
2 UCS-CPU-E5-2680 2.70 GHz E5-2680 130W 8C/20MB Cache/DDR3 1600MHz
8 UCS-MR-1X082RY-A 8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v
8 UCS-MR-1X041RY-A 4GB DDR3-1600-MHz RDIMM/PC3-12800/single rank/1.35v
16 UCS-HDD300GI2F105 300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
1 UCSC-SD-16G-C240 16GB SD Card Module for C240 Servers
1 One of:
  • UCS-RAID-9266
  • UCS-RAID-9266CV

  • MegaRAID 9266-8i + battery backup for C240 and C220
  • MegaRAID 9266CV-8i w/TFM + Super Cap
DVD drive not offered with C240 M3.
2 UCSC-PCIE-IRJ45 Intel i350 Quad Port 1Gb Adapter
2 UCSC-PSU2-1200 1200W 2u Power Supply For UCS
2 UCSC-HS-C240M3 Auto-included: Heat Sink for UCS C240 M3 Rack Server
1 UCSC-RAIL-2U Auto-included: 2U Rail Kit for UCS C-Series servers
8 N20-BBLKD Auto-included: UCS 2.5 inch HDD blanking panel
2 UCSC-PCIF-01F Auto-included:Full height PCIe filler for C-Series



C220 M3S (SFF) TRC#1

Note: This TRC is NOT supported for use with Cisco Business Edition 6000.
Note: The C220 M3L (LFF) is only supported under UC on UCS Specs-based.

This configuration is also available as bundle UCUCS-EZ-C220M3S.


Quantity Cisco Part Number Description
1 UCSC-C220-M3S UCS C220 M3 SFF w/o CPU, mem, HDD, PCIe, w/ rail kit
2 UCS-CPU-E5-2643 3.30 GHz E5-2643/130W 4C/10MB Cache/DDR3 1600MHz
8 UCS-MR-1X082RY-A 8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v
8 UCS-HDD300GI2F105 300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
1 One of:
  • UCS-RAID-9266
  • UCS-RAID-9266CV

  • MegaRAID 9266-8i + battery backup for C240 and C220
  • MegaRAID 9266CV-8i w/TFM + Super Cap
1 UCSC-SD-16G-C220 16GB SD Card Module for C220 Servers
DVD drive not offered with C220 M3.
1 UCSC-PCIE-IRJ45 Intel i350 Quad Port 1Gb Adapter
2 UCSC-PSU-650W 650W power supply for C-series rack servers
2 UCSC-HS-C220M3 Auto-included: Heat Sink for UCS C220 M3 Rack Server
1 UCSC-RAIL1 Auto-included: 2U Rail Kit for C220 servers



C220 M3S (SFF) TRC#2

Note:
  • This TRC is supported for use with:
    • Cisco Business Edition 6000 (where it is quoted as an auto-included option in BE6K bundle, UCSC-C220-M3SBE)
    • UC on UCS TRC (where this configuration is available as bundle UCSC-C220-M3SBE=).
  • In either deployment scenario, there are special rules for allowed apps, allowed VM OVA templates and allowed co-residency.

Note: The C220 M3L (LFF) is only supported under UC on UCS Specs-based.

Quantity Cisco Part Number Description
1 UCSC-C220-M3S UCS C220 M3 SFF w/o CPU, mem, HDD, PCIe, w/ rail kit
2 UCS-CPU-E5-2609 2.4 GHz E5-2609/80W 4C/10MB Cache/DDR3 1066MHz
4 UCS-MR-1X082RY-A 8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v
4 A03-D500GC3 500GB 6Gb SATA 7.2K RPM SFF hot plug/drive sled mounted
1 UCSC-SD-16G-C220 16GB SD Card Module for C220 Servers
1 One of:
  • UCS-RAID-9266
  • UCS-RAID-9266CV

(Bundles ship with -9266)


  • MegaRAID 9266-8i + battery backup for C240 and C220
  • MegaRAID 9266CV-8i w/TFM + Super Cap

(Bundles ship with -9266-8i)

DVD drive not offered with C220 M3.
1 R2XX-RAID10 Enable RAID 10 Setting
1 UCSC-PSU-650W 650W power supply for C-series rack servers
4 N20-BBLKD Auto-included: UCS 2.5 inch HDD blanking panel
2 UCSC-HS-C220M3 Auto-included: Heat Sink for UCS C220 M3 Rack Server
1 UCSC-PSU-BLKP Auto-included: Power supply blanking panel/filler (same as San Mateo)
1 UCSC-RAIL1 Auto-included: 2U Rail Kit for C220 servers
1 UCSC-PCIF-01F Auto-included: Full height PCIe filler for C-Series





End of Sale UC on UCS TRC Bills of Material (BOMs)

B200 M2 TRC#1

This BOM was also quotable via fixed-configuration bundle UCS-B200M2-VCS1. Memory and hard drive changes are due to industry technology transitions, not UC app requirements.

Quantity Cisco Part Number Description
1 N20-B6625-1 UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine
2 A01-X0109 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz
12

Either:

  • N01-M304GB1
  • A02-M304GB2-L
  • UCS-MR-1X041RX-A

  • 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
  • 4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
  • 4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
2 Either:
  • A03-D146GC2
  • UCS-HDD300GI2F105

  • 146GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
  • 300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
1 N20-AC0002 UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb
2 N20-BHTS1 Auto-Included: CPU Heat Sink for UCS B200 M1 Blade Server



B200 M2 TRC#2

Memory and hard drive changes are due to industry technology transitions, not UC app requirements.

Quantity Cisco Part Number Description
1 N20-B6625-1 UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine
2 A01-X0109 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz
12

Either:

  • N01-M304GB1
  • A02-M304GB2-L
  • UCS-MR-1X041RX-A

  • 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
  • 4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
  • 4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
Diskless
1 N20-AC0002 UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb
2 N20-BHTS1 Auto-Included: CPU Heat Sink for UCS B200 M1 Blade Server





B200 M1 TRC#1

This configuration was also quotable as UCS-B200M2-VCS1.

Quantity Cisco Part Number Description
1 N20-B6620-1 UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine
2 N20-X00002 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
8 N01-M304GB1 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
2 A03-D146GA2 146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted
1 N20-AQ0002 UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb
2 N20-BHTS1 Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server



B200 M1 TRC#2

Quantity Cisco Part Number Description
1 N20-B6620-1 UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine
2 N20-X00002 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
8 N01-M304GB1 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
Diskless
1 N20-AQ0002 UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb
2 N20-BHTS1 Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server


C210 M2 TRC#1

This configuration was also quotable as UCS-C210M2-VCD2. Memory and hard drive changes are due to industry technology transitions, not UC app requirements.

Quantity Cisco Part Number Description
1 R210-2121605W UCS C210 M2 Srvr w/1PSU, w/o CPU, mem, HDD, DVD or PCIe card
2 A01-X0109 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz

12

Either:

  • N01-M304GB1
  • A02-M304GB2-L
  • UCS-MR-1X041RX-A

  • 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
  • 4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
  • 4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
10

Either:

  • A03-D146GC2
  • UCS-HDD300GI2F105

  • 146GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
  • 300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
1 R2XX-PL003 LSI 6G MegaRAID 9261-8i card (RAID 0,1,5,6,10,60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 R210-SASXPAND SAS Pass-Thru Expander (srvr requiring > 8 HDDs) - C210 M1
1 Either:
  • N2XX-ABPCI03
  • N2XX-ABPCI03-M3

  • Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
  • Broadcom 5709 Quad Port 10/100/1Gb NIC w/TOE iSCSI for M3 Se
1 Either:
  • R2X0-PSU2-650W-SB
  • R2X0-PSU2-650W

  • 650W power supply, w/added 5A Standby for UCS C200 or C210
  • 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 Either:
  • R200-1032RAIL
  • R2XX-G31032RAIL

  • Rail Kit for the UCS 200, 210, C250 Rack Servers
  • Rail Kit for UCS C200, C210 Rack Servers (23.5 to 36")
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
6 N20-BBLKD Auto-Included: HDD slot blanking panel for UCS B-Series Blade Servers
3 R200-PCIBLKF1 Auto-Included: PCIe Full Height blanking panel for UCS C-Series Rack Server
2 R210-BHTS1 Auto-Included: CPU heat sink for UCS C210 M1 Rack Server
1 R2X0-PSU2-650W Auto-Included: 650W power supply unit for UCS C200 M1 or C210 M1 Server
1 SASCBLSHORT-003 Auto-Included: 2 Short SAS Cables for UCS C210 Server (for SAS Expander)



C210 M2 TRC#2

Memory and hard drive changes are due to industry technology transitions, not UC app requirements.

Quantity Cisco Part Number Description
1 R210-2121605W UCS C210 M2 Srvr w/1PSU, w/o CPU, mem, HDD, DVD or PCIe card
2 A01-X0109 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz

12

Either:

  • N01-M304GB1
  • A02-M304GB2-L
  • UCS-MR-1X041RX-A

  • 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
  • 4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
  • 4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
2

Either:

  • A03-D146GC2
  • UCS-HDD300GI2F105

  • 146GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
  • 300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted
1 R2XX-PL003 LSI 6G MegaRAID 9261-8i card (RAID 0,1,5,6,10,60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 Either:
  • N2XX-ABPCI03
  • N2XX-ABPCI03-M3

  • Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
  • Broadcom 5709 Quad Port 10/100/1Gb NIC w/TOE iSCSI for M3 Se
1 Either:
  • R2X0-PSU2-650W-SB
  • R2X0-PSU2-650W

  • 650W power supply, w/added 5A Standby for UCS C200 or C210
  • 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 Either:
  • R200-1032RAIL
  • R2XX-G31032RAIL

  • Rail Kit for the UCS 200, 210, C250 Rack Servers
  • Rail Kit for UCS C200, C210 Rack Servers (23.5 to 36")
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
1 N2XX-AQPCI03 Qlogic QLE2462, 4Gb dual port Fibre Channel Host Bus Adapter
14 N20-BBLKD Auto-Included: HDD slot blanking panel for UCS B-Series Blade Servers
2 R200-PCIBLKF1 Auto-Included: PCIe Full Height blanking panel for UCS C-Series Rack Server
2 R210-BHTS1 Auto-Included: CPU heat sink for UCS C210 M1 Rack Server
1 R210-SASCBL-002 Auto-Included: Long SAS Cable for C210 (connects to SAS Extender)
1 R210-SASXTDR Auto-Included: SAS Extender (servers requiring </= 8 HDDs) for UCS C210 M1
1 R2X0-PSU2-650W Auto-Included: 650W power supply unit for UCS C200 M1 or C210 M1 Server


C210 M2 TRC#3

Memory and hard drive changes are due to industry technology transitions, not UC app requirements.

Quantity Cisco Part Number Description
1 R210-2121605W UCS C210 M2 Srvr w/1PSU, w/o CPU, mem, HDD, DVD or PCIe card
2 A01-X0109 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz

12

Either:

  • N01-M304GB1
  • A02-M304GB2-L
  • UCS-MR-1X041RX-A

  • 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
  • 4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
  • 4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
Diskless
1 R2XX-PL003 LSI 6G MegaRAID 9261-8i card (RAID 0,1,5,6,10,60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 Either:
  • N2XX-ABPCI03
  • N2XX-ABPCI03-M3

  • Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
  • Broadcom 5709 Quad Port 10/100/1Gb NIC w/TOE iSCSI for M3 Se
1 Either:
  • R2X0-PSU2-650W-SB
  • R2X0-PSU2-650W

  • 650W power supply, w/added 5A Standby for UCS C200 or C210
  • 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 Either:
  • R200-1032RAIL
  • R2XX-G31032RAIL

  • Rail Kit for the UCS 200, 210, C250 Rack Servers
  • Rail Kit for UCS C200, C210 Rack Servers (23.5 to 36")
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
1 N2XX-AQPCI03 Qlogic QLE2462, 4Gb dual port Fibre Channel Host Bus Adapter
14 N20-BBLKD Auto-Included: HDD slot blanking panel for UCS B-Series Blade Servers
2 R200-PCIBLKF1 Auto-Included: PCIe Full Height blanking panel for UCS C-Series Rack Server
2 R210-BHTS1 Auto-Included: CPU heat sink for UCS C210 M1 Rack Server
1 R210-SASCBL-002 Auto-Included: Long SAS Cable for C210 (connects to SAS Extender)
1 R210-SASXTDR Auto-Included: SAS Extender (servers requiring </= 8 HDDs) for UCS C210 M1
1 R2X0-PSU2-650W Auto-Included: 650W power supply unit for UCS C200 M1 or C210 M1 Server


C210 M1 TRC#1

Note: Application co-residency is not supported on this configuration - single VM only.

This BOM was also quotable as UCS-C210M1-VCD1.

Quantity Cisco Part Number Description
1 R210-2121605 UCS C210 M1 Rack Server w/1 PSU (w/o CPU, memory, HDD, DVD, PCIe cards)
2 N20-X00002 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
6 N01-M302GB1 2GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
6 A03-D146GA2 146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted
1 R2XX-PL003 LSI 6G MegaRAID PCIe Card (RAID 0, 1, 5, 6, 10, 60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 R2X0-PSU2-650W 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 R250-SLDRAIL Rail Kit for the C210 M1 Rack Server
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
10 N20-BBLKD Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers
4 R200-PCIBLKF1 Auto-included: PCIe Full Height blanking panel for UCS C-Series Rack Server
2 R210-BHTS1 Auto-included: CPU heat sink for UCS C210 M1 Rack Server
2 R210-SASCBL-002 Auto-included: Long SAS Cable for C210 (connects to SAS Extender)
1 R210-SASXTDR Auto-included: SAS Extender (servers requiring </= 8 HDDs) for UCS C210 M1


C210 M1 TRC#2

This BOM was also quotable as UCS-C210M1-VCD2.

Quantity Cisco Part Number Description
1 R210-2121605 UCS C210 M1 Rack Server w/1 PSU (w/o CPU, memory, HDD, DVD, PCIe cards)
2 N20-X00002 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
6 N01-M302GB1 2GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
6 N01-M304GB1 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
10 A03-D146GA2 146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted
1 R2XX-PL003 LSI 6G MegaRAID PCIe Card (RAID 0, 1, 5, 6, 10, 60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 N2XX-ABPCI03 Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
1 R2X0-PSU2-650W 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 R250-SLDRAIL Rail Kit for the C210 M1 Rack Server
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
1 R210-SASXPAND SAS Pass-Thru Expander (srvr requiring > 8 HDDs) - C210 M1
1 N20-BBLKD Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers
1 R200-PCIBLKF1 Auto-included: PCIe Full Height blanking panel for UCS C-Series Rack Server
1 R210-BHTS1 Auto-included: CPU heat sink for UCS C210 M1 Rack Server
1 SASCBLSHORT-003 Auto-included: 2 Short SAS Cables for UCS C210 Server (for SAS Expander)



C210 M1 TRC#3

Quantity Cisco Part Number Description
1 R210-2121605 UCS C210 M1 Rack Server w/1 PSU (w/o CPU, memory, HDD, DVD, PCIe cards)
2 N20-X00002 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
6 N01-M302GB1 2GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
6 N01-M304GB1 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
2 A03-D146GA2 146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted
1 R2XX-PL003 LSI 6G MegaRAID PCIe Card (RAID 0, 1, 5, 6, 10, 60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 N2XX-ABPCI03 Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
1 R2X0-PSU2-650W 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 R250-SLDRAIL Rail Kit for the C210 M1 Rack Server
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
1 N2XX-AQPCI03 QLogic QLE2462, 4Gb dual port Fibre Channel Host Bus Adapter
14 N20-BBLKD Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers
2 R200-PCIBLKF1 Auto-included: PCIe Full Height blanking panel for UCS C-Series Rack Server
2 R210-BHTS1 Auto-included: CPU heat sink for UCS C210 M1 Rack Server
2 R210-SASCBL-002 Auto-included: Long SAS Cable for C210 (connects to SAS Extender)
1 SASCBLSHORT-003 Auto-included: 2 Short SAS Cables for UCS C210 Server (for SAS Expander)



C210 M1 TRC#4

Quantity Cisco Part Number Description
1 R210-2121605 UCS C210 M1 Rack Server w/1 PSU (w/o CPU, memory, HDD, DVD, PCIe cards)
2 N20-X00002 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz
6 N01-M302GB1 2GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
6 N01-M304GB1 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
1 R2XX-PL003 LSI 6G MegaRAID PCIe Card (RAID 0, 1, 5, 6, 10, 60) - 512WC
1 R2XX-LBBU2 Battery Back-up for 6G based LSI MegaRAID Card
1 N2XX-ABPCI03 Broadcom BCM5709 Quad Gig E card (10/100/1GbE)
1 R2X0-PSU2-650W 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 R250-SLDRAIL Rail Kit for the C210 M1 Rack Server
1 R210-ODVDRW DVD-RW Drive for UCS C210 M1 Rack Servers
1 N2XX-AQPCI03 QLogic QLE2462, 4Gb dual port Fibre Channel Host Bus Adapter
14 N20-BBLKD Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers
2 R200-PCIBLKF1 Auto-included: PCIe Full Height blanking panel for UCS C-Series Rack Server
2 R210-BHTS1 Auto-included: CPU heat sink for UCS C210 M1 Rack Server
2 R210-SASCBL-002 Auto-included: Long SAS Cable for C210 (connects to SAS Extender)
1 SASCBLSHORT-003 Auto-included: 2 Short SAS Cables for UCS C210 Server (for SAS Expander)



C200 M2 TRC#1

Note: This TRC has special rules for allowed VM OVA templates and allowed co-residency.

This configuration was also quotable as UCS-C200M2-VCD2.

When quoted as part of Cisco Business Edition 6000, it was also quotable as either UCS-C200M2-VCD2BE, UCS-C200M2-BE6K or UCS-C200M2-WL8 (in CMBE6K-UCL or CMBE6K-UWL).

Memory and hard drive changes are due to industry technology transitions, not UC app requirements.

Quantity Cisco Part Number Description
1 R200-1120402W UCS C200 M2 Srvr w/1PSU, DVD w/o CPU, mem, HDD or PCIe card
2 A01-X0113 2.13GHz Xeon E5506 80W CPU/4MB cache/DDR3 800MHz
6 Either:
  • N01-M304GB1
  • A02-M304GB2-L
  • UCS-MR-1X041RX-A

  • 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs
  • 4GB DDR3-1333MHz RDIMM/PC3-10600/single rank/Low-Dual Volt
  • 4GB DDR3-1333-MHz RDIMM/PC3-10600/1R/1.35v
4 R200-D1TC03 Gen 2 1TB SAS 7.2K RPM
1 R200-PL004 LSI 6G MegaRAID 9260-4i card (C200 only)
1 Either:
  • R2XX-LBBU
  • UCSC-LBBU02

  • Battery Back-up
  • Battery back unit for C200 LFF and SFF M2
1 Either:
  • R250-SLDRAIL
  • R200-1032RAIL
  • R2XX-G31032RAIL

  • Rail Kit for the UCS 200, 210, C250 Rack Servers
  • Rail Kit for the UCS 200, 210, C250 Rack Servers
  • Rail Kit for UCS C200, C210 Rack Servers (23.5 to 36")
2 R200-BHTS1 Included: CPU heat sink for UCS C200 M1 Rack Server
1 R200-PCIBLKF1 Included: PCIe Full Height blanking panel for UCS C-Series Rack Server
1 R200-SASCBL-001 Included: Internal SAS Cable for a base UCS C200 M1 Server
1 Either:
  • R2X0-PSU2-650W-SB
  • R2X0-PSU2-650W

  • 650W power supply, w/added 5A Standby for UCS C200 or C210
  • 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server
1 R2XX-PSUBLKP Included: Power supply unit blanking pnl for UCS 200 M1 or 210 M1







Back to: Unified Communications in a Virtualized Environment
