UC Virtualization Supported Hardware
Introduction
Note: Not all UC apps support all hardware options. Click here for supported apps matrix.
This web page describes supported compute, storage and network hardware for Virtualization of Cisco Unified Communications, including UC on UCS (Cisco Unified Communications on Cisco Unified Computing System). Click here for a checklist to design, quote and procure a virtualized UC solution that follows Cisco's support policy.
Cisco uses three different support models:
- UC on UCS Tested Reference Configuration (TRC)
- UC on UCS Specs-based
- Third-party Server Specs-based
"TRC" used by itself means "UC on UCS Tested Reference Configuration (TRC)".
"UC on UCS" used by itself refers to both UC on UCS TRC and UC on UCS Specs-based.
"Specs-based" used by itself refers to the common rules of UC on UCS Specs-based and Third-party Server Specs-based.
Below is a comparison of the hardware support options. Note that the following are identical regardless of the support model chosen:
- Virtual machine (OVA) definitions
- VMware product, version and feature support
- VMware configuration requirements for UC
- Application/VM Co-residency policy (specifically regarding application mix, 3rd-party support, no reservations / oversubscription, virtual/physical sizing rules and max VM count per server).
| | UC on UCS TRC | UC on UCS Specs-based | Third-Party Server Specs-based | Other hardware |
---|---|---|---|---|
Basic Approach | Configuration-based | Rules-based | Rules-based | Not supported - does not satisfy this page's policy. |
Allowed for which UC apps? | Click here for supported apps matrix | Click here for supported apps matrix | Click here for supported apps matrix | Not supported |
UC-required Virtualization Software | | | | N/A - not supported. |
Allowed Servers | Select Cisco UCS listed in Table 1. Must follow all TRC rules in this policy. | Any Cisco UCS that satisfies this page's policy | Any 3rd-party server model that satisfies this page's policy | None |
Required Level of Virtualization/Server Experience | Low/medium | High | High | N/A |
Cisco-tested hardware? | Yes by UC and DC | Yes, but DC only | No | No |
Server Model, CPU and Component Choices | Less (customer accepts tradeoff of less hardware flexibility for more UC predictability). | More (customer assumes more test/design ownership to get more hardware flexibility) | More (customer assumes more test/design ownership to get more hardware flexibility) | None (unsupported hardware) |
Does Cisco TAC support UC apps? | Yes, when all TRC rules in this policy are followed. UC apps on C-Series DAS-only TRC: Supported with Guaranteed performance. | Yes, when all Specs-based rules in this policy are followed. Supported with performance Guidance only. | Yes, when all Specs-based rules in this policy are followed. Supported with performance Guidance only. | UC apps not supported when deployed on unsupported hardware. |
Does Cisco TAC support the server? | Yes. If used with UC apps, then all TRC rules in this policy must be followed. | Yes. If used with UC apps, then all UC on UCS Specs-based rules in this policy must be followed. | No. Cisco TAC supports products purchased from Cisco with a valid, paid-up maintenance contract. | No. Cisco TAC supports products purchased from Cisco with a valid, paid-up maintenance contract. Also note UC apps also not supported when deployed on unsupported hardware. |
Who designs/determines the server's BOM? | Customer wants Cisco to own | Customer wants to own, with assistance from Cisco | Customer wants to own | N/A |
For more details on Cisco UCS servers in general, see the following:
- Cisco UCS B-Series Servers Documentation Roadmap
- Cisco UCS C-Series Servers Documentation Roadmap
- Cisco UCS C-Series Integrated Management Controller Documentation
- Cisco UCS Manager Documentation
- Cisco UCS home page
- Cisco UC on UCS solution home page
UC on UCS Tested Reference Configurations
Note: What does a TRC definition include?
- Definition of server model and local components (CPU, RAM, adapters, local storage) at the orderable part number level.
- Required RAID configuration (e.g. RAID5, RAID10, etc.) - including battery backup cache or SuperCap - when the TRC uses DAS storage.
- Guidance on hardware installation and basic setup (e.g. click here). Click here for detailed Cisco UCS server documentation regarding hardware configuration procedures.
Click here for basic guidance on TRC hardware setup.
Table 1 - UC on UCS TRCs
Tested Reference Configuration (TRC) | Part Numbers / SKUs / BOM | Form Factor, CPU Model and Specs | Capacity Available to VMs (using required UC sizing rules) |
---|---|---|---|
Shipping Configurations | |||
UCS B440 M2 TRC#1 | Click here for BOM |
Full-width Blade Server |
"Extra-extra-large" blade server |
UCS B230 M2 TRC#1 | Click here for BOM |
Half-width Blade Server |
"Extra-large" blade server |
UCS B200 M3 TRC#1 | Click here for BOM |
Half-width Blade Server |
"Large" blade server |
UCS C260 M2 TRC#1 | Click here for BOM |
2RU Rack-mount Server |
"Extra-large" server |
UCS C240 M3S (SFF) TRC#1 | Click here for BOM |
2RU Rack-mount Server |
"Large" server |
UCS C220 M3S (SFF) TRC#1 | Click here for BOM |
1RU Rack-mount Server |
"Medium" server |
UCS C220 M3S (SFF) TRC#2 | Click here for BOM |
1RU Rack-mount Server |
"Small" server |
Older (End of Sale) Configurations | |||
UCS B200 M2 TRC#1 | Click here for BOM |
Half-width Blade Server |
8 total physical cores |
UCS B200 M2 TRC#2 | Click here for BOM |
Half-width Blade Server |
8 total physical cores |
UCS B200 M1 TRC#1 | Click here for BOM |
Half-width Blade Server |
8 total physical cores |
UCS B200 M1 TRC#2 | Click here for BOM |
Half-width Blade Server |
8 total physical cores |
UCS C210 M2 TRC#1 | Click here for BOM |
2RU Rack-mount Server |
8 total physical cores |
UCS C210 M2 TRC#2 | Click here for BOM |
2RU Rack-mount Server |
8 total physical cores |
UCS C210 M2 TRC#3 | Click here for BOM |
2RU Rack-mount Server |
8 total physical cores |
UCS C210 M1 TRC#1 | Click here for BOM |
2RU Rack-mount Server |
Application co-residency NOT supported on this TRC. Single VM only.
8 total physical cores |
UCS C210 M1 TRC#2 | Click here for BOM |
2RU Rack-mount Server |
8 total physical cores |
UCS C210 M1 TRC#3 | Click here for BOM |
2RU Rack-mount Server |
8 total physical cores |
UCS C210 M1 TRC#4 | Click here for BOM |
2RU Rack-mount Server |
8 total physical cores |
UCS C200 M2 TRC#1 | Click here for BOM |
1RU Rack-mount Server |
Restricted VM OVA template choices and co-residency.
8 total physical cores
|
VMware Requirements
VMware virtualization software is required for Cisco TAC support.
- See the Introduction for basic virtualization software requirements, including what is optional and what is mandatory.
- For Cisco UCS, no UC applications run or install directly on the server hardware; all applications run only as virtual machines. Cisco UC does not support a physical, bare-metal, or nonvirtualized installation on Cisco UCS server hardware.
All UC virtualization deployments must align with the VMware Hardware Compatibility List (HCL).
All UC virtualization deployments must follow UC rules for supported VMware products, versions, editions and features as described here.
Note: For UC on UCS Specs-based and Third-party Server Specs-based, use of VMware vCenter is mandatory, and Statistics Level 4 logging is mandatory. Click here for how to configure VMware vCenter to capture these logs. If not configured by default, Cisco TAC may request enabling these settings in order to troubleshoot problems.
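As a quick sanity check of the statistics setting (a minimal sketch, not the configuration procedure from the linked guide; it assumes pyVmomi is installed and uses placeholder vCenter hostname and credentials), the historical collection intervals can be read directly:

```python
# Minimal pyVmomi sketch: report the statistics level of each vCenter historical
# collection interval. Host name and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
perf = si.RetrieveContent().perfManager
for interval in perf.historicalInterval:   # e.g. "Past day", "Past week", ...
    flag = "" if interval.level == 4 else "   <-- below Statistics Level 4"
    print(f"{interval.name}: level {interval.level}, "
          f"sampling every {interval.samplingPeriod}s{flag}")
Disconnect(si)
```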
"Can I use this server?"
UC virtualization hardware support is most dependent on the Intel CPU model and the VMware Hardware Compatibility List (HCL).
The server model only matters in the context of:
- whether or not it is on the VMware HCL
- which Intel CPU models it carries (and whether those CPU models are allowed for UC virtualization)
- whether its hardware component options can satisfy all other requirements of this policy
- For additional considerations, see TAC TechNote 115955.
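To gather these data points from an environment that is already running, a short pyVmomi inventory sketch such as the following can help (the vCenter name and credentials are placeholders; this is illustrative, not part of the policy):

```python
# Minimal pyVmomi sketch: inventory vendor/model, CPU model, socket/core counts and
# core speed for every host in the vCenter inventory. Credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    hw = host.summary.hardware
    print(f"{host.name}: {hw.vendor} {hw.model}")
    print(f"  CPU: {hw.cpuModel} | {hw.numCpuPkgs} socket(s), "
          f"{hw.numCpuCores} physical cores @ {hw.cpuMhz} MHz")
Disconnect(si)
```

The reported vendor/model string can then be checked against the VMware HCL, and the CPU model and core speed against the processor rules later in this policy.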
| | UC on UCS TRC | UC on UCS Specs-based | Third-Party Server Specs-based | Not supported |
---|---|---|---|---|
Allowed Servers | Only Cisco Unified Computing System B-Series Blade Servers and C-Series Rack-mount Servers listed in Table 1 are supported. | Any Cisco Unified Computing System server is supported as long as: it is on the VMware HCL for the version of VMware vSphere ESXi required by UC; it carries a CPU model supported by UC (described later in this policy); and it satisfies all other requirements of this policy. Otherwise, any Cisco UCS model, generation, form factor (rack, blade) may be used. | Any 3rd-party server model is supported as long as: it is on the VMware HCL for the version of VMware vSphere ESXi required by UC; it carries a CPU model supported by UC (described later in this policy); and it satisfies all other requirements of this policy. Otherwise, any 3rd-party vendor, model, generation, form factor (rack, blade) may be used. | The following are NOT supported: Cisco or 3rd-party server models that do not satisfy the rules of this policy; Cisco 7800 Series Media Convergence Servers (MCS 7800) regardless of CPU model; Cisco UCS Express (SRE-V 9xx on ISR router hardware); Cisco UCS E-Series Blade Servers (E14x/16x on ISR router hardware). For additional considerations, see TAC TechNote 115955. |
Server or Component "Embedded Software" | There are no UC-specific requirements. UC apps will specify the required version of VMware vSphere ESXi. Customers should follow server vendor guidelines for what to use with this VMware version. For Cisco UCS: | | | |
Mechanical and Environmental | Otherwise, there are no UC-specific requirements for form factor, rack mounting hardware, cable management hardware, power supplies, fans or cooling systems. Follow server vendor guidelines for these components. If you use a Cisco UCS bundle SKU, note that the rail kit, cable management and power supply options may not match what is available with non-bundled Cisco UCS. Redundant power supplies are highly recommended, particularly for UC on UCS. For Cisco UCS, it is strongly recommended to use the Cisco default rail kit, unless you have different rack types such as telco racks or racks proprietary to another server vendor. Cisco does not sell any other types of rack-mounting hardware; you must purchase such hardware from a third party. | | | |
Processors / CPUs
UC applications require explicit qualification of CPU architectures, due to real-time technical considerations and customer requirements for predictable design rules. Therefore:
- not every CPU architecture will be supported
- within a supported CPU architecture, not every CPU model will be supported
- UC support of new CPU architectures/models may lag the release date from Intel and/or server vendors.
Note: Until UC qualification occurs, new CPU models are not supported, even if they are believed to be "better" than currently supported models.
Note that processor support varies by UC application - see the Supported Applications matrix
| | UC on UCS TRC | UC on UCS Specs-based | Third-party Server Specs-based | Not supported |
---|---|---|---|---|
Physical CPU Quantity | Must exactly match what is listed in Table 1. | Customer choice (subject to what server model allows). | | The following CPUs are NOT supported for UC: |
Physical CPU Vendor and Model | Must exactly match what is listed in Table 1. | The following "Full UC Performance" models: | | |
Total physical CPU cores | Total available is fixed based on the CPU models in Table 1. | Total available depends on the physical server's socket count and the CPU model selected. | | |
| | Total required is based on: Per these policies, recall that physical CPU cores may not be over-subscribed for UC VMs. | | | |
Memory / RAM
Note: Virtualization software licenses such as Cisco UC Virtualization Foundation or VMware vSphere limit the amount of total vRAM that can be used (and therefore the amount of physical RAM that can be used for UC VMs, due to UC sizing rules). See Unified Communications VMware Requirements for these limits. In general larger deployments, or deployments with high VM counts, will require very high vRAM totals and will therefore need to use VMware vSphere instead of Cisco UC Virtualization Foundation. If using high-memory-capacity servers, use VMware vSphere instead to ensure use of all physical memory.
| | UC on UCS TRC | Specs-based (UCS or 3rd-party Server) |
---|---|---|
Physical RAM | Total available is listed in Table 1. Additional memory may be added. | Total available depends on the server chosen. |
| | Total required is dependent on the virtual machine quantity/size mix deployed on the hardware: | |
Memory Module/DIMM Speed and Population | For what was tested in a TRC, see Table 1. Follow server vendor guidelines for optimum memory population for the memory capacity required by UC. Otherwise, there are no UC-specific requirements (primarily because UC does not support memory oversubscription). | |
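Because UC supports neither CPU core nor memory oversubscription, a simple per-host tally of configured vCPUs and vRAM against physical cores and RAM can catch sizing mistakes early. The sketch below is an assumption-laden illustration: pyVmomi, placeholder vCenter credentials, and an approximate 2 GB ESXi RAM overhead mirroring Table 1's "After ESXi" figures.

```python
# Minimal pyVmomi sketch: compare configured vCPU/vRAM on each host with physical
# cores and RAM. Credentials are placeholders; the 2 GB overhead is an assumption.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ESXI_RAM_OVERHEAD_GB = 2  # assumed hypervisor overhead (e.g. 48 GB installed -> ~46 GB usable)

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    cores = host.summary.hardware.numCpuCores
    usable_ram_gb = host.summary.hardware.memorySize / (1024 ** 3) - ESXI_RAM_OVERHEAD_GB
    vms = [vm for vm in host.vm if vm.config]
    vcpus = sum(vm.config.hardware.numCPU for vm in vms)
    vram_gb = sum(vm.config.hardware.memoryMB for vm in vms) / 1024
    cpu_flag = "OK" if vcpus <= cores else "OVERSUBSCRIBED"
    ram_flag = "OK" if vram_gb <= usable_ram_gb else "OVERSUBSCRIBED"
    print(f"{host.name}: vCPU {vcpus}/{cores} cores ({cpu_flag}), "
          f"vRAM {vram_gb:.0f}/{usable_ram_gb:.0f} GB ({ram_flag})")
Disconnect(si)
```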
Storage
To be supported for UC, all storage systems - whether TRC or specs-based - must meet the following requirements:
- Compatible with the VMware HCL and compatible with the supported server model used
- Kernel disk command latency < 4 ms (no spikes above) and physical device command latency < 20 ms (no spikes above). For NFS NAS, guest latency < 24 ms (no spikes above). (A latency spot-check sketch follows this list.)
- Published vDisk capacity requirements of UC VMs. Disk space must be available to the VM as needed. If thin provisioned, running out of disk space due to thin provisioning will crash the application and corrupt the virtual disk (which may also prevent restore from backup on the virtual disk).
- Published IOPS performance requirements of UC VMs (including excess capacity provisioned to handle IOPS spikes such as during Cisco Unified Communications Manager upgrades).
- Other storage system design requirements (click here).
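To spot-check the latency requirement above on a running system, a minimal pyVmomi sketch like the following can pull recent kernel and device latency samples from each host. It is not Cisco tooling; the vCenter address and credentials are placeholders, and the thresholds simply restate the limits in the list above.

```python
# Minimal pyVmomi sketch: pull ~5 minutes of real-time disk latency samples per host
# and flag values above the UC thresholds (4 ms kernel, 20 ms device latency).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

THRESHOLDS_MS = {"disk.kernelLatency.average": 4, "disk.deviceLatency.average": 20}

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
perf = content.perfManager

# Resolve "group.counter.rollup" names to numeric counter IDs.
ids = {}
for c in perf.perfCounter:
    name = f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}"
    if name in THRESHOLDS_MS:
        ids[c.key] = name

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    metrics = [vim.PerformanceManager.MetricId(counterId=k, instance="*") for k in ids]
    spec = vim.PerformanceManager.QuerySpec(entity=host, metricId=metrics,
                                            intervalId=20, maxSample=15)
    for entity_metric in perf.QueryPerf(querySpec=[spec]):
        for series in entity_metric.value:
            name = ids[series.id.counterId]
            peak = max(series.value) if series.value else 0
            verdict = "OK" if peak <= THRESHOLDS_MS[name] else "EXCEEDS UC LIMIT"
            print(f"{host.name} {name} [{series.id.instance or 'aggregate'}] "
                  f"peak {peak} ms -> {verdict}")
Disconnect(si)
```

The same counters are visible interactively in esxtop as the KAVG/cmd and DAVG/cmd columns.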
See below for supported storage hardware options.
| | UC on UCS TRC | Specs-based (UCS or 3rd-party Server) |
---|---|---|
Supported Storage Options | TRCs are only defined for: | |
DAS Support Details
| | UC on UCS TRC | Specs-based (UCS or 3rd-party Server) |
---|---|---|
Disk Size and Speed | B-Series TRC / C-Series TRC: TRC BOMs are updated as orderable disk drive options change. E.g. UCS C210 M2 TRC#1 was tested with 146GB 15K rpm disks, but due to 146GB disk EOL, the BOM now specifies 300GB 15K rpm disks (still supported as TRC since both size and speed are "same or higher" than what was tested). | DAS is supported with customer-determined disk size, speed, quantity, technology, form factor and RAID configuration as long as: it is compatible with the VMware HCL and compatible with the server model used; and all UC latency, performance and capacity requirements are met. To ensure optimum UC app performance, be sure to use Battery Backup cache or SuperCap on RAID controllers for DAS. |
Disk Quantity, Technology, Form Factor | Must exactly match what is listed in Table 1. E.g. if the TRC was tested with ten 2.5" SAS drives, then that must be used regardless of disk size or speed. | |
RAID Configuration | RAID configuration, including physical-to-logical volume mapping, must exactly match Table 1 and the RAID instructions in the document Installing CUCM on Virtual Servers here. | |
SAN / NAS Support Details
- Applies to any TRC or Specs-based configuration connecting to FC, iSCSI, FCoE or NFS storage.
- No UC requirement to dedicate arrays or storage groups to UC (vs. non-UC), or to one UC app vs. other UC apps.
- The storage solution must be compatible with the server model used. E.g. for Cisco Unified Computing System: Cisco UCS Interoperability
- The storage solution must be compatible with the VMware HCL. For example, refer to the “SAN/Storage” tab at http://www.vmware.com/resources/compatibility/search.php?sourceid=ie7&rls=com.microsoft:en-us:IE-SearchBox&ie=&oe=
- No UC requirements on disk size, speed, technology (SAS, SATA, FC disk), form factor or RAID configuration as long as requirements for compatibility, latency, performance and capacity are met. "Tier 1 Storage" is generally recommended for UC deployments. See the UC Virtualization Storage System Design Requirements for an illustration of a best practices storage array configuration for UC.
- There is no UC-specific requirement for NFS version. Use what VMware and the server vendor recommend for the vSphere ESXi version required by UC.
- Use of storage network and array "features" (such as thin provisioning or EMC Powerpath) is allowed.
- Otherwise any shared storage configuration is allowed as long as UC requirements for VMware HCL, server compatibility, latency, capacity and performance are met.
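For the capacity side of these requirements, a pyVmomi sketch like the following lists each datastore's type, size and free space. It is a starting point rather than a sizing tool, and the vCenter hostname and credentials are placeholders.

```python
# Minimal pyVmomi sketch: list every datastore's type, capacity and free space as a
# starting point for checking UC vDisk capacity against the storage presented to hosts.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    s = ds.summary
    print(f"{s.name}: type={s.type}, "
          f"capacity={s.capacity / 1024**3:.0f} GB, free={s.freeSpace / 1024**3:.0f} GB")
Disconnect(si)
```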
Removable Media
Booting from USB devices or SD cards is not supported with UC apps at this time.
Otherwise, there are no UC-specific requirements or restrictions. The different methods of installing UC apps into VMs can leverage the following distribution types of Cisco UC software:
- Physical delivery of UC apps via ISO image file on DVD.
- Cisco eDelivery of UC apps via email with link to ISO image file download.
IO Adapters, Controllers and Devices for LAN Access and Storage Access
All adapters used (NIC, HBA, CNA, VIC, etc.) must be on the VMware Hardware Compatibility List for the version of vSphere ESXi required by UC.
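One way to collect the adapter inventory to compare against the VMware HCL is a pyVmomi sketch such as the following (placeholder vCenter credentials; illustrative only, not Cisco tooling):

```python
# Minimal pyVmomi sketch: list each host's physical NICs and storage adapters so the
# models and drivers can be cross-checked against the VMware HCL.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    print(host.name)
    for nic in host.config.network.pnic:                  # physical NICs (vmnicX)
        speed = nic.linkSpeed.speedMb if nic.linkSpeed else 0
        print(f"  NIC {nic.device}: driver={nic.driver}, link={speed} Mb/s")
    for hba in host.config.storageDevice.hostBusAdapter:  # HBAs / RAID controllers / CNAs
        print(f"  Adapter {hba.device}: model={hba.model}, driver={hba.driver}")
Disconnect(si)
```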
| | UC on UCS TRC | Specs-based (UCS or 3rd-party Server) |
---|---|---|
Physical Adapter Hardware (NIC, HBA, VIC, CNA) | | |
IO Capacity and Performance | In most cases detailed capacity planning is not required for LAN IO or storage access IO. TRC adapter choices have been made to accommodate the IO of all UC on UCS app co-residency scenarios that will fit on the TRC. For guidance on active vs. standby network ports, see the Cisco UC Design Guide and QoS Design Considerations for Virtual UC with UCS. | Cisco TAC is not obligated to troubleshoot UC app issues in a deployment with insufficient or overloaded I/O devices. |
UC on UCS TRC Bills of Material (BOMs)
Note: Do not assume that every UCS bundle part number on UCS Quick Catalog can be used with UC on UCS. Before quoting one of these bundles, identify the BOM that it ships and see below.
B440 M2 TRC#1
This BOM was also quotable via fixed-configuration bundle UCS-B440M2-VCDL1.
Quantity | Cisco Part Number | Description |
---|---|---|
1 | B440-BASE-M2UPG | UCS B440 M2 Blade Server w/o CPU, memory, HDD, mezzanine |
4 | UCS-CPU-E74870 | 2.4 GHz E7-4870 130W 10C CPU/30M Cache |
16 | UCS-MR-2X082RX-C | 2X8GB DDR3-1333-MHz RDIMM/PC3-10600/dual rank/x2/1.35v |
1 | N20-AC0002 | UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb |
32 | UCS-MKIT-082RX-C | Auto-included: Mem kit for UCS-MR-2X082RX-C |
4 | N20-BBLKD | Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers |
4 | N20-BHTS3 | Auto-included: CPU heat sink for UCS B440 Blade Server |
1 | N20-LBLKU | Auto-included: Blanking panel for B440 M1 battery backup bay |
B230 M2 TRC#1
This BOM was also quotable via fixed-configuration bundle UCS-B230M2-VCDL1 (has extra RAM vs. minimum below).
Quantity | Cisco Part Number | Description |
---|---|---|
1 | B230-BASE-M2UPG | UCS B230 M2 Blade Server w/o CPU, memory, SSD, mezzanine |
2 | UCS-CPU-E72870 | 2.4 GHz E7-2870 130W 10C/30M Cache |
8 | UCS-MR-2X082RX-B | 2X8GB DDR3-1333-MHz RDIMM/PC3-10600/dual rank/x4/1.35v |
1 | N20-AC0002 | UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb |
16 | UCS-MKIT-082RX-B | Auto-included: Mem kit for UCS-MR-2X082RX-B |
2 | N20-BBLKD-7MM | Auto-included: UCS 7MM SSD Blank Filler |
2 | N20-BHTS6 | Auto-included: CPU heat sink for UCS B230 Blade Server |
B200 M3 TRC#1
This configuration is also quotable as either UCUCS-EZ-B200M3 (single blade) or UCSB-EZ-UC-B200M3 (multiple blades with chassis and switching).
Quantity | Cisco Part Number | Description |
---|---|---|
1 | UCSB-B200-M3-U | UCS B200 M3 Blade Server w/o CPU, mem, HDD, mLOM/mezz (UPG) |
2 | UCS-CPU-E5-2680 | 2.70 GHz E5-2680 130W 8C/20MB Cache/DDR3 1600MHz |
8 | UCS-MR-1X082RY-A | 8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v |
8 | UCS-MR-1X041RY-A | 4GB DDR3-1600-MHz RDIMM/PC3-12800/single rank/1.35v |
| | | Diskless |
1 | UCSB-MLOM-40G-01 | VIC 1240 modular LOM for M3 blade servers |
2 | N20-BBLKD | Auto-included: UCS 2.5 inch HDD blanking panel |
2 | UCSB-HS-01-EP | Auto-included: Heat Sink for UCS B200 M3 server |
C260 M2 TRC#1
This configuration was also quotable as UCS-C260M2-VCD2.
Quantity | Cisco Part Number | Description |
---|---|---|
1 | C260-BASE-2646 | UCS C260 M2 Rack Server (w/o CPU, MRB, PSU) |
2 | UCS-CPU-E72870 | 2.4 GHz E7-2870 130W 10C/30M Cache |
16 | C260-MRBD-002 | 2 DIMM Memory Riser Board For C260 |
16 | UCS-MR-2X041RX-C | 2X4GB DDR3-1333-MHz RDIMM/PC3-10600/single rank/x1/1.35v |
16 | A03-D300GA2 | 300GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted |
2 | UCSC-DBKP-08E | 8 Drive Backplane W/Expander For C-Series |
1 | R2XX-PL003 | LSI 6G MegaRAID 9261-8i card (RAID 0,1,5,6,10,60) - 512WC |
1 | UCSC-BBU-11-C260 | RAID battery backup for LSI Electr controller for C260 |
1 | One of: N2XX-AIPCI02 or UCSC-PCIE-IRJ45 | Intel Quad port GbE Controller (E1G44ETG1P20) or Intel i350 Quad Port 1Gb Adapter |
2 | UCSC-PSU2-1200 | 1200W 2u Power Supply For UCS |
1 | UCSC-RAIL-2U | 2U Rail Kit for UCS C-Series servers |
| | | DVD drive not provided nor supported on this model |
1 | UCS-SD-16G | 16GB SD Card module for UCS Servers |
1 | UCSX-MLOM-001 | Modular LOM For UCS |
32 | UCS-MKIT-041RX-C | Auto-Included: Mem kit for UCS-MR-2X041RX-C |
2 | UCSC-HS-01-C260 | Auto-Included: CPU HEAT SINK for UCS C260 M2 RACK SERVER |
2 | UCSC-PCIF-01F | Auto-Included: Full height PCIe filler for C-Series |
2 | UCSC-PCIF-01H | Auto-Included: Half height PCIe filler for UCS |
2 | UCSC-RC-P8M-C260 | Auto-Included: .79m SAS RAID Cable for C260 |
C240 M3S (SFF) TRC#1
Note: The C240 M3L (LFF) is only supported under UC on UCS Specs-based.
This configuration is also available via bundle UCUCS-EZ-C240M3S.
Quantity | Cisco Part Number | Description |
---|---|---|
1 | UCSC-C240-M3S | UCS C240 M3 SFF w/o CPU, mem, HD, PCIe, w/ rail kit |
2 | UCS-CPU-E5-2680 | 2.70 GHz E5-2680 130W 8C/20MB Cache/DDR3 1600MHz |
8 | UCS-MR-1X082RY-A | 8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v |
8 | UCS-MR-1X041RY-A | 4GB DDR3-1600-MHz RDIMM/PC3-12800/single rank/1.35v |
16 | UCS-HDD300GI2F105 | 300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted |
1 | UCSC-SD-16G-C240 | 16GB SD Card Module for C240 Servers |
1 | One of: UCS-RAID-9266 or UCS-RAID-9266CV | MegaRAID 9266-8i + battery backup for C240 and C220, or MegaRAID 9266CV-8i w/TFM + Super Cap |
| | | DVD drive not offered with C240 M3. |
2 | UCSC-PCIE-IRJ45 | Intel i350 Quad Port 1Gb Adapter |
2 | UCSC-PSU2-1200 | 1200W 2u Power Supply For UCS |
2 | UCSC-HS-C240M3 | Auto-included: Heat Sink for UCS C240 M3 Rack Server |
1 | UCSC-RAIL-2U | Auto-included: 2U Rail Kit for UCS C-Series servers |
8 | N20-BBLKD | Auto-included: UCS 2.5 inch HDD blanking panel |
2 | UCSC-PCIF-01F | Auto-included: Full height PCIe filler for C-Series |
C220 M3S (SFF) TRC#1
Note: This TRC is NOT supported for use with Cisco Business Edition 6000.
Note: The C220 M3L (LFF) is only supported under UC on UCS Specs-based.
This configuration is also available as bundle UCUCS-EZ-C220M3S.
Quantity | Cisco Part Number | Description |
1 | UCSC-C220-M3S | UCS C220 M3 SFF w/o CPU, mem, HDD, PCIe, w/ rail kit |
2 | UCS-CPU-E5-2643 | 3.30 GHz E5-2643/130W 4C/10MB Cache/DDR3 1600MHz |
8 | UCS-MR-1X082RY-A | 8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v |
8 | UCS-HDD300GI2F105 | 300GB 6Gb SAS 15K RPM SFF HDD/hot plug/drive sled mounted |
1 | One of: | |
1 | UCSC-SD-16G-C220 | 16GB SD Card Module for C220 Servers |
DVD drive not offered with C220 M3. | ||
1 | UCSC-PCIE-IRJ45 | Intel i350 Quad Port 1Gb Adapter |
2 | UCSC-PSU-650W | 650W power supply for C-series rack servers |
2 | UCSC-HS-C220M3 | Auto-included: Heat Sink for UCS C220 M3 Rack Server |
1 | UCSC-RAIL1 | Auto-included: 2U Rail Kit for C220 servers |
C220 M3S (SFF) TRC#2
Note: The C220 M3L (LFF) is only supported under UC on UCS Specs-based.
Quantity | Cisco Part Number | Description |
1 | UCSC-C220-M3S | UCS C220 M3 SFF w/o CPU, mem, HDD, PCIe, w/ rail kit |
2 | UCS-CPU-E5-2609 | 2.4 GHz E5-2609/80W 4C/10MB Cache/DDR3 1066MHz |
4 | UCS-MR-1X082RY-A | 8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v |
4 | A03-D500GC3 | 500GB 6Gb SATA 7.2K RPM SFF hot plug/drive sled mounted |
1 | UCSC-SD-16G-C220 | 16GB SD Card Module for C220 Servers |
1 | One of:
(Bundles ship with -9266) |
(Bundles ship with -9266-8i) |
DVD drive not offered with C220 M3. | ||
1 | R2XX-RAID10 | Enable RAID 10 Setting |
1 | UCSC-PSU-650W | 650W power supply for C-series rack servers |
4 | N20-BBLKD | Auto-included: UCS 2.5 inch HDD blanking panel |
2 | UCSC-HS-C220M3 | Auto-included: Heat Sink for UCS C220 M3 Rack Server |
1 | UCSC-PSU-BLKP | Auto-included: Power supply blanking panel/filler (same as San Mateo) |
1 | UCSC-RAIL1 | Auto-included: 2U Rail Kit for C220 servers |
1 | UCSC-PCIF-01F | Auto-included: Full height PCIe filler for C-Series |
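A rough sizing note (informal arithmetic, not an official capacity statement): assuming the RAID 10 volume enabled by R2XX-RAID10 spans all four drives, mirroring halves the raw space, so the 4 x 500 GB drives above provide about (4 / 2) x 500 GB = 1 TB of usable array capacity before formatting overhead.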
End of Sale UC on UCS TRC Bills of Material (BOMs)
B200 M2 TRC#1
This BOM was also quotable via the fixed-configuration bundle UCS-B200M2-VCS1. Memory and hard drive changes are due to industry technology transitions, not UC application requirements.
Quantity | Cisco Part Number | Description |
1 | N20-B6625-1 | UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine |
2 | A01-X0109 | 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz |
12 | Either: | |
2 | Either: | |
1 | N20-AC0002 | UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb1 |
2 | N20-BHTS1 | Auto-Included: CPU Heat Sink for UCS B200 M1 Blade Server |
B200 M2 TRC#2
Memory and hard drive changes are due to industry technology transitions, not UC application requirements.
Quantity | Cisco Part Number | Description |
1 | N20-B6625-1 | UCS B200 M2 Blade Server w/o CPU, memory, HDD, mezzanine |
2 | A01-X0109 | 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz |
12 | Either: | |
Diskless | ||
1 | N20-AC0002 | UCS M81KR Virtual Interface Card/PCIe/2-port 10Gb1 |
2 | N20-BHTS1 | Auto-Included: CPU Heat Sink for UCS B200 M1 Blade Server |
B200 M1 TRC#1
This configuration was also quotable as UCS-B200M2-VCS1.
Quantity | Cisco Part Number | Description |
1 | N20-B6620-1 | UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine |
2 | N20-X00002 | 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz |
8 | N01-M304GB1 | 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs |
2 | A03-D146GA2 | 146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted |
1 | N20-AQ0002 | UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb |
2 | N20-BHTS1 | Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server |
B200 M1 TRC#2
Quantity | Cisco Part Number | Description |
1 | N20-B6620-1 | UCS B200 M1 Blade Server w/o CPU, memory, HDD, mezzanine |
2 | N20-X00002 | 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz |
8 | N01-M304GB1 | 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs |
Diskless | | |
1 | N20-AQ0002 | UCS M71KR-Q QLogic Converged Network Adapter/PCIe/2port 10Gb |
2 | N20-BHTS1 | Auto-included: CPU Heat Sink for UCS B200 M1 Blade Server |
C210 M2 TRC#1
This configuration was also quotable as UCS-C210M2-VCD2. Memory and hard drive changes are due to industry technology transitions, not UC application requirements.
Quantity | Cisco Part Number | Description |
1 | R210-2121605W | UCS C210 M2 Srvr w/1PSU, w/o CPU, mem, HDD, DVD or PCIe card |
2 | A01-X0109 | 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz |
12 | Either: | |
10 | Either: | |
1 | R2XX-PL003 | LSI 6G MegaRAID 9261-8i card (RAID 0,1,5,6,10,60) - 512WC |
1 | R2XX-LBBU2 | Battery Back-up for 6G based LSI MegaRAID Card |
1 | R210-SASXPAND | SAS Pass-Thru Expander (srvr requiring > 8 HDDs) - C210 M1 |
1 | Either: | |
1 | Either: | |
1 | Either: | |
1 | R210-ODVDRW | DVD-RW Drive for UCS C210 M1 Rack Servers |
6 | N20-BBLKD | Auto-Included: HDD slot blanking panel for UCS B-Series Blade Servers |
3 | R200-PCIBLKF1 | Auto-Included: PCIe Full Height blanking panel for UCS C-Series Rack Server |
2 | R210-BHTS1 | Auto-Included: CPU heat sink for UCS C210 M1 Rack Server |
1 | R2X0-PSU2-650W | Auto-Included: 650W power supply unit for UCS C200 M1 or C210 M1 Server |
1 | SASCBLSHORT-003 | Auto-Included: 2 Short SAS Cables for UCS C210 Server (for SAS Expander) |
C210 M2 TRC#2
Memory and hard drive changes are due to industry technology transitions, not UC application requirements.
Quantity | Cisco Part Number | Description |
1 | R210-2121605W | UCS C210 M2 Srvr w/1PSU, w/o CPU, mem, HDD, DVD or PCIe card |
2 | A01-X0109 | 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz |
12 | Either: | |
2 | Either: | |
1 | R2XX-PL003 | LSI 6G MegaRAID 9261-8i card (RAID 0,1,5,6,10,60) - 512WC |
1 | R2XX-LBBU2 | Battery Back-up for 6G based LSI MegaRAID Card |
1 | Either: | |
1 | Either: | |
1 | Either: | |
1 | R210-ODVDRW | DVD-RW Drive for UCS C210 M1 Rack Servers |
1 | N2XX-AQPCI03 | Qlogic QLE2462, 4Gb dual port Fibre Channel Host Bus Adapter |
14 | N20-BBLKD | Auto-Included: HDD slot blanking panel for UCS B-Series Blade Servers |
2 | R200-PCIBLKF1 | Auto-Included: PCIe Full Height blanking panel for UCS C-Series Rack Server |
2 | R210-BHTS1 | Auto-Included: CPU heat sink for UCS C210 M1 Rack Server |
1 | R210-SASCBL-002 | Auto-Included: Long SAS Cable for C210 (connects to SAS Extender) |
1 | R210-SASXTDR | Auto-Included: SAS Extender (servers requiring </= 8 HDDs) for UCS C210 M1 |
1 | R2X0-PSU2-650W | Auto-Included: 650W power supply unit for UCS C200 M1 or C210 M1 Server |
C210 M2 TRC#3
Memory and hard drive changes are due to industry technology transitions, not UC application requirements.
Quantity | Cisco Part Number | Description |
1 | R210-2121605W | UCS C210 M2 Srvr w/1PSU, w/o CPU, mem, HDD, DVD or PCIe card |
2 | A01-X0109 | 2.66GHz Xeon E5640 80W CPU/12MB cache/DDR3 1066MHz |
12 | Either: | |
Diskless | ||
1 | R2XX-PL003 | LSI 6G MegaRAID 9261-8i card (RAID 0,1,5,6,10,60) - 512WC |
1 | R2XX-LBBU2 | Battery Back-up for 6G based LSI MegaRAID Card |
1 | Either: | |
1 | Either: | |
1 | Either: | |
1 | R210-ODVDRW | DVD-RW Drive for UCS C210 M1 Rack Servers |
1 | N2XX-AQPCI03 | Qlogic QLE2462, 4Gb dual port Fibre Channel Host Bus Adapter |
14 | N20-BBLKD | Auto-Included: HDD slot blanking panel for UCS B-Series Blade Servers |
2 | R200-PCIBLKF1 | Auto-Included: PCIe Full Height blanking panel for UCS C-Series Rack Server |
2 | R210-BHTS1 | Auto-Included: CPU heat sink for UCS C210 M1 Rack Server |
1 | R210-SASCBL-002 | Auto-Included: Long SAS Cable for C210 (connects to SAS Extender) |
1 | R210-SASXTDR | Auto-Included: SAS Extender (servers requiring </= 8 HDDs) for UCS C210 M1 |
1 | R2X0-PSU2-650W | Auto-Included: 650W power supply unit for UCS C200 M1 or C210 M1 Server |
C210 M1 TRC#1
Note: Application co-residency is not supported on this configuration - single VM only.
This BOM was also quotable as UCS-C210M1-VCD1.
Quantity | Cisco Part Number | Description |
1 | R210-2121605 | UCS C210 M1 Rack Server w/1 PSU (w/o CPU, memory, HDD, DVD, PCIe cards) |
2 | N20-X00002 | 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz |
6 | N01-M302GB1 | 2GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs |
6 | A03-D146GA2 | 146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted |
1 | R2XX-PL003 | LSI 6G MegaRAID PCIe Card (RAID 0, 1, 5, 6, 10, 60) - 512WC |
1 | R2XX-LBBU2 | Battery Back-up for 6G based LSI MegaRAID Card |
1 | R2X0-PSU2-650W | 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server |
1 | R250-SLDRAIL | Rail Kit for the C210 M1 Rack Server |
1 | R210-ODVDRW | DVD-RW Drive for UCS C210 M1 Rack Servers |
10 | N20-BBLKD | Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers |
4 | R200-PCIBLKF1 | Auto-included: PCIe Full Height blanking panel for UCS C-Series Rack Server |
2 | R210-BHTS1 | Auto-included: CPU heat sink for UCS C210 M1 Rack Server |
2 | R210-SASCBL-002 | Auto-included: Long SAS Cable for C210 (connects to SAS Extender) |
1 | R210-SASXTDR | Auto-included: SAS Extender (servers requiring </= 8 HDDs) for UCS C210 M1 |
C210 M1 TRC#2
This BOM was also quotable as UCS-C210M1-VCD2.
Quantity | Cisco Part Number | Description |
1 | R210-2121605 | UCS C210 M1 Rack Server w/1 PSU (w/o CPU, memory, HDD, DVD, PCIe cards) |
2 | N20-X00002 | 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz |
6 | N01-M302GB1 | 2GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs |
6 | N01-M304GB1 | 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs |
10 | A03-D146GA2 | 146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted |
1 | R2XX-PL003 | LSI 6G MegaRAID PCIe Card (RAID 0, 1, 5, 6, 10, 60) - 512WC |
1 | R2XX-LBBU2 | Battery Back-up for 6G based LSI MegaRAID Card |
1 | N2XX-ABPCI03 | Broadcom BCM5709 Quad Gig E card (10/100/1GbE) |
1 | R2X0-PSU2-650W | 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server |
1 | R250-SLDRAIL | Rail Kit for the C210 M1 Rack Server |
1 | R210-ODVDRW | DVD-RW Drive for UCS C210 M1 Rack Servers |
1 | R210-SASXPAND | SAS Pass-Thru Expander (srvr requiring > 8 HDDs) - C210 M1 |
1 | N20-BBLKD | Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers |
1 | R200-PCIBLKF1 | Auto-included: PCIe Full Height blanking panel for UCS C-Series Rack Server |
1 | R210-BHTS1 | Auto-included: CPU heat sink for UCS C210 M1 Rack Server |
1 | SASCBLSHORT-003 | Auto-included: 2 Short SAS Cables for UCS C210 Server (for SAS Expander) |
C210 M1 TRC#3
Quantity | Cisco Part Number | Description |
1 | R210-2121605 | UCS C210 M1 Rack Server w/1 PSU (w/o CPU, memory, HDD, DVD, PCIe cards) |
2 | N20-X00002 | 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz |
6 | N01-M302GB1 | 2GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs |
6 | N01-M304GB1 | 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs |
2 | A03-D146GA2 | 146GB 6Gb SAS 10K RPM SFF HDD/hot plug/drive sled mounted |
1 | R2XX-PL003 | LSI 6G MegaRAID PCIe Card (RAID 0, 1, 5, 6, 10, 60) - 512WC |
1 | R2XX-LBBU2 | Battery Back-up for 6G based LSI MegaRAID Card |
1 | N2XX-ABPCI03 | Broadcom BCM5709 Quad Gig E card (10/100/1GbE) |
1 | R2X0-PSU2-650W | 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server |
1 | R250-SLDRAIL | Rail Kit for the C210 M1 Rack Server |
1 | R210-ODVDRW | DVD-RW Drive for UCS C210 M1 Rack Servers |
1 | N2XX-AQPCI03 | QLogic QLE2462, 4Gb dual port Fibre Channel Host Bus Adapter |
14 | N20-BBLKD | Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers |
2 | R200-PCIBLKF1 | Auto-included: PCIe Full Height blanking panel for UCS C-Series Rack Server |
2 | R210-BHTS1 | Auto-included: CPU heat sink for UCS C210 M1 Rack Server |
2 | R210-SASCBL-002 | Auto-included: Long SAS Cable for C210 (connects to SAS Extender) |
1 | SASCBLSHORT-003 | Auto-included: 2 Short SAS Cables for UCS C210 Server (for SAS Expander) |
C210 M1 TRC#4
Quantity | Cisco Part Number | Description |
1 | R210-2121605 | UCS C210 M1 Rack Server w/1 PSU (w/o CPU, memory, HDD, DVD, PCIe cards) |
2 | N20-X00002 | 2.53GHz Xeon E5540 80W CPU/8MB cache/DDR3 1066MHz |
6 | N01-M302GB1 | 2GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs |
6 | N01-M304GB1 | 4GB DDR3-1333MHz RDIMM/PC3-10600/dual rank 1Gb DRAMs |
1 | R2XX-PL003 | LSI 6G MegaRAID PCIe Card (RAID 0, 1, 5, 6, 10, 60) - 512WC |
1 | R2XX-LBBU2 | Battery Back-up for 6G based LSI MegaRAID Card |
1 | N2XX-ABPCI03 | Broadcom BCM5709 Quad Gig E card (10/100/1GbE) |
1 | R2X0-PSU2-650W | 650W power supply unit for UCS C200 M1 or C210 M1 Rack Server |
1 | R250-SLDRAIL | Rail Kit for the C210 M1 Rack Server |
1 | R210-ODVDRW | DVD-RW Drive for UCS C210 M1 Rack Servers |
1 | N2XX-AQPCI03 | QLogic QLE2462, 4Gb dual port Fibre Channel Host Bus Adapter |
14 | N20-BBLKD | Auto-included: HDD slot blanking panel for UCS B-Series Blade Servers |
2 | R200-PCIBLKF1 | Auto-included: PCIe Full Height blanking panel for UCS C-Series Rack Server |
2 | R210-BHTS1 | Auto-included: CPU heat sink for UCS C210 M1 Rack Server |
2 | R210-SASCBL-002 | Auto-included: Long SAS Cable for C210 (connects to SAS Extender) |
1 | SASCBLSHORT-003 | Auto-included: 2 Short SAS Cables for UCS C210 Server (for SAS Expander) |
C200 M2 TRC#1
Note: This TRC has special rules for allowed VM OVA templates and allowed co-residency.
This configuration was also quotable as UCS-C200M2-VCD2.
When quoted as part of Cisco Business Edition 6000, it was also quotable as UCS-C200M2-VCD2BE, UCS-C200M2-BE6K, or UCS-C200M2-WL8 (in CMBE6K-UCL or CMBE6K-UWL).
Memory and hard drive changes are due to industry technology transitions, not UC application requirements.
Quantity | Cisco Part Number | Description |
1 | R200-1120402W | UCS C200 M2 Srvr w/1PSU, DVD w/o CPU, mem, HDD or PCIe card |
2 | A01-X0113 | 2.13GHz Xeon E5506 80W CPU/4MB cache/DDR3 800MHz |
6 | Either: | |
4 | R200-D1TC03 | Gen 2 1TB SAS 7.2K RPM |
1 | R200-PL004 | LSI 6G MegaRAID 9260-4i card (C200 only) |
1 | Either: | |
1 | Either: | |
2 | R200-BHTS1 | Included: CPU heat sink for UCS C200 M1 Rack Server |
1 | R200-PCIBLKF1 | Included: PCIe Full Height blanking panel for UCS C-Series Rack Server |
1 | R200-SASCBL-001 | Included: Internal SAS Cable for a base UCS C200 M1 Server |
1 | Either: | |
1 | R2XX-PSUBLKP | Included: Power supply unit blanking pnl for UCS 200 M1 or 210 M1 |
Back to: Unified Communications in a Virtualized Environment