In the same way that the Karate Kid knew only 5 karate moves yet was immediately able to beat black belts and win a whole tournament, once you know the 5 “Moves” below you will know all the key UCS features and the value proposition.
1) Single Point of Management
A single IP address to manage up to 320 servers, including all the network and storage elements of those servers.
This also means a single management pane of glass via a single GUI, the UCS Manager (UCSM).
2) Unified Fabric (Fibre Channel over Ethernet (FCoE))
This basically just means wrapping a Fibre Channel frame in an Ethernet frame and transmitting it over the same cabling.
Since the release of 10 Gigabit Ethernet it is now possible to accommodate “lossy” Ethernet and “lossless” Fibre Channel on the same medium.
This also required Cisco to change the rules of Ethernet, with the inception of Cisco Data Centre Ethernet (DCE), which was submitted to the IEEE for standardisation; the IEEE thought the name Data Centre Bridging (DCB) sounded better.
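The layering idea behind FCoE can be illustrated with a short, simplified Python sketch. The EtherType 0x8906 is the real value assigned to FCoE; the full FCoE header fields (version, reserved bytes, SOF/EOF delimiters, padding) are omitted here for brevity, so this is an illustration of the encapsulation concept rather than a wire-accurate frame builder.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType identifying FCoE traffic

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in a (simplified) Ethernet frame.

    A real FCoE frame also carries an FCoE header (version, SOF) and a
    trailing EOF; this sketch shows only the wrapping principle.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame

frame = encapsulate_fcoe(b"\x01" * 6, b"\x02" * 6, b"FC-PAYLOAD")
assert frame[12:14] == b"\x89\x06"  # switch classifies the frame as FCoE
```

Because the Fibre Channel frame travels intact inside the Ethernet payload, the storage traffic can share the 10 Gigabit Ethernet medium with ordinary LAN traffic, which is exactly what enables the consolidated cabling described next.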
Example: Take a VMware ESX host that requires 8 NICs and two HBAs, making 10 physical connections. The same server deployed on a Cisco UCS blade now has only 2! That equates to an 80% cabling reduction. On top of which, adding additional NICs and HBAs is now a simple matter of mouse clicks, leading to what has been termed a “wire once” deployment.
This significant cabling reduction leads to several other benefits, including: fewer server switch ports required, which means fewer switches, more efficient cooling, and less power used (Gigabit Ethernet over Cat 6 uses 8 watts of power per end; FCoE over twinax uses 0.1 watts per end).
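To make the arithmetic in the example explicit, here is a small Python sketch using the figures quoted above (the per-end power numbers are the ones stated in the text, not independently measured values):

```python
# Connection count comparison from the ESX host example above
legacy_connections = 8 + 2    # 8 NICs + 2 HBAs per traditional ESX host
ucs_connections = 2           # converged FCoE links on a Cisco UCS blade
reduction = 1 - ucs_connections / legacy_connections
print(f"Cabling reduction: {reduction:.0%}")  # Cabling reduction: 80%

# Power per cable end, as quoted in the text
gig_e_cat6_watts = 8.0
fcoe_twinax_watts = 0.1
saved = gig_e_cat6_watts - fcoe_twinax_watts
print(f"Power saved per cable end: {saved:.1f} W")  # 7.9 W per end
```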
3) Extended Memory Technology (EMT)
It is widely accepted that a CPU’s optimal running efficiency is 60-70%, but with the rapid evolution of CPUs and the ever-increasing number of cores and multi-threading capabilities per socket, most hosts, particularly VMware ESX hosts, run out of RAM well before this point.
In the Intel Nehalem (Xeon 5500) architecture, memory is directly associated with each processor (socket). Each socket has 3 memory channels, and each memory channel has access to 2 DDR3 DIMM slots, giving 6 DIMM slots per socket. Therefore a dual-socket server can access a maximum of 12 DIMM slots, and if using 16GB DIMMs the absolute maximum amount of RAM that can be installed is 192GB.
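The memory ceiling follows directly from the channel topology; a quick calculation makes it concrete:

```python
# Memory ceiling for a dual-socket Intel Xeon 5500 (Nehalem) server
sockets = 2
channels_per_socket = 3
dimm_slots_per_channel = 2
dimm_size_gb = 16                 # largest commonly available DIMM here

dimm_slots = sockets * channels_per_socket * dimm_slots_per_channel
max_ram_gb = dimm_slots * dimm_size_gb
print(dimm_slots, max_ram_gb)     # 12 slots, 192 GB maximum
```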
So how do you get more RAM? You need to add another CPU, which in fact makes the server even less efficient than before.
Enter EMT, which allows a dual-socket host access to a massive 384GB of RAM at higher bus speeds (1066MHz compared to 800MHz).
How does Cisco manage this? Well, as mentioned in Chapter 3, Cisco UCS is a ground-up development, and as such Cisco, by partnering with the likes of Intel and VMware, could address many of these limitations and provide several optimisations.
Cisco realised that the maximum single DIMM a BIOS could logically address is 32GB, and while at the time of writing (Q4 2010) 32GB DIMMs are still not readily commercially available, by developing the “Catalina” ASIC and placing it between the CPU and the memory channels it was possible in effect to “RAID” 4 x 8GB physical DIMMs into 1 x 32GB logical DIMM. This makes it possible to present 6 x 32GB logical DIMMs (192GB) to each socket, which physically equates to 24 x 8GB DIMMs per socket on the system board, making 48 DIMM slots on a dual-socket blade.
The Cisco “Catalina” ASICs sit between the DIMMs and the CPU, presenting 24 x 8GB physical DIMMs as 6 x 32GB logical DIMMs per CPU (socket).
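The Extended Memory Technology arithmetic can be sketched the same way, showing how the 4-to-1 physical-to-logical DIMM mapping lifts the ceiling from 192GB to 384GB:

```python
# EMT: "RAID"-ing physical DIMMs into larger logical DIMMs via the Catalina ASIC
physical_per_logical = 4          # 4 x 8GB physical DIMMs behind each logical DIMM
physical_dimm_gb = 8
logical_dimm_gb = physical_per_logical * physical_dimm_gb   # 32GB logical DIMM

logical_slots_per_socket = 6      # what the BIOS/CPU sees, unchanged from Nehalem
sockets = 2
physical_slots = sockets * logical_slots_per_socket * physical_per_logical
max_ram_gb = physical_slots * physical_dimm_gb
print(logical_dimm_gb, physical_slots, max_ram_gb)  # 32, 48 slots, 384 GB
```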
While the benefits detailed above with regard to maximising memory are clear, there is another benefit to be had if there isn’t a requirement to max the memory out to 384GB per blade. For example, take the maximum amount of memory that can be installed in an HP blade server utilising dual Intel Xeon 5500 processors:
Assuming £1000 for a 16GB DDR3 DIMM and £150 for a 4GB DIMM:
HP dual Xeon 5500, 12 DIMM slots using 16GB DIMMs = 192GB @ 800MHz = £12,000
Cisco UCS B250, 48 DIMM slots using 4GB DIMMs = 192GB @ 1066MHz = £7,200
As can be seen, there are significant cost savings to be had by using a large number of low-capacity DIMMs. This is only made possible by having 48 DIMM slots available with Cisco Extended Memory Technology, compared with only 12 in a comparable HP server.
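Using the per-DIMM prices assumed above, the comparison works out as follows (the prices are the illustrative figures from the text, not current market rates):

```python
# DIMM pricing assumed in the text: £1000 per 16GB DIMM, £150 per 4GB DIMM
hp_cost = 12 * 1000        # 12 slots x 16GB DIMMs = 192GB @ 800MHz
ucs_cost = 48 * 150        # 48 slots x 4GB DIMMs  = 192GB @ 1066MHz

assert 12 * 16 == 48 * 4 == 192      # identical capacity either way
print(hp_cost, ucs_cost, hp_cost - ucs_cost)  # 12000 7200 4800
```

So for the same 192GB of RAM, the B250 configuration saves £4,800 per blade while also running the memory bus faster.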
4) Virtualisation Optimisation
Cisco UCS was built from the ground up for virtualised environments, with Cisco partnering with Intel and VMware on virtualisation optimisations. I/O is virtualised at the BIOS level, so the O/S sees those resources as “physical”.
5) Stateless Computing (Service Profiles)
Blades, or compute nodes, have no identity of their own; identity comes from service profiles. This means replacing a blade can take 15 minutes (as long as it takes to boot): all MACs, WWNs and UUIDs are only ever associated with a profile, which can be detached from and reattached to blades as required.
Historically if a server failed in the data centre the procedure would be to send an engineer in to investigate and if required replace the failed blade.
Obviously we don’t want to have to reconfigure other entities which may be linked to this particular MAC address, so the engineer would move the NIC cards from the failed unit to the replacement. Similarly, we don’t want to have to involve the SAN team in re-zoning the storage, so the engineer would move the HBAs from the failed unit to the replacement to ensure the WWPNs remain unchanged. There may also be software licences tied to the Universally Unique Identifier (UUID) of the server.
All in all, this server swap-out could take several hours, resulting in server downtime and engineer resource costs.
In a Cisco UCS environment this is a simple matter of disassociating the service profile from the failed blade, associating it with a standby or replacement blade, and then powering up the new blade.
As all MAC addresses, WWPNs, UUIDs, firmware revisions and settings are only ever associated with a service profile, the new blade will be an exact match of the failed unit, thus preserving all identity information.
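The idea that identity lives in the profile rather than the hardware can be modelled with a small sketch. This is a hypothetical data structure for illustration only, not the UCSM API; the class, field and blade names are all invented, and real MAC/WWPN values would normally be drawn from UCSM-managed pools.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ServiceProfile:
    """Hypothetical model of a UCS service profile: all identity
    (UUID, MACs, WWPNs) lives here, never on the blade itself."""
    name: str
    uuid: str
    macs: List[str]
    wwpns: List[str]
    blade: Optional[str] = None   # currently associated blade, if any

    def associate(self, blade_id: str) -> None:
        self.blade = blade_id     # identity follows the profile to the blade

    def disassociate(self) -> None:
        self.blade = None         # blade reverts to an anonymous spare

# Blade swap-out: detach the profile from the failed blade, reattach to a spare.
profile = ServiceProfile("esx-host-01", "uuid-1234",
                         ["00:25:b5:00:00:01"], ["20:00:00:25:b5:01:00:01"])
profile.associate("chassis1/blade3")
profile.disassociate()            # blade3 has failed
profile.associate("chassis1/blade7")

# The replacement blade inherits the exact same identity
assert profile.macs == ["00:25:b5:00:00:01"]
assert profile.uuid == "uuid-1234"
```

The key design point is that `associate`/`disassociate` never touches the identity fields, which is why zoning, licensing and network configuration tied to those identifiers survive the hardware swap.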