This latest post in my “UCS for…” series attempts to put across the key Cisco UCS concept of the blade and its role in the UCS system, for people already familiar with storage concepts.
The role of the blade has certainly changed in a Cisco UCS environment. It is no longer “the server”; it is now just the physical memory, CPU, and I/O that the server makes use of. In UCS, “the server” is the Service Profile: essentially an XML object in which all of that server’s identity, addresses, BIOS settings, and firmware are defined.
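If you like to poke at these things programmatically, here’s a minimal sketch, using Cisco’s ucsmsdk Python SDK, of reading a Service Profile’s identity. The hostname, credentials, and the “payroll-01” profile are placeholders, and the attribute names are my best recollection of the SDK’s object model, so treat the details as assumptions:

```python
# Minimal sketch: reading a Service Profile's identity via Cisco's ucsmsdk.
# Hostname, credentials, and the profile DN below are placeholders.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

# Service Profiles live under an org; "ls-payroll-01" is a hypothetical name.
sp = handle.query_dn("org-root/ls-payroll-01")
print(sp.name, sp.uuid)          # the logical server's identity
print(sp.pn_dn, sp.assoc_state)  # the blade it currently occupies, if any

handle.logout()
```

The point being: everything that makes the server *that server* lives in the profile object, not on the blade.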
Abstracting the logical server from the physical tin opens up a huge raft of efficiencies and dramatically increases flexibility, as I’m sure all hypervisor admins fully appreciate.
So, for the purposes of this post, think of the blade as a disk in an array.
Now, in a disk array do you generally care which physical disk your data is currently on?
Generally, the answer is no.
In the same way, you don’t necessarily need to care which bit of tin your Service Profile is currently making use of.
Pause… for that key concept to sink in.
OK, I’m not going to say it’s wrong for customers to want to be able to say “blade x is server y” and put hostname stickers on them, etc. That’s fine, and many customers want just that. There is a certain amount of comfort in knowing and controlling exactly which blades are associated with which service profiles. It’s just a human thing, and a concept deeply ingrained in most server admins.
However, in the era of the cloud and the increased adoption of automation and orchestration tools, this “legacy” thought process is gradually softening.
I must admit, I get a great feeling when customers fully embrace the statelessness of UCS and allow it to “stretch its legs” by making full use of server pools and qualifications. When you associate a Service Profile with a server pool, the system just picks a blade out of the specified pool and away it goes. If that blade ever fails and there is a spare blade in the pool, UCS will dynamically grab that spare, regardless of which chassis it is in, and the server is back up in a few minutes.
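Scripted, that pool association might look something like the sketch below, again with ucsmsdk. The LsRequirement class name and the “High-Performance” pool are assumptions from memory, so check them against the SDK documentation:

```python
# Hedged sketch: pointing an existing Service Profile at a server pool so
# UCS Manager picks (and, on blade failure, re-picks) a blade from that pool.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.ls.LsRequirement import LsRequirement  # name as I recall it

handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

sp = handle.query_dn("org-root/ls-payroll-01")        # hypothetical profile
req = LsRequirement(parent_mo_or_dn=sp, name="High-Performance")  # pool name
handle.add_mo(req, modify_present=True)
handle.commit()                                       # UCSM now grabs a blade

handle.logout()
```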
Now, when I said “do you care which disk your data sits on?”, you may well have said, “No, but I do kind of care what TYPE of disk my data sits on”: that is, whether your data is on larger but relatively slow SATA or NL-SAS drives, or on super-fast Enterprise Flash Drives (EFDs).
Enter server pool qualifications: you can set up server pools based on most physical attributes of a blade. For example: if a blade has 40 cores, dynamically put it in my pool called “High Performance”; if it has 512 GB of RAM, dynamically put it in my pool called “ESXi Servers”.
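In UCS Manager, qualifications are configured as policies that the system evaluates automatically; purely to illustrate the logic, here’s a rough sketch that sorts discovered blades into notional pools by hand, using the thresholds above. The ComputeBlade attribute names are my best recollection of the ucsmsdk object model:

```python
# Rough illustration of what a pool qualification evaluates: sort discovered
# blades into notional pools by core count and memory. Attribute names assumed.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

blades = handle.query_classid("ComputeBlade")   # all discovered blades
high_performance = [b.dn for b in blades if int(b.num_of_cores) >= 40]
esxi_servers = [b.dn for b in blades if int(b.total_memory) >= 512 * 1024]  # MB

print("High Performance candidates:", high_performance)
print("ESXi Servers candidates:", esxi_servers)

handle.logout()
```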
This separation of the Service Profile from the physical blade gives the UCS admin the flexibility to move service profiles between blades of differing specs as the need arises. For example, if there is greater demand on the payroll system at month end, they can associate that service profile with a “High Performance” blade for the duration of the peak, then associate it back to an “Efficient Performance” blade when demand drops. Moving a service profile is disruptive, however, as the server needs to be shut down first. But hey, it is still awesome to be able to do, and a huge advancement on where the compute industry was.
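Scripted, that month-end move might look something like this sketch; the LsBinding class and the chassis/blade DNs are assumptions from my memory of ucsmsdk, and remember the server has to be shut down first. The same operation covers the blade-upgrade scenario below:

```python
# Hedged sketch: re-homing a Service Profile onto a specific blade.
# Disruptive -- shut the server down before re-associating.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.ls.LsBinding import LsBinding  # class name as I recall it

handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

sp = handle.query_dn("org-root/ls-payroll-01")        # hypothetical profile
bind = LsBinding(parent_mo_or_dn=sp,
                 pn_dn="sys/chassis-2/blade-5")       # a "High Performance" blade
handle.add_mo(bind, modify_present=True)
handle.commit()
# UCSM now pushes the profile's UUID, MACs, WWNs, BIOS settings, and firmware
# onto the new blade; poll sp.assoc_state until it reads "associated".

handle.logout()
```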
In any case, a lot of customers have several hosts in an ESXi cluster, or cluster their bare-metal servers, so critical workloads are protected from single-blade failures. Done with a bit of planning, you could move Service Profiles between different blades without impacting a clustered application.
Similarly, upgrades are now just a case of upgrading or buying a spare blade and soak testing it for as long as you need (I use a “Soak Test” Service Profile with several diagnostic utilities on it). Then, in your outage window and with a couple of mouse clicks, you move your service profile to the upgraded blade, with all your addresses, BIOS settings, and firmware revisions maintained.
I must stress that I have not seen any of the below on any roadmap; it’s just where my thinking goes.
So what could the future hold if we carry on this thought process and the analogy of UCS as the “compute array”? It would seem logical to me that the next stage of evolution would be non-disruptive service profile moves (akin to a bare-metal vMotion). That would open up the possibility of moving service profiles dynamically and seamlessly between blades of differing performance as demands on a workload increase or decrease: a cross between VMware’s Distributed Resource Scheduler (DRS) and EMC’s Fully Automated Storage Tiering (FAST), but for compute. Wow, what a place the world will be then!
So I hope this post helps all you storage bods develop a better understanding of Cisco UCS. As ever, please feel free to comment on this post; I enjoy getting feedback and answering your Cisco UCS questions.