Most of my blog posts derive not from what I think you ought to know about Cisco UCS (although some certainly do), but from customer questions about the technology. And if one customer is asking a particular question, then likely many customers are asking the same one, and this one's a cracker!
A customer said to me the other day something like:
“Our current Blade infrastructure meets our security standards, where we CANNOT have traffic separated solely by VLANs, it does this by using separate modules in the Chassis to which we run separate cables, and we are unclear if Cisco UCS will give us the same level of traffic separation”
Great question. Hold tight, let's go!
OK, what we are really talking about here is the physical architecture of Cisco UCS, so for the purposes of providing some context to the discussion let's assume we have two bare metal Windows blades which sit on different VLANs, and those VLANs, for whatever reason, cannot co-exist on any NIC, cable or switch if the only separation between them would be VLAN ID (802.1Q tags). I chose bare metal blades because there is already a myriad of ways of providing secure separation between Virtual Machines in the same Cisco UCS Pod, utilising Cisco Nexus 1000v and the Virtual Security Gateway (VSG) to name but two.
So first off, these VLANs start life in the network core, either on separate physical switches or on the same switch but separated by Nexus Virtual Device Contexts (VDCs) or Virtual Routing and Forwarding (VRF) instances. But let's keep it simple and assume these VLANs exist on physically separate upstream switches, which in turn are connected into our Cisco UCS Fabric Interconnects.
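Just for completeness, if you did go down the single-switch route, a minimal NX-OS sketch of that kind of separation (the switch, VDC and VRF names here are purely hypothetical, and VDCs obviously need a platform and licence that supports them) might look something like this:

```
! Option 1: separate Virtual Device Contexts, each owning its own physical ports
N7K(config)# vdc BLUE
N7K(config-vdc)# allocate interface ethernet 1/1-2
N7K(config-vdc)# exit
N7K(config)# vdc RED
N7K(config-vdc)# allocate interface ethernet 1/3-4

! Option 2: separate VRFs, keeping the two routed paths apart
N7K(config)# vrf context BLUE
N7K(config-vrf)# exit
N7K(config)# interface vlan 10
N7K(config-if)# vrf member BLUE
```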
So the first thing to remember is that while the Cisco UCS Fabric Interconnect may look like a Cisco Nexus 5K that has simply been painted a different colour 🙂 it doesn't act like one.
I’m sure you are aware of the traditional Switch Mode vs End Host Mode “debate”, but this never really crops up anymore since the 2.0 code and its support for disjoint Layer 2 domains, of which the above diagram is a prime example.
As I’m sure you know, the Fabric Interconnects by default run in End Host Mode, in which they appear to the upstream LAN and SAN as just a huge server with multiple NICs and HBAs. And as we also know, with Cisco UCS what you see is certainly not what you get; what I mean by that is the server, the NIC, the cable and the switch port are all virtualised.
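If you ever want to double-check which mode you are in, a quick sketch from the UCS Manager CLI (the hostname is hypothetical):

```
UCS-A# scope eth-uplink
UCS-A /eth-uplink # show detail
```

The output includes the current Ethernet switching mode; if you ever did need to flip it with "set mode end-host" or "set mode switch", bear in mind that committing that change reboots the Fabric Interconnect, so in practice this is a look-but-don't-touch exercise.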
So let’s take our two servers: the server in slot 5 will be in VLAN 1 (blue) and the server in slot 6 in VLAN 2 (red).
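Within UCS Manager those just need to exist as named VLANs; a minimal sketch from the CLI (the VLAN names and IDs are made up for this example):

```
UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan Blue 10
UCS-A /eth-uplink/vlan* # exit
UCS-A /eth-uplink* # create vlan Red 20
UCS-A /eth-uplink/vlan* # commit-buffer
```

The disjoint Layer 2 piece, i.e. telling UCS which uplinks each VLAN is allowed to use, is then done per fabric in the LAN Uplinks Manager.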
So again, for the sake of simplicity, we will only use a single FEX-to-FI cable and create a single vNIC on each server, mapped to Fabric A, with redundancy provided by hardware fabric failover.
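A minimal sketch of that vNIC from the UCS Manager CLI (the service profile and vNIC names are hypothetical); "fabric a-b" is what enables hardware fabric failover, with Fabric A as the primary path:

```
UCS-A# scope org /
UCS-A /org # scope service-profile Blade5-Blue
UCS-A /org/service-profile # create vnic eth0 fabric a-b
UCS-A /org/service-profile/vnic* # commit-buffer
```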
So logically the setup is as per the below.
OK, the next key concept to understand is that whenever you create a vNIC on a Cisco CNA like the Virtual Interface Card (VIC), this automatically creates the corresponding virtual Ethernet port (veth) on the Fabric Interconnect (on both FIs if fabric failover is enabled) and connects the veth to the vNIC with a virtual cable, as shown below. This creates a Virtual Network Link (VN-Link).
This is because the Cisco VIC is a Fabric Extender in mezzanine form factor. This is known as Adapter FEX.
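You can actually see those dynamically created veth ports by dropping into the NX-OS shell of the Fabric Interconnect; a quick sketch (the veth number is hypothetical and will differ in your environment):

```
UCS-A# connect nxos a
UCS-A(nxos)# show interface brief | include Veth
UCS-A(nxos)# show running-config interface vethernet 705
```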
Next key concept: the cable between the FEX and the Fabric Interconnect is not a standard 802.1Q trunk. As with all FEX technologies, you can think of these cables as “the backplane” connecting the Control Plane (FI) to the Data Plane (FEX).
Obviously the FI does need to tag traffic between the FI and the FEX in order to ensure traffic from a particular vNIC is correctly delivered only to its corresponding veth, but these tags are not 802.1Q tags; they are Virtual Network Tags (VN-Tags), which are applied in hardware and as such are much harder to spoof.
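You can see that relationship from the same NX-OS shell: the IOMs in the chassis show up as FEXes, and the FEX-to-FI cables appear as their fabric ports rather than as ordinary trunk links (another quick sketch):

```
UCS-A# connect nxos a
UCS-A(nxos)# show fex
UCS-A(nxos)# show fex detail
```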
So the end to end picture looks like this.
So as you can see, if we have a vNIC which only carries a single VLAN and that VLAN is defined as native, then no 802.1Q tags are required. And similarly with regards to the uplinks: if they are only mapped to a single native VLAN, again no 802.1Q tags are required on those links.
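Tying that back to the vNIC we created earlier, a minimal sketch of making Blue the one and only VLAN on that vNIC and marking it as native, so frames leave the adapter untagged (names as before are hypothetical):

```
UCS-A# scope org /
UCS-A /org # scope service-profile Blade5-Blue
UCS-A /org/service-profile # scope vnic eth0
UCS-A /org/service-profile/vnic # create eth-if Blue
UCS-A /org/service-profile/vnic/eth-if* # set default-net yes
UCS-A /org/service-profile/vnic/eth-if* # commit-buffer
```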
So logically the above setup is the same architecture as having a server with two physically separate NICs connected into different upstream networks, which, if you remember, complies with the customer's security requirements.
As always comments welcome.