Cisco UCS has had a baby (Mother and Daughterboard doing well)

As many of you know, I am now in full CCIE Data Center study mode and as such have not had as much time to blog and answer posted questions as I would like. However, I felt compelled to take a break from my studies to write a post on the new Cisco UCS generation 3 Fabric Interconnect.

I noticed the other day that Cisco have released the data sheet on the latest member of the Cisco UCS family, the Cisco 6324 Fabric Interconnect, which is great because I can now finally blog about it.

http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-6300-series-fabric-interconnects/datasheet-c78-732207.html

Having been waiting for this for a long time, I immediately contacted our purchasing team to get a quote, with a view to getting one in for our lab so I can have a good play with it, and I was again pleased to see it was listed on Cisco Commerce Workspace (CCW), albeit still on New Product Hold.

The main reason I have been waiting for this product is that it meets a few use cases my customers needed but which, historically, UCS never really addressed to the level I wanted without a “full-fat” B-Series deployment.

Sure, I could use some stand-alone C-Series rack mounts, but I really want the power of UCS Manager and to consolidate all these UCS Domains under UCS Central and integrate them with UCS Director.

And that is where the new Cisco 6324 Fabric Interconnect IO Module comes in: it brings all the power and features of a full-scale UCS solution, but at a scale and price point that meets these smaller use cases. The best of both worlds, if you like.

So what does this new solution look like?

Well as can be seen from the above data sheet and the below figure, the Fabric Interconnects occupy the IO Module slots in the Chassis.

5108 v2 Chassis with 6324 FI IOM

If we look at the new Fabric Interconnect a little closer, we see there are 4 x 10G unified ports and 1 x 40G QSFP+ port. As can be seen from the below image, there are a number of connectivity options available, including direct-attached storage and up to 7 directly attached C-Series rack mount servers, allowing a total of 15 servers within the system.

6324 Fabric Interconnect

Internally, the 6324 Fabric Interconnect provides 2 x 10Gb traces (KR ports) to each half-width blade slot (think 2204XP).

But I’m sure you are wondering what happened to the L1 and L2 cluster ports, which would allow two Fabric Interconnects to cluster and form an HA pair.

Well, that explains why there is also a new chassis being released. This updated 5108 chassis is fully backwards compatible and has hardware support for all past, present and foreseen Fabric Interconnects, IO Modules, power supplies and servers. Remember, though, that it is actually the version of UCS Manager which determines supported hardware.

This new chassis not only supports a new dual-voltage power supply but also comes with a new backplane, and part of that new backplane, yes you guessed it, is the set of traces required to support the 1Gbit cluster interconnect and primary heartbeat between the 6324 Fabric Interconnects (the 2104/2204/2208 IO Modules, if used, are unaffected).

The secondary heartbeat still runs over the Chassis SEEPROM as per the traditional UCS method (See my previous post on Cisco UCS HA)
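
If you ever want to check that both heartbeat paths are healthy once the pair has formed, the cluster state can be verified from the Fabric Interconnect CLI. A quick sketch (hostnames and prompts are just illustrative):

UCS-A# connect local-mgmt
UCS-A(local-mgmt)# show cluster state
UCS-A(local-mgmt)# show cluster extended-state

The extended-state output should also list the chassis SEEPROM device(s) being used for the quorum, so you can confirm the secondary heartbeat path described above.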

So a new 6324 based solution could look like the following, which I’m sure you’ll agree is more than suitable for all the use cases I mentioned above.

Fully Deployed 6324 FI IOM

At First Customer Ship (FCS) the servers supported for use with the 6324 FI are the B200M3, C220M3 and C240M3.

Anyway I for one can’t wait to get my hands on this for a good play, and am really excited about all the possibilities for future updates that this platform allows.

Watch this space carefully, I feel Cisco have some big plans for this new arrival.

Regards
Colin

Posted in Product Updates | 1 Comment

The King is Dead, Long live the King!

Huge congratulations to Cisco for achieving number 1 in the x86 blade server market in only 5 Years since launch.

Cisco No.1

According to the latest IDC Worldwide Quarterly Server Tracker (2014 Q1), Cisco UCS, which turned 5 years old this year, has hit the number one spot for x86 blade server market share in the Americas and No.2 worldwide.

To go from a standing start to No.1 in only 5 years is an awesome achievement, and a real credit to all those involved.

In the 5 years that I have been an SME for Cisco UCS, I have seen this traction first hand and still get a great buzz from seeing the lights switch on when people “get it”.

This latest news only gets me more excited about Cisco Application Centric Infrastructure (ACI), as many of the same great minds that brought us Cisco UCS developed Cisco ACI.

Congrats!

Regards
Colin

Posted in General | 1 Comment

#EngineersUnplugged ACI Edition with Colin Lynch and Hal Rottenberg

Posted in SDN

Colin Lynch and Joe Onisick Talk Cisco ACI

Listen to Cisco Champion radio with Joe Onisick @jonisick and Colin Lynch @UCSguru on Cisco ACI and Nexus 9000 hosted by Amy Lewis @CommsNinja

Posted in SDN

Behind The Cisco Service Request

Ever wondered who’s on the other end of your Cisco Service Request? Well, wonder no more, as I put on my CiscoChampion hat and play journalist for the week at Cisco Live Milan.

Posted in Cisco Champion

The SDN Meteor is coming

When you next look up at the night sky, you may see a bright speck in the distance, and that bright speck is set to get a lot brighter.

The speck of which I speak is Software Defined Networking (SDN), and it is set to change the network as we know it forever, perhaps a lot sooner than first thought.

With the “commoditisation” of pure SDN solutions and hybrid SDN solutions which also harness custom ASICs, things will change! Maybe not today, not tomorrow, but they will change.

We have plenty of warning about this meteor strike; not enough to divert it, as impact is inevitable, but fair warning to prepare for it and to evolve our traditional networking skill set in time.

I do not see the result of this strike being an immediate extinction-level event for traditional networkers, but more like a huge lake gradually drying up.

At the moment the lake is huge and teeming with life, but gradually, as businesses move towards SDN solutions, the traditional networking lake will start to dry up until the few who are unwilling to adapt are left flapping in a pool of mud, awaiting their imminent fate.

This is not by any means meant to be a doom and gloom “end of the traditional networking world is nigh” type post, but a positive post: the networking world is about to get really interesting and be brought kicking and screaming into the modern world of flexibility, agility and fast provisioning. And I, for one, am not close enough to retirement age to ignore it, and am actually quite looking forward to the new challenge.

Having attended Cisco Live Europe and VMware PEX this month, I’ve spoken at length to the relevant business units, and I am very much encouraged by the commitments and training road maps being put in place to bring us “Traditional Networkers” on this new and exciting journey ahead.

Colin

Posted in SDN | 6 Comments

Cols Guide to… VXLAN

Your indispensable guides to making your IT life simpler.

So what is VXLAN and why do we need it?

Well, put simply, it’s VLAN with an X in the middle :-) the X standing for eXtensible. VXLAN was a joint project between Cisco, VMware, Red Hat and Citrix, which is why it has been so widely adopted and why it underpins the majority of SDN offerings.
And as to why we need it, well, that’s mainly to address two limitations of regular VLANs: scale and flexibility.

Scale:
As we all know, standard 802.1Q VLANs scale to just over 4000 VLAN IDs, and while that number sounds a lot and is fine in most cases, large service providers, enterprises and multi-tenant environments would certainly need more.

VXLAN encapsulates the standard Ethernet frame and adds a header to it, including a 24-bit VXLAN ID field, which increases the number of logical segments from 4096 to 16 million, while only adding approximately 50 bytes of overhead to the frame (outer Ethernet/IP/UDP/VXLAN headers).
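
To put a rough figure on where those 50 bytes come from, the encapsulation adds the following headers in front of the original frame:

Outer Ethernet header: 14 bytes (18 if the outer frame carries an 802.1Q tag)
Outer IP header: 20 bytes
Outer UDP header: 8 bytes
VXLAN header (carrying the 24-bit VXLAN ID): 8 bytes
Total: approximately 50 bytes

Which is also why it is worth raising the MTU on the transport network (to 1600 bytes, or jumbo frames) so the encapsulated frames are not fragmented.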

Flexibility:
In this world of ever-increasing workload flexibility and agility, we need a way of quickly and safely providing connectivity between virtual machines anywhere in the network where we have capacity.
Historically this was done by extending VLANs everywhere a virtual machine might be required, which, as we all know, comes with a raft of potential issues around scale, complexity and resiliency.
Because the Layer 2 frame is encapsulated into an IP packet, it can now cross Layer 3 boundaries! This opens up a whole raft of use cases.

These use cases include, but are certainly not limited to:
• Running Layer 3 all the way to the edge of your network and then mapping your VXLANs over the top (an overlay), getting the best of both worlds: an L3 transport but Layer 2 adjacency/reachability wherever you need it.
• Extending your Layer 2 into any public/hosted cloud, allowing you to move VMs in and out of a hosted service as and when you need to (cloud burst).
• Extending a VLAN over a Layer 3 Data Centre Interconnect (DCI) for Disaster Recovery (DR) to allow VM mobility between Data Centres.

Also, because the encapsulated traffic is IP/UDP, it makes much better use of port-channelled links (the UDP source port adds entropy to the hash), unlike other encapsulation technologies such as MAC-in-MAC.
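
If you want your port-channels to actually benefit from that entropy, check that the load-balancing hash includes the Layer 4 ports. On a Nexus 5000 that looks something like the below (exact keywords vary by platform and release, so treat this as a sketch rather than gospel):

port-channel load-balance ethernet source-dest-port

Verify with “show port-channel load-balance”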

So how does VXLAN work?

The VXLAN-enabled switch (the Nexus 1000v VEM in my example below) learns the VM’s MAC address and the assigned VXLAN ID; it then encapsulates the frame according to the port-profile the VM is assigned to.
When the VM first comes online, the VEM assigns it to a defined multicast group, which carries all Broadcast, Unknown unicast and Multicast traffic (B/U/M). Known unicasts are sent directly to the correct destination VEM/port.
Although all VMs/tenants are assigned to the same multicast group, the VXLAN segment IDs ensure traffic is only delivered within the same VXLAN, thus maintaining tenant separation.
The resulting VXLAN “tunnels” terminate at either end on the VXLAN-enabled switches the VMs/servers are connected to. These switches are referred to as VXLAN Tunnel End Points (VTEPs).

Figure 1 below shows the VXLAN encapsulation (Wrapper) put around the original Ethernet frame.

Figure 1 VXLAN Encapsulation

VXLAN Packet

The outer IP addresses added by the VEM are those of the VTEPs. A VTEP can be a virtual switch residing in a hypervisor, like the Nexus 1000v, or a logical switch residing in a physical switch.
If you want to “break out” of the VXLAN and have your VM talk to a bare metal device or a gateway for routing, then a VXLAN gateway is required. This VXLAN gateway has an interface in the VXLAN and an interface in the classical Ethernet VLAN, and bridges between the two.
Examples of VXLAN gateways are the Cisco ASR1000v/CSR1000v or the VXLAN Gateway Services Module for the Nexus 1110/1010 Virtual Services Appliance. Some VXLAN enabled physical switches are also capable of providing VXLAN gateway functionality.
As mentioned above, VXLAN relies on having an IP multicast-enabled network between VTEPs.
There are two Cisco (non-IETF) enhancements which negate the need for an IP multicast-enabled network:
1) Head-end software replication.
The VTEP (the Nexus 1000v in my example) sends a copy of the B/U/M traffic via unicast to all possible VTEPs on which the destination MAC could be located (this works well for smaller deployments).

2) The second solution relies on the control plane of the Nexus 1000V virtual switch, the Virtual Supervisor Module (VSM), to distribute the MAC locations of the VMs to the Nexus 1000V Virtual Ethernet Module (VEM, or the data plane), so that all packets can be sent in unicast mode. While this solution seemingly conflicts with the VXLAN design objective of not relying on a control plane, it provides an optimal solution within Nexus 1000V-based virtual network environments. Compatibility with other VXLAN implementations is maintained through IP Multicast, where required.
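
For reference, the unicast-only (head-end replication) option is enabled per bridge domain on the Nexus 1000v. If memory serves, the syntax is along the following lines, but do check the VXLAN configuration guide for your N1Kv release:

bridge-domain VXLAN5000_TENANT1
segment id 5000
segment mode unicast-only

With this set, no multicast group is needed for that bridge domain; the VEM replicates B/U/M traffic to each remote VTEP as unicast.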

VXLAN Configuration example:

Physical Topology

Logical Topology

VXLAN Logical Topology

First, ensure IP multicast is enabled on the switch and the SVI interfaces.

ip pim sparse-dense-mode (on the L3 interfaces)
ip pim bidir-enable (recommended, as any endpoint could be a sender or receiver)
ip pim send-rp-announce Loopback0 scope 16 bidir (sets the switch up as an RP)
ip pim send-rp-discovery Loopback0 scope 16

Verify with “sh ip pim interface” and “sh ip pim rp mapping”
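
For clarity, the PIM interface command above goes under the SVIs that the VTEP vmknics sit in. Something like the following, with addressing that lines up with the VTEP IPs used later in this post (the interface numbers and addresses are just my lab assumptions, and the syntax is IOS-style to match the commands above; NX-OS uses ip pim sparse-mode and a slightly different RP configuration):

interface Vlan1001
 ip address 10.200.1.1 255.255.255.0
 ip pim sparse-dense-mode

interface Vlan1002
 ip address 10.200.2.1 255.255.255.0
 ip pim sparse-dense-mode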

On Cisco Nexus 1000v VSM

feature segmentation (enables the VXLAN feature; requires the Advanced license)

bridge-domain VXLAN5000_TENANT1
segment id 5000
group 239.1.2.3

Create the Layer 3 control interface uplink port-profiles for the VEMs

port-profile type vethernet Control_Uplink_1001
capability l3control
capability vxlan
vmware port-group
switchport mode access
switchport access vlan 1001
no shutdown
system vlan 1001
state enabled

port-profile type vethernet Control_Uplink_1002
capability l3control
capability vxlan
vmware port-group
switchport mode access
switchport access vlan 1002
no shutdown
system vlan 1002
state enabled
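
For completeness, the VEM also needs a type ethernet uplink port-profile that trunks the control/VTEP VLANs up to the physical switch. A minimal sketch (the profile name, VLAN range and mac-pinning choice are just my lab assumptions):

port-profile type ethernet VEM_UPLINK
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1001-1002
channel-group auto mode on mac-pinning
no shutdown
system vlan 1001-1002
state enabled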

Create the Port-Profile the VMs will connect to:

port-profile type vethernet VXLAN_5000_Tenant1
switchport mode access
switchport access bridge-domain VXLAN5000_TENANT1
vmware port-group
no shutdown
state enabled

Verify on the VSM with:
show bridge-domain

Verify on the upstream switch with:
sh ip mroute 239.1.2.3

First, test with both VMs on the same host/port-group, then vMotion VM2 to ESX02.

VXLAN Packet Walk

Let’s take the above example and do a ping from VM1 (MAC1) on ESX01 to VM2 (MAC2) on ESX02:

1. Virtual machine VM1 on ESX01 sends an ARP request with the destination MAC address FF:FF:FF:FF:FF:FF (broadcast).

2. The VTEP (VEM) on ESX01 encapsulates the Ethernet broadcast frame in a UDP packet with the multicast address “239.1.2.3” as the destination IP address and its own VTEP address “10.200.1.50” as the source IP address.

3. The physical network delivers the multicast packet to the hosts that joined the multicast group address “239.1.2.3”.

4. The VTEP on ESX02 receives the encapsulated packet. Based on the outer and inner header, it makes an entry in the forwarding table that shows the mapping of the virtual machine MAC address and the VTEP. In this example, the virtual machine MAC1 running on ESX01 is associated with VTEP IP “10.200.1.50”.

5. The VTEP also checks the segment ID or VXLAN logical network ID (5000) in the external header to decide if the packet has to be delivered on the host or not.

6. The packet is de-encapsulated and delivered to the virtual machines connected on that logical network VXLAN 5000.

7. Virtual Machine MAC2 on ESX02 responds to the ARP request by sending a unicast packet with Destination Ethernet MAC address as MAC1.

8. After receiving the unicast packet, the VTEP on Host 2 performs a lookup in the forwarding table and gets a match for the destination MAC address “MAC1”.

9. The VTEP now knows that to deliver the packet to virtual machine MAC1 it has to send it to VTEP with IP address “10.200.1.50”.

10. The VTEP creates a unicast packet with “10.200.1.50” as the destination IP address and sends it out.

11. The packet is delivered to ESX01.

12. The VTEP on Host 1 receives the encapsulated packet. Based on the outer and inner header, it makes an entry in the forwarding table that shows the mapping of the virtual machine MAC address and the VTEP. In this example, the virtual machine MAC2 running on ESX02 is associated with VTEP IP “10.200.2.50”.

13. The VTEP also checks segment ID or VXLAN logical network ID (5000) in the external header to decide if the packet has to be delivered on the host or not.

14. The packet is de-encapsulated and delivered to the virtual machine connected on that logical network VXLAN 5000.
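
If you want to watch this learning happen in the lab, a couple of quick checks tie it back to the configuration above (your output will obviously vary):

On the upstream switch:
sh ip igmp groups 239.1.2.3
sh ip mroute 239.1.2.3

On the VSM:
show bridge-domain
show mac address-table

You should see both VTEP IPs joined to the multicast group, and the VM MACs learned against the relevant VEM modules.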

I will do a video walkthrough on how to set up VXLAN using my Cisco UCS, Nexus 1000v and Nexus 5000 lab and post it here when done.

Thanks for stopping by and look after that Datacenter of yours :-)

Posted in SDN | 2 Comments

What does it mean to be a Cisco Champion?

Anyone with more than a passing familiarity with Twitter will no doubt have seen a hashtag whizz past entitled #CiscoChampion, or even #CiscoChampion(s) (more on the latter later).

So what does this mean? Well, you may be familiar with other vendors’ advocacy programs such as “EMC Elect” or VMware “vExpert”; “Cisco Champion” is Cisco’s.

There are several “flavors” of Cisco Champion; I, for example, am humbled and proud to be a Cisco Champion for Data Center.

How did I become a Cisco Champion? Well, you have to be active in the social community and be willing to “give back”, offering those in the community the benefit of your knowledge and experience. What form this takes is not fixed, but it could be a blog, Twitter or the Cisco Community sites, or a combination of all three.

So what’s changed for me since becoming a Cisco Champion? Well, quite a lot really: not only do I feel more empowered, but I also now really have a voice (or at least one that people listen to), as well as getting a lot more “Cisco Love”; not to say that I didn’t get any before, as working for a Gold Partner I certainly get my share.

But since becoming a Cisco Champion this “Love” has increased to a whole new level!

By “Cisco Love” I mean access to betas, inside scoops, early blogger briefings, guest blog spots, participation in great events and promotions, etc., etc.

So does this mean Cisco have now bought my Soul, and that I am no longer able to blog objectively?

Is it like Cisco have driven my punk ass to the Vets and laid me on that table and made the unkindest cut of all? Removing any last shred of independent thought or dissidence.

Hell no! As a Cisco Champion and advocate, our objectivity is exactly what Cisco require: they want it, they need it, they crave it.

After all, constructive criticism from an advocate, in any walk of life, is an opinion really worth listening to.

All Cisco Champions are told to continue to be themselves in their online social activities, and make it clear they are NOT Cisco representatives.

And as for the #CiscoChampion(s) hashtag, well, it’s because we are plural (i.e. there’s more than one of us) but the logo is singular, and we have to match the logo. Nothing more interesting than that, I’m afraid.

So a big thank you to Amy Lewis @CommsNinja, Rachel Bakker @RBakker, Nancy Rivas @nrrivas07 and Feyi Adegbohun @Efannie for all your help and support; I’m loving this journey so far!

Nominations for the latest flavor of Cisco Champion close on January 24th so if you know anyone who could be a Cisco Champion for Enterprise Networks then recommend them at the below link.

http://blogs.cisco.com/enterprise/17-signs-you-could-be-a-ciscochampion-for-enterprise-networks/

You can find more information on the Cisco Champions program here: http://www.cisco.com/web/about/facts_info/champions.html

Cisco Champion

Posted in Cisco Champion | 2 Comments

Cisco UCS Boot From SAN Video Walkthrough

Hi All

I’ve been meaning to do this video for ages and finally had some time to do it.

The most common questions I tend to get are generally around booting a Cisco UCS server from the SAN. Now, in order to take full advantage of the statelessness of Cisco UCS servers, we certainly want to avoid any dependency on a particular blade, and SAN boot is a great way to do it. And in Cisco UCS it’s an absolute dream to set up.

But I’ve decided not to just stop at the Cisco UCS config, but also to include the SAN switch config and the array config. Why, you may ask? Well, in this day of ever-increasing convergence, roles are merging and silos crashing, so it makes sense to have a good overview of the entire process. And even if all these elements are still handled by separate admins in your environment, it’s still great to have an appreciation of the information they need, so you can work more closely and efficiently with them. I have seen too many cases, when trying to troubleshoot a boot from SAN issue (or any issue for that matter), where different admins did not communicate with each other and used different (not wrong) naming conventions, etc., and it just made end-to-end troubleshooting that bit harder. The more consistent we can make things by working together and sharing information, the easier everyone’s job becomes.
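
To give a flavour of the SAN switch piece, for boot from SAN the zoning on an MDS or Nexus SAN switch essentially boils down to pairing each vHBA pWWN with the array target pWWN in a zone and activating the zoneset. A minimal sketch with made-up names and WWPNs (your VSAN numbers, pWWNs and naming convention will differ):

zone name ESX01_vHBA0_ArrayA_SPA vsan 10
member pwwn 20:00:00:25:b5:0a:00:01
member pwwn 50:06:01:60:3e:a0:01:23

zoneset name FABRIC_A vsan 10
member ESX01_vHBA0_ArrayA_SPA

zoneset activate name FABRIC_A vsan 10

The vHBA pWWN comes straight from the UCS Service Profile, which is exactly the sort of information the server and SAN admins need to be sharing, with a consistent naming convention.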

Anyway, grab yourself a Scotch, sit back and let the next 60 mins wash over you; it always goes down smooth.

Regards

Colin

Posted in General | 20 Comments

UCS Manager 2.2 (El Capitan) Released

Last week saw the latest major update to UCS Manager, in the form of version 2.2, codenamed “El Capitan”.

It certainly doesn’t seem a year since I wrote the summary for the then eagerly awaited 2.1 “Delmar” release, but I guess time really does fly when you’re having fun!

UCSM 2.2 will be the last major version to include support for Generation 1 hardware: 6100 FIs, the 2104 IOM, M1 servers and M1-only adapters. As such it is expected to be a long-lived release, so expect patches and major bug fixes for approximately 12 months longer than normal major releases (circa 4 years).

Remember that Cisco offer the “UCS Advantage Trade-In Program”, which provides an easy path to upgrade Generation 1 hardware to the latest versions.

UCSM 2.2 Features Overview

UCSM 2.2 Features

Fabric Enhancements

  • Fabric Scaling:
    As you may expect, UCSM 2.2 supports more of most things: VLANs, VIFs, IGMP groups and adapter endpoints (physical network adapters across all servers in the UCS domain). This is possible because UCSM 2.2 syncs to an updated underlying NX-OS code base. Up until now I have never done a design constrained by any of the above, but more is always better, right? :-) The table below shows the config maximums for UCSM 2.2 and previous releases.

Fabric Maximums

  • IPv6 Management Support:
    All 3 management IP addresses (2 physical and 1 cluster) are now able to have IPv6 addresses, as are the new CIMC “in-band” addresses. Services such as NTP and DNS are also reachable via IPv6.
  • Uni-Directional Link Detection (UDLD) Support:
    Rapidly detects, and optionally disables/resets, links that have become unidirectional. We’ve had this for a long time in Nexus, and now it’s an option on the Fabric Interconnects. It can be enabled via either a global or a per-port policy.
  • User Space NIC (usNIC) for Low Latency:
    Designed for High Performance Computing (HPC) applications that require a low-latency fabric and host adapters. usNIC allows latency-sensitive MPI (Message Passing Interface) applications running on bare-metal host OSes to bypass the kernel (supported on 6200 FIs with “Sereno”-based adapters only: VIC 1240, VIC 1280, VIC 1225).
  • Virtual Machine Queue (VMQ) Support:
    Enables support for Microsoft Windows VMQs on the Cisco UCS VIC adapter and improves VM I/O performance in cases where VM-FEX cannot be used for I/O acceleration.

Operational Enhancements

  • Direct Connect C-Series To FI without FEX:
    Probably one of the biggest enhancements for me, this one, and one Cisco have been gradually working towards. With UCSM 2.2 it is now possible to directly connect a C-Series rack mount to the Fabric Interconnect with a single cable, without the need for a 2232PP FEX. You still have the option of using an external FEX, which would still be the way to go for a solution with a larger number of integrated C-Series servers, as there will come a point where buying several 1:1 FI port licences for C-Series connections will be less cost effective than just buying the 2232PP FEX. But for an environment with just 1 or 2, the “No FEX” option is a clear winner.
C-Series no FEX Option

  • Two-Factor Authentication for UCS Manager Logins:
    This is one to make the security admin happy: support for strengthened UCSM authentication (requiring a second factor of authentication after the username and password), such as RSA SecurID or Symantec VIP Enterprise Gateway.
  • VM-FEX for Hyper-V Mgmt with Microsoft SCVMM:
    VM-FEX support on Hyper-V hosts was added in UCSM 2.1, but it lacked centralized VM network management (SCVMM integration). A Cisco provider plug-in gets installed into SCVMM, fetches all network definitions from UCSM and periodically polls for configuration updates.
VM-FEX Hyper-V SCVMM

  • CIMC In-band Management:

If you have ever been a bit frustrated that loading a huge bare metal ISO via the CIMC took a while because you had to go via the 1Gbps FI management port, then this should make you happier. With UCSM 2.2 it is now possible to optionally access the CIMC of M3 blades over the same in-band network as the data path, giving access to all those lovely 10Gb uplinks. You may also have a requirement to separate UCSM management traffic from CIMC management traffic; well, now you can. CIMC out-of-band access is the same as it was; you just have the option of connecting to either the in-band or out-of-band CIMC address. CIMC in-band access supports the KVM console, vMedia and Serial over LAN (SoL).

In-band CIMC

  • Server Firmware Auto Sync:
    Server firmware can now be automatically synchronized and updated to the version configured in the new “Default Host Firmware Package”, without the need for an associated Service Profile.

Compute Enhancements

  • Secure Boot:
    Establishes a chain of trust on a Secure Boot-enabled platform to protect it from executing unauthorized BIOS images.
    UEFI Secure Boot utilizes the UEFI BIOS to authenticate UEFI images before executing them.
    The UCSM GUI will expose:
    * Boot Mode radio button (Legacy/UEFI)
    * Boot Security check box (visible only when UEFI is selected)

    Secure Boot

  • Enhanced Local Storage Management:
    Thanks to a new out-of-band communication channel developed between the CIMC and the RAID controller, there are now:
    * Enhanced monitoring capabilities for local storage
    * Real-time monitoring of local storage without the need for host-based utilities
  • Precision Boot Order Control:
    Enables the creation of boot policies with multiple local boot devices.
    Provides precision control over the actual boot order.
Precision Boot

  • FlexFlash (Local SD Card) Support:
    Customers can now manage the FlexFlash Controller configuration from UCSM.
  • Flash Adapters and HDD Firmware Management:
    UCSM Firmware bundles now contain Flash Adapter firmware and local disk firmware.

  • Trusted Platform Module (TPM) Inventory:
    Allows access to the inventory and state of the TPM module from UCSM (without having to access the BIOS via the KVM).

TPM
  • DIMM Blacklisting and Correctable Error Reporting:
    Improved accuracy at identifying “degraded” DIMMs. DIMM blacklisting, if enabled, will forcefully map out a DIMM that hits an uncorrectable error during host CPU execution.

Well, that’s about it; hope there is something in this update for you, there sure is for me :-)

Posted in Product Updates | 14 Comments