VMworld 2017 Barcelona

There is a saying that “The early bird catches the worm!”, but I say it’s the second mouse that gets the cheese! In short, apologies for the lateness of this post 🙂

This year was my first year as a VMware vExpert, and through that program VMware kindly offered me a “blogger pass” to attend VMworld Barcelona.

Having a great interest in software defined networking in general, and VMware NSX in particular, I was keen to focus on what VMware are doing in this space, along with their general Software-Defined Data Center (SDDC) strategy and offerings. So I have picked a couple of topics that were of particular interest to me.

VMware Cloud Foundation:

We are all familiar with vendors like VCE (now Dell EMC Converged Platforms) who create Vblocks, fully qualified converged infrastructures in which all components of the system (compute, storage, networking and virtualisation) are rigorously tested and version controlled to ensure optimal compatibility and performance. Well, VMware have created VMware Cloud Foundation (VCF), which does the same thing for the whole SDDC, whether deployed on a private, public or hybrid cloud.

VCF combines VMware vSphere (Compute), vSAN (Storage) and NSX (Networking and Security) into a tightly integrated stack with automation, upgrades and life cycle management via SDDC Manager.

The benefits and value of adopting a VCF solution include:

  • Accelerated time to market resulting from the reduced design, testing and implementation times.
  • Reduced maintenance and Opex from features like one click automated upgrades.
  • Repeatable solution for multi-site deployments.
  • Validated integration with public cloud providers, allowing mobility of workloads between private and public clouds.

You can either buy a fully pre-built SDDC with all the Cloud Foundation software pre-loaded, currently available on the Dell EMC VxRack platform, or you can build your own, as long as you adhere to the VMware Cloud Foundation compatibility guide. I’m sure you’ll all be glad to hear that the Cisco UCS C240 is on there.

Just like a Vblock has its Release Certification Matrix (RCM), a VCF SDDC has its VCF matrix, which details the hardware and software combinations that have been validated for that particular version. Valid upgrade paths to later versions are also detailed in the release notes of the particular VCF version.
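To make the idea a little more concrete, here is a minimal Python sketch of checking a bill of materials and an upgrade path against a compatibility matrix. The matrix contents, version strings and upgrade paths below are invented purely for illustration and are not taken from any real VCF matrix.

```python
# Hypothetical example: validating a bill of materials against a version
# matrix, in the spirit of the VCF compatibility guide and release notes.
# All version strings below are invented for illustration only.

VCF_MATRIX = {
    "2.2": {"vsphere": "6.5 U1", "vsan": "6.6.1", "nsx-v": "6.3.3"},
    "2.3": {"vsphere": "6.5 U1", "vsan": "6.6.1", "nsx-v": "6.3.5"},
}

SUPPORTED_UPGRADES = {"2.2": ["2.3"]}  # valid upgrade paths per release notes


def validate_bom(vcf_version: str, bom: dict) -> list:
    """Return the components whose versions drift from the matrix."""
    expected = VCF_MATRIX[vcf_version]
    return [c for c, v in bom.items() if expected.get(c) != v]


def can_upgrade(current: str, target: str) -> bool:
    """Check whether an upgrade path is documented for this version."""
    return target in SUPPORTED_UPGRADES.get(current, [])


if __name__ == "__main__":
    bom = {"vsphere": "6.5 U1", "vsan": "6.6.1", "nsx-v": "6.3.3"}
    print("Drift:", validate_bom("2.2", bom))        # [] means fully compliant
    print("2.2 -> 2.3 supported:", can_upgrade("2.2", "2.3"))
```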

vRealize Network Insight:

One of the largest customer concerns when looking to migrate from a traditional “black list” network to a software defined “white list” model is: will my application still work in the new environment?

In the traditional black list model, all communication is allowed by default unless specifically blocked by a firewall or access control list, whereas in a software defined “white list” model all traffic is denied by default unless specifically permitted. This means that all flows for all applications need to be known and understood, and those flows explicitly allowed in the new software defined environment.
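As a toy illustration of the difference (not tied to any vendor’s API, with hypothetical rules and flows), the following Python sketch evaluates the same flow under a default-allow and a default-deny policy:

```python
# Toy illustration of "black list" (default allow) vs "white list"
# (default deny) policy evaluation. Rules and flows are hypothetical.

def evaluate(flow, rules, default_action):
    """Return the action of the first matching rule, else the default."""
    for rule in rules:
        if (rule["src"] in (flow["src"], "any")
                and rule["dst"] in (flow["dst"], "any")
                and rule["port"] in (flow["port"], "any")):
            return rule["action"]
    return default_action


web_to_app = {"src": "web-tier", "dst": "app-tier", "port": 8443}
rules = [{"src": "web-tier", "dst": "app-tier", "port": 8443, "action": "allow"}]

# Traditional model: anything not explicitly blocked is allowed.
print(evaluate(web_to_app, rules=[], default_action="allow"))    # allow

# Zero-trust model: the flow must be known and explicitly permitted.
print(evaluate(web_to_app, rules=[], default_action="deny"))     # deny
print(evaluate(web_to_app, rules=rules, default_action="deny"))  # allow
```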

There are of course lots of methods and discovery tools out there that assist with application discovery and dependency mapping, but they all differ greatly in usefulness, functionality and cost.

While at VMworld I was looking into, and having a play with, vRealize Network Insight (vRNI), which came out of the Arkin acquisition in June 2016.

Despite the name, vRNI is not part of the vRealize suite but rather an add-on to VMware NSX, licenced according to the number of NSX CPU licences.

vRNI provides a day 0 assessment, an underlay readiness and health check, so you can be confident the underlay network is healthy, happy and NSX ready. vRNI can then be used to analyse and report on all the traffic within the network, automatically group workloads into security groups, and then create the NSX distributed firewall rules required between those security groups.
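As a rough, hand-rolled approximation of that workflow (observed flows in, grouped allow rules out), here is a short Python sketch. It is not the vRNI or NSX API; the flows, group names and rule format are invented for the example:

```python
# Rough approximation of "analyse flows -> group workloads -> emit rules".
# This is NOT the vRNI/NSX API; flows and group names are invented.
from collections import defaultdict

# Observed flows: (source VM, destination VM, destination port)
flows = [
    ("web-01", "app-01", 8443),
    ("web-02", "app-01", 8443),
    ("app-01", "db-01", 3306),
]

# Workload-to-group mapping, e.g. derived from naming or tags.
groups = {"web-01": "SG-Web", "web-02": "SG-Web",
          "app-01": "SG-App", "db-01": "SG-DB"}

# Aggregate the observed ports per source-group/destination-group pair.
needed_ports = defaultdict(set)
for src, dst, port in flows:
    needed_ports[(groups[src], groups[dst])].add(port)

# Emit one allow rule per group pair; everything else hits the default deny.
for (src_grp, dst_grp), ports in sorted(needed_ports.items()):
    print(f"allow {src_grp} -> {dst_grp} tcp/{sorted(ports)}")
print("default deny any -> any")
```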

 

The security advantages of a zero-trust/least-privilege network are well understood, and only allowing the flows you need on a network is certainly the way forward. I am in the process of writing a full blog review of vRNI and as such will not elaborate further on it in this post.

 

CCIE Reception

It was also great to see VMware recognising and putting on an event for Cisco CCIEs. In a world of ever-growing automation and GUIs, it is a common topic of debate whether you still need to understand all this “networking stuff” that goes on under the covers. As someone who has been involved in many software defined / overlay networking issues, my answer to that is: absolutely! Having a good, strong foundation of network knowledge and troubleshooting skills will only help you when designing and troubleshooting a network of any description.

 

Highlight

While there were several great sessions and receptions, I guess the highlight of my VMworld was that a small group of “NSX VIPs” were given the opportunity of an open-forum round table with VMware CEO Pat Gelsinger, where we could ask any questions we liked. I was very impressed with Pat’s technical knowledge regarding many of the products in the VMware portfolio, particularly NSX.

One of the topics of discussion was the evolution of NSX-T (VMware’s NSX offering for multi-cloud, multi-hypervisor, and container environments) and its eventual replacement of NSX-V (the VMware-only product).

While this transition will certainly happen over some time, the majority of R&D and new features will be targeted at NSX-T.

All in all a great experience at VMworld Europe 2017!



My Journey with VMware NSX

A few times recently I have been asked how I went about expanding my skill set to include software defined networking solutions after being a “traditional networker” for the past 20 years. So here is my story so far.

Three years or so ago, having achieved my 2nd CCIE, I was looking for my next challenge. Software Defined Networking (SDN) was already gaining momentum, so I looked at what I could do in that space. I already had a fairly good handle on Cisco Application Centric Infrastructure (ACI), but at the time there were no certification tracks geared around Cisco ACI.

VMware NSX seemed the obvious choice. I was already familiar with the Nicira solution prior to the VMware acquisition, and NSX, being a Network Function Virtualisation (NFV) based solution, uses constructs that are very easy for “traditional networkers” to understand: if you know what a physical router does and how to configure it, then it isn’t much of a departure to understand and configure a Distributed Logical Router (DLR) in NSX, and the same goes for NSX logical switches and firewalls.

If you’re familiar with setting up emulators like GNS3 and Cisco VIRL, then again you’re already adept at setting up virtual networks, so the gap to understanding and configuring NSX really isn’t that much to bridge.

Like most people trying to learn something new, I started playing with NSX in a home lab environment; just a couple of low-grade dual-core servers with 64GB RAM in each was plenty to create a nested NSX environment. However, I quickly found the VMware Hands-on Labs (http://labs.hol.vmware.com/HOL/catalogs/catalog/681) were so available and functional that I pretty much just used them instead.

I progressed to VCP-NV (VCP Network Virtualisation) and then attended the two-week “NSX Ninja” boot camp, on the back of which I took and passed (2nd time round) the VCIX-NV (Implementation Expert), an intense 3-hour practical assessment on building and troubleshooting an NSX solution.

The NSX Ninja course was great! Taught by Paul Mancuso (@pmancuso) and Chris McCain (@hcmccain), it gave a great insight into the process of submitting and defending a VCDX-NV (Design Expert) design. VCDX-NV is my main goal for this year; it requires the submission of an NSX design, which you then defend to a panel of experts. The NSX Ninja course was possibly one of the best courses I have ever attended, purely for the amount of interaction and constructive feedback.

Of course, what also stood me in good stead was the invaluable experience I had picked up, having spent three years working with NSX day in and day out and having now delivered three large production deployments in multi-vCenter, cross-site topologies. No matter how much training you do, nothing burns in that knowledge quite like working in the field, delivering solutions that meet real customers’ business requirements.

As with most expert-level certifications, it is not reaching the destination that makes you an expert; it’s what you learn and the scars you pick up along the path.

This year I was very proud to be selected as a vExpert and am very much looking forward to participating in the program.

Good luck on your own journeys.

Colin


Introducing Cisco UCS S-Series

Today Cisco announced the Cisco UCS S-Series line of storage servers.

Now the more eagle-eyed among you may think that the new Cisco UCS S3260 Storage Server looks very much like the Cisco UCS C3260 Rack Server (Colusa), and you wouldn’t be too far off; however, the S3260 has been well and truly “pimped” to address the changing needs of a modern storage solution, particularly as an extremely cost-effective building block in a hybrid cloud environment.

The C3160/C3260 was particularly suited to large, cost-effective, cooler storage solutions, that is to say the retention of less active or inactive data on a long-term or indefinite basis at low cost, with use cases such as archive or video surveillance. The fact is data is getting bigger and warmer all the time, and it shows no signs of slowing down anytime soon. Even on these traditionally colder storage solutions, the requirement for real-time analytics on the data demands an ever-increasing amount of compute coupled with the storage.

So Cisco have created this next generation of Storage Server to meet these evolving needs.

If there is a single word to describe the new Cisco UCS S-Series it is “flexibility”, as it can be configured for:

  • Any performance: right-sized to any workload
  • Any capacity: scale to petabytes in minutes
  • Any storage: disk, SSD or NVMe
  • Any connectivity: unified I/O and native Fibre Channel

Highlights

  • Fully UCS Manager Integrated

[Figure: UCS Manager]

Since UCS Manager 3.1 (Grenada), all Cisco UCS products have been supported under a single code release, including the S-Series storage servers (UCSM 3.1.2).

  • Modular Components to allow independent scaling.

As we know, different components generally have different development cycles, so the S-Series storage servers are built with a modular architecture to allow components to be upgraded independently. For example, as and when the 100Gbps I/O module is released it’s a simple I/O module replacement; similarly, when the Intel Skylake Purley platform (E5-2600 v5) is available it’s just a server module upgrade.

  • Up to 600TB in a 4U Chassis, then scales out beyond that in additional 4U Chassis
  • 90TB SSD Flash

As can be seen below, the Cisco UCS S3260 can house up to 2 x dual-socket M3 or M4 server nodes, but you also have the option of using a single server node and then adding either an additional disk expansion module or an additional I/O expansion module.

[Figure: Cisco UCS S3260 overview]

 

Server Node Options.

[Figure: Server node options]

 

System I/O Controller

[Figure: System I/O Controller]

The two fabric (SIOC) modules map to the server nodes, i.e. Fabric Module 1 to Server Node 1 and Fabric Module 2 to Server Node 2. This provides up to 160Gb of bandwidth to each 4U chassis.

 

Disk Expansion Module.

[Figure: Disk expansion module]

Adds up to another 40TB of storage capacity to reach the 600TB maximum, with support for 4TB, 6TB, 8TB and 10TB drives.

 

I/O Expansion Module

In order to allow the maximum amount of flexibility with regard to connectivity or acceleration, Cisco offer an I/O expansion module providing additional Ethernet or Fibre Channel (target and initiator) connectivity options.

Flash memory from Fusion-io or SanDisk is also supported.

[Figure: I/O expansion module]

 

I/O Expansion Module 3rd Party Options.

 

[Figure: I/O expansion module third-party options]

 

Cisco UCS S-Series Configuration Options.

The figure below shows the various configuration options depending on how you wish to optimize the server.

[Figure: S3260 chassis configuration options]

 

Where does Cisco UCS S-Series fit?

Cisco are positioning the S-Series as a pure infrastructure play; they are not bundling any Software Defined Storage (SDS) software on it, as that space is filled by the Cisco HyperFlex solution. Perhaps, though, the S-Series could be an option for a huge storage-optimised HyperFlex node in the future.

That does not, of course, preclude you from running your own SDS software on the S3260, like VMware vSAN for example.

And for clients that want an off-the-shelf, pre-engineered solution, offerings like Vblock/VxBlock or FlexPod are still there to fill that need.

[Figure: Portfolio positioning]

One thing’s for sure, there is still lots of innovation planned in this space; in the words of Captain Jean-Luc Picard, “Plenty of letters left in the alphabet.”

For more information, refer to the Cisco UCS S-Series Video Portal.

Regards

Colin

 


Sizing your Cisco HyperFlex Cluster

Calculating the usable capacity of your HyperFlex cluster under all scenarios is something worth taking the time to fully understand.

The most significant design decision to take is whether to optimise your HyperFlex cluster for capacity or availability; as with most things in life, there is a trade-off to be made.

If you size your cluster for maximum capacity, you will obviously lose some availability; likewise, if you optimise your cluster for availability, you will not have as much usable capacity available to you.

The setting that determines this is the Replication Factor.

As the replication factor is set at cluster creation time and cannot be changed once set, it is worth thinking about. In reality you may decide to choose a different replication factor based on the cluster use case, e.g. availability-optimised for production and capacity-optimised for test/dev.

It is absolutely fine to have multiple clusters with different RF settings within the same UCS Domain.

So let’s look at the two replication factors available:

[Figure: Replication Factors]

As you can no doubt determine from the above, the more nodes you have in your cluster, the more available it becomes and the better able it is to withstand multiple failure points. In short, the more hosts and disks there are across which to stripe the data, the less likely it is that multiple failures will affect replicas of the same data.
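As a back-of-the-envelope illustration of that trade-off, the short Python sketch below shows how the same raw capacity divides under each replication factor; the failure-tolerance figures are the commonly quoted ones for RF2 and RF3, so do check them against the official documentation for your release:

```python
# Back-of-the-envelope view of the capacity/availability trade-off.
# Failure-tolerance values are the commonly quoted ones for HyperFlex
# RF2/RF3; confirm against the official docs for your release.

RAW_TB = 50  # e.g. 5 nodes x 10TB raw each

for rf, failures_tolerated in ((2, 1), (3, 2)):
    usable = RAW_TB / rf  # one copy is usable data, rf - 1 copies are overhead
    print(f"RF{rf}: ~{usable:.1f}TB usable of {RAW_TB}TB raw, "
          f"tolerates {failures_tolerated} simultaneous failure(s)")
```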

OTHER FACTORS THAT AFFECT SIZING.

METADATA

There is an 8% capacity overhead that needs to be factored in for the metadata and ENOSPC* buffer.

*ENOSPC: no space available for writing to the storage device.

DEDUPLICATION AND COMPRESSION

Now for some good news: what the availability gods taketh away, HyperFlex can giveth back! Remember that “always on, nothing to configure, minimal performance impact” deduplication and compression I mentioned in my last post? Well, it really makes great use of that remaining available capacity.

VDI VMs will take up practically no space at all, giving circa 95% capacity savings. For persistent user data and general Virtual Server Infrastructure (VSI) VMs, the capacity savings are still a not-to-be-sniffed-at 20-50%.

N+1

The last figure to bear in mind is our old friend N+1, so factor in enough capacity to cope with a single node failure, planned or otherwise, while still maintaining your minimum capacity requirement.

Example.

As in most cases, it may be clearer if I give a quick manual example.

I’m only going to base this example on the storage requirement, as the main point of this post is to show how the Replication Factor (RF) affects capacity and availability. Obviously in the real world vCPU-to-physical-core ratios and vRAM requirements would also be worked out and factored in.

But let’s keep the numbers nice and simple (and again not necessarily “real world”) and plan for an infrastructure to support 500 VMs, each requiring 50GB of storage.

So 500 x 50GB gives us a requirement of 25TB capacity.

So let’s say we spec up 5 x HyperFlex nodes, each contributing 10TB of raw capacity to the cluster, which gives us 50TB raw; but let’s allow for that 8% metadata overhead (4TB):

50TB – 4TB = 46TB

So we are down to 46TB. We now need to consider our Replication Factor (RF); let’s say we leave it at the default of RF3, which means we need to divide our remaining capacity by our RF, in this case 3, to allow for the two additional copies of all data blocks:

46TB / 3 = 15.33TB

So that puts us down to 15.33TB actual usable capacity in the cluster.

So remember that 25TB of capacity we need for our 500 virtual servers? Well, let’s be optimistic, assume we will get the upper end of our dedupe and compression savings, and reduce that by 50%:

25TB / 2 = 12.5TB

And that’s not even taking into account thin provisioning, which, although variable and dependent on a particular client’s comfort factor, would realistically reduce this capacity requirement by a further 30-50%.

So let’s now say our 12.5TB on-disk capacity is realistically likely to be 6.25-8.75TB, meaning we are OK with our available 15.33TB on our 5-node cluster.

So the last consideration is: could we withstand a single node failure or planned host upgrade and still meet our minimum storage requirement?

We could either work out all the above again using just 4 nodes (contributing a total of 40TB raw to the cluster), or, since we know that 5 nodes contribute a total of 15.33TB usable, we know that each node contributes approx. 3.06TB of usable capacity, so if we take a node out of the cluster:

15.33TB – 3.06TB = 12.27TB usable remaining, which is still above our 8.75TB realistic requirement.
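To pull the worked example together, here is the same arithmetic as a small, self-contained Python sketch; the 8% metadata buffer and 50% dedupe/compression saving are simply the assumptions used above, not fixed HyperFlex constants:

```python
# The worked example above as a small, self-contained calculation.
# The 8% metadata buffer and 50% dedupe/compression saving are just the
# assumptions used in the example, not fixed HyperFlex constants.

def usable_capacity_tb(nodes, raw_per_node_tb, rf, metadata_overhead=0.08):
    """Usable cluster capacity after the metadata buffer and replication."""
    raw = nodes * raw_per_node_tb
    return raw * (1 - metadata_overhead) / rf


def effective_requirement_tb(vm_count, gb_per_vm, dedupe_saving=0.5):
    """On-disk requirement after dedupe/compression savings."""
    raw_need_tb = vm_count * gb_per_vm / 1000
    return raw_need_tb * (1 - dedupe_saving)


nodes, raw_per_node, rf = 5, 10, 3
usable = usable_capacity_tb(nodes, raw_per_node, rf)   # ~15.33TB
per_node = usable / nodes                               # ~3.06TB per node
need = effective_requirement_tb(500, 50)                # 12.5TB

print(f"Usable with {nodes} nodes: {usable:.2f}TB")
print(f"Requirement after savings: {need:.2f}TB")
print(f"Usable after one node failure (N+1): {usable - per_node:.2f}TB")
```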

Luckily, to make all this sizing a walk in the park, a HyperFlex sizing calculator will soon be available on CCO.

Colin
