
My Journey with VMware NSX

A few times recently I have been asked how I went about expanding my skill set to include software defined networking solutions after being a “traditional networker” for the past 20 years. So here is my story so far.

Three years or so ago, having achieved my 2nd CCIE, I was looking for my next challenge. Software Defined Networking (SDN) was already gaining momentum, so I looked at what I could do in that space. I already had a fairly good handle on Cisco Application Centric Infrastructure (ACI), but at the time there were no certification tracks geared around Cisco ACI.

VMware NSX seemed the obvious choice. I was already familiar with the Nicira solution prior to the VMware acquisition, and NSX, being a Network Function Virtualisation (NFV) based solution, uses constructs that are very easy for “traditional networkers” to understand, i.e. if you know what a physical router does and how to configure it, then it isn’t much of a departure to understand and configure a Distributed Logical Router (DLR) in NSX, and the same goes for NSX logical switches and firewalls.

If you’re familiar with setting up emulators like GNS3 and Cisco VIRL, then again you’re already adept at setting up virtual networks, so the gap to understanding and configuring NSX really isn’t that much to bridge.

Like most people when trying to learn something new, I started playing with NSX in a home lab environment; just a couple of low-grade dual-core servers with 64GB of RAM in each were plenty to create a nested NSX environment, but I quickly found the VMware Hands-on Labs (http://labs.hol.vmware.com/HOL/catalogs/catalog/681) were so available and functional that I pretty much just used them instead.

I progressed to the VCP-NV (VCP – Network Virtualisation) and then attended the “NSX Ninja” 2-week boot camp, on the back of which I took and passed (2nd time round) the VCIX-NV (Implementation Expert), an intense 3-hour practical assessment on building and troubleshooting an NSX solution.

The NSX Ninja course was great! Taught by Paul Mancuso (@pmancuso) and Chris McCain (@hcmccain), it gave great insight into the process of submitting and defending a VCDX-NV (Design Expert) design. The VCDX-NV is my main goal for this year; it requires the submission of an NSX design, which you then defend to a panel of experts. The NSX Ninja course was possibly one of the best courses I have ever attended, purely for the amount of interaction and constructive feedback.

Of course, what also stood me in great stead was the invaluable experience I had picked up having spent 3 years working with NSX day in and day out, having now delivered 3 large production deployments in multi-vCenter, cross-site topologies. No matter how much training you do, nothing quite burns in that knowledge like working in the field, delivering solutions that meet real customers’ business requirements.

As with most expert level certifications it is not reaching the destination that makes you an expert, it’s what you learn and the scars you pick up along the path.

This year I was very proud to be selected as a vExpert and am very much looking forward to participating in the program.

Good luck on your own journeys.

Colin

Posted in SDN, VMware NSX

Introducing Cisco UCS S-Series

Today Cisco announced the Cisco UCS S-Series line of storage servers.

Now the more eagle-eyed among you may think that the new Cisco UCS S3260 Storage Server looks very much like the Cisco UCS C3260 Rack Server (Colusa), and you wouldn’t be too far off. However, the S3260 has been well and truly “pimped” to address the changing needs of a modern storage solution, in particular as an extremely cost-effective building block in a Hybrid Cloud environment.

The C3160/C3260 were particularly suited to large, cost-effective cooler storage solutions, that is to say the retention of less-active or inactive data on a long-term or indefinite basis at low cost, with use cases such as archive or video surveillance. The fact is data is getting bigger and warmer all the time, and it shows no signs of slowing down anytime soon. Even on these traditionally colder storage solutions, the requirement for real-time analytics demands an ever-increasing amount of compute coupled with the storage.

So Cisco have created this next generation of Storage Server to meet these evolving needs.

If there is a single word to describe the new Cisco UCS S-Series, it is “Flexibility”, as it can be configured for:

Any Performance: right-sized to any workload
Any Capacity: scale to petabytes in minutes
Any Storage: disk, SSD or NVMe
Any Connectivity: unified I/O and native Fibre Channel

Highlights

  • Fully UCS Manager Integrated


Since UCS Manager 3.1 (Grenada), all Cisco UCS products are supported under a single code release, including the S-Series storage servers (UCSM 3.1(2)).

  • Modular Components to allow independent scaling.

As we know, different components generally have different development cycles, so the S-Series storage servers are built with a modular architecture that allows components to be upgraded independently. For example, as and when the 100Gbps I/O module is released it’s a simple I/O module replacement; similarly, when the Intel Skylake Purley platform (E5-2600 v5) is available it’s just a server module upgrade.

  • Up to 600TB in a 4U Chassis, then scales out beyond that in additional 4U Chassis
  • 90TB SSD Flash

As can be seen below, the Cisco UCS S3260 can house up to 2 x dual-socket M3 or M4 server nodes, but you also have the option of using a single server node and adding either an additional disk expansion module or an additional I/O expansion module.


Server Node Options.


System I/O Controller


The two System I/O Controller (SIOC) modules map one-to-one to the server nodes, i.e. SIOC 1 to Server Node 1 and SIOC 2 to Server Node 2. This provides up to 160Gbps of bandwidth to each 4U chassis.

Disk Expansion Module.


Adds up to another 40TB of storage capacity, taking the chassis to its 600TB maximum, with support for 4TB, 6TB, 8TB and 10TB drives.
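As a quick sanity check of those headline figures (my own arithmetic, not a Cisco spec sheet): with the largest 10TB drives quoted, a 40TB expansion module implies 4 drive slots, and the 600TB chassis maximum implies 56 top-loading drives plus those 4, as the short sketch below shows.

```python
# Back-of-the-envelope check of the quoted S3260 capacity figures (assumed, not from a spec sheet).
drive_tb = 10                  # largest drive size quoted above
expansion_tb = 40              # capacity added by the disk expansion module
chassis_max_tb = 600           # quoted 4U chassis maximum

expansion_drives = expansion_tb // drive_tb                        # 4 drives in the expansion module
top_loading_drives = (chassis_max_tb - expansion_tb) // drive_tb   # 56 drives in the main chassis

print(f"{top_loading_drives} + {expansion_drives} drives x {drive_tb}TB = "
      f"{(top_loading_drives + expansion_drives) * drive_tb}TB")
```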

 

I/O Expansion Module

In order to allow for maximum flexibility with regard to connectivity and acceleration, Cisco offer an optional I/O expansion module that provides additional Ethernet or Fibre Channel (target and initiator) connectivity options.

Flash memory from Fusion-io or SanDisk is also supported.


I/O Expansion Module 3rd Party Options.

Cisco UCS S-Series Configuration Options.

The figure below shows the various configuration options depending on how you wish to optimize the server.


Where does the Cisco UCS S-Series fit?

Cisco are positioning the S-Series as a pure infrastructure play and are not bundling any Software Defined Storage (SDS) software with it, as that space is filled by the Cisco HyperFlex solution, but perhaps the S-Series could be an option for a huge storage-optimized HyperFlex node in the future.

That does not, of course, preclude you from running your own SDS software on the S3260, VMware vSAN for example.

And for clients that want an off-the-shelf, pre-engineered solution, offerings like Vblock/VxBlock or FlexPod are still there to fill that need.


One thing’s for sure, there is still lots of innovation planned in this space; in the words of Captain Jean-Luc Picard, “Plenty of letters left in the alphabet.”

For more information, refer to the Cisco UCS S-Series Video Portal.

Regards

Colin

 

Posted in Product Updates

Sizing your Cisco HyperFlex Cluster

 

Update 31st July 2018

Since my post below, Cisco have released a HyperFlex profiler and sizer tool; for more info on where to get it and how to install and use it, refer to the video below by Joost van der Made.

 

But still feel free to read my original post below, as it will teach you the under-the-covers math.

/End of update


Original Post:

Calculating the usable capacity of your HyperFlex cluster under all scenarios is worth taking some time to fully understand.

The most significant design decision to take is whether to optimise your HyperFlex cluster for capacity or availability; as with most things in life, there is a trade-off to be made.

If you size your cluster for maximum capacity you will obviously lose some availability; likewise, if you optimise your cluster for availability you will not have as much usable capacity available to you.

The setting that determines this is the Replication Factor.

As the replication factor is set at cluster creation time and cannot be changed once set, it is worth thinking about. In reality you may decide to choose a different replication factor based on the cluster use case, i.e. availability-optimised for production and capacity-optimised for Test/Dev, for example.

It is absolutely fine to have multiple clusters with different RF settings within the same UCS Domain.

So let’s look at the two replication factors available:

Replication Factors

As you can no doubt determine from the above, the more nodes you have in your cluster the more available it becomes and the better able it is to withstand multiple failure points. In short, the more hosts and disks there are across which to stripe the data, the less likely it is that multiple failures will affect replicas of the same data.
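As a rough rule of thumb (my own sketch, not an official Cisco formula), the replication factor (RF) is simply the number of copies of each data block the cluster keeps: it divides your usable capacity by RF, and up to RF − 1 copies of a block can be lost before data is unavailable. The worked example later in this post layers the metadata overhead and dedupe/compression savings on top of this.

```python
# Rule-of-thumb view of the Replication Factor (RF): copies kept vs. capacity divisor.
def rf_summary(raw_tb: float, rf: int) -> dict:
    """Approximate effect of RF on raw capacity (ignores the 8% metadata overhead)."""
    return {
        "copies_of_each_block": rf,
        "replica_losses_tolerated": rf - 1,      # e.g. RF 3 survives the loss of 2 replicas
        "usable_tb_before_overheads": round(raw_tb / rf, 2),
    }

for rf in (2, 3):
    print(f"RF {rf}: {rf_summary(50, rf)}")      # 50TB raw, as in the example below
```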

OTHER FACTORS THAT AFFECT SIZING.

METADATA

There is an 8% capacity overhead that needs to be factored in for the metadata and ENOSPC* buffer.

*ENOSPC: no space available for writing to the storage device.

DEDUPLICATION AND COMPRESSION

Now for some good news: what the availability gods taketh away, HyperFlex can giveth back! Remember that “always on, nothing to configure, minimal performance impact” deduplication and compression I mentioned in my last post? Well, it really makes great use of that remaining available capacity.

VDI VMs will take up practically no space at all, giving circa 95% capacity savings. For persistent user data and general Virtual Server Infrastructure (VSI) VMs, the capacity savings are still a not-to-be-sniffed-at 20-50%.

N+1

The last figure to bear in mind is our old friend N+1, so factor in enough capacity to cope with a single node failure, planned or otherwise, while still maintaining your minimum capacity requirement.

Example.

As in most cases, it may be clearer if I give a quick manual example.

I’m only going to base this example on the storage requirement, as the main point of this post is to show how the Replication Factor (RF) affects capacity and availability. Obviously in the real world, vCPU-to-physical-core ratios and vRAM requirements would also be worked out and factored in.

But let’s keep the numbers nice and simple (and again, not necessarily “real world”) and plan for an infrastructure to support 500 VMs, each requiring 50GB of storage.

So 500 x 50GB gives us a requirement of 25TB capacity.

So let’s say we spec up 5 x HyperFlex nodes, each contributing 10TB of raw capacity to the cluster, which gives us 50TB raw; now let’s allow for that 8% metadata overhead (4TB).

50TB – 4TB = 46TB

Now that we are down to 46TB, we need to consider our Replication Factor (RF). Let’s say we leave it at the default of RF 3, so we need to divide our remaining capacity by our RF, in this case 3, to allow for the 2 additional copies of all data blocks.

46TB / 3 = 15.33TB

So that puts us down to 15.33TB actual usable capacity in the cluster.

So remember that 25TB of capacity we need for our 500 virtual servers? Well, let’s be optimistic and assume we will get the upper end of our dedupe and compression savings, so let’s reduce that by 50%.

25TB / 2 = 12.5TB

And that’s not even taking into account thin provisioning, which, although variable and dependent on a particular client’s comfort factor, would realistically reduce this capacity requirement by a further 30-50%.

So let’s now say our 12.5TB on-disk capacity is realistically likely to be 6.25-8.75TB, meaning we are OK with our available 15.33TB on our 5-node cluster.

So the last consideration is: could we withstand a single node failure or planned host upgrade and still meet our minimum storage requirement?

We could either work out all the above again using just 4 nodes (contributing a total of 40TB raw to the cluster), or, since we know that 5 nodes contribute a total of 15.33TB usable, we know that each node contributes approx. 3.06TB of usable capacity, so if we take a node out of the cluster:

15.33TB – 3.06TB = 12.27TB usable remaining, which is still above our 8.75TB realistic requirement.
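To tie the whole example together, here is a short sketch that reproduces the numbers above. The 8% metadata overhead, RF 3, the optimistic 50% dedupe/compression saving and the 30-50% thin provisioning range are exactly as used in the walkthrough; the per-node figure is simply usable capacity divided by node count.

```python
# Reproduces the sizing walkthrough above: 5 nodes x 10TB raw, RF 3, 8% metadata overhead.
def hx_usable_tb(nodes: int, raw_tb_per_node: float, rf: int,
                 metadata_overhead: float = 0.08) -> float:
    raw = nodes * raw_tb_per_node
    return raw * (1 - metadata_overhead) / rf

nodes, raw_per_node, rf = 5, 10, 3
usable = hx_usable_tb(nodes, raw_per_node, rf)        # 46TB / 3 = 15.33TB
per_node = usable / nodes                             # approx. 3.07TB usable per node

required_tb = 500 * 0.05                              # 500 VMs x 50GB = 25TB
after_dedupe = required_tb * (1 - 0.50)               # optimistic 50% dedupe/compression = 12.5TB
on_disk_low = after_dedupe * (1 - 0.50)               # 50% thin provisioning saving = 6.25TB
on_disk_high = after_dedupe * (1 - 0.30)              # 30% thin provisioning saving = 8.75TB

print(f"usable cluster capacity:       {usable:.2f} TB")
print(f"usable with one node failed:   {usable - per_node:.2f} TB")
print(f"realistic on-disk requirement: {on_disk_low:.2f} - {on_disk_high:.2f} TB")
```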

Luckily, to make all this sizing a walk in the park, a HyperFlex sizing calculator will soon be available on CCO.

Colin

Posted in HyperFlex

Cisco HyperFlexes its muscles.

If “Software Defined X” has been the hot topic of the last couple of years, then “Hyperconvergence” is certainly set to be one of the hottest topics of 2016. Like most buzzwords it’s a very overused term; the word even has “hype” in the name. Hell, even as I type this post, autocomplete finished off the word after only 2 characters.

This market has been confusing, with many overlapping offerings, players coming and going, and alliances forged and broken, and like all intensely competitive markets there has been a long line of casualties along the way. But from all this competition, hopefully it is the consumer that emerges the real winner. I did think about doing an elaborate “Game of Thrones”-esque opening credits video showing all the key players, but my CGI skills aren’t up to much, so just pretend I did it.

So before we get stuck in, what is Hyperconvergence?

Well, traditionally a hyperconverged infrastructure (HCI) consists of modular compute building blocks with internal storage, plus a magical hyperconverged software layer that manages the compute and storage elements, abstracts them, and presents them to the application as virtual pools of shared resources. This facilitates and maximizes resource utilization while minimizing wastage and inefficiency. And it is this magical hyperconverged software layer that is generally the differentiating factor in the plethora of offerings out there. I say “traditionally” as you may notice there is a critical element missing from the above definition, which I will cover later in this post.

The rise in popularity of hyperconverged offerings is also due, in part, to the ability to scale from small to very large deployments, as well as to negating the requirement for a complex enterprise shared storage array, thus minimizing the initial upfront capex and allowing a “pay as you grow” cost model, one which actually increases in efficiency and performance the larger it scales, due to its distributed nature.

While Cisco are well established in the integrated systems market, contributing network and compute to the converged infrastructure offerings of Vblock and VxBlock and to integrated systems like FlexPod, there has always been a bit of a gap in their hyperconverged portfolio. Sure, there is the OmniStack partnership with SimpliVity, but nothing as far as a complete Cisco HCI offering goes.

Introducing The Cisco HyperFlex System

Today Cisco announced their new hyperconverged offering in the form of the Cisco HyperFlex System, a complete hyperconverged solution combining next-generation software-defined compute, storage and networking, thus providing a complete end-to-end software-defined infrastructure, all in one tightly integrated system built for today’s workloads and emerging applications.

I say complete hyperconverged offering because the Cisco HyperFlex System also comes with full network fabric integration, one of the significant competitive advantages the Cisco HyperFlex System has over other HCI offerings that do not integrate or even include the network element. In fact, if the fabric isn’t part of the solution, is the solution really even hyperconverged?

HyperFlex is built from the ground up for hyperconvergence, leveraging the Cisco UCS platform along with software provided by Springpath, a start-up founded in 2012 by VMware veterans. This hyperconverged software is fully API-enabled and has been branded the HX Data Platform.

If being a bit late to the hyperconverged party has had one advantage, it’s that Cisco have had time to listen to customers about what they felt was lacking in the current generation of hyperconverged offerings, and to properly address the architectural shortcuts and design trade-offs made by some other vendors in order to get to market quickly.

And with HyperFlex, Cisco feel they have leapfrogged any other HCI offering out there by some 2-4 years!

Key features of the Cisco HyperFlex System

There are so many features covered in today’s announcement, each worthy of a blog post in its own right, which I will no doubt cover here once more details are released and I can actually get my hands on one to play with. But until then, here is the list of the HyperFlex features that most caught my eye.

  • Simplicity And Easy Independent Scaling.

Cache and capacity can be scaled up within nodes, or additional nodes can be added, thus allowing compute and capacity to be scaled up and out completely independently as required, whereas traditional hyperconverged solutions only scale linearly, i.e. you are forced to add both compute and storage in fixed ratios, even if you only need to scale one of them.


New cluster nodes are automatically recognized and are added to the cluster with a few mouse clicks.

Cisco claim that it is possible to stand up a Cisco HyperFlex System, including all the networking, in under 1 hour; I’ll certainly look forward to testing that claim out.

 

  • The HX Data Platform

The HX Data Platform is implemented using a Cisco HyperFlex HX Data Platform controller, which runs as a VM on each cluster node. This controller implements the distributed file system and intercepts and handles all I/O from the guest virtual machines.

HX Data Platform Controller

The HX nodes connect to the presented hyperconverged storage via 2 vSphere Installation Bundles (VIBs), IO Visor and VAAI, within the hypervisor; these provide a network file system (NFS) mount point to the distributed storage.

The IO Visor VIB can also be loaded on a non-HyperFlex node to provide access to the hyperconverged storage, adding additional compute power in a hybrid solution.

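Since the HX distributed datastore is surfaced to the hypervisor as an NFS mount, one quick way to eyeball it from the vSphere side is simply to list the NFS datastores vCenter can see. Below is a minimal sketch using the open-source pyVmomi SDK; the vCenter hostname and credentials are placeholders, and this is a generic illustration rather than anything HyperFlex-specific.

```python
# Sketch: list NFS datastores visible to vCenter (hostname/credentials are placeholders).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()              # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.example.local",     # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        if s.type == "NFS":                         # the HX datastore is presented over NFS
            print(f"{s.name}: {s.capacity / 1024**4:.2f} TiB total, "
                  f"{s.freeSpace / 1024**4:.2f} TiB free")
    view.Destroy()
finally:
    Disconnect(si)
```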

  • Superior Flash Endurance.

Built upon a log-structured file system, the platform enables superior flash endurance by significantly optimizing writes and reducing program/erase cycles.

  • Dynamic Data Distribution

Unlike systems built on conventional file systems, which first need to write locally and then replicate, creating hot spots, the HX Data Platform stripes data across all nodes simultaneously. It does this by first writing to the local SSD cache; the replicas are then written to the remote SSD drives in parallel before the write is acknowledged.

For reads, if the data happens to be local it will usually be read locally; otherwise the data will be retrieved from the SSD of a remote node, thus allowing all SSD drives to be utilised for reads and eliminating I/O bottlenecks.
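To make that write path concrete, here is a purely conceptual sketch (my own illustration, not HyperFlex code): the write lands in the local caching SSD, replicas are dispatched to the remote nodes in parallel, and the write is only acknowledged once every replica has completed. The node names and replica count are made up for the example.

```python
# Conceptual sketch: write to the local cache, replicate in parallel, then acknowledge.
from concurrent.futures import ThreadPoolExecutor

REMOTE_NODES = ["hx-node-2", "hx-node-3"]            # hypothetical peers, i.e. RF 3

def write_local_cache(block: bytes) -> None:
    print(f"cached {len(block)} bytes on the local SSD")

def replicate(node: str, block: bytes) -> None:
    # In the real platform this would be a network write to the peer node's caching SSD.
    print(f"replica of {len(block)} bytes written to {node}")

def write_block(block: bytes) -> None:
    write_local_cache(block)
    # Replicas are written to the remote SSDs in parallel...
    with ThreadPoolExecutor(max_workers=len(REMOTE_NODES)) as pool:
        futures = [pool.submit(replicate, node, block) for node in REMOTE_NODES]
        for f in futures:
            f.result()                               # ...and only then is the write acknowledged.
    print("write acknowledged to the guest VM")

write_block(b"x" * 4096)
```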

  • Continuous Data Optimization.

The always-on inline deduplication provides up to 30% space saving, followed by inline compression, which provides up to an additional 50% space saving, all with little to no performance impact. And did I mention it’s always on? There is nothing to turn on or configure.

And these figures do not even include the additional space savings achieved by using native optimized clones and snapshots; if they did, the overall space saving would be circa 90% or more.

This, combined with thin provisioning, gives the most efficient use of the storage you have, so you only need to buy new storage as you need it.
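As a quick back-of-the-envelope check on how those two “up to” figures compound (my arithmetic, not a datasheet number): 30% deduplication followed by 50% compression on what remains leaves roughly 35% of the data on disk, i.e. about a 65% combined saving, before clones, snapshots and thin provisioning are even considered.

```python
# How the quoted "up to" savings compound: dedupe first, then compression on what remains.
dedupe_saving = 0.30          # up to 30% from inline deduplication
compression_saving = 0.50     # up to an additional 50% from inline compression

remaining = (1 - dedupe_saving) * (1 - compression_saving)
print(f"data left on disk: {remaining:.0%}")      # 35%
print(f"combined saving:   {1 - remaining:.0%}")  # 65%
```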

  • High resilience and fast recovery

Depending on the chosen replication mode, based on maximizing either availability (Replica Mode 3) or capacity (Replica Mode 2), the platform can withstand the loss of up to 2 HX nodes without data loss. Virtual machines on failed nodes simply redistribute to other nodes via the usual vSphere methods, with no data movement required. Then, with the combined functionality of stateless Service Profiles and the built-in self-healing within the HX Data Platform, the replacement node is simply and dynamically replicated back in, again with no data movement required. This eliminates the issue of sessions pausing or timing out seen in solutions which rely on data locality, i.e. which attempt to locate the data on the hosts that are using it.

  • Management Simplicity

The platform is 100% administered via the Cisco HyperFlex HX Data Platform Administration Plug-in for vCenter. This plug-in provides full management and monitoring of data platform health, as well as data which can be used to determine when the cluster needs to be scaled.

The initial UCS Manager elements can also be managed via the often forgotten UCS Manager Plugin for vCenter.

There will also be a UCS Manager wizard to guide the user through the initial UCS Manager configuration of pool address population and Service Profile creation, something I’m sure we will see in UCS Classic not long after.

  • Flexible

At FCS, vSphere with file-based storage will be supported on the Cisco HyperFlex System, with block- and object-based storage planned for the future, along with Hyper-V, bare metal and container support.

 

  •  Built on industry leading Cisco UCS Technology

Cisco UCS is now tried, tested and trusted by over 50,000 customers worldwide.

The Cisco HyperFlex System will come with Gen 2 6248UP or 6296UP Fabric Interconnects (FW 2.2(6f)), with the Gen 3 Fabric Interconnects already released and waiting to provide 40Gbps connectivity to the Cisco UCS as and when data throughput demand increases within the HyperFlex system.

While the network in many HCI offerings is at best an afterthought, or at worst not even included, with the Cisco HyperFlex System the network is fully integrated and optimized for the large amount of east/west traffic required in a hyperconverged system, with every HyperFlex node just a single hop away, providing deterministic and consistent performance.

Having Cisco UCS as the solid foundation for the platform also provides a single central management system for both integrated and hyperconverged infrastructure as well as offering integration with Cisco UCS Director and UCS Central.

Portfolio

 

As can be seen from the above diagram, there are 2 models of HyperFlex rack-mount nodes, each requiring a minimum cluster size of 3 nodes: the 1U HX220c, ideal for VDI and ROBO use cases, and the 2U HX240c for capacity-heavy use cases, with a third hybrid option combining blades and rack mounts for compute-heavy workloads.

HX220c M4

Each HX220c Node contains:

2 x Intel Xeon E5-2600 v3 Processors (up to 16 cores per socket)
Up to 768GB DDR4 RAM
1 x 480GB 2.5-inch SSD Enterprise Performance (EP) for caching
1 x 120GB 2.5-inch SSD Enterprise Value (EV) for logging
2 x FlexFlash SD cards for boot drives and the ESXi 6.0 hypervisor (ESXi 5.5 also supported)
Up to 6 x 1.2TB SFF SAS drives, contributing up to 7.2TB to the cluster
1 x Cisco UCS Virtual Interface Card (VIC 1227)

HX240c M4

Each HX240c M4 Node contains:

1 or 2 x Intel Xeon E5-2600 v3 Processors (up to 16 cores per socket)
Up to 768GB DDR4 RAM
1 x 1.6TB 2.5-inch SSD Enterprise Performance (EP) for caching
1 x 120GB 2.5-inch SSD Enterprise Value (EV) for logging
2 x FlexFlash SD cards for boot drives and the ESXi 6.0 hypervisor (ESXi 5.5 also supported)
Up to 23 x 1.2TB SFF SAS drives, contributing up to 27.6TB to the cluster
1 x Cisco UCS Virtual Interface Card (VIC 1227)

 

Common Use Cases

Looking at what the early access customers are doing with HyperFlex, by far the main use case looks to be VDI. The low upfront cost, consistent performance and user experience, along with predictable scaling, certainly make HyperFlex an ideal solution for VDI.

Also high on the list were Test/Dev environments; features like agile provisioning, instant native cloning and native snapshots make a compelling case for entrusting your Test/Dev environment to HyperFlex.

And while the above are two compelling use cases and sweet spots for HyperFlex, I’m sure that as customers experience the ease, flexibility and scalability of the HyperFlex System, we will see it used more and more for mixed-workload general VM deployments, as the resilience and performance are certainly there for critical applications.

Remote Office/Branch Office (ROBO) was also mentioned, although I would think this would likely be a larger remote office, as any use case requiring only 2 or 3 servers would likely be more cost-effectively served by the current UCS C-Series in conjunction with StorMagic SvSAN.

Pricing:

With the initial bundle price for 3 x HX220c nodes, including a pair of Fabric Interconnects, expected to be circa $59,000, which also includes the first year’s software subscription, Cisco are obviously dead set on making this a compelling solution based not only on outstanding next-gen functionality, performance and agility but also on cost.

Other Questions you may be thinking about

Now, as with all new products, a line has to be drawn somewhere for that First Customer Ship (FCS) date; only so much validation and testing of the various hardware combinations, features and scale limits can be conducted.

Now I’m a curious chap, and I like to ask a lot of questions, particularly the questions I know my readers would like the answers to.

The running theme in the answers from Cisco to most of my “Could I” questions was that they wanted to get it right, and ensure that the product was as optimized as possible and that Cisco were not prepared to make any compromises to user experience, performance or stability by casting the net too wide from day 1.

All answers are paraphrased.

Q) Will the HX Data Platform be available as a software-only option?
A) No, the HX Data Platform will only be offered preinstalled on the badged HyperFlex HX nodes.

Q) Can I just load the HX Data Platform on my existing UCS Servers if I use the exact spec of the HyperFlex branded ones?
A) No (see above answer)

Q) Are there any hardware differences between the HX nodes and their equivalent C-Series counterparts?
A) No, but the specification and settings are highly tuned in conjunction with the HX Data Platform.

Q) Will I be able to mix HX220c and HX240c in the same HyperFlex Cluster?
A) Not at FCS, all nodes within the same cluster need to be identical in model & spec.

However, each Cisco UCS Domain supports up to 4 separate clusters, and each of those clusters could be optimised for a particular use case or application, for example:

Cluster 1:  Replica Mode 2 on HX220c to support Test/Dev workloads
Cluster 2:  Replica Mode 3 on HX240c to support Capacity heavy workloads
Cluster 3:  Replica Mode 3 on HX240c and B200M4 to support Compute heavy workloads

Q) Why is the maximum HX cluster size 8?
A) 8 seemed a reasonable number to start with, but this will certainly increase with additional validation testing. While the initial cluster size is limited to 8 HX nodes per cluster, with the hybrid option an additional 4 classic B200 M4 blades can be added for additional compute power, giving a total of 12 servers in a hybrid cluster. In the hybrid solution the B200 M4 local storage is not utilized by the Cisco HyperFlex System.

Q) Will I be able to have a mixed HyperFlex and non HyperFlex node UCS Domain?
A) Not at FCS; HX nodes will require a separate UCS Domain, except for the 4 supported blades in the hybrid model.

Q) Are FEXs supported for connecting HyperFlex nodes to the FIs?
A) Not at FCS, but there is no technical reason why not once validated; oversubscription of the FEX uplinks would need to be considered.

Q) Will the 3000 Series storage-optimized rack-mount servers (Colusa), like the 3260, be available as HyperFlex nodes?
A) Not at FCS. These servers are more suited to lower-performance, high-capacity use cases like archiving and cold storage. Plus, the 3000 Series servers are managed via CIMC and not UCSM.

Q) Can I set up a Cisco hybrid HyperFlex System by directly connecting the HX nodes to my UCS Mini?
A) Not at FCS

Closing thoughts

Both the converged and hyperconverged markets continue to grow and will co-exist, but with HyperFlex Cisco have certainly strengthened what was the only chink in their armour, meaning that there is now a truly optimized solution based on a single platform, under a single management model, for all requirements and use cases, providing many HCI features not available until now.

One platform

One thing is clear: the hyperconverged game changes today!

 

Until next time.

Colin

Keep up to date with further HyperFlex announcements on social media by following the hashtag #CiscoHX

Posted in HyperFlex