
What is a VBlock.. the latest

Overview

Back in 2009, VMware, Cisco and EMC joined forces to create a new approach to selling pre-configured, full data centre solution stacks. Rather than simply a gentlemen's agreement and a cross-pollination of development between the three companies, it was decided they would create a new start-up business as the delivery mechanism to drive this new concept to market. This new start-up, known as VCE (Virtual Computing Environment), would take to market a new range of pre-validated, pre-configured and singularly supported solution stacks called VBlock.

The purpose of a VBlock is to simplify infrastructure down to, effectively, units of IT, so that a workload can be defined as being supported by "a number of floor tiles" in the data centre. This approach is enabled by the fact that everything within a VBlock is pre-validated from an interoperability perspective, and the customisable components are reduced down to packs of blades (compute), disks and the network components required to connect into the upstream customer environment. This means that solution design is massively simplified and can focus on supporting the identified workload.

Pre-Validated

VCE extensively soak test the workloads and configurations available within the VBlock, which reduces the pre-sales time spent researching interoperability between the network, compute and storage layers of the data centre. Defining how a workload will be supported becomes the focus, and planning phases are significantly reduced. This pre-validated approach also means that power and cooling requirements are easily determined in preparation for site deployment.

Pre-Built and Pre-Configured

As part of the VBlock proposition, the physical and logical build processes are carried out in VCE facilities, so that time on the customer site is restricted to integrating into the customer environment and application-layer services. This reduces deployment time massively.

Single Support Presence

Rather than dealing with the parent companies of VCE (VMware, Cisco, EMC) on a per-vendor basis, VCE act as a single support presence and will own any VBlock-related issue end to end. This is partly enabled by the pre-validated aspect of VBlock: VCE have a number of VBlocks in house and, provided the VBlock is constructed as per the approved architectures, VCE can simulate the environment which caused the error, decreasing time to resolution.

The Technology

The technology at the core of the VBlock consists of VMware vSphere, Cisco UCS (Cisco's unified compute solution), Cisco Nexus (Cisco's unified fabric offering) and EMC's VNX unified storage platform. Cisco simplify management of their blade computing platform down to a single point of management (UCS Manager), which resides on the 6100 fabric interconnects and allows for "stateless" computing, in that it is possible to abstract the server "personality" (MAC addresses, world wide names, firmware, etc.) away from the server hardware, then create and apply these personalities on demand to any blade within the UCS system. UCS Manager manages all aspects of the UCS system (blade/chassis management, connectivity and firmware). Cisco's unified fabric commonly refers to their Nexus range (though elements of unified fabric apply to UCS as well). Cisco Nexus allows both IP network traffic and Fibre Channel traffic to be delivered over common 10 Gigabit switches using FCoE (Fibre Channel over Ethernet). In addition, the Cisco Nexus 1000V enables deployment of a virtual switch within the VMware environment, allowing network services to be deployed within the virtual infrastructure where previously this was only possible in the physical world.

EMC VNX is a multi-protocol storage array allowing for storage connectivity via block storage technologies (iSCSI/Fibre Channel) or NAS connectivity (CIFS/NFS/pNFS), giving the end user free choice as to how storage is presented to the UCS server estate. EMC also drive efficiencies in how capacity and performance are handled, leveraging technologies such as deduplication and thin provisioning to achieve a lower cost per gigabyte. EMC are also able to leverage solid state disk technology to extend the storage cache, or to enable sub-LUN tiering of data between solid state disks and traditional mechanical disks based on data access patterns.

VMware vSphere has provided many companies with cost savings in the past, but in the Vblock it is leveraged to maximum effect to provide operational efficiencies, with features such as dynamic, automated mobility of virtual machines between physical servers based on load, high availability, and the native integration between VMware and EMC through the VAAI API. This integration enables much lower SAN fabric utilisation for what were previously very intensive storage network operations, such as storage migration. EMC PowerPath/VE is also included in the Vblock, which enables true intelligent load balancing of storage traffic across the SAN fabric.

Management

VCE utilise Ionix Unified Infrastructure Manager (UIM) as a management overlay which integrates with the storage, compute, network and virtualisation technologies within the Vblock and allows high-level automation of, and operational simplicity in, how resources are provisioned within the VBlock. UIM discovers the resources within the VBlock and the administrator then classifies those resources. As an example, high-performance blades may be deemed "Gold" blades, versus lower-specification blades which may be classified as "Silver" blades. This classification is also applied to other resources within the Vblock, such as storage. Once resources have been classified, they can be allocated on a per tenancy/application/department basis, with each being allowed access to differing levels of Gold/Silver/Bronze resources within the Vblock. UIM now also includes operational aspects which give end-to-end visibility of exactly which hardware within a VBlock a particular VM is utilising (blades, disks, etc.). Native vendor management tools can still be utilised, although, with the exception of vCenter, UIM would be the point of management for 90% of VBlock tasks after initial deployment.
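To make the classification idea a little more concrete, here is a minimal sketch of how graded pools of blades might be tracked and handed out per tenant. This is purely illustrative (the names, structure and allocate function are all hypothetical) and is not the actual Ionix UIM data model or API:

```python
# Purely illustrative toy model of UIM-style resource grading -- not the real Ionix UIM API.

# Discovered resources are tagged with a service grade by the administrator.
resource_pools = {
    "gold":   {"blades": ["chassis-1/blade-1", "chassis-1/blade-2"], "storage": ["fc-15k-pool"]},
    "silver": {"blades": ["chassis-2/blade-1", "chassis-2/blade-2"], "storage": ["sata-pool"]},
}

def allocate(tenant, grade, blade_count):
    """Hand a number of blades of a given grade to a tenant/application/department."""
    pool = resource_pools[grade]["blades"]
    if len(pool) < blade_count:
        raise ValueError(f"Not enough {grade} blades available")
    return {"tenant": tenant, "grade": grade, "blades": [pool.pop() for _ in range(blade_count)]}

print(allocate("Finance-ERP", "gold", 2))
```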

In Summary

The VCE approach to IT infrastructure with VBlock simplifies procurement and infrastructure planning, as VCE are able to reduce their infrastructure offerings to, essentially, units of IT which are sized to support a defined workload within a number of "floor tiles" in the data centre. These predetermined units of IT have deterministic power and cooling requirements and scale in such a way that all VBlock instances (be it few or many) can be managed from a single point of management and are all supported under a single instance of support. By leveraging technologies which drive efficiencies around virtualisation, networking, storage and compute, we see benefits such as higher performance in smaller physical footprints for storage and compute, minimised cabling and complexity with 10GbE enabling technologies such as Fibre Channel over Ethernet, and operational simplicity with the native Vblock unified infrastructure management tool, UIM.


Cisco UCS – Extended memory architecture.. What is it ?

As promised in my previous post, let's go through the blades available in Cisco's Unified Computing System. Essentially we have a few flavours of blade: full-width and half-width, some of which utilise the extended memory architecture (co-developed by Intel and Cisco, which we'll touch on), dual socket for the most part with the exception of one which is 4 socket, and a veritable feast of different memory options, processor options, IO card options and drive options.

However, I wanted to start with the component pieces before we delve into schematics (because you can read about those on the Cisco Site) and spend a little more time on each piece.

So what is this extended memory architecture Cisco keep banging on about? Let's start with the why before we get to the how. Any Tom, Dick and Harry can stick a load of memory DIMMs in a server and scream about the fact they've got a few hundred gig of memory..   so why is this different?

Typically each CPU in a server has 3 memory channels for… you guessed it.. accessing memory. The number of transfers per second at which memory will perform is typically dictated by the number of DIMMs populated per memory channel. Typically when you populate 1 DIMM per memory channel, memory runs at 1333 MT/s (million transfers per second); when you populate 2 DIMMs it runs at 1066 MT/s; and when you get to a depth of 3 DIMMs per channel you're running at 800 MT/s (not ideal). So as memory density gets higher, performance can suffer (as shown below).
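To put some rough numbers on that trade-off, here is a quick back-of-the-envelope calculation using the per-channel speeds quoted above. The dual-socket, 3-channels-per-CPU, 8GB-DIMM configuration is just an assumed example:

```python
# Rough illustration of the capacity vs. speed trade-off described above.
# Assumes 2 sockets, 3 memory channels per CPU and 8GB DIMMs -- example figures only.
sockets, channels_per_cpu, dimm_gb = 2, 3, 8
speed_by_depth = {1: 1333, 2: 1066, 3: 800}   # MT/s at 1, 2 or 3 DIMMs per channel

for depth, speed in speed_by_depth.items():
    capacity = sockets * channels_per_cpu * depth * dimm_gb
    print(f"{depth} DIMM(s) per channel: {capacity} GB total, running at {speed} MT/s")
# 1 -> 48 GB @ 1333, 2 -> 96 GB @ 1066, 3 -> 144 GB @ 800
```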

 

Cisco, in combination with Intel, have developed something called the Catalina chipset. Despite sounding like a car, the Catalina chipset is quite a nifty addition. Effectively acting like a RAID controller for memory, it sits downstream of the CPU memory controllers (one per memory channel) and presents four additional memory sockets per channel, then presents an aggregate of the memory sitting beneath it as one logical DIMM up to the CPU memory channel, meaning that you can have denser memory configurations without memory ever clocking in below 1066 MT/s (as shown below).
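Using the same assumed figures (dual socket, 3 channels per CPU, 8GB DIMMs) and the four memory sockets per channel described above, the difference looks roughly like this, purely as a sketch:

```python
# Sketch of the extended memory benefit, using the assumed figures above.
sockets, channels_per_cpu, dimm_gb = 2, 3, 8

# Without the Catalina chipset: stop at 2 DIMMs per channel to stay at 1066 MT/s.
standard_gb = sockets * channels_per_cpu * 2 * dimm_gb    # 96 GB

# With the Catalina chipset: 4 physical sockets per channel aggregated into one
# logical DIMM, so the channel never drops below 1066 MT/s.
extended_gb = sockets * channels_per_cpu * 4 * dimm_gb    # 192 GB

print(f"Standard: {standard_gb} GB @ 1066 MT/s vs extended: {extended_gb} GB @ 1066 MT/s")
```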

The two benefits of this are that you can address a larger amount of memory for memory-intensive applications/virtual machines/whatever with a lower socket count, making it possible to see higher consolidation ratios when virtualising machines, or you can achieve moderate memory configurations using lower-capacity, less costly DIMMs. Cisco currently utilise this technology, with Westmere and Nehalem CPUs, in the B250 blade server and C250 rack-mount server.

A nice little clip from the Cisco Datacenter YouTube channel with a brief intro into extended memory:

Either way..   not a bad idea..


What is this Cisco UCS server Business ?

As I delve into greater numbers of VBlock opportunities, more and more people are asking questions around the Cisco UCS compute offering and what that brings to the table with VBlock. This is a large subject to cover in one post, so I shall start with the fundamentals, beginning with the B-Series offering, as that is where a lot of the more interesting subject matter resides.

The Cisco Unified Computing offering, when discussing the B-Series, relates to Cisco's blade server offering. In terms of the architecture of the UCS system, much like conventional blade offerings, we have:

  • Blade Servers
  • Blade Chassis
  • Blade Chassis Switches (known as fabric extenders in this case, which are slightly different)

But with the UCS system, rather than having standard upstream switches, we have what are called fabric interconnects. These are effectively the same hardware as the Cisco Nexus 5010 and 5020, but running something called UCS Manager rather than just standard switch software. One of the main differentiators of UCS is that all the management for the UCS system is done from these fabric interconnects. The clever bit around UCS is that each blade server upon deployment is completely stateless, meaning that the server has no personality (no MAC addresses, UUIDs, WWNs). Pools of these unique identifiers are created within UCS Manager and provisioned to what are called service profiles..  these in turn are then deployed to the blades, along with the WWNs of boot-from-SAN LUNs. This means that if we have to down a server (planned or unplanned), we can take the service profile and attach it to another blade.. and the outside world will not see that anything has changed, all with minimal downtime.
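As a purely illustrative sketch (the structure and the associate function are hypothetical, not the real UCS Manager object model or API), a service profile boils down to a bundle of identity that can be re-attached to a different blade:

```python
# Toy illustration of the "stateless" idea -- not the actual UCS Manager API.
# The identity lives in the service profile, not in the blade hardware.
service_profile = {
    "name": "esx-host-01",
    "uuid": "a1b2c3d4-0000-0000-0000-000000000001",                    # from a UUID pool
    "vnics": [{"name": "vnic0", "mac": "00:25:B5:00:00:01"}],          # from a MAC pool
    "vhbas": [{"name": "vhba0", "wwpn": "20:00:00:25:B5:00:00:01"}],   # from a WWN pool
    "boot": {"type": "san", "target_wwpn": "50:06:01:60:00:00:00:01", "lun": 0},
    "firmware_policy": "host-fw-pack-1",
}

def associate(profile, blade_slot):
    """Apply a profile's identity to a physical blade slot, e.g. 'chassis-1/blade-3'."""
    return {"blade": blade_slot, "identity": profile}

# Blade fails or is pulled for maintenance: re-associate the profile with a spare blade
# and the outside world (SAN zoning, switch config, licensing) sees the same identity.
current = associate(service_profile, "chassis-1/blade-3")
failover = associate(service_profile, "chassis-2/blade-5")
```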

A nice UCS Manager demo I came across which gives a high-level overview of UCS management (there is much more to be found on YouTube):

There are two flavours of fabric interconnect. The 6120 is a 20-port 10GbE switch which also supports one expansion module, used either to add additional 10GbE ports or to add FC ports (with FCoE used downstream to the chassis). Each 6120 can manage up to 160 half-width or 80 full-width blades across 20 chassis. But bear in mind that the port density of the 6120 means you have a limited number of connections downstream to the chassis in the maximum configuration, so only 10GbE of throughput to each chassis per 6120, or 20GbE with 2 x 6120 fabric interconnects (recommended for HA).

The 6140 is a 40-port 10GbE switch which supports two expansion modules, again to add either additional 10GbE ports or FC ports. Each 6140 can manage up to 320 half-width or 160 full-width blades across 40 chassis. But again, bear in mind that the port density of the 6140 means you have a limited number of connections downstream to the chassis in the maximum configuration, so only 10GbE of throughput to each chassis per 6140, or 20GbE with 2 x 6140 fabric interconnects (recommended for HA).
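The scale figures above come down to simple port arithmetic; here is a quick sketch assuming half-width blades and the chassis/port counts quoted above:

```python
# Quick arithmetic behind the fabric interconnect scaling figures quoted above.
fixed_ports = {"6120": 20, "6140": 40}   # native 10GbE ports per fabric interconnect
half_width_blades_per_chassis = 8

for model, ports in fixed_ports.items():
    max_chassis = ports                  # one downstream link per chassis in the maximum config
    max_blades = max_chassis * half_width_blades_per_chassis
    print(f"{model}: {max_chassis} chassis, {max_blades} half-width blades, "
          f"10 Gb/s per chassis per interconnect (20 Gb/s with an HA pair)")
```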

In terms of how the 6100 fabric interconnects connect upstream to the customer's aggregation/distribution network layer: all native ports are 10GbE capable, but the first 8 ports on the 6120 and the first 16 ports on the 6140 can negotiate down to Gigabit speeds. You currently have the choice of 10GbE long-range or short-range optics, Gigabit SFPs, or Cisco's CX1 10GbE copper twinax cables, which have the SFPs attached to each end of the cable but only come in 1m, 3m and 5m lengths (I believe 7m is planned). Typically the CX1 cables would be used for downstream chassis connectivity, but they can be used upstream if the connecting device supports them (i.e. a Nexus 5K). For FC connectivity, we can add expansion modules for either 8Gb/s or 4Gb/s FC connectivity. It's worth noting that the 6100s work in NPV mode, so they currently require upstream FC switches which support NPIV.

The chassis, otherwise known as the 5108, is 6RU in height, with front-to-back cooling, and can house up to 8 half-width blades or 4 full-width blades (I shall detail the difference later). The chassis has 8 cooling fans and requires a minimum of two power supplies, but can have up to 4 depending on the power redundancy requirements. The chassis is connected upstream via a pair of 2104 fabric extenders (blade switches, effectively), or a single fabric extender if using a non-HA configuration. Each 2104 has 4 external ports and connects only to its respective fabric interconnect (not dual homed).
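Since each 2104 fabric extender presents 4 external 10GbE ports, per-chassis bandwidth is simply a function of how many of those uplinks are cabled to the fabric interconnect. A rough sketch (assuming 1, 2 or 4 uplinks per fabric extender):

```python
# Per-chassis bandwidth as a function of 2104 fabric extender uplinks in use (10GbE each).
for links in (1, 2, 4):
    per_fabric = links * 10      # Gb/s towards one fabric interconnect
    print(f"{links} uplink(s) per fabric extender: {per_fabric} Gb/s per fabric, "
          f"{per_fabric * 2} Gb/s total with an HA pair")
# Cabling all 4 uplinks per fabric extender consumes fabric interconnect ports quickly,
# which is why the maximum-chassis configurations above drop to a single 10GbE link
# per chassis per interconnect.
```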

Architecture diagram from Cisco.com

As I said.. there is a lot to cover on UCS, so I will detail the blades themselves with all the gubbins, such as the extended memory architecture, adapters and processing technology, in the next post. Then in further posts we'll cover things like virtualisation integration, 3rd party tools and network consolidation as a whole.


VBlock? Acadia ? VCE ?… Who ? what ? where ? When ? How ?

If you work in the IT sector, you would have had to be locked in a small room with no outside contact for the better part of half a year not to have heard the term "Cloud Computing" bandied around. Back in 1960 a rather insightful computer scientist called John McCarthy stated that "computation may someday be organised as a public utility"..   and a mere 50 years later..   we're getting there 😉

Last year VMware, EMC and Cisco announced the VCE Coalition and Acadia. This cleverly crafted acronym depicts the 3 players in the coalition… yes, you guessed it: VMware, Cisco, EMC! Cunningly, this 3-letter acronym also gives them a veritable feast of marketing slogans..  as it has the letter "V" in it, they can use the word "virtual".. great! The 3 have also decided to put their money where their mouth is and have started a joint venture called Acadia..  essentially a technical organisation staffed by a combination of 130 Cisco, EMC and VMware employees for services engagement, assessments and support for what has come to pass as Vblock. But Acadia is not going to be involved in any of the sales elements of Vblock; it is simply the one throat to choke for technical and support issues…

So what is Vblock?! Well..   what does your typical organisation require by way of computing…   servers, storage and infrastructure; and what do the 3 main players in this VCE coalition excel in? Conveniently: servers, infrastructure and storage!

The Vblock is a data centre in a rack, consisting of vSphere sitting on Cisco UCS servers and EMC storage. There are 3 flavours of Vblock currently: Vblock 0, Vblock 1 and Vblock 2. The common elements within all of these are Cisco's UCS servers, MDS and Nexus switches, and VMware's vSphere for server delivery. The element that does vary is the underlying storage: Vblock 0 uses Celerra for storage provision, Vblock 1 uses the CLARiiON CX4, and Vblock 2 sports the ever-so-flash V-Max (Symmetrix).

The idea is that end users can minimise capital expenditure by simply renting the infrastructure they need from a provider in possession of a Vblock environment. The software which ties all of this together comes from the Ionix range (previously named ControlCenter) and is called Unified Infrastructure Manager, or UIM. This software is essentially the glue that binds the 3 different vendors' technologies: one point of management for switching, vSphere management and storage management. The nice thing is that as you scale out a Vblock environment (by adding more Vblocks), they are all managed via the one Ionix UIM console. Also, UIM supports multi-tenancy, which means that if you are using Vblock to deliver SaaS/Cloud/whatever you want to call it, you can offer customers their own management bubble, where they can log in and manage the components relating to their service. See the video below from EMC for a nice walkthrough of the software.

More to come, as I get stuck into some training over the coming months.