What is this Cisco UCS Server Business?

As I delve into greater numbers of VBlock opportunities, more and more people are asking questions around the Cisco UCS compute offering and what it brings to the table with VBlock. This is a large subject to cover in one post, so I shall start with the fundamentals and the B Series offering, as that is where a lot of the more interesting subject matter resides.

When discussing the B Series, the Cisco Unified Computing offering relates to Cisco’s blade servers. In terms of architecture, much like conventional blade offerings, the UCS system has:

  • Blade Servers
  • Blade Chassis
  • Blade Chassis Switches (known in this case as fabric extenders, which work slightly differently from conventional blade switches)

But with the UCS system, rather than having standard upstream switches, we have what are called fabric interconnects. These are effectively the same hardware as the Cisco Nexus 5010 and 5020, but running something called UCS Manager rather than just standard switch software. One of the main differentiators of UCS is that all the management for the UCS system is done from these fabric interconnects. The clever bit around UCS is that each blade server upon deployment is completely stateless, meaning that the server has no personality (no MAC addresses, UUIDs or WWNs). Pools of these unique identifiers are created within UCS Manager and provisioned to what are called service profiles; these in turn are then deployed to the blades along with the WWNs of boot-from-SAN LUNs. This means that if we have to down a server (planned or unplanned), we can take the service profile and attach it to another blade, and the outside world will not see that anything has changed, all with minimal downtime.
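To make the stateless idea concrete, here is a minimal Python sketch of the concept. This is not UCS Manager's actual API; every class, pool prefix and value below is hypothetical. Identity pools hand out MAC/WWN-style identifiers to a service profile, and the profile can then be re-associated with a different blade without its identity changing:

```python
# Hypothetical sketch of UCS-style stateless identity; not the real UCS Manager API.
import itertools


class IdentityPool:
    """Hands out unique identifiers (MAC/UUID/WWN-style) from a prefix block."""

    def __init__(self, prefix):
        self._prefix = prefix
        self._counter = itertools.count(1)

    def allocate(self):
        return f"{self._prefix}:{next(self._counter):02x}"


class ServiceProfile:
    """Carries the server's 'personality'; the blade itself stays stateless."""

    def __init__(self, name, mac_pool, wwn_pool, boot_lun_wwn):
        self.name = name
        self.mac = mac_pool.allocate()
        self.wwpn = wwn_pool.allocate()
        self.boot_lun_wwn = boot_lun_wwn  # boot-from-SAN target (made-up value below)
        self.blade = None

    def associate(self, blade_slot):
        # Re-homing the profile moves the identity; the outside world sees no change.
        self.blade = blade_slot


macs = IdentityPool("00:25:b5:00:00")
wwns = IdentityPool("20:00:00:25:b5:00")
sp = ServiceProfile("esx-host-01", macs, wwns, boot_lun_wwn="50:06:01:60:3b:a0:11:22")

sp.associate("chassis-1/blade-3")
sp.associate("chassis-1/blade-7")  # blade 3 fails: same profile, new physical home
print(sp.mac, sp.wwpn, sp.blade)   # identity unchanged, new blade
```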

Here is a nice UCS Manager demo I came across which gives a high-level overview of UCS management (there is much more to be found on YouTube):

There are two flavours of fabric interconnect. The 6120 is a 20-port 10GbE switch which supports one expansion module, used either to add additional 10GbE ports or to add FC ports for upstream SAN connectivity (storage traffic runs downstream to the chassis as FCoE over the 10GbE links). Each 6120 can manage up to 160 half-width or 80 full-width blades across 20 chassis. But bear in mind that the port density of the 6120 means you have a limited number of connections downstream to the chassis in the maximum configuration: only 10GbE of throughput to each chassis per 6120, or 20GbE with 2 x 6120 fabric interconnects (recommended for HA).

The 6140 is a 40-port 10GbE switch which supports two expansion modules, again either for additional 10GbE ports or for FC ports for upstream SAN connectivity. Each 6140 can manage up to 320 half-width or 160 full-width blades across 40 chassis. But again, bear in mind that the port density of the 6140 means you have a limited number of connections downstream to the chassis in the maximum configuration: only 10GbE of throughput to each chassis per 6140, or 20GbE with 2 x 6140 fabric interconnects (recommended for HA).
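If you want to sanity-check that trade-off yourself, the arithmetic is simple. Here is a rough Python sketch based on the figures above (it ignores any ports reserved for upstream links):

```python
# Rough capacity arithmetic for the 6100-series fabric interconnects described above.
def per_chassis_gbps(fixed_ports, chassis_count, fi_count=2, link_gbps=10):
    links_per_chassis = fixed_ports // chassis_count  # links from each interconnect
    return links_per_chassis * link_gbps * fi_count   # total Gb/s into each chassis

# 6120 at maximum scale: 20 chassis share 20 ports, one link each per interconnect.
print(per_chassis_gbps(fixed_ports=20, chassis_count=20))  # 20 Gb/s with an HA pair
# Fewer chassis buys more bandwidth: 5 chassis get 4 links each per interconnect.
print(per_chassis_gbps(fixed_ports=20, chassis_count=5))   # 80 Gb/s with an HA pair
# The 6140 scales the same way, with 40 fixed ports across up to 40 chassis.
print(per_chassis_gbps(fixed_ports=40, chassis_count=40))  # 20 Gb/s with an HA pair
```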

In terms of how the 6100 fabric interconnects connect upstream to the customer’s aggregation/distribution network layer: all native ports are 10GbE capable, but the first 8 ports on the 6120 and the first 16 ports on the 6140 can negotiate down to Gigabit speeds. You currently have the choice of 10GbE long-range or short-range optics, Gigabit SFPs, or Cisco’s CX1 10GbE copper twinax cables, which have the SFPs attached to each end of the cable but only come in 1m, 3m and 5m lengths (I believe 7m is planned). Typically the CX1 cables would be used for downstream chassis connectivity, but they can be used upstream if the connecting device supports them (e.g. a Nexus 5K). For FC connectivity, we can add expansion modules for either 8Gb/s or 4Gb/s FC. It’s worth noting that the 6100s work in NPV mode, so they do currently require upstream FC switches which support NPIV.
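As a quick illustration of that port-speed rule, here is a hypothetical validator (not a Cisco tool): a Gigabit uplink has to land on ports 1-8 of a 6120 or 1-16 of a 6140, while any native port will take 10GbE.

```python
# Hypothetical check of the fixed-port speed rules described above; not a Cisco tool.
GIGABIT_CAPABLE = {"6120": range(1, 9), "6140": range(1, 17)}

def validate_port(model, port, speed_gbps):
    if speed_gbps == 1 and port not in GIGABIT_CAPABLE[model]:
        raise ValueError(f"port {port} on a {model} cannot negotiate down to 1GbE")
    return f"{model} port {port} at {speed_gbps}GbE: ok"

print(validate_port("6120", 4, 1))    # fine: ports 1-8 can do Gigabit
print(validate_port("6140", 20, 10))  # fine: all native ports are 10GbE capable
# validate_port("6120", 12, 1)        # would raise: Gigabit only on ports 1-8
```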

The chassis, otherwise known as the 5108, is 6RU in height with front-to-back cooling, and can house up to 8 half-width blades or 4 full-width blades (I shall detail the difference later). The chassis has 8 cooling fans and requires a minimum of two power supplies, but can have up to 4 depending on the power redundancy requirements. The chassis is connected upstream via a pair of 2104 fabric extenders (blade switches, effectively), or a single fabric extender if using a non-HA configuration. Each 2104 has 4 external ports and connects only to its respective fabric interconnect (not dual-homed).
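Since each 2104 only has 4 external ports, how many of them you actually cable to the fabric interconnect sets the chassis bandwidth. A quick sketch (assuming 1, 2 or 4 links per fabric extender, which is how the 2104 is typically cabled):

```python
# Per-chassis bandwidth as a function of cabled 2104 fabric-extender links.
FEX_EXTERNAL_PORTS = 4  # external 10GbE ports on each 2104

for links in (1, 2, 4):  # assumed supported link counts per fabric extender
    assert links <= FEX_EXTERNAL_PORTS
    single = links * 10  # Gb/s via one fabric extender to one interconnect
    print(f"{links} link(s) per FEX: {single} Gb/s single, {single * 2} Gb/s HA pair")
```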

[Architecture diagram from Cisco.com]

As I said, there is a lot to cover on UCS, so I will detail the blades themselves with all the gubbins, such as the extended memory architecture, adapters and processing technology, in the next post. Then in further posts we’ll cover things like virtualisation integration, 3rd-party tools and network consolidation as a whole.


