Tag Archives: VMware

VMware – AppBlast. One word…. Wow!

VMware have a history of innovation and creating disruptive technology. Disruption may sound like a bad thing, although as we know with things like the VMware hypervisor, disruption makes people money. It may be disruptive, but if the benefits are clear then people standardise on the technology, and IT resellers, vendors and professionals benefit from the plethora of technology requirements which spill out the sides to accommodate these new marvels of modern tech.

VMware first set the trend when they abstracted the OS dependency on directly seeing physical hardware by introducing a hypervisor; now they have taken away the application dependency on seeing the operating system… lovely jubbly! This sounds good, but why? How? What?

I’m a little light on the nuts and bolts right now, but needless to say, if you can deliver a Windows/Linux/Mac application to any device with a browser supporting HTML5, the benefit is clear! Visio on my iPad… yes please. Safari on my Windows PC… why not?!

I shall await the finer details with bated breath, but leave you with a pretty cool demo as shown below… geeky soul food! Enjoy!

 


What is a VBlock.. the latest

Overview

Back in 2009 VMware, Cisco and EMC joined forces to create a new approach to selling fully pre-configured datacenter solution stacks. Rather than simply a gentlemen’s agreement and a cross-pollination of development between the three companies, it was decided they would create a new start-up business as the delivery mechanism to drive this new concept to market. This new start-up, known as VCE (Virtual Computing Environment), would take to market a new range of pre-validated, pre-configured and singularly supported solution stacks called VBlock.

The purpose of a VBlock is to simplify infrastructure down to, effectively, units of IT and to define that a workload can be supported by “a number of floor tiles” in the data centre. This approach is enabled by the fact that everything within a VBlock is pre-validated from an interoperability perspective, and customisable components are reduced down to packs of blades (compute), disks and the network components required to connect into the upstream customer environment. This means that solution design is massively simplified and can focus on supporting the identified workload.
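
To make the “units of IT” idea concrete, here’s a rough back-of-a-napkin sizing sketch in Python. The consolidation ratio and blade/chassis densities are purely illustrative assumptions, not VCE-published figures:

```python
# Rough "units of IT" sizing sketch -- all figures are illustrative assumptions,
# not published VCE sizing data.
vms_required = 400          # workload to be supported
vms_per_blade = 25          # assumed consolidation ratio for this workload profile
blades_per_chassis = 8      # assumed blade density
chassis_per_rack = 4        # assumed rack layout

blades = -(-vms_required // vms_per_blade)        # ceiling division
chassis = -(-blades // blades_per_chassis)
racks = -(-chassis // chassis_per_rack)

print(f"{blades} blades -> {chassis} chassis -> {racks} rack(s) / floor tiles")
```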

Pre-Validated

VCE extensively soak-test the workloads and configurations available within the VBlock to reduce pre-sales time spent researching interoperability between the network/compute/storage layers of the data centre. This means that defining how a workload is supported becomes the focus, and planning phases are significantly reduced. This pre-validated approach also means that power and cooling requirements are easily determined in preparation for site deployment.

Pre-Built and Pre-Configured

As part of the VBlock proposition, the physical and logical build processes are carried out in VCE facilities, so that time on the customer site is restricted to that of integrating into the customer environment and application-layer services. This reduces deployment time massively.

Single Support Presence

Rather than dealing with the parent companies of VCE (VMware, Cisco, EMC) on a per-vendor basis, VCE act as a single support presence and will own any VBlock-related issue end to end. This is partly enabled by the pre-validated aspect of VBlock: VCE have a number of VBlocks in house and, provided the VBlock is constructed as per the approved architectures, VCE can simulate the environment which has caused the error to decrease time to resolution.

The Technology

The technology at the core of the VBlock consists of VMware vSphere, Cisco UCS (Cisco’s unified compute solution), Cisco Nexus (Cisco’s unified fabric offering) and EMC’s VNX unified storage platform. Cisco simplify management of their blade computing platform down to a single point of management (UCS Manager), which resides on the 6100 fabric interconnects and allows for “stateless” computing, in that it is possible to abstract the server “personality” (MAC addresses, world wide names, firmware, etc.) away from the server hardware, then create and apply these personalities on demand to any blade within the UCS system. This management system covers all aspects of the UCS system (blade/chassis management, connectivity and firmware). Cisco’s unified fabric commonly refers to their Nexus range (though elements of unified fabric apply to UCS as well). Cisco Nexus allows both IP network traffic and Fibre Channel traffic to be delivered over common 10 Gigabit switches using FCoE (Fibre Channel over Ethernet). In addition, the Cisco Nexus 1000V enables deployment of a virtual switch within the VMware environment, allowing network services to be deployed within the virtual infrastructure where it was previously only possible in the physical world.
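
To picture the “stateless computing” idea, here’s a minimal conceptual sketch of a service profile carrying the server personality and being re-applied to a different blade. This is a toy model only, not the UCS Manager API:

```python
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    """Conceptual model of a UCS service profile: the server 'personality'
    that travels independently of any physical blade."""
    name: str
    mac_addresses: list
    wwpns: list           # Fibre Channel world wide port names
    firmware_policy: str
    boot_policy: str

@dataclass
class Blade:
    chassis: int
    slot: int
    profile: ServiceProfile = None

def associate(profile: ServiceProfile, blade: Blade) -> None:
    # In UCS Manager this would push identity, firmware and boot settings to the
    # blade; here we just record the association.
    blade.profile = profile
    print(f"{profile.name} applied to chassis {blade.chassis} slot {blade.slot}")

# The same personality can be re-applied to a spare blade on demand,
# e.g. after a hardware failure.
web01 = ServiceProfile("esx-web01", ["00:25:B5:00:00:1A"], ["20:00:00:25:B5:0A:01:01"],
                       firmware_policy="assumed-bundle", boot_policy="boot-from-san")
associate(web01, Blade(chassis=1, slot=3))
associate(web01, Blade(chassis=2, slot=5))   # moved after a blade failure
```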

EMC VNX is a multi-protocol storage array allowing for storage connectivity via block storage technologies (iSCSI/Fibre Channel) or NAS connectivity (CIFS/NFS/pNFS), giving the end user free choice as to how storage is provided to the UCS server estate. EMC also drive efficiencies in how capacity and performance are handled by leveraging technologies such as deduplication and thin provisioning to achieve a lower cost per gigabyte. EMC are also able to leverage solid state disk technologies to extend storage cache, or to enable sub-LUN-level tiering of data between solid state disk and traditional mechanical disk technologies based on data access patterns.
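
As a rough illustration of the sub-LUN tiering concept (not EMC’s actual FAST algorithm), the sketch below promotes the hottest extents to an assumed amount of SSD headroom based on observed I/O counts:

```python
# Illustrative sub-LUN tiering sketch: promote the hottest extents to SSD.
# A toy model of the concept only, not EMC's tiering implementation.
extents = {                 # extent id -> I/O count over the sampling window
    "lun5:0000": 12000, "lun5:0001": 300, "lun5:0002": 45000, "lun5:0003": 90,
}
ssd_capacity_extents = 2    # assumed SSD tier headroom, in extents

hot_first = sorted(extents, key=extents.get, reverse=True)
ssd_tier = hot_first[:ssd_capacity_extents]
mechanical_tier = hot_first[ssd_capacity_extents:]

print("SSD tier:", sorted(ssd_tier))
print("Mechanical tier:", sorted(mechanical_tier))
```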

VMware vSphere has provided many companies with cost savings in the past, but in the Vblock it is leveraged to maximum effect to provide operational efficiencies, with features such as dynamic and automated mobility of virtual machines between physical servers based on load, high availability, and the native integration between VMware and EMC through the VAAI API. This integration enables much lower SAN fabric utilisation for what were previously very intensive storage network operations, such as storage migration. EMC PowerPath/VE is also included in the Vblock, enabling true intelligent load balancing of storage traffic across the SAN fabric.
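
A quick, illustrative bit of arithmetic on why offloading a copy to the array matters for something like a storage migration; the VM size and the 2x host-copy figure are assumptions for the sake of the example:

```python
# Rough illustration of why array offload reduces fabric traffic during a
# storage migration. Figures are illustrative assumptions.
vm_size_gb = 500

# Without offload, the host reads every block from the source datastore and
# writes it to the target, so roughly twice the VM size crosses the fabric.
host_copy_traffic_gb = vm_size_gb * 2

# With the copy offloaded to the array, the data moves internally and the
# fabric only carries the offload commands, which is negligible by comparison.
offloaded_traffic_gb = 0

print(f"Host-based copy: ~{host_copy_traffic_gb} GB across the SAN fabric")
print(f"Array-offloaded copy: ~{offloaded_traffic_gb} GB across the SAN fabric")
```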

Management

VCE utilise Ionix Unified Infrastructure Manager (UIM) as a management overlay which integrates with the storage, compute, network and virtualisation technologies within the Vblock and allows high-level automation of, and operational simplicity around, how resources are provisioned within the VBlock. UIM discovers resources within the VBlock and the administrator then classifies those resources. As an example, high-performance blades may be deemed “gold” blades versus lower-specification blades, which may be classified as “silver” blades. This classification is also applied to other resources within the Vblock, such as storage. Once resources have been classified, they can be allocated on a per tenancy/application/department basis, with each being allowed access to differing levels of gold/silver/bronze resources within the Vblock. UIM now also includes operational aspects which give end-to-end visibility of exactly which hardware within a VBlock a particular VM is utilising (blades, disks, etc.). Native vendor management tools can still be used, although with the exception of vCenter, UIM would be the point of management for 90% of VBlock tasks after initial deployment.
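
A rough sketch of the classify-then-allocate flow described above. The tier names and data layout here are illustrative assumptions, not UIM’s actual object model:

```python
# Illustrative sketch of UIM-style resource classification and allocation.
# Tier names, thresholds and data layout are assumptions, not the UIM model.
blades = [
    {"id": "ch1/b1", "cores": 20, "ram_gb": 384},
    {"id": "ch1/b2", "cores": 12, "ram_gb": 96},
]
storage_pools = [
    {"id": "pool-ssd", "type": "ssd"},
    {"id": "pool-nlsas", "type": "nl-sas"},
]

def classify_blade(blade):
    return "gold" if blade["ram_gb"] >= 256 else "silver"

def classify_pool(pool):
    return "gold" if pool["type"] == "ssd" else "bronze"

catalog = {
    "blades": {b["id"]: classify_blade(b) for b in blades},
    "storage": {p["id"]: classify_pool(p) for p in storage_pools},
}

# A tenancy/application/department is then granted a slice of a given tier.
tenancy = {"name": "finance", "blade_tier": "gold", "storage_tier": "gold"}
print(catalog)
print(tenancy)
```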

In Summary

The VCE approach to IT infrastructure with VBlock simplifies procurement and IT infrastructure planning, as VCE are able to reduce their infrastructure offerings to, essentially, units of IT which are sized to support a defined workload within a number of “floor tiles” in the data centre. These predetermined units of IT have deterministic power and cooling requirements and scale in such a way that all VBlock instances (be it few or many) can be managed from a single point of management and are all supported under a single instance of support. Leveraging technologies which drive efficiencies around virtualisation, networking, storage and compute, we see benefits such as higher performance in smaller physical footprints when addressing storage and compute, minimised cable management and complexity with 10GbE-enabling technologies such as Fibre Channel over Ethernet, and operational simplicity with the native Vblock unified infrastructure management tool, UIM.


Protocol considerations with VMware

A good video I came across from EMC discussing some storage protocol considerations when looking at VMware.


VBlock 0: Want a Vblock? Don’t need to support 1000s of virtual machines though?

So, with all the hype of Vblock and VCE, we have marvelled at the integration between the three vendors of VCE and the slick positioning of vBlock… but what if a vBlock 1 or 2 is going to break the bank and you need something scaled down a touch, yet with the benefits of single support across the three vendor components?

Now that EMC have released the VBlock 0, things are looking more compelling for those with aspirations to the private cloud who may not want to invest in a vBlock 1 or 2 geared to support thousands of virtual machines.

In the below video, Scott Lowe and Mike Foley from the VCE team talk through what the vBlock 0 consists of, for the techhead.co.uk site.

The reference architecture for vBlock 0 can be found here.


VBlock single support offering

So I’ve just come back from a week over at EMC in Cork and had the privilege of seeing the flashy lights of a Vblock and speaking with the various VCE subject matter experts. So where do I start?

For those of you who aren’t familiar with Vblock or what the VCE (VMware, Cisco, EMC) coalition is all about, you can go to www.vcecoalition.com or watch the below vid from the VCE guys for the polished positioning:

This post is more for those of you who are familiar with the VCE offerings. I shall start with the single support element of the Vblock, which has been a subject of some debate, as there was some ambiguity around what Acadia does and where it operates… so let’s start by forgetting about Acadia. That sorts that 🙂 It’s all about the SST (Seamless Support Team). The SST is a dedicated vBlock support team based in Cork (amongst other places) which consists of VMware, Cisco and EMC qualified staff, all working under one roof. They are responsible for qualifying a vBlock and supporting customer implementations of vBlock; more importantly, for those who qualify as VCE partners, the SST will also support the pre-sales element of vBlock and help in qualifying the opportunity.

Slide from VCE around the SST

More information on VBlock support can be found here

Can I add a xxxxx to my vBlock?

No!.. well, not without an official exception from the SST anyway, and to be fair, aside from maybe adjusting quantities of disks/blades/memory, the architecture for a vBlock shouldn’t need to be changed. For the most part, if your goal is to move toward the virtualised datacenter then the vBlock should meet that requirement with the validated architecture. Bear in mind the vBlock is designed to sit in a datacenter environment, effectively at the access layer, and uplink into an existing network core/aggregation layer (which is where you would provide services such as firewall/VPN termination/Layer 3 routing, etc.), and these elements do not fall under the remit of the Seamless Support Team. The SST only look after the vBlock component(s); other components aside from the vBlock will have to fall under the support of their native vendors.

Why can’t we just add everything VMware/Cisco/EMC which we have to the same support contract?!

One of the reasons the SST is so effective is that they have a number of vBlocks within their support centres which all support personnel have access to; this means that they can re-create any issue which a customer may log, massively increasing the speed to resolution. This wouldn’t be possible if they didn’t police what a supported vBlock implementation is, as it would make issue staging and resolution very difficult. Also, yes, the vBlock is an impressive box of flashing lights and cool tech, but the aim of a pre-validated architecture is to enable customer conversations to be geared more toward meeting business requirements than technical ones, as the technical validation is already done. All the validated reference architectures are available at http://www.vcecoalition.com/solutions.htm

However, if it is felt that a component is absolutely required, then an exception can be applied for and approved at the discretion of the SST. But don’t go asking to add an HP server or Juniper switch… not gonna happen 😉

Bear in mind that it is early days, and although it may appear restrictive having to abide by the validated architectures and use cases, more and more validated architectures and options for the vBlocks are in the process of going through the required testing to ensure that they are truly technically validated and can be supported by the SST.

I will post more on the positioning and technology of vBlock in due course. For now… I gotta eat.


Administration of CLARiiON with VMware… getting easier

So, EMC released the NFS plugin for VMware to support storage administration tasks on Celerra from the VI Client a while back, which was very cool and had some very impressive features… but what about the traditional SAN man?!

Well, yesterday EMC announced a VMware plugin for CLARiiON…

Product Overview

The EMC CLARiiON Plug-in for VMware simplifies storage administration between the VMware Virtual Center Server and CLARiiON storage systems. It offers end-to-end management of storage-related tasks including provisioning of datastores, provisioning of raw device mapping (RDM) devices, and array-based virtual machine replication.

New Feature Summary 

The EMC CLARiiON Plug-in for VMware allows you to perform the following specific tasks directly from the VMware vSphere client:

  • Provision new datastores (VMFS volumes) or raw device mapping (RDM) volumes
  • Delete existing datastores backed by CLARiiON CX4 storage
  • Create virtual machine replicas using array-based replication services
  • Publish the replicated virtual machines to a View Manager

Notes

  • The EMC CLARiiON Plug-in for VMware is customer-installable.
  • The EMC CLARiiON Plug-in for VMware requires CX4 storage systems running Release 29 FLARE.

That’s all I have at the minute, but I will be picking the brains of the EMC bods as I go to get some more info.

Very useful feature though!


SAN-based replication? No problem… latency? Problem…

Disaster recovery is moving higher and higher up the agenda on companies’ “to do” lists. It’s becoming increasingly apparent what the costs to a given business are when companies suffer downtime and/or loss of data… people are starting to think about the monetary cost to the business when services or applications are unavailable to both internal staff and, more importantly, customers. And with the big push of server virtualisation over the last few years… where are the application data, file data and the application servers themselves sitting? On the SAN; so it makes sense to leverage that existing SAN infrastructure and use some form of SAN-based replication.

Bear in mind the SAN is no longer a luxury only the privileged enterprise has access to, and is becoming ever more important to even small businesses… not all of these organisations have access to biiiig dedicated links between sites, and if they do, they’re probably subject to significant contention; unfortunately, TCP isn’t the most efficient of protocols over distance.
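
The TCP-over-distance point is easy to quantify: with a fixed receive window, a single TCP flow can never exceed window size divided by round-trip time, however fat the link. A quick sanity check with assumed figures:

```python
# TCP throughput ceiling = receive window / round-trip time.
# Figures are illustrative: a default 64 KB window over a 20 ms inter-site link.
window_bytes = 64 * 1024
rtt_seconds = 0.020

max_throughput_mbps = (window_bytes * 8) / rtt_seconds / 1_000_000
print(f"Ceiling per TCP flow: {max_throughput_mbps:.1f} Mbit/s")   # ~26 Mbit/s
# Even a 1 Gbit/s link can't be filled by a single flow without window scaling
# or some form of WAN optimisation.
```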

So what do you do to make sure the DR solution you have in mind is feasible and realistic?

Firstly make sure you pick the right technology    

The first port of call is sitting down with the customer and mapping out the availability requirements of their applications: things like the RPO/RTO requirements of the applications they have in use. A lot of the time the company may not have thought about this in a lot of detail, so you can really add value here if you are a reseller. Ultimately it boils down to the following being considered for each service:

  • How much downtime can you afford before the business starts losing money on each given application?
  • How much data can you afford to lose in the event of a disaster before it does significant damage to the business?

 

If you can get them to apply a monetary figure to the above, it can help when positioning return on investment.    
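
As a worked example (with made-up figures) of turning those two questions into money:

```python
# Illustrative downtime / data-loss costing -- all figures are assumptions.
revenue_per_hour = 5_000            # what the application earns or protects per hour
rto_hours = 4                       # acceptable downtime before recovery
rpo_hours = 1                       # acceptable data-loss window
rework_cost_per_hour_lost = 2_000   # cost of re-keying / recovering lost data

cost_of_outage = revenue_per_hour * rto_hours
cost_of_data_loss = rework_cost_per_hour_lost * rpo_hours

print(f"Worst-case outage cost at the RTO: £{cost_of_outage:,}")
print(f"Worst-case data-loss cost at the RPO: £{cost_of_data_loss:,}")
# Compare these against the annual cost of the replication solution to frame ROI.
```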

There are a few types of array-based replication out there. They normally come in three flavours: asynchronous, synchronous, and journaling/CDP. Synchronous replication can be a bit risky for a lot of businesses, as application response time usually becomes dependent on writes being committed to disk on both production and DR storage (thus application response times also become dependent on the round-trip latency across the link between the two sites; spindle count becomes very important on both sites here also). I often find that, aside from banks and large conglomerates, the main candidate for synchronous replication in the SMB space is actually universities. Why? Because often universities don’t replicate over massive distances; they will have a campus DR setup where they replicate over a couple of hundred metres from building to building, so laying fibre in this case isn’t too costly. However, for the average SMB who wants to replicate to another town, synchronous replication isn’t usually preferable due to latency over distance and the cost of the large link required.
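
To see why round-trip latency bites so hard with synchronous replication, remember that every write only completes once the remote array has committed it. A rough model with illustrative figures:

```python
# Rough model of synchronous replication write latency -- illustrative figures.
local_write_ms = 1.0            # time for the local array to commit the write
remote_write_ms = 1.0           # time for the remote array to commit the write
rtt_ms_campus = 0.1             # a few hundred metres of fibre between buildings
rtt_ms_intercity = 10.0         # the sort of RTT you might see between towns

def sync_write_latency(rtt_ms):
    # The host only gets its ACK after local commit + round trip + remote commit.
    return local_write_ms + rtt_ms + remote_write_ms

print(f"Campus DR:  {sync_write_latency(rtt_ms_campus):.1f} ms per write")
print(f"Inter-city: {sync_write_latency(rtt_ms_intercity):.1f} ms per write")
```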

MirrorView Asynchronous (EMC)

Asynchronous replication is typically what I see in the case of most small to medium sized businesses. Why? Firstly, because with synchronous replication application response times become dependent on the round-trip time between sites. With asynchronous replication, a copy-on-first-write mechanism is usually utilised to effectively ship snapshots at specified intervals over an IP link. Below is a diagram showing how EMC MirrorView/A does this:

    

EMC uses what’s called a delta bitmap (a representation of the data blocks on the volume) to track what has been sent to the secondary array and what hasn’t. This delta bitmap works in conjunction with reserve LUNs (the delta set) on the array to ensure that the data sent across to the secondary array remains consistent. The secondary array also has reserve LUNs in place so that, if replication were interrupted or the link lost, the secondary array can roll back to its original form and the data isn’t compromised.
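
As a conceptual toy model of the delta bitmap idea (not MirrorView/A’s actual implementation), the sketch below marks changed blocks locally and ships only those blocks at each replication interval:

```python
# Conceptual sketch of delta-bitmap based async replication -- a toy model of
# the idea, not EMC MirrorView/A's implementation.
volume_blocks = 1024
delta_bitmap = [False] * volume_blocks     # True = changed since the last update

def write_block(block_id, data):
    # Production writes complete locally; the bitmap just marks the block dirty.
    delta_bitmap[block_id] = True
    # ... commit data to the local volume ...

def replication_cycle(send):
    """At each interval, ship only the blocks flagged in the bitmap."""
    changed = [i for i, dirty in enumerate(delta_bitmap) if dirty]
    for block_id in changed:
        send(block_id)                     # transfer the block to the secondary array
        delta_bitmap[block_id] = False     # clear once safely received
    return len(changed)

write_block(7, b"...")
write_block(42, b"...")
print(replication_cycle(send=lambda block_id: None), "blocks shipped this cycle")
```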

Also, you can use higher-capacity, less expensive disks on the DR site without affecting response times in production (although application response times will still be affected in the event of a failover, as servers will be accessing disk on the DR box). One potential drawback with asynchronous replication is that, as the two SANs are no longer in a synchronous state, you have to decide whether it is important that your remote copies of data are in an application-consistent state. If it is important, then you’ll have to look at a technology which sits in the host, talks to the application and also talks to the storage. In the EMC world we have a tool called Replication Manager which does all the various required bits on the host side (calling VSS/hot backup mode, flushing host buffers, etc.).

Replication Manager is licensed per application server (or virtual server in a cluster) and also requires an agent per mount host, plus a server licence (or two, depending on the scenario). There is a lot more to Replication Manager, but that’s a whole post in itself.

EMC RecoverPoint    

RecoverPoint is another replication technology from EMC which allows very granular restore points and small RPOs over IP, because it employs journaling rather than copy-on-first-write. It journals and timestamps writes at very regular intervals (almost every write in some cases), allowing you to roll volumes back to very specific, granular points in time. See the below diagram for more detail:

    

RecoverPoint provides out-of-band replication: the RecoverPoint appliance itself is not in the I/O path. Instead, a component of RecoverPoint called the splitter (or Kdriver) is. The function of the splitter is to intercept writes destined for a volume being replicated by RecoverPoint. The write is then split (“copied”), with one copy being sent to the RecoverPoint appliance and the original being sent to the target.

With RecoverPoint, three types of splitters can be used. The first splitter resides on a host server that accesses a volume being protected by RecoverPoint. This splitter sits in the I/O stack, below the file system and volume manager layer and just above the multi-path layer. It operates as a device driver, inspecting each write sent down the I/O stack and determining whether it is destined for one of the volumes that RecoverPoint is protecting. If the write is destined for a protected LUN, the splitter sends the write downward and rewrites the address packet in the write so that a copy is sent to the RecoverPoint appliance. When the ACK (acknowledgement) from the original write is received, the splitter waits until a matching ACK is received from the RecoverPoint appliance before sending an ACK up the I/O stack. The splitter can also be part of the storage services on intelligent SAN switches from Brocade or Cisco.
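
Here is a conceptual sketch of the host splitter behaviour described above, in particular the “don’t ACK up the stack until both copies are ACKed” rule. This is purely illustrative and in no way the real Kdriver:

```python
# Conceptual sketch of a host-based write splitter -- illustrative only, not
# the RecoverPoint Kdriver implementation.
protected_luns = {"lun_finance_01"}

def send_to_target(lun, write):
    # Stub for the normal downward I/O path; returns once the array ACKs.
    return "ack-from-array"

def send_to_rpa(lun, write):
    # Stub for the copy sent to the RecoverPoint appliance; returns on its ACK.
    return "ack-from-rpa"

def split_write(lun, write):
    """Intercept a write below the filesystem/volume manager layer."""
    array_ack = send_to_target(lun, write)          # the original write continues
    if lun in protected_luns:
        rpa_ack = send_to_rpa(lun, write)           # a duplicate copy is split off
        # Only ACK up the stack once both the array and the appliance have ACKed.
        assert array_ack and rpa_ack
    return "ack-up-the-stack"

print(split_write("lun_finance_01", b"\x00" * 512))
```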

For the CLARiiON CX4 and CX3, the CLARiiON storage processor also has a write splitter. When a write enters the CLARiiON array (either through a Gigabit Ethernet port or a Fibre Channel port), its destination is examined. If it is destined for one of the LUNs being replicated by RecoverPoint, then a copy of that write is sent back out of one of the Fibre Channel ports of the storage processor to the RecoverPoint appliance. Since the splitter resides in the CLARiiON array, any open systems server that is qualified for attachment to the CLARiiON array can be supported by RecoverPoint. Additionally, both Fibre Channel and iSCSI volumes that reside inside the CLARiiON CX4 or CX3 storage array can be replicated by RecoverPoint. RecoverPoint/SE only supports a Windows host-based splitter and the CLARiiON-based write splitter. Also, automatic installation and configuration for RecoverPoint/SE only supports the CLARiiON-based write splitter.

Below is a video from EMC demonstrating RecoverPoint in a VMware environment:

   

  

Optimise

So how do we ensure we are getting the most out of the links we use (especially over contended links such as VPN or MPLS)? WAN optimisation… There are a number of ways this can be done: some use an appliance to acknowledge back to the production SAN locally, then cache the data and burst it over the WAN. Some companies have found a more efficient way of transmitting data over a WAN by using proprietary, more efficient protocols to replace TCP over the WAN (such as HyperIP). Below is a snippet from an email I received from a company called Silver Peak, who seem to deal with the challenges of optimising WAN efficiency quite well, in particular with SAN replication:

“Just a few years ago, it was unheard of to combine SAN traffic with other storage applications, let alone on the same network as non-storage traffic. That is no longer the case. Silver Peak customers like NYK logistics are doing real-time data replication over the Internet. Want to learn more? Here is a demo of EMC replication running across a shared WAN”

In summary

Replication is a biiiig topic… there are many more factors to be considered, such as automation, cluster awareness, etc. I think the best way to summarise this post is…

To be continued     

      

 


How does Microsoft Licensing with Virtualisation work?!

This question’s been asked a few times, and due to the fluidity of some of these organisations’ licensing structures… I thought it best to defer. The link below runs through the various options regarding Microsoft licensing in a virtual world… it starts off a little fluffy, but bear with it, they do get to the point.

Microsoft Licensing with Virtualisation


Building Blocks of a Vblock

Seeing as lots of people are asking lots of questions around EMC, VMware and Cisco’s Vblock, I thought I’d best dig something out. Attached is a very concise, granular document which outlines the different elements of a Vblock, how the disks are configured, supported Vblock applications and… some pretty pictures for your delectation.

solution_overview_vce

The below clip is the Cisco Vice President talking about the various Vblock packages.


Managing Celerra from VMware

EMC of late have been very good at increasing the level of integration between their storage and the VMware platform. First it was DRS integrating with EMC QoS Manager, then the ability to view which VMs reside on SAN storage from within CLARiiON’s Navisphere Manager, then Replication Manager was brought in line to facilitate machine-consistent snaps/replication with VMware using their VMFS proxy.

All very cool stuff, but now EMC are pulling the ability to manage storage on EMC’s Celerra platform into VMware’s VI Client. As of release 5.6.48 of DART (DART is the firmware/OS for Celerra), you will be able to handle the creation and management of NFS exports from within VMware and perform the following actions:

Create an NFS file system and mount to ESX systems to be used as a VMware data store. File systems created with the plug-in will be automatically deployed with EMC and VMware best practices, including Celerra Virtual Provisioning.

Data store (NFS file system) extension, which extends the underlying Celerra file system that is exported to the ESX cluster.

Compress the VMware VMDK files associated with a virtual machine, a folder (of multiple virtual machines), an ESX server, or a whole data store. Decompress a previously compressed VMDK or set of VMDK files.

Full Clone—Make full copies of virtual machine VMDK files using new Celerra-based functionality

Fast Clone—Make thin copies of virtual machine VMDK files instantly using new Celerra NFS file-based snap functionality

 

 

Below is a very good video from the one and only virtualgeek, Mr Chad Sakac, demonstrating the feature.