Category Archives: Virtualisation – Storage Considerations

Having performance issues with NFS datastores on Celerra? Patch! Patch! Patch!

I sat in on an interesting session yesterday which gets under the covers of VMware performance on NFS datastores hosted on the Celerra NS series. It was presented by a chap called Ken Cantrell, who works for EMC engineering, and came off the back of field feedback showing that in many cases customers' VMware estates simply weren't performing when using NFS on Celerra. This is not the case for everybody, but it certainly was an issue. Essentially what was happening is that the Celerra was dealing with an extensive number of NFS calls to the UxFS log (predominantly Getattr-type calls) and this was slowing down response times back to the host.

EMC tested a workload on the Celerra running DART 6.0.4 using Jetstress. Jetstress effectively simulates an Exchange workload and halts the benchmark if response times exceed 20 milliseconds, on the basis that anything over 20 milliseconds is too poor for Exchange. EMC were seeing that with the base version of DART 6, response times were exceeding 25 milliseconds for the Exchange workload they tested on an Exchange VM sitting on a Celerra-hosted NFS datastore. EMC then released a patch upgrade which brought that down to sub-15 ms, followed by an experimental epatch (DART 6.0.4.805) which brought response times down further to sub-10 ms (closer to 6 ms).
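To put those numbers in context, Jetstress applies a simple gate: if response times go over the 20 ms cutoff, the run fails. Below is a minimal Python sketch of that kind of threshold check; the 20 ms figure comes from the session, but the per-interval samples are invented for illustration and the helper name is my own, not part of Jetstress or EMC's tooling.

```python
# Toy illustration of the Jetstress-style latency gate described above.
# The 20 ms cutoff comes from the post; the sample data below is invented
# purely to show how such a check works -- it is not EMC's test output.

def latency_gate(samples_ms, threshold_ms=20.0):
    """Return (passed, average, p95) for a list of latency samples in ms."""
    ordered = sorted(samples_ms)
    average = sum(ordered) / len(ordered)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return average <= threshold_ms, average, p95

if __name__ == "__main__":
    # Hypothetical per-interval response times from an Exchange VM on an NFS datastore
    before_patch = [24, 27, 26, 25, 28, 26]   # roughly the >25 ms behaviour on base DART 6
    after_epatch = [6, 7, 5, 8, 6, 7]         # roughly the ~6 ms behaviour on DART 6.0.4.805
    for label, run in (("base DART 6", before_patch), ("DART 6.0.4.805", after_epatch)):
        ok, avg, p95 = latency_gate(run)
        print(f"{label}: avg={avg:.1f} ms, p95={p95} ms, {'PASS' if ok else 'FAIL (>20 ms)'}")
```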

So the bottom line is… don't just suffer poor performance. Feed back to EMC if you see issues and keep an eye on patch updates to the OS. They are there to resolve noted issues!

Material:

A good post on ECN with details of the new patch:

https://community.emc.com/thread/118430

A good blog comparing iSCSI vs NFS for VMware:

http://goingvirtual.wordpress.com/2010/04/07/iscsi-or-nfs-with-emc-celerra/

A good blog post by Jason Boche comparing the performance delta between DART 6.0.4 and DART 6.0.4.805:

http://www.boche.net/blog/index.php/2011/03/21/emc-celerra-beta-patch-pumps-up-the-nfs-volume/


Protocol considerations with VMware

A good video I came across from EMC discussing some storage protocol considerations when looking at VMware.


Vblock 0: Want a Vblock? Don't need to support 1000s of virtual machines though?

So, with all the hype around Vblock and VCE, we have marvelled at the integration between the three VCE vendors and the slick positioning of Vblock… but… what if a Vblock 1 or 2 is going to break the bank and you need something scaled down a touch, while keeping the benefit of single support across the three vendors' components?

Now that EMC have released the Vblock 0, things are looking more compelling for those with aspirations to the private cloud who may not want to invest in a Vblock 1 or 2, which are geared to supporting thousands of virtual machines.

In the video below, Scott Lowe and Mike Foley from the VCE team talk through what the Vblock 0 consists of, for the techhead.co.uk site.

The reference architecture for vBlock 0 can be found here.


VBlock single support offering

So I've just come back from a week over at EMC in Cork, where I had the privilege of seeing the flashy lights of a Vblock and speaking with the various VCE subject matter experts. So where do I start?

For those of you who aren't familiar with Vblock or what the VCE (VMware, Cisco, EMC) coalition is all about, you can go to www.vcecoalition.com or watch the video below from the VCE guys for the polished positioning:

This post is more for those of you who are familiar with the VCE offerings. I shall start with the single support element of the Vblock, which has been a subject of some debate, as there was some ambiguity around what Acadia does and where it operates… so, let's start by forgetting about Acadia. That sorts that 🙂 It's all about the SST (Seamless Support Team). The SST is a dedicated Vblock support team based in Cork (amongst other places), consisting of VMware-, Cisco- and EMC-qualified staff all working under one roof. They are responsible for qualifying a Vblock and supporting customer implementations of Vblock, and, more importantly for those who qualify as VCE partners, the SST will also support the pre-sales element of Vblock and help qualify the opportunity.

Slide from VCE around the SST

More information on Vblock support can be found here.

Can I add a xxxxx to my Vblock?

No!… well, not without an official exception from the SST anyway, and to be fair, aside from maybe adjusting quantities of disks/blades/memory, the architecture of a Vblock shouldn't need to be changed. For the most part, if your goal is to move toward the virtualised datacenter, then the Vblock should meet that requirement with the validated architecture. Bear in mind the Vblock is designed to sit in a datacenter environment, effectively at the access layer, and uplink into an existing network core/aggregation layer (which is where you would provide services such as firewall, VPN termination, Layer 3 routing, etc.), and these elements do not fall under the remit of the Seamless Support Team. The SST only look after the Vblock component(s); other components aside from the Vblock will have to fall under the support of their native vendors.

Why can't we just add all the VMware/Cisco/EMC kit we have to the same support contract?!

One of the reasons the SST is so effective is that they have a number of Vblocks within their support centres which all support personnel have access to. This means they can re-create any issue a customer may log, which massively increases the speed to resolution. That wouldn't be possible if they didn't police what a supported Vblock implementation is; it would make issue staging and resolution very difficult. Also, yes, the Vblock is an impressive array of flashing lights and cool tech, but the aim of a pre-validated architecture is that it allows customer conversations to be geared more toward meeting business requirements than technical ones, as the technical validation is already done. All the validated reference architectures are available at http://www.vcecoalition.com/solutions.htm

However, if it is felt that a component is absolutely required, an exception can be applied for and approved at the discretion of the SST. But don't go asking to add an HP server or a Juniper switch… not gonna happen 😉

Bear in mind that it is early days, and although it may appear restrictive to have to abide by the validated architectures and use cases, more and more validated architectures and options for the Vblock are going through the required testing to ensure they are truly technically validated and can be supported by the SST.

I will post more on the positioning and technology of Vblock in due course. For now… I gotta eat.


Administration of Clariion with VMware… getting easier

So, EMC released the NFS plug-in for VMware a while back, to support storage administration tasks on Celerra from the VI Client, which was very cool and had some very impressive features… but what about the traditional SAN man?!

Well, yesterday EMC announced a VMware plug-in for Clariion…

Product Overview

The EMC CLARiiON Plug-in for VMware simplifies storage administration between the VMware Virtual Center Server and CLARiiON storage systems. It offers end-to-end management of storage-related tasks, including provisioning of datastores, provisioning of raw device mapping (RDM) devices, and array-based virtual machine replication.

New Feature Summary 

The EMC CLARiiON Plug-in for VMware allows you to perform the following tasks directly from the VMware vSphere client (a rough sketch of how the resulting datastore objects look through the vSphere API follows the list):

  • Provision new datastores (VMFS volumes) or raw device mapping (RDM) volumes
  • Delete existing datastores backed by CLARiiON CX4 storage
  • Create virtual machine replicas using array-based replication services
  • Optionally publish the replicated virtual machines to a View Manager
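As promised above, here is a rough sketch, using the Python vSphere bindings (pyVmomi), of how the datastore objects the plug-in provisions and deletes appear through the vSphere API. This is an illustration only, not the plug-in itself; the vCenter hostname and credentials are placeholders.

```python
# Rough sketch: list datastores via the vSphere API (pyVmomi).
# This is NOT the CLARiiON plug-in itself -- just an illustration of the
# datastore objects the plug-in creates and deletes. Hostname/credentials
# are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_datastores(vc_host, user, pwd):
    ctx = ssl._create_unverified_context()          # lab-style, self-signed cert
    si = SmartConnect(host=vc_host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            s = ds.summary
            print(f"{s.name:20} {s.type:6} "
                  f"capacity={s.capacity / 2**30:.0f} GiB "
                  f"free={s.freeSpace / 2**30:.0f} GiB")
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_datastores("vcenter.example.local", "administrator", "password")
```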

Notes

  • EMC CLARiiON Plug-in for VMware is customer-installable.

  • EMC CLARiiON Plug-in for VMware requires CX4 storage systems running FLARE Release 29.

That's all I have at the minute, but I will be picking the brains of the EMC bods as I go to get some more info.

Very useful feature though!!


Iomega/EMC’s new lovechild

Iomega first started life selling removable storage. The world marvelled at the might of the 100MB Zip drive, brought gifts of gold, frankincense and myrrh as offerings to the almighty Jaz drive, and sacrificed livestock in awe of the Ditto drive (I exaggerate… but bear with me, I'm setting the scene). Then, as removable storage media started to give way to the internet and USB drives became the standard for removable storage… we started to see the likes of the Zip and Jaz drives fade away.

So… out with the old, in with the new? No… Iomega now have a massive play in the consumer space for external hard drives and networked storage. The upper end of the networked storage range was the IX4 (now on its second generation): a nice tidy box which would hold up to 8TB of raw capacity and fit well in a remote office environment, a home office, or even as a media server for your movies and music (all legitimately obtained, of course). They even did a rackmount NAS device… Brilliant!!

But what if you need a little more grunt… a bit more redundancy and scalability… something more feature-rich? Iomega/EMC are on the verge of releasing the IX12. This box fits nice and snug between the IX4-200R and EMC's Celerra NX4; it supports up to 24TB of raw capacity, supports all the RAID types you'd ever want to use, and has four Gigabit Ethernet ports which can support up to 256 iSCSI initiators (servers) or 256 LUNs for block-level access. All the other usual protocols still apply in the oh-so-familiar forms of CIFS, NFS, FTP, HTTP, etc., and there are even a few nice bells and whistles such as port aggregation, DFS, array-based replication, WebDAV support for online collaboration, and drive spin-down (very cool if it's being used as a backup-to-disk or archive target).
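For a rough feel of what 24TB raw turns into once a RAID type is chosen, here is some back-of-an-envelope Python. The 12 x 2TB drive layout is my assumption for illustration (only the 24TB raw figure comes from the IX12 details above), and the numbers ignore filesystem and hot-spare overhead.

```python
# Back-of-envelope usable capacity for a hypothetical 12 x 2 TB IX12 build.
# Drive count and size are my assumptions -- only the 24 TB raw figure is from the post.
DRIVES = 12
DRIVE_TB = 2.0

def usable_tb(raid):
    """Very rough usable capacity before filesystem and hot-spare overhead."""
    if raid == "RAID5":            # one drive's worth of parity
        return (DRIVES - 1) * DRIVE_TB
    if raid == "RAID6":            # two drives' worth of parity
        return (DRIVES - 2) * DRIVE_TB
    if raid == "RAID10":           # mirrored pairs
        return DRIVES / 2 * DRIVE_TB
    raise ValueError(raid)

for level in ("RAID5", "RAID6", "RAID10"):
    print(f"{level}: ~{usable_tb(level):.0f} TB usable of {DRIVES * DRIVE_TB:.0f} TB raw")
```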

The IX12 has also been certified by a number of other vendors; it is, of course, on VMware's Hardware Compatibility List for shared storage (and supported by a number of other virtualisation vendors), and Microsoft have verified that it will support Exchange 2010 mail stores for environments of up to 250 users.

Iomega have stated that these boxes are coming in at between $5,000 and $10,000 list, so they will help EMC break even further into the lower SMB market. Personally, I think this box will play really well in spaces such as remote offices, graphic design organisations, departmental dedicated storage, backup-to-disk targets (admittedly it would be more compelling if it supported NDMP, but we'll leave that to the big boys), and archive storage for the likes of EMC's SourceOne, EV, Commvault, etc…

I'll put together a clearer and more concise post after the announcements to come, but I think Iomega could be onto a winner with this one…


Interesting Times

I wanted to do a post based on some of the technologies which have facilitated this vision of… the cloud, and to look at some of those things in isolation with a view to understanding the bigger picture.

IT is at an interesting crossroads at the moment. There is a whisper in the wind accumulating clarity and momentum by the day, a whisper which tells us that the way people think about IT is changing. The concept of the cloud is not a new one, but its shape and purpose have differed quite dramatically depending on who you talk to. For the moment at least, and for the various vendor channels, it's been very much business as usual. People are still buying tin, assessing the viability of virtualisation, and putting out to tender for the traditional server/SAN type solutions, and vendors will continue to cater for those traditional needs. However, vendors have also been doing something else… better defining this cloud thing, and working out how they can commoditise it, slap a price tag on it, stick it in a box and sell it.

Let's look at some of the technologies which have been developed to facilitate this transition.

Virtualisation on the whole gives us the ability to better utilise tin and to deploy new virtual servers with speed and ease. VMware's vMotion, Distributed Power Management and Distributed Resource Scheduling give us the ability to move virtual servers between physical servers, without disruption, for any number of reasons (DPM allows us to reduce power requirements by consolidating virtual servers onto fewer physical machines and powering down those left unused, as and when the business deems it suitable; DRS distributes virtual servers dynamically between physical servers based on the resource requirements of each virtual server). This mobility allows the business to be flexible and adaptive. The advent of virtualisation also allows us, in effect, to commoditise resources, be they memory, CPU or storage, and to distribute them in the most effective manner possible.
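As a toy illustration of the kind of decision DRS and DPM automate, here is a small Python sketch. It is a sketch of the concept only, not VMware's actual algorithms (which weigh far more than a single CPU number), and the host/VM figures are invented.

```python
# Toy model of the DRS/DPM ideas described above -- NOT VMware's algorithm.
# Each host has a CPU capacity; VMs have demands. "DRS" moves a VM from the
# busiest host to the least busy when the imbalance is big enough; "DPM"
# powers off a host whose VMs would comfortably fit elsewhere.

hosts = {"esx01": 100, "esx02": 100, "esx03": 100}          # CPU capacity units
placement = {"vm1": "esx01", "vm2": "esx01", "vm3": "esx01", "vm4": "esx02"}
demand = {"vm1": 30, "vm2": 25, "vm3": 20, "vm4": 10}       # CPU demand units

def load(host):
    return sum(demand[vm] for vm, h in placement.items() if h == host)

def drs_pass(threshold=20):
    busiest = max(hosts, key=load)
    idlest = min(hosts, key=load)
    if load(busiest) - load(idlest) > threshold:
        vm = min((v for v, h in placement.items() if h == busiest), key=demand.get)
        placement[vm] = idlest                              # "vMotion" the smallest VM
        print(f"DRS: moved {vm} from {busiest} to {idlest}")

def dpm_pass():
    for host in list(hosts):
        vms = [v for v, h in placement.items() if h == host]
        others = [h for h in hosts if h != host]
        spare = sum(hosts[h] - load(h) for h in others)
        if vms and sum(demand[v] for v in vms) < spare * 0.5:   # plenty of headroom elsewhere
            for v in vms:
                placement[v] = min(others, key=load)            # evacuate the host first
            del hosts[host]                                     # then "power it down"
            print(f"DPM: evacuated and powered down {host}")
            break

drs_pass()
dpm_pass()
print("final placement:", placement)
```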

Storage has become intelligent. Virtualisation and automation technologies in the storage world have given storage platforms the ability to adapt. Things like thin provisioning and online archiving give us the ability to make better use of storage. Players like Compellent, and EMC with their FAST technology, give storage the edge by digging down into the bare blocks of storage and moving individual blocks of data between fast/expensive and cheaper/high-density storage, based on how often those blocks are being accessed and their IOPS requirements. Deduplication is yet another technology that remains transparent to the user while storing data more efficiently.
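Deduplication in particular is a simple idea at heart: hash each block, store each unique block once and keep pointers for the repeats. Here is a toy Python sketch of the concept; real arrays do this with far cleverer metadata and at different granularities, and the block size and sample data below are purely illustrative.

```python
# Toy block-level deduplication -- illustrates the concept only, not any
# vendor's implementation. Blocks are hashed; only unique blocks are stored.
import hashlib

BLOCK_SIZE = 4096

def dedupe(data: bytes):
    store = {}                      # sha1 digest -> block contents (stored once)
    index = []                      # logical block order as a list of digests
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        digest = hashlib.sha1(block).hexdigest()
        store.setdefault(digest, block)
        index.append(digest)
    return store, index

if __name__ == "__main__":
    # Simulate a datastore full of near-identical VM images: lots of repeated blocks.
    data = (b"A" * BLOCK_SIZE * 50) + (b"B" * BLOCK_SIZE * 30) + (b"C" * BLOCK_SIZE * 20)
    store, index = dedupe(data)
    logical = len(index) * BLOCK_SIZE
    physical = sum(len(b) for b in store.values())
    print(f"logical {logical // 1024} KiB -> physical {physical // 1024} KiB "
          f"({logical / physical:.0f}:1 reduction)")
```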

Mobility. VMware again, with virtual desktops being delivered on demand wherever the user needs them, maintaining access to all their bits and pieces. IP telephony and VPNs give the external user the ability to access all the resources of the internal user and be as mobile as they need to be. With networking capabilities becoming ever more efficient and robust, and with things like 10GbE and FCoE coming to market, the datacenter is able to consolidate its network infrastructure and provide resources to the user in ever more efficient and increasingly intelligent ways.

Here are a couple of videos showing some deployments of IP telephony, virtual desktops and the like, which I found interesting:

 Here’s a very cool way in which Subway have deployed IP telephony in their setup

and a video showing VMware's virtual desktop offering

Management. We're seeing integration between the network, the server side and the storage in a big way. You can now manage EMC storage from within VMware, and VMware have pulled Ionix into their portfolio, meaning they can manage both physical and virtual infrastructure. Ionix have also released the Unified Infrastructure Manager, which can manage Cisco Nexus networking tin, VMware and EMC storage. So not only can you have all these separate and different technologies working as one, you can manage them as one.

Here's a video showing the aforementioned Ionix Unified Infrastructure Manager:

EMC/VMware/Cisco have their offering with the Vblock, NetApp are hot on the heels of EMC, and Microsoft and HP/LeftHand are all working towards a common goal (in competition with each other, of course): to be right where it's happening when service providers take the next step from providing telecoms, disaster recovery and software as a service… and start providing, effectively, resource as a service, infrastructure as a service.

When organisations are comfortable trusting a third party to host their applications, their user data and even their desktops, any vendor worth their salt wants to be there. Before long, we won't be asking customers what switches, servers and storage they want. We'll be asking broader questions… How many IOPS do you want? How much memory? How much computing power do you need? How much bandwidth? How many people do you want to be able to make phone calls? This adaptable, mobile architecture we're seeing now will be doing the math… Service providers will be selling virtual commodities.

Below is a video by Gartner, with some of their analysts discussing some of the points of cloud computing:

Of course, we're a little while away from seeing that happen in the mainstream, a little way away from seeing the masses flock to these service providers. People like to have control over their data; they know that if it's in a rack they can walk up to and touch, they have control. The market needs to have confidence in this concept that is the cloud… and again, there are businesses who understand and are comfortable with the concept and have adopted it for aspects of their business. But when people start entrusting their critical core business applications, the ones bound to OLAs and SLAs… that is when it will get really exciting.


Managing Celerra from VMware

EMC of late have been very good at increasing the level of integration between their storage and the VMware platform. First it was DRS integrating with EMC QoS Manager, then the ability to view which VMs reside on SAN storage from within Clariion's Navisphere Manager, then Replication Manager was brought in line to facilitate machine-consistent snaps/replication with VMware using their VMFS proxy.

All very cool stuff, but now EMC are pulling the ability to manage storage on EMC's Celerra platform into VMware's VI Client. As of release 5.6.48 of DART (DART is the firmware/OS for Celerra), you will be able to handle the creation and management of NFS exports from within VMware and perform the following actions (a rough sketch of the ESX-side equivalent follows the list):

Create an NFS file system and mount to ESX systems to be used as a VMware data store. File systems created with the plug-in will be automatically deployed with EMC and VMware best practices, including Celerra Virtual Provisioning.

Data store (NFS file system) extension extends the Celerra file system that is exported to the ESX cluster.

Compress the VMware VMDK files associated with a virtual machine, a folder (of multiple virtual machines), an ESX server, or a whole data store. Decompress a previously compressed VMDK or set of VMDK files.

Full Clone—Make full copies of virtual machine VMDK files using new Celerra-based functionality

Fast Clone—Make thin copies of virtual machine VMDK files instantly using new Celerra NFS file-based snap functionality
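As mentioned above, here is a rough sketch of the ESX-side half of what the plug-in automates: mounting an existing Celerra NFS export as a datastore through the vSphere API, using the Python bindings (pyVmomi). The array-side steps (creating and exporting the file system on the Celerra control station) are not shown, and all the names below are placeholders.

```python
# Rough sketch of the ESX-side step only: mount an existing Celerra NFS export
# as a datastore via the vSphere API (pyVmomi). The plug-in automates this,
# plus the array-side file system creation; names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def mount_nfs_datastore(vc, user, pwd, esx_name, nfs_server, export, label):
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host=vc, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in view.view if h.name == esx_name)
        spec = vim.host.NasVolume.Specification(
            remoteHost=nfs_server,      # Celerra Data Mover interface
            remotePath=export,          # the NFS export created for the datastore
            localPath=label,            # datastore name as it appears in vCenter
            accessMode="readWrite")
        host.configManager.datastoreSystem.CreateNasDatastore(spec)
    finally:
        Disconnect(si)

if __name__ == "__main__":
    mount_nfs_datastore("vcenter.example.local", "administrator", "password",
                        "esx01.example.local", "celerra-dm2.example.local",
                        "/vmware_nfs_ds01", "celerra_nfs_ds01")
```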


Below is a very good video demonstration provided by the one and only virtualgeek, Mr Chad Sakac, demonstrating the feature.


Is your EMC solution supported? Why not check?

EMC are pretty good at making sure they test, test and test again when it comes to interoperability with other vendors. The EMC E-Lab enables you to make use of all that testing and check that your storage environment is supported with EMC.

See below for a walkthrough guide to the E-Lab storage wizard.


EMC Clariion FAST.. Fly like a butterfly, sting like a bee… really ?

So the FAST suite is available on Clariion. Good news!! If you don't know what FAST (Fully Automated Storage Tiering) is, in a nutshell it's automated storage tiering (as the name implies): it ensures that LUNs with critical performance requirements and variable IO utilisation characteristics use disk as efficiently as possible. LUNs which are accessed frequently will be serviced by one tier of disk (e.g. solid state) and others by another (e.g. Fibre Channel or SATA), and all of this is done dynamically, on the fly, using Clariion Virtual LUN technology (meaning you can migrate a LUN from one set of disks to another seamlessly to the application, retaining all the properties of the LUN); with FAST, it all happens automatically. This is especially relevant now that virtualisation is rife: being so flexible in the way servers are deployed requires that the storage is either meticulously designed and frequently reviewed and adjusted, or that the storage platform is adaptive and flexible (even more relevant in multi-tenancy environments which offer a managed service). FAST enables EMC storage to fall into the latter category.
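As a toy illustration of the tiering decision FAST automates, here is a short Python sketch. It is a sketch of the idea only, not EMC's policy engine; the IOPS thresholds and LUN names are invented for illustration.

```python
# Toy model of FAST-style LUN tiering -- illustrative only, not EMC's policy
# engine. Each LUN reports average IOPS over an observation window; hot LUNs
# are promoted towards EFD, cold LUNs demoted towards SATA, standing in for
# a seamless virtual LUN migration.

TIERS = ["SATA", "FC", "EFD"]                       # slow/cheap -> fast/expensive
PROMOTE_IOPS = 2000                                 # thresholds are invented
DEMOTE_IOPS = 200

luns = {   # name: (current tier, observed IOPS)
    "exchange_db": ("FC", 4800),
    "file_share":  ("FC", 120),
    "sql_logs":    ("SATA", 2600),
    "archive":     ("FC", 35),
}

def target_tier(tier, iops):
    idx = TIERS.index(tier)
    if iops >= PROMOTE_IOPS and idx < len(TIERS) - 1:
        return TIERS[idx + 1]
    if iops <= DEMOTE_IOPS and idx > 0:
        return TIERS[idx - 1]
    return tier

for name, (tier, iops) in luns.items():
    new = target_tier(tier, iops)
    if new != tier:
        print(f"migrate {name}: {tier} -> {new}  ({iops} IOPS)")
    else:
        print(f"leave   {name}: {tier}          ({iops} IOPS)")
```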

So, is there a sting in the tail? Is it silly money, much like Control Center was (although that did get better, to be fair)? Surprisingly… no. The FAST suite of tools is surprisingly well priced at a touch over £6,000 list. Bear in mind that the FAST suite isn't just the FAST software: it also includes Navisphere Analyser (for analysing and monitoring SAN bandwidth and IO) and QoS Manager (for performance tuning), and they throw in an assessment service to provide recommendations as to which LUNs are most suitable for migration to FAST-managed LUNs utilising EFD (solid state drives to the rest of us) and/or SATA, etc. (I'll come back to why this is required a bit later). Considering that you're looking at a £10k list price for Navisphere Analyser and QoS Manager alone, that's not a bad deal. But then it wouldn't be, as you're still looking at just under £8,000 for an enterprise flash drive, and FAST is as good a mechanism as any to drive sales of solid state drives. This isn't just a smoke-and-mirrors mechanism to sell solid state drives, though; the benefits are real. The capital expenditure involved in deploying enterprise flash drives with FAST may be undesirable to a lot of businesses, but the return on investment is, again, very real. The requirement to procure masses of FC drives to support highly transactional databases is not gone, but it is certainly minimised; the man hours required for certain laborious storage admin tasks are reduced (especially in environments with applications that have extremely variable disk loads); power and cooling requirements are reduced; the list goes on…

So why is there an assessment service? Can't I just chuck everything on FAST-managed LUNs and tell it to go do its thing?… Yes, you could. But solid state drives are still expensive, so make the best use of them that you can. I might suggest that LUNs with lesser performance requirements and predictable disk load characteristics sit on standard LUNs.

See below for FAST on EMC's VMax… now just waiting for this on Clariion and Celerra (at sub-LUN level).

And before some boxing boff corrects me on my Muhammad Ali quotation in the post title: I know it's "float like a butterfly, sting like a bee"… but cut me some slack, "float" didn't quite fit… call it creative license 😉