Category Archives: SAN

EMC World 2011 – Las Vegas – day 1

So after the first day at EMC World, what marvels of technology have been announced? What groundbreaking nuggets of geeky goodness? First things first: VPLEX! It looks like they may have cracked it..   active/active storage over synchronous distances. Geoclusters will never be the same again !!..   There was also a slightly ambiguous announcement around integration with the open-source Hadoop (more to follow on that).

What was the message of the day though? What was this year's theme? This year EMC are talking about big data and the cloud. Clearly the recent acquisitions of Isilon and Greenplum have planted EMC's head firmly back in the clouds. Greenplum gives end users the ability to scale out database architectures for data analytics to mammoth scale, with its distributed node architecture and massive parallel processing capabilities. To be frank, learning about the technology was borderline mind numbing, but my god it's a cool technology. Then we have large scale-out NAS with Isilon and its OneFS system, giving the ability to present massive NAS repositories and grow them on a grand scale. So obviously, EMC are talking about big data.

I also had the opportunity to sit in on an NDA VNX/VNXe session, and what they're going to do is….    aaah, I'm not that stupid. Needless to say, there are some nice additions on the way: the usual higher-capacity, smaller-footprint drives squeezing more IO into less U space, but also some very cool stuff which will enable EMC to offer a much cheaper entry point for compliance-ready storage..  watch this space.

In true style, EMC threw out some interesting IDC-touted metrics, further justifying the need to drive storage efficiencies and reiterating the fact that there will always be a market for storage. Our digital universe currently consists of 1.2 zettabytes of data, of which 90% is unstructured, and that figure is predicted to grow 44x over this decade. Also, 88% of Fortune 500 companies have to deal with botnet attacks on a regular basis and contend with 60 million malware variants. Making this relevant, the three main pain points of end users are: firstly our age-old friend, budget; then explosive data growth; and securing data.
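
For a sense of scale, here's a quick back-of-envelope sketch (my arithmetic, not IDC's) of what 44x growth over a decade implies year on year:

```python
# Back-of-envelope: 44x growth over a decade implies ~46% compound
# annual growth in the digital universe.
decade_growth = 44.0
years = 10
cagr = decade_growth ** (1 / years) - 1
print(f"Implied annual data growth: {cagr:.0%}")   # ~46%

# Starting from 1.2 ZB, that trajectory lands at ~53 ZB by decade's end.
start_zb = 1.2
print(f"Projected size: {start_zb * decade_growth:.0f} ZB")
```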

So how have EMC addressed these? Well, budget is always a fun one to deal with, but with efficiencies in storage by way of deduplication, compression, thin provisioning and auto-tiering of data, end users should get more bang for their buck; EMC easing up on the reins with pricing around Avamar, plus the low entry point of VNXe, should also help the case. Explosive data growth is again tackled with deduplication, compression, thin provisioning and auto-tiering, but also with more varied ways of dealing with large sums of data through technologies such as Atmos, Greenplum and Isilon. Then there's the obvious acquisition of RSA to tie in with the security message, albeit one that has had its challenges.

I was also recently introduced to the new cloud architect certification track and the concept of a Data Scientist (god knows, but I'll find out). So I went over to the Proven Professionals lounge and had a chat with the guys that developed the course. Essentially it gives a good foundation for the steps to consider when architecting a company's private cloud, around storage, virtualisation, networking and compute. If you're expecting a consolidated course covering the storage courseware, the Cisco DCNI2 and DCUCD courses and VMware Install Configure Manage, then think again, but it does set a good scene as an overlay to understanding these technologies. It also delves into some concepts around cloud service change management and control considerations, and the concept of a cloud maturity model (essentially EMM, but more cloud specific). I had a crack at the practice exam and passed with 68%; aside from not knowing the specific cloud maturity terms and EMC-specific cloud management jargon, anyone with knowledge of servers, Cisco Nexus and networking, plus virtualisation, shouldn't have too many issues, but you may want to skim over the video training package.

There was also a nice shiny demo from the Virtual Geek himself, Chad Sakac, showing the new Ionix UIM 2.1 with vCloud integration, using CSC's cloud service to demonstrate not only the various subsets of multi-tenancy, but also mobility between disparate systems. When they integrate with public cloud providers such as Amazon EC2 and Azure, then things will really hot up. But maybe we need some level of cloud standards in place?…   We all know the problem with standards though: innovation gives way to bureaucracy and slows up…   then again, with recent cloud provider issues, maybe it couldn't hurt to enforce a bit of policy which lets the market slow down a little and take a more considered approach to the public cloud scenario..   who knows?

Anyway.. watch this space..  more to come


Protocol considerations with VMware

A good video I came across from EMC discussing some storage protocol considerations when looking at VMware.


Administration of Clariion with VMware… Getting easier

So, EMC released the NFS plugin for VMware a while back, to support storage administration tasks on Celerra from the VI Client, which was very cool and had some very impressive features..    but what about the traditional SAN man?!

Well, yesterday EMC announced a VMware plugin for Clariion..

Product Overview

The EMC CLARiiON Plug-in for VMware simplifies storage administration between the VMware Virtual Center Server and CLARiiON storage systems. It offers end-to-end management of storage-related tasks, including provisioning of datastores, provisioning of raw device mapping (RDM) devices, and array-based virtual machine replication.

New Feature Summary 

The EMC CLARiiON Plug-in for VMware allows you to perform the following specific tasks directly from the VMware vSphere client:

  • Provision new datastores (VMFS volumes) or raw device mapping (RDM) volumes
  • Delete existing datastores backed by CLARiiON CX4 storage
  • Create virtual machine replicas using array-based replication services
  • Publish the replicated virtual machines to a View Manager

Notes

  • EMC CLARiiON Plug-in for VMware is customer-installable.
  • EMC CLARiiON Plug-in for VMware requires CX4 storage systems running Release 29 FLARE.

That's all I have at the minute, but I'll be picking the brains of the EMC bods as I go to get some more info.

A very useful feature though!!


SAN-based replication? No problem.. Latency? Problem..

Disaster recovery is moving higher and higher up the agenda on companies' "to do" lists. It's becoming increasingly apparent what the costs to a given business are when companies suffer downtime and/or loss of data..   people are starting to think about the monetary cost to the business when services or applications are unavailable to both internal staff and, more importantly, customers. And with the big push of server virtualisation over the last few years, where is the application data, the file data and the application server itself sitting? On the SAN; so it makes sense to leverage that existing infrastructure and use some form of SAN-based replication.

Bearing in mind the SAN is no longer a luxury only the privileged enterprise has access to, and is becoming ever more important to even small businesses..  not all these organisations have access to biiiig dedicated links between sites, and if they do, they're probably subject to significant contention. Unfortunately, TCP isn't the most efficient of protocols over distance.

So what do you do to make sure the DR solution you have in mind is feasible and realistic?

Firstly, make sure you pick the right technology

First port of call is sitting down with the customer and mapping out the availability requirements of their applications: things like the RPO/RTO requirements of the applications they have in use. A lot of the time the company may not have thought about this in a lot of detail, so you can really add value here if you are a reseller. Ultimately it boils down to the following being considered for each service:

  • How much downtime can you afford before the business starts losing money on each given application?
  • How much data can you afford to lose in the event of a disaster, before it does significant damage to the business?

 

If you can get them to apply a monetary figure to the above, it can help when positioning return on investment.    
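
As a hypothetical worked example (my own figures and helper names, not a formal methodology), even something as simple as the sketch below can frame that return-on-investment conversation:

```python
# Hypothetical illustration of costing RTO/RPO per service.
def downtime_cost(revenue_per_hour, rto_hours):
    """Rough cost of an outage: money lost while the service is down."""
    return revenue_per_hour * rto_hours

def data_loss_cost(tx_per_hour, value_per_tx, rpo_hours):
    """Rough cost of lost data: transactions written since the last replica."""
    return tx_per_hour * value_per_tx * rpo_hours

# Example service: an order-entry application (all figures invented).
print(downtime_cost(revenue_per_hour=5000, rto_hours=4))                # 20000
print(data_loss_cost(tx_per_hour=2000, value_per_tx=1.5, rpo_hours=1))  # 3000.0
```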

Array-based replication normally comes in three flavours: asynchronous, synchronous, and journaling/CDP. Synchronous replication can be a bit risky for a lot of businesses, as application response time usually becomes dependent on writes being committed to disk on both production and DR storage (thus application response times also become dependent on the round-trip latency across the link between the two sites; spindle count becomes very important on both sites here too). I often find that, aside from banks and large conglomerates, the main candidate for synchronous replication in the SMB space is actually universities. Why? Because often universities don't replicate over massive distances; they will have a campus DR setup where they replicate over a couple of hundred metres from building to building, so laying fibre in this case isn't too costly. However, for the average SMB who wants to replicate to another town, synchronous replication isn't usually preferable, due to latency over distance and the cost of the large link required.
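
To put some rough numbers on that latency point (my own figures, ignoring protocol and equipment overhead): light in fibre covers roughly a kilometre every 5 microseconds, and every synchronous write pays the round trip.

```python
# Rough sketch: effective write latency under synchronous replication.
# Assumes ~5 us per km one way in fibre; ignores switch/protocol overhead.
def sync_write_latency_ms(local_write_ms, distance_km, us_per_km=5):
    round_trip_ms = 2 * distance_km * us_per_km / 1000
    return local_write_ms + round_trip_ms

print(sync_write_latency_ms(1.0, 0.2))  # campus DR (~200 m): ~1.0 ms
print(sync_write_latency_ms(1.0, 100))  # 100 km between towns: 2.0 ms
```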

MirrorView Asynchronous (EMC)

Asynchronous replication is typically what I see in the case of most small to medium-sized businesses. Why? Firstly, because application response times are not held hostage to the round-trip time of the link, as they are with synchronous replication. With asynchronous replication, a copy-on-first-write mechanism is usually utilised to effectively ship snapshots at specified intervals over an IP link. Below is a diagram showing how EMC MirrorView/A does this:

[Diagram: EMC MirrorView/A asynchronous replication cycle]

EMC uses what's called a Delta Bitmap (a representation of the data blocks on the volume) to track what has been sent to the secondary array and what hasn't. This Delta Bitmap works in conjunction with reserve LUNs (the Delta Set) on the array to ensure that the data sent across to the secondary array remains consistent. The secondary also has reserve LUNs in place, so that if replication were interrupted or the link was lost, the secondary array can roll back to its original form and the data isn't compromised.
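
As a toy illustration of the idea (illustrative only, not EMC's implementation), a delta bitmap boils down to one dirty flag per block, cleared as each delta set is shipped to the secondary:

```python
# Toy delta bitmap: one bit per block, set on write, cleared once shipped.
class DeltaBitmap:
    def __init__(self, num_blocks):
        self.dirty = [False] * num_blocks

    def on_write(self, block):
        self.dirty[block] = True          # block changed since last update

    def next_delta_set(self):
        """Blocks to ship in the next asynchronous update cycle."""
        delta = [b for b, d in enumerate(self.dirty) if d]
        for b in delta:
            self.dirty[b] = False
        return delta

bm = DeltaBitmap(8)
bm.on_write(2); bm.on_write(5)
print(bm.next_delta_set())  # [2, 5] -> shipped to the secondary array
```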

Also, you can use higher-capacity, less expensive disks on the DR site without affecting response times in production (although application response times will still be affected in the event of a failover, as servers will be accessing disk on the DR box). One potential drawback with asynchronous replication is that, as the two SANs are no longer in a synchronous state, you have to decide whether it is important that your remote copies of data are in an application-consistent state. If it is, then you'll have to look at a technology which sits on the host, talks to the application and also talks to the storage. In the EMC world we have a tool called Replication Manager, which does all the various required bits on the host side (calling VSS/hot backup mode, flushing host buffers, etc.).

Replication Manager is licensed per application server (or virtual server in a cluster) and also requires an agent per mount host, plus a server licence (or two, depending on the scenario). There is a lot more to Replication Manager, but that's a whole post in itself.
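
Conceptually though, the host-side sequence such a tool drives looks something like this sketch (the names are hypothetical placeholders, not Replication Manager's actual API):

```python
# Conceptual application-consistent replica sequence (hypothetical API).
def app_consistent_replica(app, storage):
    app.quiesce()                     # e.g. VSS freeze or hot backup mode
    try:
        app.flush_buffers()           # push cached writes down to the array
        storage.mark_replica_point()  # consistent point for the next update
    finally:
        app.resume()                  # never leave the application frozen
```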

EMC RecoverPoint    

RecoverPoint is another replication technology from EMC which allows very granular restore points and small RPOs over IP, because it employs journalling rather than copy-on-first-write. It journals and timestamps writes at very regular intervals (almost every write in some cases), allowing you to roll back volumes to very specific, granular points in time. See the below diagram for more detail:

[Diagram: EMC RecoverPoint journal-based replication]
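
As a toy illustration of the journalling idea (nothing like RecoverPoint's actual on-disk format), timestamping every write lets you rebuild a volume image at any recorded point:

```python
# Toy write journal: replaying entries up to a timestamp rebuilds the
# volume as it looked at that point in time.
class Journal:
    def __init__(self):
        self.entries = []  # (timestamp, block, data), kept in time order

    def record(self, ts, block, data):
        self.entries.append((ts, block, data))

    def image_at(self, ts):
        """Rebuild the volume as it looked at time ts."""
        volume = {}
        for t, block, data in self.entries:
            if t > ts:
                break
            volume[block] = data
        return volume

j = Journal()
j.record(1, 0, "a"); j.record(2, 0, "b"); j.record(3, 1, "c")
print(j.image_at(2))  # {0: 'b'} -- the state before the write at t=3
```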

RecoverPoint provides out-of-band replication: to be considered out-of-band, the RecoverPoint appliance must not be involved in the primary I/O path. Instead, a component of RecoverPoint called the splitter (or Kdriver) intercepts writes destined for a volume being replicated by RecoverPoint. The write is then split ("copied"), with one copy being sent to the RecoverPoint appliance and the original being sent to the target.

With RecoverPoint, three types of splitter can be used. The first resides on a host server that accesses a volume being protected by RecoverPoint. This splitter sits in the I/O stack, below the file system and volume manager layers and just above the multi-path layer. It operates as a device driver, inspecting each write sent down the I/O stack and determining whether the write is destined for one of the volumes RecoverPoint is protecting. If the write is destined for a protected LUN, the splitter sends the write downward and rewrites the address packet so that a copy of the write is sent to the RecoverPoint appliance. When the ACK (acknowledgement) for the original write is received, the splitter waits until a matching ACK is received from the RecoverPoint appliance before sending an ACK up the I/O stack. The splitter can also be part of the storage services on intelligent SAN switches from Brocade or Cisco.
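
A conceptual sketch of that write path (my own pseudocode, not RecoverPoint source):

```python
# Host-based splitter write path, conceptually: duplicate writes to
# protected LUNs, and only ACK up the stack once both copies have landed.
def split_write(write, protected_luns, send_to_target, send_to_appliance):
    """Returns once the write may be acknowledged up the I/O stack."""
    if write["lun"] not in protected_luns:
        send_to_target(write)          # pass-through, splitter uninvolved
        return "ack"

    send_to_target(write)              # original write continues downward
    send_to_appliance(dict(write))     # duplicated copy to the appliance
    # In the real driver both sends are asynchronous, and the splitter
    # waits for a matching ACK from each before completing the I/O.
    return "ack"

print(split_write({"lun": 7, "block": 42, "data": b"x"},
                  protected_luns={7},
                  send_to_target=lambda w: None,
                  send_to_appliance=lambda w: None))
```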

For a CLARiiON CX4 or CX3, the CLARiiON storage processor also has a write splitter. When a write enters the CLARiiON array (either through a Gigabit Ethernet port or a Fibre Channel port), its destination is examined. If it is destined for one of the LUNs being replicated by RecoverPoint, a copy of that write is sent back out of one of the Fibre Channel ports of the storage processor to the RecoverPoint appliance. Since the splitter resides in the CLARiiON array, any open-systems server qualified for attachment to the CLARiiON array can be supported by RecoverPoint. Additionally, both Fibre Channel and iSCSI volumes residing in the CLARiiON CX4 or CX3 storage array can be replicated by RecoverPoint. RecoverPoint/SE only supports a Windows host-based splitter and the CLARiiON-based write splitter; automatic installation and configuration for RecoverPoint/SE only supports the CLARiiON-based write splitter.

Below is a video from EMC demonstrating RecoverPoint in a VMware environment:


Optimise

So how do we ensure we are getting the most out of the links we use (especially over contended links such as VPN or MPLS)? WAN optimisation..  there are a number of ways this can be done. Some use an appliance to acknowledge back to the production SAN locally, then cache the data and burst it over the WAN. Some companies have found more efficient ways of transmitting data over a WAN, using proprietary protocols to replace TCP over the WAN (such as HyperIP). Below is a snippet from a mail I received from a company called Silver Peak, who seem to deal with the challenges of optimising WAN efficiency quite well, in particular with SAN replication:

“Just a few years ago, it was unheard of to combine SAN traffic with other storage applications, let alone on the same network as non-storage traffic. That is no longer the case. Silver Peak customers like NYK logistics are doing real-time data replication over the Internet. Want to learn more? Here is a demo of EMC replication running across a shared WAN ”  
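
The local-acknowledgement trick mentioned above can be sketched roughly as follows (a simplification of my own, not Silver Peak's design):

```python
# Local-ACK WAN optimisation, conceptually: ACK the production SAN at
# LAN speed, queue the data, and drain it over the WAN in the background.
from collections import deque

class WanOptimiser:
    def __init__(self):
        self.queue = deque()

    def receive_from_san(self, chunk):
        self.queue.append(chunk)   # cache locally
        return "ack"               # SAN sees LAN latency, not the WAN RTT

    def drain_to_wan(self, send):
        while self.queue:
            send(self.queue.popleft())  # burst over the contended link

opt = WanOptimiser()
print(opt.receive_from_san(b"delta-set-1"))  # 'ack' comes back immediately
opt.drain_to_wan(lambda chunk: None)
```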

In summary

Replication is a biiiig topic..  there are many more factors to be considered, such as automation, cluster awareness, etc. I think the best way to summarise this post is…

To be continued     


Building Blocks of a Vblock

Seeing as lots of people are asking lots of questions around EMC, VMware and Cisco's Vblock, I thought I'd best dig something out. Attached is a very concise, granular document which outlines the different elements of a Vblock, how the disks are configured, supported Vblock applications and…   some pretty pictures for your delectation.

solution_overview_vce

The clip below is the Cisco Vice President talking about the various Vblock packages.


An Apple a day…. could help EMC get back into the Mac space?

Now, for the bulk of organisations (in the UK at least), the majority of business applications are hosted on operating systems such as Windows, Linux, HP-UX and Solaris. EMC do very well with these organisations; they have extensive lists of supported operating systems, with all their revisions and service-pack releases to boot. For these organisations, and resellers selling into them, life is good, interoperability is rife and big vendors such as EMC give them much love. But there is another world out there, one often overlooked by the likes of EMC…   a world of glorious white, multicoloured fruit and virus-free environments..   I shall call this place Mac land, often visited by the likes of graphic design, advertising and publishing companies.

Without a support statement in sight involving the words Mac OS X for some years, and the likes of Emulex and QLogic not forthcoming with a resolution, the future was looking bleak for resellers wanting to sell EMC SAN storage into Mac environments.  But wait!! A solution has presented itself!! In the form of a company called ATTO Technology..  much like Saint Nick delivering presents in the night..  these guys are sneaking Mac OS X support statements onto EMC interoperability support matrices. I heard no song and dance about this!? But I was pleased to see it nonetheless….

The supported range of FC HBAs comes in single-port, dual-port and quad-port models (FC-41ES, FC-42ES, FC-44ES), and the iSCSI software initiator is downloadable from their website.

Support covers Mac OS X 10.5.5 through 10.5.10 on Apple Xserve servers and Intel-based Mac Pro workstations attaching to EMC's CX4 range only. Rather than just providing basic support out of necessity, there are a few bells and whistles: multipathing is supported with ATTO's own multipathing driver, which integrates with ALUA on the Clariion, and a number of Brocade, Cisco MDS and QLogic SANbox switches are supported (with the exception of a few popular recent switches, such as the Brocade Silkworm 300, 5100 and 5300 and the QLogic SANBox 1404). Also, ATTO have released an iSCSI software initiator for iSCSI connectivity to Clariion or Celerra, which is also supported.

Just a brief disclaimer..   I've mentioned some specific support statements; that is not to say that EMC would not support switches which aren't currently listed, but you may have to jump through some hoops to get your solution supported if certain elements aren't on the standard support statements. I would recommend checking the relevant support statements from EMC if you are a Mac user looking at EMC, just to make sure your bases are covered.

Take a look at the press release from ATTO Technology here


Is your EMC solution supported? Why not check?

EMC are pretty good at making sure they test, test and test again when it comes to interoperability with other vendors. The EMC E-Lab enables you to make use of all that testing and check that your storage environment is supported with EMC.

See below for a walkthrough guide of the E-Lab storage wizard.


The Long awaited….

A few new things are coming out on EMC's mid-tier storage range to look out for. One of them is very discreetly named Project Odin, and it will make the lives of EMC Celerra users and resellers alike a touch easier. It's a management console to manage a Celerra and its respective back-end Clariion, rather than having to jump into Navisphere to manage the Clariion directly! From what I gather it will run against any DART or FLARE OS: it is pointed at the system it needs to manage via IP address, then the appropriate profile is loaded to reflect the functions relevant to said Clariion/Celerra..  about time!!   As I understand it, there will be an announcement in Feb..  but it won't be going GA for a little while. Watch this space!!


EMC Clariion FAST.. Fly like a butterfly, sting like a bee… really?

So the FAST suite is available on Clariion. Good news!! If you don't know what FAST (Fully Automated Storage Tiering) is: in a nutshell, it's automated storage tiering (as implied in the name). It ensures that LUNs which may have critical performance requirements and variable IO utilisation characteristics are using disk as efficiently as possible. LUNs which are accessed frequently will be serviced by one tier of disk (i.e. solid state) and others by another (i.e. Fibre Channel disk or SATA), and all this is done dynamically, on the fly, using the Clariion Virtual LUN technology (meaning you can migrate a LUN from one set of disks to another seamlessly, transparent to the application and retaining all the properties of the LUN). This is especially relevant now that virtualisation is rife: such flexibility in server deployment requires that the storage is either meticulously designed and frequently reviewed and adjusted, or that the storage platform is adaptive and flexible (even more relevant in multi-tenancy environments which offer a managed service). FAST enables EMC storage to fall under the latter.
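
As a toy sketch of the concept (my own illustration, not EMC's actual FAST algorithm), the decision boils down to ranking LUNs by recent IO and mapping the hottest onto the fastest tier:

```python
# Toy tiering planner: hottest LUNs land on the fastest tier.
def plan_tiers(lun_iops, tiers):
    """lun_iops: {lun: avg IOPS}; tiers: [(name, capacity_in_luns)], fastest first."""
    ranked = sorted(lun_iops, key=lun_iops.get, reverse=True)
    plan, i = {}, 0
    for name, capacity in tiers:
        for lun in ranked[i:i + capacity]:
            plan[lun] = name           # would trigger a seamless LUN migration
        i += capacity
    return plan

iops = {"sql_log": 4200, "exchange": 1800, "archive": 40, "fileshare": 300}
print(plan_tiers(iops, [("EFD", 1), ("FC", 2), ("SATA", 10)]))
# {'sql_log': 'EFD', 'exchange': 'FC', 'fileshare': 'FC', 'archive': 'SATA'}
```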

So, is there a sting in the tail? Is it silly money, much like Control Center was (although that did get better, to be fair)?..    Surprisingly…   no. The FAST suite is surprisingly well priced at a touch over £6,000 list. Bear in mind that the FAST suite isn't just the FAST software: it also includes Navisphere Analyser (for analysing and monitoring SAN bandwidth and IO) and QoS Manager (for performance tuning), and they throw in an assessment service to provide recommendations as to which LUNs are most suitable for migration to FAST-managed LUNs utilising EFDs (solid state drives to the rest of us) and/or SATA (I'll come back to why this is required a bit later). Considering that you're looking at a £10k list price for Navisphere Analyser and QoS Manager alone, that's not a bad deal. But then it wouldn't be, as you're still looking at just under £8,000 for an enterprise flash drive, and FAST is as good a mechanism as any to drive sales of solid state drives. This isn't just a smoke-and-mirrors mechanism to sell solid state drives though; the benefits are real. The capital expenditure involved in deploying enterprise flash drives with FAST may be undesirable to a lot of businesses, but the return on investment is again very real: the requirement to procure mass amounts of FC drives to support highly transactional databases is not gone, but certainly minimised; man hours required for certain laborious storage admin tasks are reduced (especially in environments which have applications with extremely variable disk loads); power and cooling requirements are reduced; the list goes on..

So why is there an assessment service? Can't I just chuck everything on FAST-managed LUNs and tell it to go do?…   Yes, you could. But solid state drives are still expensive, so make the best use of them you can. I might suggest that LUNs with a lesser performance requirement and predictable disk load characteristics sit on standard LUNs.

See below for FAST on EMC's VMax..   now just waiting for this on Clariion and Celerra (at sub-LUN level)

and before some boxing boff corrects me on my Muhammad Ali quotation in the post title: I know it's "float like a butterfly, sting like a bee"..  but cut me some slack, float didn't quite fit.. call it creative licence  😉


FCoE or not FCoE?.. that is the question

So I spent a day at Cisco last Friday for a cram training session on their new Nexus range of switches. Aside from some very impressive flexibility and speeds and feeds (12 terabits per second of capability, if you're interested) and some very cool virtualisation features in the switches, one thing that came up was FCoE (Fibre Channel over Ethernet). FCoE effectively encapsulates a Fibre Channel frame into an Ethernet frame with no extra frills and no additional clunky IP protocols..  keep it simple, keep it quick, as opposed to iSCSI, which carries SCSI commands and provides block-level storage connectivity over TCP/IP and is prone to packet loss. FCoE uses something coined Enhanced Ethernet, with priority flow control, where packets are tagged with a Class of Service (CoS) value: packets with a certain CoS value (namely FCoE packets) get no-drop, reliable behaviour, while other packets use the familiar 'best effort' policy of traditional Ethernet. This means that Ethernet can be a viable medium to carry Fibre Channel traffic within the data centre, where iSCSI is not. Don't get me wrong, iSCSI still has its place; FCoE is currently only supported over 10GbE, and it will be a while until many organisations adopt 10GbE fully.
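
The encapsulation really is that simple. Here's a minimal sketch of the idea (the FCoE EtherType is 0x8906; the real spec adds an FCoE header with SOF/EOF markers and relies on the priority tagging described above, all omitted here):

```python
# Minimal FCoE idea: an FC frame dropped straight into an Ethernet frame,
# with no TCP/IP layers in between. Field layout simplified for illustration.
import struct

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    ethertype = struct.pack("!H", 0x8906)   # identifies an FCoE payload
    return dst_mac + src_mac + ethertype + fc_frame

frame = fcoe_frame(b"\xff" * 6, b"\xaa" * 6, b"<encapsulated FC frame>")
print(len(frame), frame[12:14].hex())       # 37 8906
```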

Currently FCoE is geared more to the datacentre, the idea being that less cabling is required within the datacentre rack, as both storage connectivity and standard IP traffic can be delivered through one form of connectivity: no need for separate NICs and FC HBAs…  Converged Network Adapters, or CNAs, are the new flavour (basically a NIC which supports FCoE). This means that server build times are reduced, doubling up on switching infrastructure to support FC and IP traffic is no more, cabling is reduced and everything is generally tidier.

There are still arguments against FCoE..   to cite one example from the blog Etherealmind.com, and the flip side of the argument from a Cisco chappy, Dante Malagrino (see here for his response to the previous link), so it's not to the taste of all…

Have a dig into it and make your own judgements.

Some good sites of reference:

http://www.fcoe.com/

http://features.techworld.com/storage/3306/fcoe–or-iscsi/