Tag Archives: emc

Implementing your own corporate drop box ?

Upon perusing the Intel Cloud Builders site for interesting new cloudy vendors and reference architectures, I came across an interesting new company called Oxygen Cloud. Although Storage as a Service is a reasonably well-formed concept, much of the attention has been around public provider services such as Livedrive and Dropbox, or backup products such as EMC Mozy. This is all well and good, but a number of companies have concerns over how these "public cloud" type products align to corporate policy. Take Dropbox for example: the ease with which data is shared or migrated across to other devices maybe doesn't align to how they want to control one of an organisation's most valuable commodities.. data.

So how does an organisation offer device-agnostic storage, not based on the constraints of conventional file systems, in such a fashion that they maintain control? Ultimately there are 101 ways to skin a cat… but as far as skinning cats goes, I quite like this one.

The Back End

You take a product like EMC Atmos; EMC Atmos is what we call cloud-optimised storage. In real terms this means that the way data is stored, how available it is, how it's tiered across storage of differing cost and where it is stored geographically is all handled by repeatable policy. Not only this, but metadata is leveraged to the nth degree (well beyond traditional metadata usage in a traditional file system). I won't re-invent the explanation, as EMC has done a good job of explaining this concept with pretty pictures (video below).

Atmos itself has a fair amount to it, but my point is that this use of metadata means not only that the way data is handled can be derived from it, but that the infrastructure can now have some awareness of the context of data, context which is relevant to a front end such as Oxygen Cloud. Yes, Atmos can deliver storage over NFS or CIFS; this is fine, but not overly exciting. The cool part is giving a front end direct access to the context of a file or a set of files using REST, rather than just last modified date and all the usual stuff. The metatags can be used to define the segregation of data in a multi-tenant environment or application-specific elements, such as how a file can be shared and with whom.
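To make that a little more concrete, below is a rough sketch of what tagging an object with business context over the Atmos REST interface could look like. The endpoint, paths and header names here are from memory and simplified (a real call also needs the x-emc-uid and HMAC x-emc-signature authentication headers), so treat it as illustrative rather than copy-and-paste ready:

```python
import requests

ATMOS = "https://atmos.example.com"   # hypothetical endpoint

# Business context travels with the object as user metadata;
# listable metadata can later be queried/filtered on by a front end.
HEADERS = {
    "x-emc-meta": "department=finance,classification=internal",
    "x-emc-listable-meta": "shared-with=partner-acme",
    "Content-Type": "application/octet-stream",
}

def create_tagged_object(path, data):
    """Create an object in the Atmos namespace with metadata attached."""
    return requests.post(f"{ATMOS}/rest/namespace/{path}", data=data, headers=HEADERS)

def read_user_metadata(path):
    """Read the tags back - a front end like Oxygen Cloud could use a tag
    such as 'shared-with' to decide who may see or share the file."""
    resp = requests.get(f"{ATMOS}/rest/namespace/{path}?metadata/user", headers=HEADERS)
    return resp.headers.get("x-emc-meta")
```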

Also, with Atmos being scale-out storage, the upper limit of scalability is near enough endless (or as near as you'll need). The beauty of the storage being content addressable, rather than based around hierarchical file systems, means that as the system grows you are not constrained and challenged by overly complex file system structures which need to be maintained.

Clearly availability is important, but hey..  this is expected. Needless to say, the system handles it very well.


The Front End

I'm not going to spend a great deal of time upping my word count on this section, as Oxygen Cloud have some very descriptive videos (further down), but the key thing here is that the company controls the data in its own way. We have LDAP/AD integration, full access controls, we can set expiration on a link if we do share a file publicly, there is encryption at all points of a file's transit, and files can be presented as a normal Explorer/Finder plugin (the same way we view normal CIFS shares) or accessed via devices such as an iPhone/iPad. One nice feature for me is that if a phone is stolen or an employee leaves, the organisation can sever access to data/directories on a per-user or per-device basis.

Anyway, worth spending a bit of time watching the below :


I shall be building this solution out on the lab over the next month or so (as much as the day job allows), so watch this space for more info and a revised review.


What is a VBlock.. the latest

Overview

Back in 2009 VMware, Cisco and EMC joined forces to create a new approach to selling pre-configured, full data centre solution stacks. Rather than simply a gentlemen's agreement and a cross-pollination of development from the three companies, it was decided they would create a new start-up business as the delivery mechanism to drive this new concept to market. This new start-up, known as VCE (Virtual Computing Environment), would take to market a new range of pre-validated, pre-configured and singularly supported solution stacks called VBlock.

The purpose of a VBlock is to simplify infrastructure down to, effectively, units of IT, and to define that a workload can be supported by "a number of floor tiles" in the data centre. This approach is enabled by the fact that everything within a VBlock is pre-validated from an interoperability perspective, and customisable components are reduced down to packs of blades (compute), disks and the network components required to connect into the upstream customer environment. This means that solution design is massively simplified and can focus on supporting the identified workload.

Pre-Validated

VCE extensively soak test workloads and configurations available within the VBlock to reduce pre-sales time spent on researching interoperability between the Network/compute/storage layers of the Data centre. This means that defining how a workload is supported is the focus and planning phases are significantly reduced. This pre-validated approach means that power and cooling requirements are easily determined  in preparation for site deployment.

Pre-Built and Pre-Configured

As part of the VBlock proposition, the physical and logical build processes are carried out in VCE facilities, so that time on the customer site is restricted to that of integrating into the customer environment and application-layer services. This reduces deployment time massively.

Single Support Presence

Rather than dealing with the parent companies of VCE (VMware, Cisco, EMC) on a per-vendor basis, VCE act as a single support presence and will own any VBlock-related issue end to end. This is partly enabled by the pre-validated aspect of VBlock: VCE have a number of VBlocks in house and, provided a VBlock is constructed as per the approved architectures, VCE can simulate the environment which has caused the error to decrease time to resolution.

The Technology

The technology at the core of the VBlock consists of VMware vSphere, Cisco UCS (Cisco's unified compute solution), Cisco Nexus (Cisco's unified fabric offering) and the EMC VNX unified storage platform. Cisco simplify management of their blade computing platform down to a single point of management (UCS Manager), which resides on the 6100 Fabric Interconnects and allows for "stateless" computing, in that it is possible to abstract the server "personality" (MAC addresses, world wide names, firmware, etc.) away from the server hardware, then create and apply these personalities on demand to any blade within the UCS system. This management system covers all aspects of UCS (blade/chassis management, connectivity and firmware). Cisco's unified fabric commonly refers to their Nexus range (though elements of unified fabric apply to UCS as well). Cisco Nexus allows both IP network traffic and Fibre Channel traffic to be delivered over common 10 Gigabit switches using FCoE (Fibre Channel over Ethernet). In addition, the Cisco Nexus 1000V enables deployment of a virtual switch within the VMware environment, allowing network services to be deployed within virtual infrastructure where previously they were only possible in the physical world.
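To illustrate the "stateless computing" idea (a conceptual sketch only, not the UCS Manager API): the server personality is just a profile object held by the management system, and applying it to a blade, or to a replacement blade after a failure, is a re-association rather than a rebuild.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ServiceProfile:
    """The server 'personality' - identity lives here, not on the blade."""
    name: str
    mac_addresses: List[str]   # drawn from a MAC pool
    wwpns: List[str]           # world wide port names from a WWN pool
    firmware_policy: str       # firmware revision the blade must run
    boot_policy: str           # e.g. boot from a SAN LUN

@dataclass
class Blade:
    chassis: int
    slot: int
    profile: Optional[ServiceProfile] = None

def associate(profile: ServiceProfile, blade: Blade) -> Blade:
    """Apply the personality to any available blade; if that blade fails,
    the same profile can simply be associated with a spare."""
    blade.profile = profile
    return blade

esx01 = ServiceProfile("esx01", ["00:25:B5:00:00:1A"],
                       ["20:00:00:25:B5:0A:00:01"], "1.4(1)", "boot-from-san")
associate(esx01, Blade(chassis=1, slot=3))
```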

EMC VNX is a multi-protocol storage array allowing for storage connectivity via block storage technologies (iSCSI/Fibre Channel) or NAS connectivity (CIFS/NFS/pNFS), giving the end user free choice as to how storage is presented to the UCS server estate. EMC also drive efficiencies in how capacity and performance are handled by leveraging technologies such as deduplication and thin provisioning to achieve a lower cost per gigabyte. EMC are also able to leverage solid state disk technologies to extend storage cache, or to enable sub-LUN tiering of data between solid state disk and traditional mechanical disk technologies based on data access patterns.
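As a purely conceptual illustration of why thin provisioning drives that lower cost per gigabyte: the array promises the host the full LUN size up front, but only backs blocks with physical capacity when they are first written, so disk can be purchased as consumption grows rather than on day one. A toy model of the idea:

```python
class ThinLUN:
    """Toy model of a thin-provisioned LUN: advertised capacity is fixed,
    physical blocks are only allocated on first write."""
    BLOCK_MB = 1

    def __init__(self, advertised_gb):
        self.advertised_gb = advertised_gb
        self.allocated = {}                 # block number -> data

    def write(self, block, data):
        self.allocated[block] = data        # allocation happens here, on demand

    def consumed_gb(self):
        return len(self.allocated) * self.BLOCK_MB / 1024

lun = ThinLUN(advertised_gb=1024)           # host sees a 1 TB LUN
lun.write(42, b"some data")
print(lun.advertised_gb, lun.consumed_gb()) # 1024 GB advertised, ~0.001 GB consumed
```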

VMware vSphere has provided many companies with cost savings in the past, but in the VBlock it is leveraged to maximum effect to provide operational efficiencies, with features such as dynamic, automated mobility of virtual machines between physical servers based on load, high availability, and the native integration between VMware and EMC through the VAAI API. This integration enables much lower SAN fabric utilisation for what were very intensive storage network operations, such as storage migration. EMC PowerPath/VE is also included in the VBlock, which enables true intelligent load balancing of storage traffic across the SAN fabric.

Management

VCE utilise Ionix Unified Infrastructure Manager (UIM) as a management overlay which integrates with the storage, compute, network and virtualisation technologies within the VBlock and allows high-level automation of, and operational simplicity in, how resources are provisioned within the VBlock. UIM will discover resources within the VBlock and the administrator then classifies those resources. As an example, high-performance blades may be deemed "Gold" blades versus lower-specification blades which may be classified as "Silver" blades. This classification is also applied to other resources within the VBlock, such as storage. Once resources have been classified, they can be applied on a per-tenancy/application/department basis, with each being allowed access to differing levels of Gold/Silver/Bronze resources within the VBlock. UIM now also includes operational aspects which give end-to-end visibility of exactly which hardware within a VBlock a particular VM is utilising (blades, disks, etc.). Native vendor management tools can still be utilised, although with the exception of vCenter, UIM would be the point of management for 90% of VBlock tasks after initial deployment.
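The classification step is essentially building a small service catalogue. Something along these lines (illustrative only, not UIM's actual data model): discovered resources are graded, and a tenancy or application is then entitled to provision from particular grades.

```python
# Illustrative service catalogue - not UIM's real schema.
resource_grades = {
    "gold":   {"blades": ["B-series high-memory"], "storage": ["EFD/FC auto-tiered pool"]},
    "silver": {"blades": ["B-series standard"],    "storage": ["FC pool"]},
    "bronze": {"blades": ["B-series standard"],    "storage": ["SATA pool"]},
}

tenant_entitlements = {
    "finance-erp":  ["gold"],
    "test-and-dev": ["silver", "bronze"],
}

def resources_for(tenant: str) -> dict:
    """Resolve which graded resources a tenancy/application may draw on."""
    return {g: resource_grades[g] for g in tenant_entitlements.get(tenant, [])}

print(resources_for("test-and-dev"))
```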

In Summary

The VCE approach to IT infrastructure with VBlock enables simplification of procurement and IT infrastructure planning, as VCE are able to reduce their infrastructure offerings to, essentially, units of IT which are sized to support a defined workload within a number of "floor tiles" in the data centre. These predetermined units of IT have deterministic power and cooling requirements, and scale in such a way that all VBlock instances (be they few or many) can be managed from a single point of management and supported under a single support contract. Leveraging technologies which drive efficiencies around virtualisation, networking, storage and compute, we see benefits such as higher performance in smaller physical footprints for storage and compute, minimised cable management and complexity with 10GbE enabling technologies such as Fibre Channel over Ethernet, and operational simplicity with the native VBlock unified infrastructure management tool, UIM.


EMC World 2011 – Las Vegas – day 1

So after the first day at EMC World, what marvels of technology have been announced? What groundbreaking nuggets of geeky goodness are on the way? First things first: VPLEX! It looks like they may have cracked it.. active/active storage over asynchronous distances. Geoclusters will never be the same again!!.. and there was also a slightly ambiguous announcement around integration with the open source Hadoop (more to follow on that).

What was the message of the day though? What was this year's theme? This year EMC are talking about big data and the cloud. Clearly the recent acquisitions of Isilon and Greenplum have planted EMC's head firmly back in the clouds. Greenplum gives end users the ability to scale out database architectures for data analytics to mammoth scale, with Greenplum's distributed node architecture and massively parallel processing capabilities. To be frank, learning about the technology was borderline mind-numbing, but my god it's a cool technology. Then we have large scale-out NAS with Isilon and its OneFS system, giving the ability to present massive NAS repositories and scale NAS on a large scale. So obviously, EMC are talking about big data.

I also had the opportunity to sit in on an NDA VNX/VNXe session and what they’re going to do is….    aaah, I’m not that stupid. But needless to say, there are some nice additions on the way, the usual thing with higher capacity smaller footprint drives and getting more IO in less U space, but also some very cool stuff on the way which will enable EMC to offer a much cheaper entry point for compliance ready storage..  watch this space.

In true style, EMC threw out some interesting IDC-touted metrics, further justifying the need to drive storage efficiencies and re-iterating the fact that there will always be a market for storage. Our digital universe currently consists of 1.2 zettabytes of data, of which 90% is unstructured, and that figure is predicted to grow 44x over this decade. Also, 88% of Fortune 500 companies have to deal with botnet attacks on a regular basis and have to contend with 60 million malware variants. So, making this relevant, the three main pain points of end users are: firstly our time-old friend, budget; then explosive data growth; and securing data.

So how have EMC addressed these? Well, budget is always a fun one to deal with, but with efficiencies in storage by way of deduplication, compression, thin provisioning and auto-tiering of data, end users should get more bang for their buck. Also, with EMC easing up on the reins with pricing around Avamar, and the low entry point of the VNXe, this should help the case. Explosive data growth is again tackled with deduplication, compression, thin provisioning and auto-tiering, but also now with more varied ways of dealing with large volumes of data through technologies such as Atmos, Greenplum and Isilon. Then there is the obvious acquisition of RSA to tie in with the security message, albeit one that has had its challenges.

I was also recently introduced to the new cloud architect certification track and the concept of a Data Scientist (god knows, but I'll find out). So I went over to the Proven Professionals lounge and had a chat with the guys who developed the course. Essentially it gives a good foundation on the steps to consider when architecting a company's private cloud, around storage, virtualisation, networking and compute. If you're expecting a course which consolidates the storage courseware, the Cisco DCNI2 and DCUCD courses and VMware Install, Configure, Manage, then think again, but it does set a good scene as an overlay to understanding these technologies. It also delves into some concepts around cloud service change management and control considerations, and the concept of a cloud maturity model (essentially EMM, but more cloud specific). I had a crack at the practice exam and passed with 68%; aside from not knowing the specific cloud maturity terms and EMC-specific cloud management jargon, anyone with knowledge of servers, Cisco Nexus and networking, plus virtualisation, shouldn't have too many issues, but you may want to skim over the video training package.

There was also a nice shiny demo from the Virtual Geek, Chad Sakac, showing the new Ionix UIM 2.1 with vCloud integration, using CSC's cloud service to demonstrate not only the various subsets of multi-tenancy, but also mobility between disparate systems. When they integrate with public cloud providers such as Amazon EC2 and Azure, then things will really hot up, but maybe we need some level of cloud standards in place?… but we all know the problem with standards: innovation gives way to bureaucracy and slows up… then again, with recent cloud provider issues, maybe it couldn't hurt to enforce a bit of policy which allows the market to slow up a little and take a more considered approach to the public cloud scenario.. who knows?

Anyway.. watch this space..  more to come


Protocol considerations with VMware

A good video I came across from EMC discussing some storage protocol considerations when looking at VMware.


Unisphere.. yay !! what about Celerra Manager and Advanced Manager on the NX4 ?

Right, so EMC have got rid of the Basic and Advanced editions of Celerra Manager and replaced them with Unisphere. Fantastic! No more questions about what the difference is between the Basic and Advanced editions of Celerra Manager!! Naaay.. Interest in the Celerra NX4 has spiked; EMC's little unified storage box must be hitting a sweet spot, as we're getting lots of requests.. and this box still runs Celerra Manager.

So, what do you need to know about Celerra Manager when comparing the two editions?

The Advanced Edition gives you the ability to manage multiple Celerras – so if you are replicating between two Celerras, I would strongly suggest the Advanced Edition.

The Advanced Edition gives you more control when provisioning disk – The Basic Edition will automate how disks are carved up in order to present file systems and shares out to the network. A nice feature for the IT manager without enough time on his hands to do this himself. But if you want to carve up metas, volumes and disks in a specific way to meet specific performance requirements, then you need the Advanced Edition to bypass the Automatic Volume Manager.

The Advanced Edition has an inbuilt migration tool called CDMS (Celerra Data Migration Service) – I would advise that tool is reserved only for those who are well versed in Celerra and migrations, but effectively it offers migration of file data to the Celerra with minimal downtime. If you are going to use this, make sure you know what you're doing or engage an EMC partner.

Those are the important bits you need to know..     any further questions…    ask your EMC Partner


VBlock 0 : Want a Vblock ? Don’t need to support 1000’s of Virtual machines though ?

So, with all the hype of Vblock and VCE, we have marveled at the integration between the 3 vendors of VCE and slick positioning of vBlock…     but..   what if a vBlock 1 or 2 is going to break the bank and you need something scaled down a touch, but with the benefits of single support across the 3 vendor components ?

Now that EMC have released the VBlock 0, things are looking more compelling for those with aspirations toward the private cloud who may not want to invest in a vBlock 1 or 2 geared to supporting thousands of virtual machines.

In the below video, Scott Lowe and Mike Foley from the VCE team  talk through what the vBlock 0 consists of for the techhead.co.uk site

The reference architecture for vBlock 0 can be found here.


VBlock single support offering

So I've just come back from a week over at EMC in Cork, where I had the privilege of seeing the flashy lights of a vBlock and speaking with the various VCE subject matter experts. So where do I start?

For those of you who aren’t familiar with Vblock or what the VCE (VMware, Cisco, EMC) coalition is all about you can go to www.vcecoalition.com or watch the below vid from the VCE guys for the polished positioning :

This post is more for those of you who are already familiar with the VCE offerings. I shall start with the single support element of the vBlock, which has been a subject of some debate, as there was some ambiguity around what Acadia does and where it operates… so, let's start by forgetting about Acadia. That sorts that 🙂 It's all about the SST (Seamless Support Team). The SST is a dedicated vBlock support team based in Cork (amongst other places) which consists of VMware, Cisco and EMC qualified staff, all working under one roof. They are responsible for qualifying a vBlock and supporting customer implementations of vBlock; more importantly, for those who qualify as VCE partners, the SST will also support the pre-sales element of vBlock and help in qualifying the opportunity.

Slide from VCE around the SST

More information on VBlock support can be found here

Can I add a xxxxx to my vBlock ?

No!.. well, not without an official exception from the SST anyway, and to be fair, aside from maybe adjusting quantities of disks/blades/memory, the architecture of a vBlock shouldn't need to be changed. For the most part, if your goal is to move toward the virtualised datacenter then the vBlock should meet that requirement with the validated architecture. Bear in mind the vBlock is designed to sit in a datacenter environment, effectively at the access layer, and uplink into an existing network core/aggregation layer (which is where you would provide services such as firewall/VPN termination, Layer 3 routing, etc..), and these elements do not fall under the remit of the Seamless Support Team. The SST only look after the vBlock component(s); other components will have to fall under the support of their native vendors.

Why can't we just add everything VMware/Cisco/EMC which we have to the same support contract?!

One of the reasons the SST is so effective is that they have a number of vBlocks within their support centres which all support personnel have access to; this means that they can re-create any issue which a customer may log and massively increase the speed to resolution. This wouldn't be possible if they didn't police what a supported vBlock implementation is, and it would make issue staging and resolution very difficult. Also, yes, the vBlock is an impressive array of flashing lights and cool tech, but the aim of a pre-validated architecture is that customer conversations can be geared more toward meeting business requirements than technical ones, as the technical validation is already done. All the validated reference architectures are available at http://www.vcecoalition.com/solutions.htm

However, if it is felt that a component is absolutely required, then an exception can be applied for and approved at the discretion of the SST. But don't go asking to add an HP server or a Juniper switch… not gonna happen 😉

Bear in mind that it is early days, and although it may appear restrictive having to abide by the validated architectures and use cases, more and more validated architectures and options for the vBlocks are going through the required testing to ensure that they are truly technically validated and can be supported by the SST.

I will post more on the positioning and technology of vBlock in due course.    for now..   I gotta eat.


Administration of CLARiiON with VMware… getting easier

So, EMC released the NFS plugin for VMware a while back to support storage administration tasks on the Celerra from the VI Client, which was very cool and had some very impressive features.. but what about the traditional SAN man?!

Well, yesterday EMC announced a VMware plugin for CLARiiON..

Product Overview

The EMC CLARiiON Plug-in for VMware simplifies storage administration between the VMware Virtual Center Server and CLARiiON storage systems. It offers end-to-end management of storage related tasks including provisioning of datastores, provisioning of raw device mapping (RDM) devices, and array-based virtual machine replication.

New Feature Summary 

The EMC CLARiiON Plug-in for VMware allows you to perform the following specific tasks directly from the VMware vSphere client:

  • Provision new datastores (VMFS volumes) or raw device mapping (RDM) volumes
  • Delete existing datastores backed by CLARiiON CX4 storage
  • Create virtual machine replicas using array-based replication services
  • Publish the replicated virtual machines to a View Manager

Notes

  • EMC CLARiiON Plug-in for VMware is customer-installable.
  • EMC CLARiiON Plug-in for VMware requires CX4 storage systems running Release 29 FLARE.

That's all I have at the minute, but I will be picking the brains of the EMC bods as I go to get some more info.

Very useful feature though!!


Iomega ? Consumer only ?.. pfft, Me thinks not

It would appear that the acquisition of Iomega by EMC is paying dividends by way of cool tech being added to the Iomega range. So, as you may be aware, Iomega released their new IX12 NAS box earlier this month (see previous post for more info), which has many of the gubbins of "proper" NAS. What could this sub-£10k little box have that pips EMC's and NetApp's big enterprise boxes to the post? It has an Avamar agent installed in the NAS device!!… Granted, if you don't know what Avamar is, that previous statement may have been something of an anti-climax… Let me elaborate:

  • Typically, what type of data contains the most commonality?
  • Typically which type of data consumes the most storage ?
  • Which type of data takes the longest time to backup ?

The answer to those questions, my pedigree chums.. is file data (in most cases, not all.. granted). So, Company X (the commercial division of the X-Men.. obviously) has a head office in London and a number of small regional branch offices dotted around the country. Each one of these offices is serving up user home directories and network drives from said Iomega IX12 (let's say 4TB per office).. When it comes to backing those sites up, do they back it all up to tape or disk locally, taking up time and budget on a per-site basis? Do they back it all up to disk, replicate the data to a central site for DR, and try to shove however many terabytes down a 100Mb link, wondering why it takes sooo long? Nay.. After the first full backup they only back up the block-level changes over the link to their central site, allowing them to negate the requirement to back up to disk locally at their smaller regional offices.. Bearing in mind that typically the daily rate of change on unstructured data is less than a percent, nightly backups can be done quick sharp and are treated as full backups when it comes to restore, so you don't have to run through all your incremental backups to ensure you're up to date.
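To put some rough numbers on that (back of an envelope, assuming 4TB of file data per office, a 100Mb/s WAN link at ~80% utilisation and a ~1% daily rate of change, and ignoring the dedupe and compression gains which would shrink things further):

```python
def transfer_hours(data_gb, link_mbps=100, utilisation=0.8):
    """Hours needed to push data_gb over a WAN link of link_mbps."""
    seconds = (data_gb * 8 * 1024) / (link_mbps * utilisation)
    return seconds / 3600

full_backup = 4 * 1024                  # 4 TB of file data, in GB
daily_change = full_backup * 0.01       # ~1% daily change on unstructured data

print(f"Full backup over the wire : {transfer_hours(full_backup):.0f} hours")   # ~117 hours
print(f"Changed blocks only       : {transfer_hours(daily_change):.1f} hours")  # ~1.2 hours
```

Roughly five days versus an hour or so a night, which is why shipping only the changed blocks back to the central site makes the remote office model workable at all.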

Not a bad bit of tin if you ask me..


Dedupe your file data !! save our hard drives !!

Just a little video I put together showing file server consolidation (in a Blue Peter "here's one I made earlier" style). 2 minutes, nothing too fancy.. just a bit of fun.

(best watched in full screen)

I do hope geek is the new chic …    because if not…   I feel dirty