Category Archives: DataCentre & Cloud

Implementing your own corporate Dropbox?

Upon perusing the Intel Cloud Builders site for interesting new cloudy vendors and reference architectures, I came across a new company called Oxygen Cloud. Although Storage as a Service is a reasonably well-formed concept, much of the attention has been around public provider services such as Livedrive and Dropbox, or backup products such as EMC Mozy. This is all well and good, but a number of companies have concerns over how these “public cloud” type products align to corporate policy. Take Dropbox, for example: the ease with which data is shared or migrated across to other devices maybe doesn’t align to how they want to control one of an organisation’s most valuable commodities… data.

So how does an organisation offer device-agnostic storage, not based on the constraints of conventional file systems, in such a fashion that they maintain control? Ultimately there are 101 ways to skin a cat… but as far as skinning cats goes, I quite like this one.

The Back End

You take a product like EMC Atmos. Atmos is what we call cloud-optimised storage. In real terms this means that the way data is stored, how available it is, how it is tiered across storage of differing cost and where it is stored geographically is all handled by repeatable policy. Not only this, but metadata is leveraged to the nth degree (well beyond traditional metadata uses in a traditional file system). I won’t re-invent the explanation, as EMC has done a good job of explaining this concept with pretty pictures (video below).

Atmos itself has a fair amount to it, but my point is that this use of metadata means that not only can the way data is handled be derived from it, but the infrastructure can now have some awareness of the context of data, context which is relevant to a front end such as Oxygen Cloud. Yes, Atmos can deliver storage with NFS or CIFS; this is fine, but not overly exciting. The cool part is giving a front end direct access to the context of a file or a set of files using REST, rather than just last modified date and all the usual stuff. The metadata tags can be used to define the segregation of data in a multi-tenant environment, or application-specific elements such as how a file can be shared and with whom.
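
To make that concrete, here’s a minimal sketch of what setting that kind of contextual metadata can look like against Atmos’s REST interface. The endpoint, tenant UID and tag values are invented for illustration, and the HMAC request signing Atmos requires is elided entirely; treat the details as assumptions and consult the Atmos programmer’s guide for the real mechanics.

```python
# Minimal sketch only: host, uid and metadata values are invented,
# and the HMAC signature Atmos actually requires is elided.
import requests

ATMOS = "https://atmos.example.com"  # hypothetical endpoint

headers = {
    "x-emc-uid": "tenant1/oxygen-app",          # tenant-scoped identity
    "x-emc-signature": "<computed-hmac-here>",  # signing elided for brevity
    # User metadata travels as simple name=value pairs, which a front
    # end can later read back to decide how a file may be shared:
    "x-emc-meta": "department=finance,share-scope=internal-only",
}

# Attach the tags to an existing object via the namespace interface.
resp = requests.post(
    f"{ATMOS}/rest/namespace/finance/q3-forecast.xlsx?metadata/user",
    headers=headers,
)
resp.raise_for_status()
```

A front end such as Oxygen Cloud can then query those same tags over REST and enforce who a file is visible to, rather than inferring everything from conventional file system attributes.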

Also, with Atmos being scale-out storage, the upper limits of scalability are near enough endless. The beauty of the storage being content addressable, rather than based around hierarchical file systems, means that as the system grows you are not constrained and challenged by overly complex file system structures which need to be maintained.

Clearly availability is important, but hey..  this is expected. Needless to say, the system handles it very well.


The Front End

I’m not going to spend a great deal of time upping my word count on this section, as Oxygen Cloud have some very descriptive videos (further down), but the key thing here is that the company controls the data in its own way. We have LDAP/AD integration and full access controls, we can set expiration on a link if we do share a file publicly, there is encryption at every point of a file’s transit, and files can be presented via a normal Explorer/Finder plugin (the same way we view normal CIFS shares) or accessed via devices such as an iPhone/iPad. One nice feature for me is that if a phone is stolen or an employee leaves, the organisation can sever access to data/directories on a per-user or per-device basis.

Anyway, it’s worth spending a bit of time watching the below:


I shall be building this solution out in the lab over the next month or so (as much as the day job allows), so watch this space for more info and a revised review.


VMware – AppBlast. One word…. Wow!

VMware have a history of innovation and creating disruptive technology. Disruption may sound like a bad thing, although as we know with things like the VMware hypervisor, disruption makes people money. It may be disruptive, but if the benefits are clear then people standardise on the technology, and IT resellers, vendors and professionals benefit from the plethora of technology requirements which spill out the sides to accommodate these new marvels of modern tech.

VMware first set the trend when they abstracted the OS’s dependency on directly seeing physical hardware by introducing a hypervisor; now they have taken away the application’s dependency on seeing the operating system… lovely jubbly! This sounds good, but why? How? What?

I’m a little light on the nuts and bolts right now, but needless to say, if you can deliver a Windows/Linux/Mac application to any device with a browser supporting HTML5, the benefit is clear! Visio on my iPad… yes please. Safari on my Windows PC… why not?!

I shall await the finer details with bated breath, but leave you with a pretty cool demo as shown below… geeky soul food! Enjoy!!



What is a VBlock.. the latest

Overview

Back in 2009, VMware, Cisco and EMC joined forces to create a new approach to selling pre-configured, full-datacenter solution stacks. Rather than simply a gentlemen’s agreement and a cross-pollination of development from the three companies, it was decided they would create a new start-up business as the delivery mechanism to drive this new concept to market. This new start-up, known as VCE (Virtual Computing Environment), would take to market a new range of pre-validated, pre-configured and singularly supported solution stacks called Vblock.

The purpose of a Vblock is to simplify infrastructure down to, effectively, units of IT, and to define that a workload can be supported by “a number of floor tiles” in the data centre. This approach is enabled by the fact that everything within a Vblock is pre-validated from an interoperability perspective, and customisable components are reduced down to packs of blades (compute), disks and the network components required to connect into the upstream customer environment. This means that solution design is massively simplified and can focus on supporting the identified workload.

Pre-Validated

VCE extensively soak-test the workloads and configurations available within the Vblock to reduce pre-sales time spent researching interoperability between the network/compute/storage layers of the data centre. This means that defining how a workload is supported becomes the focus, and planning phases are significantly reduced. This pre-validated approach also means that power and cooling requirements are easily determined in preparation for site deployment.

Pre-Built and Pre-Configured

As part of the Vblock proposition, the physical and logical build processes are carried out in VCE facilities, so that time on the customer site is restricted to that of integrating into the customer environment and application-layer services. This reduces deployment time massively.

Single Support Presence

Rather than dealing with the parent companies of VCE (VMware, Cisco, EMC) on a per-vendor basis, VCE act as a single support presence and will own any Vblock-related issue end to end. This is partly enabled by the pre-validated aspect of Vblock: VCE have a number of Vblocks in house, so provided a customer’s Vblock is constructed as per the approved architectures, VCE can simulate the environment which caused the error to decrease time to resolution.

The Technology

The technology element at the core of the Vblock consists of VMware vSphere, Cisco UCS (Cisco’s unified compute solution), Cisco Nexus (Cisco’s unified fabric offering) and EMC’s VNX unified storage platform. Cisco simplify management of their blade computing platform down to a single point of management (UCS Manager), which resides on the 6100 Fabric Interconnects and allows for “stateless” computing: it is possible to abstract the server “personality” (MAC addresses, world wide names, firmware, etc.) away from the server hardware, then create and apply these personalities on demand to any blade within the UCS system. This management system handles all aspects of the UCS system (blade/chassis management, connectivity and firmware). Cisco’s unified fabric commonly refers to their Nexus range (though elements of unified fabric apply to UCS too). Cisco Nexus allows both IP network traffic and Fibre Channel traffic to be delivered over common 10 Gigabit switches using FCoE (Fibre Channel over Ethernet). In addition, the Cisco Nexus 1000V enables deployment of a virtual switch within the VMware environment, allowing network services to be deployed within virtual infrastructure where it was previously only possible in the physical world.
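
To illustrate the “stateless” idea, below is a toy model (this is not UCS Manager’s actual XML API; all names and values are invented) of what a service profile conceptually carries, and why swapping a failed blade becomes trivial:

```python
# Toy illustration of a "service profile" -- not Cisco's API.
# The personality lives in the profile; the blade is just hardware.
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    name: str
    macs: list          # NIC identities travel with the profile...
    wwpns: list         # ...as do the HBA world wide port names
    firmware: str       # and even the firmware policy
    boot_order: list

def apply_to_blade(profile, chassis, slot):
    # In UCS, the fabric interconnects program these identities onto
    # whichever blade occupies the target slot; simulated here.
    print(f"Applying '{profile.name}' to chassis {chassis} slot {slot}: "
          f"MACs={profile.macs}, WWPNs={profile.wwpns}, fw={profile.firmware}")

esx01 = ServiceProfile("esx-host-01", ["00:25:B5:00:00:1A"],
                       ["20:00:00:25:B5:00:00:1A"], "1.4(1m)", ["san", "lan"])
apply_to_blade(esx01, chassis=1, slot=3)
# Blade in slot 3 dies? Re-apply the same personality to a spare blade
# and the server "moves" without touching SAN zoning or DHCP records:
apply_to_blade(esx01, chassis=1, slot=7)
```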

EMC VNX is a multi-protocol storage array allowing for storage connectivity via block storage technologies (iSCSI/Fibre Channel) or NAS connectivity (CIFS/NFS/pNFS), giving the end user free choice as to how storage is provided to the UCS server estate. EMC also drive efficiencies in how capacity and performance are handled by leveraging technologies such as deduplication and thin provisioning to achieve a lower cost per gigabyte. EMC are also able to leverage solid state disk technologies to extend the storage cache, or to enable sub-LUN-level tiering of data between solid state disk and traditional mechanical disk technologies based on data access patterns.

VMware vSphere has provided many companies with cost savings in the past, but in the Vblock it is leveraged to maximum effect to provide operational efficiencies, with features such as dynamic, automated mobility of virtual machines between physical servers based on load, high availability, and the native integration between VMware and EMC via the VAAI APIs. This integration enables much lower SAN fabric utilisation for what were very intensive storage network operations, such as storage migration. EMC PowerPath/VE is also included in the Vblock, which enables true intelligent load balancing of storage traffic across the SAN fabric.

Management

VCE utilise Ionix Unified Infrastructure Manager (UIM) as a management overlay which integrates with the storage, compute, network and virtualisation technologies within the Vblock, and allows high-level automation of, and operational simplicity in, how resources are provisioned within the Vblock. UIM will discover resources within the Vblock, and the administrator then classifies those resources. As an example, high-performance blades may be deemed “gold” blades versus lower-specification blades which may be classified as “silver” blades. This classification is also applied to other resources within the Vblock, such as storage. Once resources have been classified, they can be applied on a per-tenancy/application/department basis, with each being allowed access to differing levels of gold/silver/bronze resources within the Vblock. UIM now also includes operational aspects which give end-to-end visibility of exactly which hardware within a Vblock a particular VM is utilising (blades, disks, etc.). Native vendor management tools can still be utilised, although with the exception of vCenter, UIM would be the point of management for 90% of Vblock tasks after initial deployment.
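
As a rough sketch of that classification model (the pool names and data layout here are invented, not UIM’s actual schema), provisioning against graded pools looks something like this:

```python
# Invented illustration of service-grade pools -- not UIM's schema.
POOLS = {
    "gold":   {"blades": ["b250-hi-01", "b250-hi-02"], "storage": "efd-tier"},
    "silver": {"blades": ["b200-lo-01", "b200-lo-02"], "storage": "sas-tier"},
}

TENANT_GRADES = {"finance": "gold", "dev-test": "silver"}

def provision(tenant):
    # A tenant's request is satisfied only from the pool its grade allows.
    grade = TENANT_GRADES[tenant]
    pool = POOLS[grade]
    blade = pool["blades"].pop(0)  # take the next free blade of that grade
    return {"tenant": tenant, "grade": grade,
            "blade": blade, "storage": pool["storage"]}

print(provision("finance"))   # a gold blade backed by the EFD tier
print(provision("dev-test"))  # a silver blade on the SAS tier
```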

In Summary

The VCE approach to IT infrastructure with Vblock enables simplification of procurement and IT infrastructure planning, as VCE are able to reduce their infrastructure offerings to, essentially, units of IT which are sized to support a defined workload within a number of “floor tiles” in the data centre. These predetermined units of IT have deterministic power and cooling requirements, and scale in such a way that all Vblock instances (be it few or many) can be managed from a single point of management and are all supported under a single instance of support. Leveraging technologies which drive efficiencies around virtualisation, networking, storage and compute, we see benefits such as higher performance in smaller physical footprints when addressing storage and compute, minimised cabling and complexity with 10GbE enabling technologies such as Fibre Channel over Ethernet, and operational simplicity with the native Vblock unified infrastructure management tool, UIM.


Power to the People!! A beginner’s guide to the life force of your datacenter

Aside from chalking, talking, designing and evangelising about exciting things such as the whizzy storage bits, new blade technologies and the wonder that is unified fabric, I also have to drop back into my corduroys and sandals to get down and geeky with some of the more fundamental elements of the data center. One in particular being… power.

As much as the network, the storage and all these other elements are pivotal in any solution, without paying close attention to the life force behind all of this we might as well be selling rocks.

This isn’t going to be an extensive post, just enough to cover a few principles. 

Ok, so we’ve put together a high-level design; we’ve worked out how the servers talk to each other, how they get their storage, how they see the outside world and how we stop our little world from falling over should we suffer a failure. What else is there?!

Let’s throw a few scenarios out there and get to the point. You are a project manager under strict deadlines to get your infrastructure implemented in time for a new global application going live. You get your pallets of hardware in good time and your engineering resource is all booked in… fantastic.

So, next step… avoid aggravating implementation engineers and project managers alike by following a few key points:

Establish whether your in-rack devices require C20 or C14 ports on your PDU, then ensure your PDU will accommodate this. Also make sure you have specified power cables for cabinet power when ordering your devices (running around after power cables can be annoying when you’re already behind on network configuration).

Ensure that your PDUs will support the power draw of your devices when they power-cycle. Vendor specifications should normally show both the cycle power draw and the operating power draw; if you want to turn everything on at the same time, you need to pay attention to the first one.

Make sure that you are matching the current and phase requirements of your PDUs with the power you are driving to the rack. Many organisations will run 3-phase power to the room, then single phases to the rack. If you have a rack full of blade servers, you may need to drive 3-phase power to the rack, and most likely 32 Amp (unless you can cram 4 PDUs into each rack with a little bit of creative cable management, although be warned: 16 Amp PDUs tend to be light on C20 connections).

Make sure that if you are running IEC commando-type power connections to the rack, you don’t go and specify PDUs with NEMA power drops. A bit of communication between your electrician and the guys specifying your PDUs can save a world of pain.

If you run a global operation, ensure you map the power and current requirements to the countries of deployment. There is a page on my site which maps some of these requirements.

So, a little bit of maths:

To determine what power load you can support on a PDU, it goes something like this:

Single phase PDU

Current (amps) x input voltage = watts

So for a single phase 32 Amp PDU in the UK, we would see:

32 x 230 = 7,360 W, or 7.36 kW

For a 3-phase PDU we need to find our line-to-line voltage, which is the phase voltage multiplied by the square root of 3 (1.73). So for UK 3-phase we would have 230 x 1.73 = near enough 400 (398).

We then take our line-to-line voltage (400) and multiply it by the current (let’s say 32 Amps), then multiply by 1.73 once more, so:

(230 x 1.73) x 32 x 1.73 ≈ 22 kW
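
If you’d rather not do that on the back of an envelope, here is the same arithmetic as a small helper (the voltages and currents are the UK examples from above; substitute your own country’s figures):

```python
import math

def pdu_capacity_watts(current_amps, phase_voltage=230.0, three_phase=False):
    """Single phase: V x I. Three phase: sqrt(3) x line-to-line V x I."""
    if three_phase:
        line_to_line = phase_voltage * math.sqrt(3)  # ~398 V in the UK
        return math.sqrt(3) * line_to_line * current_amps
    return phase_voltage * current_amps

print(pdu_capacity_watts(32))                    # ~7360 W  -> 7.36 kW
print(pdu_capacity_watts(32, three_phase=True))  # ~22080 W -> ~22 kW
```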

So, get the power-cycle and operating power draw information from the vendor (or do your own testing), check you have the right power connectivity, and size accordingly.

Then all you have to do is ensure you balance the power between your PDUs while providing some redundancy (don’t plug both your server PSUs into one PDU!!). Also remember that your blade chassis may have 3 power supplies in it for N+1 redundancy to protect you from PSU failure, but if the PDU with 2 of those PSUs plugged into it fails, then you’re buggered; so you may want to add that magic number 4 to give you grid redundancy.
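
That balancing act is easy to sanity-check with a script. The figures below are invented for illustration; the question being asked is simply: if either PDU fails, can the surviving feed carry everything still plugged in?

```python
# Invented example: cycle draw in watts per device, and which PDUs
# each device's PSUs are plugged into.
DEVICES = {
    "blade-chassis": (4400, ["pdu-a", "pdu-b"]),  # PSUs split across feeds
    "top-of-rack-1": (350,  ["pdu-a", "pdu-b"]),
    "top-of-rack-2": (350,  ["pdu-a", "pdu-b"]),
    "mgmt-server":   (500,  ["pdu-a"]),           # single-corded... risky!
}
PDU_CAPACITY_W = 7360  # single-phase 32 A at 230 V, from the maths above

def surviving_load(failed_pdu):
    # Devices with a PSU on another feed stay up and must be carried;
    # single-corded kit on the failed PDU simply goes dark.
    return sum(watts for watts, pdus in DEVICES.values()
               if any(p != failed_pdu for p in pdus))

for pdu in ("pdu-a", "pdu-b"):
    load = surviving_load(pdu)
    verdict = "OK" if load <= PDU_CAPACITY_W else "OVERLOADED"
    print(f"if {pdu} fails: surviving load {load} W -> {verdict}")
```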

So, there we have it: a somewhat rambling post, and most likely telling a whole load of people how to suck eggs… but if I can save just one project manager a headache, it’s worthwhile 😉


EMC World 2011 – Las Vegas – day 1

So after the first day at EMC World, what marvels of technology have been announced?
What groundbreaking nuggets of geeky goodness? First things first: VPLEX! It looks like they may have cracked it… active/active storage over synchronous distances. Geoclusters will never be the same again!! There was also a slightly ambiguous announcement around integration with the open-source Hadoop (more to follow on that).

What was the message of the day, though? What was this year’s theme? This year EMC are talking about big data and the cloud. Clearly recent acquisitions of Isilon and Greenplum have planted EMC’s head firmly back in the clouds. Greenplum gives end users the ability to scale out database architectures for data analytics to mammoth scale, with Greenplum’s distributed node architecture and massively parallel processing capabilities. To be frank, learning about the technology was borderline mind-numbing, but my god it’s a cool technology. Then we have large-scale NAS with Isilon and its OneFS file system, giving the ability to present massive NAS repositories and scale NAS out on a grand scale. So obviously, EMC are talking about big data.

I also had the opportunity to sit in on an NDA VNX/VNXe session, and what they’re going to do is… aaah, I’m not that stupid. Needless to say, there are some nice additions on the way: the usual higher-capacity, smaller-footprint drives and getting more IO into less U space, but also some very cool stuff which will enable EMC to offer a much cheaper entry point for compliance-ready storage… watch this space.

In true style, EMC threw out some interesting IDC-touted metrics, further justifying the need to drive storage efficiencies and reiterating the fact that there will always be a market for storage. Our digital universe currently consists of 1.2 Zettabytes of data, of which 90% is unstructured, and that figure is predicted to grow 44x over this decade. Also, 88% of Fortune 500 companies have to deal with botnet attacks on a regular basis and contend with 60 million malware variants. Making this relevant, the 3 main pain points of end users are: firstly, our time-old friend budget; then explosive data growth; and securing data.

So how have EMC addressed these? Well, budget is always a fun one to deal with, but with efficiencies in storage by way of deduplication, compression, thin provisioning and auto-tiering of data, end users should get more bang for their buck; EMC easing up on the reins with pricing around Avamar, and the low entry point of VNXe, should also help the case. Explosive data growth is again tackled with deduplication, compression, thin provisioning and auto-tiering, but also now with more varied ways of dealing with large sums of data via technologies such as Atmos, Greenplum and Isilon. Then there is the obvious acquisition of RSA to tie in with the security message, albeit that has had its challenges.

I was also recently introduced to the concept of a cloud architect certification track and the concept of a Data Scientist (god knows, but I’ll find out). So I went over to the Proven Professionals lounge and had a chat with the guys that developed the course. Essentially it gives a good foundation for the steps to consider when architecting a company’s private cloud, around storage, virtualisation, networking and compute. If you’re expecting a consolidated course which covers the storage courseware, the Cisco DCNI2 and DCUCD courses and VMware Install, Configure, Manage, then think again, but it does set a good scene as an overlay to understanding these technologies. It also delves into some concepts around cloud service change management and control considerations, and the concept of a cloud maturity model (essentially EMM, but more cloud-specific). I had a crack at the practice exam and passed with 68%; aside from not knowing the specific cloud maturity terms and EMC-specific cloud management jargon, anyone with knowledge of servers, Cisco Nexus and networking, plus virtualisation, shouldn’t have too many issues, but you may want to skim over the video training package.

There was also a nice shiny demo from the Virtual Geek, Chad Sakac, showing the new Ionix UIM 2.1 with vCloud integration, using CSC’s cloud service to demonstrate not only the various subsets of multi-tenancy, but also mobility between disparate systems. When they integrate with public cloud providers such as Amazon EC2 and Azure, then things will really hot up, but maybe we need some level of cloud standards in place?… But we all know the problem with standards: innovation gives way to bureaucracy and slows up… Then again, with recent cloud provider issues, maybe it couldn’t hurt to enforce a bit of policy which allows the market to slow up a little and take a more considered approach to the public cloud scenario… who knows?

Anyway.. watch this space..  more to come


Cisco UCS – Extended Memory Architecture… What is it?

As promised in my previous post, let’s go through the blades available in Cisco’s Unified Computing System. Essentially we have a few flavours of blade: full-width and half-width, some of which utilise the extended memory architecture (co-developed by Intel and Cisco, which we’ll touch on), dual-socket for the most part with the exception of one which is 4-socket, and a veritable feast of different memory, processor, IO card and drive options.

However, I wanted to start with the component pieces before we delve into schematics (because you can read about those on the Cisco site) and spend a little more time on each piece.

So what is this extended memory architecture Cisco keep banging on about? Let’s start with the why before we get to the how. Any Tom, Dick and Harry can stick a load of memory DIMMs in a server and scream about the fact they’ve got a few hundred gig of memory… so why is this different?

Typically, each CPU on a server has 3 memory channels for… you guessed it… accessing memory. The number of transfers per second at which memory will perform is typically dictated by the number of DIMMs populated per memory channel. When you populate 1 DIMM per memory channel, memory runs at 1333 MTpS (million transfers per second); with 2 DIMMs per channel it runs at 1066 MTpS; and at a depth of 3 DIMMs per channel you’re running at 800 MTpS (not ideal). So as memory density gets higher, performance can suffer (as shown below).

[Chart: memory transfer rate vs DIMMs populated per channel – 1 DIMM: 1333 MTpS; 2 DIMMs: 1066 MTpS; 3 DIMMs: 800 MTpS]

Cisco, in combination with Intel, have developed something called the Catalina chipset. Despite sounding like a car, the Catalina chipset is quite a nifty addition. Effectively acting like a RAID controller for memory, it sits downstream of the CPU memory controllers (one per memory channel) and presents out four additional memory sockets per channel, then presents an aggregate of the memory sitting beneath it as one logical DIMM up to the CPU memory channel. This means you can have denser memory configurations without memory ever clocking in below 1066 MTpS (as shown below).

The two benefits of this are that you can address a larger amount of memory for memory-intensive applications/virtual machines/whatever with a lower socket count, making it possible to see higher consolidation ratios when virtualising machines, or you can achieve moderate memory configurations using lower-capacity, less costly DIMMs. As a rough illustration: a dual-socket B250 exposes 48 DIMM slots, so with relatively cheap 8GB DIMMs you can reach 384GB in a two-socket blade without the memory clocking down to 800 MTpS. Cisco currently utilise this technology, with the Westmere and Nehalem CPUs, in the B250 blade servers and C250 rackmount servers.

A nice little clip from the Cisco Datacenter YouTube channel with a brief intro to extended memory:

Either way..   not a bad idea..


Adding Value, above and beyond shifting tin.

So in the last 3 weeks I’ve spent time in Cork with a number of tier 1 VMware, Cisco and EMC partners, as well as subject matter experts from the 3 vendors themselves; I’ve also just come back from Arizona after a course around Cisco’s UCS B-Series offering and the Nexus piece.

The infrastructure offerings from VMware, Cisco and EMC are all very impressive; there are integration points between the 3 vendors which go beyond just marketing FUD. Cisco have their Nexus 1000V, which extends the network access layer into the virtual server environment rather than stopping at the hypervisor OS itself; EMC offer direct integration and management capability of their systems from VMware’s management suite by making optimal use of the various vStorage APIs; and EMC/VMware’s Ionix portfolio integrates with management of all 3 vendors’ offerings, while also giving application discovery capability visible from vCenter and granular trending and reporting capabilities, even covering change control for those lucky folks who must be ITIL compliant.

So that’s the whole package… job done… NAY!! In my humble opinion, the businesses that really excel are those organisations that can offer all of this, but can also wear a development and integration hat: dealing with the presentation layer as to how all of this is managed, provisioned and tweaked to meet business needs, not just IT infrastructure needs. IT is moving more and more towards a self-service model where, within the constraints of what a business or provider allows, a user/customer/business can spin up instances of applications/servers/resource/storage on the fly and the underlying infrastructure simply goes and does.

From a service provider perspective this might be a virtual machine or computing resource that’s spun up; from an internal business perspective it may be a complete virtual environment spun up for dev or demonstration purposes; or it may simply be using something like XML to extend the management capabilities of the native vendor tools (much like BMC BladeLogic have with Cisco UCS), or making the management tools more personal and relevant to an organisation.

Kaavo is one company working on management of public and private cloud deployments.

The below video is a very good example of someone who has taken the open XML framework and tuned an IT deployment specifically to an organisation’s business needs.

So in summary: selling tin and selling licences will make you money, BUT!! Consultancy, development and services demonstrate more value and a deeper fundamental understanding of how business needs map to IT requirements, and are more margin-rich.


VBlock 0: Want a Vblock? Don’t need to support thousands of virtual machines, though?

So, with all the hype of Vblock and VCE, we have marvelled at the integration between the 3 vendors of VCE and the slick positioning of Vblock… but… what if a Vblock 1 or 2 is going to break the bank and you need something scaled down a touch, but with the benefits of single support across the 3 vendor components?

Now that EMC have released the Vblock 0, things are looking more compelling for those with aspirations to the private cloud who may not want to invest in a Vblock 1 or 2, which are geared to support thousands of virtual machines.

In the below video, Scott Lowe and Mike Foley from the VCE team talk through what the Vblock 0 consists of, for the techhead.co.uk site.

The reference architecture for vBlock 0 can be found here.


VBlock single support offering

So I’ve just come back from a week over at EMC in Cork, where I had the privilege of seeing the flashy lights of a Vblock and speaking with the various VCE subject matter experts. So where do I start?

For those of you who aren’t familiar with Vblock or what the VCE (VMware, Cisco, EMC) coalition is all about, you can go to www.vcecoalition.com or watch the below vid from the VCE guys for the polished positioning:

This post is more for those of you who are familiar with the VCE offerings. I shall start with the single support element of the Vblock, which has been a subject of some debate, as there was some ambiguity around what Acadia does and where it operates… so let’s start by forgetting about Acadia. That sorts that 🙂 It’s all about the SST (Seamless Support Team). The SST is a dedicated Vblock support team based in Cork (amongst other places) which consists of VMware, Cisco and EMC qualified staff, all working under one roof. They are responsible for qualifying a Vblock and supporting customer implementations of Vblock; more importantly, for those who qualify as VCE partners, the SST will also support the pre-sales element of Vblock and help in qualifying the opportunity.

Slide from VCE around the SST

More information on VBlock support can be found here

Can I add a xxxxx to my Vblock?

No!… Well, not without an official exception from the SST anyway, and to be fair, aside from maybe adjusting quantities of disks/blades/memory, the architecture for a Vblock shouldn’t need to be changed. For the most part, if your goal is to move toward the virtualised datacenter, then the Vblock should meet that requirement with the validated architecture. Bear in mind the Vblock is designed to sit in a datacenter environment, effectively at the access layer, and uplink into an existing network core/aggregation layer (which is where you would provide services such as firewall/VPN termination/Layer 3 routing, etc.), and these elements do not fall under the remit of the Seamless Support Team. The SST only look after the Vblock component(s); other components will have to fall under the support of their native vendors.

Why can’t we just add everything VMware/Cisco/EMC which we have to the same support contract?!

One of the reasons the SST is so effective is that they have a number of Vblocks within their support centers which all support personnel have access to; this means that they can re-create any issue which a customer may log and massively increase the speed to resolution. This wouldn’t be possible if they didn’t police what a supported Vblock implementation is; otherwise, issue staging and resolution would become very difficult. Also, yes, the Vblock is an impressive array of flashing lights and cool tech, but the aim of a pre-validated architecture is to enable customer conversations geared more toward meeting business requirements than technical ones, as the technical validation is already done. All the validated reference architectures are available at http://www.vcecoalition.com/solutions.htm

However, if it is felt that a component is absolutely required, then an exception can be applied for and approved at the discretion of the SST. But don’t go asking to add an HP server or a Juniper switch… not gonna happen 😉

Bear in mind that it is early days, and although it may appear restrictive having to abide by the validated architectures and use cases, more and more validated architectures and options for the Vblocks are going through the required testing to ensure that they are truly technically validated and can be supported by the SST.

I will post more on the positioning and technology of Vblock in due course. For now… I gotta eat.


Iomega/EMC’s new lovechild

Iomega first started life selling removable storage. The world marvelled at the might of the 100MB Zip drive, brought gifts of gold, frankincense and myrrh as offerings to the almighty Jaz drive, and sacrificed livestock in awe of the Ditto drive (I exaggerate… but bear with me, I’m setting the scene). Then, as removable storage media started to give way to the internet and USB drives became the standard for removable storage… we started to see the likes of the Zip and Jaz drives fade away.

So… out with the old, in with the new? No… now Iomega have a massive play in the consumer space for external hard drives and networked storage. At the upper end of the networked storage range was the IX4 (now on its second generation): a nice tidy box which would hold up to 8TB of raw capacity and fit well in a remote office environment or home office, even as a media server for your movies and music (all legitimately obtained, of course). They even did a rackmount NAS device… brilliant!!

But what if you need a little more grunt… a bit more redundancy, scalability… something more feature-rich? Iomega/EMC are on the verge of releasing the IX12. This box fits nice and snug between the IX4-200R and EMC’s Celerra NX4; it supports up to 24TB of raw capacity, supports all the RAID types you’d ever want to use, and has 4 Gigabit Ethernet ports which can support up to 256 iSCSI initiators (servers) or 256 LUNs for block-level access. All the other usual protocols still apply in the oh-so-familiar forms of CIFS, NFS, FTP, HTTP, etc., and there are even a few nice bells and whistles such as port aggregation, DFS, array-based replication and WebDAV support for online collaboration; it also sports drive spin-down (very cool if it’s being used as a backup-to-disk or archive target).

The IX12 has also been certified by a number of other vendors: it is on VMware’s Hardware Compatibility List for shared storage (and also supported by a number of other virtualisation vendors), and Microsoft have verified that it will support Exchange 2010 mail stores for environments of up to 250 users.

Iomega are stating that these boxes will sit at between $5,000 and $10,000 list, so they will help EMC break even further into the lower SMB market. Personally, I think this box will play really well in spaces such as remote offices, graphic design organisations, departmental dedicated storage, backup-to-disk targets (admittedly it would be more compelling if it supported NDMP, but we’ll leave that to the big boys) and archive storage for the likes of EMC’s SourceOne, EV, Commvault, etc.

I’ll put together a clearer, more concise post after the announcements to come, but I think Iomega could be onto a winner with this one…