
Cisco UCS – Extended memory architecture… what is it?

As promised in my previous post, let's go through the blades available in Cisco's Unified Computing System. Essentially we have a few flavours of blade: full-width and half-width, some of which utilise the extended memory architecture co-developed by Intel and Cisco (which we'll touch on), dual-socket for the most part with the exception of one which is four-socket, and a veritable feast of different memory, processor, IO card and drive options.

However, I wanted to start with the component pieces and spend a little more time on each, before we delve into schematics (because you can read about those on the Cisco site).

So what is this extended memory architecture Cisco keep banging on about? Let's start with the why before we get to the how. Any Tom, Dick and Harry can stick a load of DIMMs in a server and scream about the fact they've got a few hundred gig of memory… so why is this different?

Typically, each CPU in a server has three memory channels for… you guessed it… accessing memory. The rate at which that memory performs is largely dictated by the number of DIMMs populated per channel. With one DIMM per channel, memory typically runs at 1333 MT/s (million transfers per second); with two DIMMs per channel it drops to 1066 MT/s; and at a depth of three DIMMs per channel you're running at 800 MT/s (not ideal). So as memory density gets higher, performance can suffer, as the sketch below illustrates.
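To make the trade-off concrete, here's a minimal sketch of my own (not Cisco tooling). The figures are assumptions: 8GB DDR3 DIMMs, three channels per socket, and the commonly quoted speed bins; exact numbers vary by CPU SKU and DIMM type.

```python
# The usual Nehalem-era DDR3 trade-off: the more DIMMs you hang off a
# memory channel, the slower that channel clocks.
# Speed bins below are the commonly quoted ones (assumed, not vendor data).

SPEED_BY_DEPTH_MTS = {1: 1333, 2: 1066, 3: 800}  # DIMMs per channel -> MT/s

def channel_stats(dimms_per_channel: int, dimm_gb: int = 8, channels: int = 3) -> None:
    """Print capacity vs. speed for one CPU socket."""
    speed = SPEED_BY_DEPTH_MTS[dimms_per_channel]
    capacity = dimms_per_channel * dimm_gb * channels
    # A DDR3 channel is 8 bytes wide, so peak bandwidth = MT/s * 8 bytes
    bw_gbps = speed * 8 / 1000
    print(f"{dimms_per_channel} DIMM(s)/channel: {capacity:3d} GB/socket "
          f"@ {speed} MT/s (~{bw_gbps:.1f} GB/s per channel)")

for depth in (1, 2, 3):
    channel_stats(depth)

# 1 DIMM(s)/channel:  24 GB/socket @ 1333 MT/s (~10.7 GB/s per channel)
# 2 DIMM(s)/channel:  48 GB/socket @ 1066 MT/s (~8.5 GB/s per channel)
# 3 DIMM(s)/channel:  72 GB/socket @ 800 MT/s (~6.4 GB/s per channel)
```

In short: the deepest (and densest) configuration costs you roughly 40% of your per-channel bandwidth versus the shallowest one.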


Cisco, in combination with Intel, has developed something called the Catalina chipset. Despite sounding like a car, Catalina is quite a nifty addition. Effectively acting like a RAID controller for memory, it sits downstream of the CPU's memory controllers on each memory channel, presents out four additional memory sockets, and then presents an aggregate of the memory sitting beneath it up to the CPU memory channel as one logical DIMM. That means you can have denser memory configurations without the memory ever clocking in below 1066 MT/s.

The two benefits of this: you can address a larger amount of memory for memory-intensive applications/virtual machines/whatever with a lower socket count, making it possible to see higher consolidation ratios when virtualising machines; or you can achieve moderate memory configurations using lower-capacity, less costly DIMMs. Cisco currently utilise this technology with the Nehalem and Westmere CPUs in the B250 blade server and C250 rack-mount server. A rough comparison of the sums involved is sketched below.
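Here's a hedged back-of-the-envelope comparison. The figures are my assumptions: 8GB DIMMs, two sockets, three channels per socket, and two logical DIMMs per channel with four physical DIMMs behind each Catalina ASIC, which is what gets you to the 384GB commonly quoted for the B250.

```python
# Back-of-the-envelope capacity comparison (all figures assumed):
# 8 GB DDR3 DIMMs, 2 sockets, 3 channels per socket.

SOCKETS, CHANNELS, DIMM_GB = 2, 3, 8

# Standard Nehalem/Westmere box: stop at 2 DIMMs per channel to hold 1066 MT/s
standard_gb = SOCKETS * CHANNELS * 2 * DIMM_GB

# Extended memory: each of the 2 logical DIMMs per channel is really a
# Catalina ASIC fronting 4 physical DIMMs, so 8 physical DIMMs per channel,
# while the CPU still only "sees" 2 DIMMs and stays at 1066 MT/s
extended_gb = SOCKETS * CHANNELS * 2 * 4 * DIMM_GB

print(f"Standard: {standard_gb} GB @ 1066 MT/s")   # Standard:  96 GB @ 1066 MT/s
print(f"Extended: {extended_gb} GB @ 1066 MT/s")   # Extended: 384 GB @ 1066 MT/s
```

Same two sockets, same memory speed, four times the addressable capacity: that's the whole pitch.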

A nice little clip from the Cisco Data Center YouTube channel gives a brief intro to extended memory.

Either way… not a bad idea.


Iomega/EMC’s new lovechild

Iomega first started life selling removable storage. The world marvelled at the might of the 100MB Zip drive, brought gifts of gold, frankincense and myrrh as offerings to the almighty Jaz drive, and sacrificed livestock in awe of the Ditto drive (I exaggerate… but bear with me, I'm setting the scene). Then, as removable media started to give way to the internet and USB drives became the standard for removable storage, we started to see the likes of the Zip and Jaz drive fade away.

So… out with the old, in with the new? No. Iomega now have a massive play in the consumer space for external hard drives and networked storage. The upper end of the networked storage range was the IX4 (now in its second generation): a nice tidy box which holds up to 8TB of raw capacity and fits well in a remote office environment, a home office, even as a media server for your movies and music (all legitimately obtained, of course). They even did a rackmount NAS device… Brilliant!

But what if you need a little more grunt, a bit more redundancy and scalability, something more feature-rich? Iomega/EMC are on the verge of releasing the IX12. This box fits nice and snug between the IX4-200R and EMC's Celerra NX4; it supports up to 24TB of raw capacity, supports all the RAID types you'd ever want to use, and has four Gigabit ports which can support up to 256 iSCSI initiators (servers) or 256 LUNs for block-level access. All the other usual protocols still apply in the oh-so-familiar forms of CIFS, NFS, FTP, HTTP, etc., and there are even a few nice bells and whistles such as port aggregation, DFS, array-based replication and WebDAV support for online collaboration. It also sports drive spin-down (very cool if it's being used as a backup-to-disk or archive target). For a feel of what those RAID choices do to the headline capacity, see the rough sums below.
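As a rough illustration, the quick sums below assume twelve 2TB drives (one plausible way to hit 24TB raw; the actual drive count and sizes may differ) and ignore filesystem overhead.

```python
# Rough usable-capacity maths for a 24 TB raw box.
# Assumed: 12 x 2 TB drives, marketing terabytes, no filesystem overhead.

DRIVES, DRIVE_TB = 12, 2

layouts = {
    "RAID 0":  DRIVES * DRIVE_TB,        # striping, no redundancy
    "RAID 5":  (DRIVES - 1) * DRIVE_TB,  # one drive's worth of parity
    "RAID 6":  (DRIVES - 2) * DRIVE_TB,  # two drives' worth of parity
    "RAID 10": DRIVES * DRIVE_TB // 2,   # mirrored pairs, half the raw space
}

for name, usable in layouts.items():
    print(f"{name:7s}: {usable:2d} TB usable of {DRIVES * DRIVE_TB} TB raw")
```

The usual trade-off applies: the more drive failures you want to survive, the further from that 24TB headline figure you land.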

The IX12 has also been certified by a number of other vendors; it is, obviously, on VMware's Hardware Compatibility List for shared storage (and supported by a number of other virtualisation vendors), and Microsoft have verified that it will support Exchange 2010 mail stores for environments of up to 250 users.

Iomega are stating that these boxes will sit at between $5,000 and $10,000 list, so they should help EMC break even further into the lower SMB market. Personally, I think this box will play really well in spaces such as remote offices, graphic design organisations, departmental dedicated storage, backup-to-disk targets (admittedly it would be more compelling if it supported NDMP, but we'll leave that to the big boys) and archive storage for the likes of EMC's SourceOne, EV, Commvault, etc.

I'll put together a clearer, more concise post after the announcements to come, but I think Iomega could be onto a winner with this one.


Managing Celerra from VMware

EMC of late have been very good at increasing the level of integration between their storage and the VMware platform. First it was DRS integrating with EMC QoS Manager, then the ability to view which VMs reside on SAN storage from within the CLARiiON's Navisphere Manager, then Replication Manager was pulled in line to facilitate machine-consistent snaps/replication with VMware using its VMFS proxy.

All very cool stuff, but now EMC are pulling the ability to manage storage on EMC's Celerra platform into VMware's VI Client. As of release 5.6.48 of DART (DART being the firmware/OS for the Celerra), you will be able to manage the creation and administration of NFS exports from within VMware and perform the following actions:

- Create an NFS file system and mount it to ESX systems to be used as a VMware datastore. File systems created with the plug-in will be automatically deployed with EMC and VMware best practices, including Celerra Virtual Provisioning.

- Datastore (NFS file system) extension: extends the Celerra file system that is exported to the ESX cluster.

- Compress the VMware VMDK files associated with a virtual machine, a folder (of multiple virtual machines), an ESX server, or a whole datastore; decompress a previously compressed VMDK or set of VMDK files.

- Full Clone: make full copies of virtual machine VMDK files using new Celerra-based functionality.

- Fast Clone: make thin copies of virtual machine VMDK files instantly using new Celerra NFS file-based snap functionality (a toy model of the difference follows this list).
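To make the Full Clone/Fast Clone distinction concrete, here's a toy model of my own (emphatically not EMC's implementation): a full clone copies every block of the source VMDK up front, while a file-based snap shares the source read-only and records only the blocks that change.

```python
# Toy model of full clone vs. copy-on-write "fast clone" semantics.
# This is an illustration of the general technique, not Celerra internals.

class FullClone:
    """Copies the entire source at creation time (the list copy here stands
    in for copying every block of data on disk)."""
    def __init__(self, source_blocks):
        self.blocks = list(source_blocks)

class FastClone:
    """Shares the source read-only; writes land in a copy-on-write overlay."""
    def __init__(self, source_blocks):
        self.base = source_blocks           # shared reference, never modified
        self.delta = {}                     # block index -> changed data

    def read(self, i):
        return self.delta.get(i, self.base[i])

    def write(self, i, data):
        self.delta[i] = data                # only changed blocks consume space

source = [b"\x00" * 4096] * 1024            # a 4 MB "VMDK" of 4 KB blocks
full, fast = FullClone(source), FastClone(source)
fast.write(7, b"\x01" * 4096)
print(len(full.blocks), "blocks copied vs", len(fast.delta), "block in the snap")
# -> 1024 blocks copied vs 1 block in the snap
```

That copy-on-write overlay is why a fast clone is both instant to create and nearly free on disk until the virtual machine starts writing.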


Below is a very good video demonstration of the feature, provided by the one and only Virtual Geek, Mr Chad Sakac.


Marketing FUD or… food?

The storage industry has a reputation for being a touch dog-eat-dog, and it has its fair share of FUD flying around. This FUD isn't limited to the array vendors; it also extends to SAN component manufacturers such as QLogic. I'm staying impartial on this one, but had to post, as it did make me chuckle… although I'd imagine the chuckling has subsided somewhat in the Emulex camp now that QLogic are taking them to court over the use of an egg to demonstrate the heat given off by a QLogic CNA. See for yourself…

See the full story from The Register here.