
Migrating from DAS to XSan

Hello!


I am looking for some advice regarding migration from Direct Attached Storage to an XSan setup.


Our current Infrastructure:

An XServe (G5) attached to an XServe RAID, configured as two RAID 5 LUNs

An XServe Intel (Late 2008) attached to an XServe RAID (SFP), configured as two RAID 5 LUNs

An XServe Intel (Late 2008) attached to an XServe RAID (SFP), configured as two RAID 5 LUNs

An XServe Intel (Late 2008) attached to a Promise VTrak, configured as two RAID 6 LUNs (this one is mainly empty... it could be reconfigured with zero heartache)

We additionally have 2 XServe Intel (Late 2008) machines I intend to set up as Metadata and Open Directory master and slave, respectively.

Finally, we have access to Brocade fibre channel switches along with QLogic switches if we choose.


Obviously I would prefer to create a SAN using XSan that will allow for future data growth without adding a server to host the RAID hardware each time.


I am familiar with the fundamentals of XSan, but all the setup documentation and guides I've found online are for new installations, i.e., all the drives are empty. The units I'm dealing with, with the exception of the VTrak, have live data on them. Some LUNs are used in a backup capacity; some are truly shared volumes.


My question is this: is it possible to add LUNs to an XSan storage pool if they already have data on them? I would prefer to start with a clean setup, but that's not really possible... I could potentially start fresh with the VTrak, configuring it with the scripts in the Knowledge Base, but how would I go from there?


Any insight you can provide would be wonderful.


On a side note, why the obsession with RAID 5 + hot spare on these huge logical drives? I realize the automatic failover is convenient, but the usable storage from RAID 5 + hot spare is equal to that of RAID 6 with no spare - fourteen 1TB drives yield 12TB usable either way - yet RAID 6 is quite a bit safer, provided you have notification of a failed drive. Twice now I've had drive errors during rebuilds, one of which resulted in permanent data loss; that would not have happened with RAID 6. (Sorry, I digress...)

Posted on Aug 10, 2012 5:33 AM


7 replies
Question marked as Best reply

Aug 10, 2012 8:34 PM in response to Brian Dieckman

Considering these are all 2008 Xserves, and Mountain Lion only supports 2009 and newer Xserves, you already have zero upgrade path. I don't think it's a great idea to implement a new Xsan layout. My opinion of Xsan right now is that it's on generous life support from Apple.


create a SAN using XSan that will allow for future data growth without adding a server to host the RAID hardware each time


You don't need to add servers to support RAID hardware when expanding with any well-designed system, including Xsan. Even RHEL software RAID can expand an array on the fly by adding one or more drives, or you can create a new array, add it to the volume group, and expand the logical volumes - all with the file system remaining online while this occurs. ZFS-based solutions have similar capabilities.
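
For instance, a rough sketch of both approaches on RHEL - the device, volume group, and logical volume names here are hypothetical:

    # Option 1: grow an existing md array in place by adding a drive
    mdadm --add /dev/md0 /dev/sde
    mdadm --grow /dev/md0 --raid-devices=5    # reshape runs while md0 stays online

    # Option 2: build a new array, add it to the volume group, grow the LV
    pvcreate /dev/md1
    vgextend datavg /dev/md1
    lvextend -L +2T /dev/datavg/datalv
    resize2fs /dev/datavg/datalv              # ext4 grows while mounted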


You might be thinking of clustered file systems. At a certain point it makes sense to not just add storage to a brick, but to add a brick (which is server + storage) to the cluster, but that's about needing not just more storage, but more distribution or replication or striping.


is it possible to add LUNs to an XSan storage pool if they already have data on them?


Sure, it's possible, but any data on them will be nuked in the process. Like any SAN, Xsan clients have block-level access to storage. In the case of Xsan, the FC LUNs contain only raw data; they don't even have a file system on them. The "file system" is what the MDCs manage over the private Ethernet network, and that file system is StorNext.
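
For what it's worth, you can see this from any Xsan client with the bundled command-line tools - a sketch from memory of Xsan 2 (the path may differ on your version):

    # list the LUNs this client can see and any existing Xsan/StorNext labels
    sudo /Library/Filesystems/Xsan/bin/cvlabel -l
    # actually labeling a LUN for Xsan writes a new label to the raw device;
    # that step is what destroys whatever data was on it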


I think the purpose for this storage needs to be more clearly defined to see if SAN is truly applicable compared to NAS, let alone if you should implement XSan.


why the obsession with RAID 5 + hot spare


Haven't heard of this. RAID 5 today borders on malfeasance if the data is even remotely important. The probability of losing the entire array after a single disk failure in a RAID 5 configuration is actually really high: with today's huge drives, the odds of hitting an unrecoverable read error on one of the surviving disks during the rebuild are substantial, and RAID 5 has no second parity to fall back on.

Aug 11, 2012 4:12 AM in response to Christopher Murphy

You're right, I should have been more specific about my data needs.


The major hurdle we're facing at this point is that all four servers need concurrent high-speed access to all the data currently stored on the three XServe RAIDs (and the VTrak in the future).


Space requirements are increasing as customer data submissions grow, so that is a consideration, but not the primary one. We're not editing video or anything, but there are background tasks with considerable bandwidth requirements between servers. I'm sorry I can't be more specific than that. Gigabit Ethernet simply doesn't cut it anymore.


I'm not interested in "upgrading" to Mountain Lion. (Or Lion for that matter) I don't see it as an upgrade, for one, and Snow Leopard Server is working brilliantly in our case.


The RAID 5 obsession I speak of is in regard to Apple and their relationship with Promise; the configuration scripts for the VTrak, whether DAS or SAN, all specify RAID 5 + spare. I've also noticed this with OWC's products (the Qx2 and Rack Pro, for instance), which I was considering for DAS on a local workstation or two. It seems that the concept of RAID 6 has not really permeated the Apple world.

Aug 11, 2012 10:52 AM in response to Brian Dieckman

I'm not interested in "upgrading" to Mountain Lion. (Or Lion for that matter) I don't see it as an upgrade, for one, and Snow Leopard Server is working brilliantly in our case.


I think you're hosed because XSan is only included in Lion and Mountain Lion. It was a separate product for Snow Leopard, something like $2k. But it's not on the Apple Store anymore either so I think, poof, it's gone. Do you already have a copy of XSan? Maybe people are willing to sell their old Snow Leopard licenses... I don't know.


What were you planning on using as MDCs for XSan? You'll need two computers for MDCs, both need FC cards, all future LUNs must be FC, and you'll need an FC switch to tie MDCs, LUNs, and the four xserves together. That's a bit of bank for something that will not ever be upgraded beyond XSan 2, along with continued FC LUN purchases to grow storage.


Gigabit Ethernet simply doesn't cut it anymore.


The Xserves are, in short order, not going to cut it anymore either. You're talking about implementing this from day one on hardware that has no warranties and no more bug or security updates of any kind. Once it's set up, you can't add new Apple hardware to it, because the Xsan versions won't be compatible, and you can't downgrade Mac OS X on new hardware in order to stay on Snow Leopard. So your hardware replacement strategy is all old computers with no warranties of any kind.


It seems like you need an Xserve migration plan too. What is that going to look like? Almost certainly it would be 10GigE-based access to network storage. So why not build 10GigE-based network storage now, and then as Xserves die you can replace them with whatever you want: the OS doesn't matter, the OS version doesn't matter, even the hardware won't matter. Way easier than the corner you'd box yourself into with an FC SAN.


So what about the cost of 10GigE for the xserves? I think that G5 xserve either needs to be retired, or live with 2Gbps LACP bonded ethernet ports for its bandwidth.


What connection types do your existing LUNs have other than FC? Anything?


FWIW, bandwidth-wise over GigE, I can push 100+MB/s with async NFS to a Linux server. Those are "big" files, i.e., 5+MB. Smaller files, like documents, suffer a lot more, as it's not so much bandwidth you need as IOPS. But you said the servers need bandwidth, so maybe you're talking about bigger files, in which case you may need a better protocol as a stopgap. Over the exact same wire, using AFP I get at best 35MB/s regardless of whether the server is running Linux or Mac OS X.
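
If you want to try that, the async export is a one-liner on the Linux side - the path and network below are hypothetical:

    # /etc/exports - async trades some crash safety for throughput
    /export  192.168.1.0/24(rw,async,no_subtree_check)

    # reload the export table after editing
    exportfs -ra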


It seems that the concept of RAID 6 has not really permeated the Apple world.


That's because serious storage isn't happening in the Apple world. Much more interesting and innovative things are happening on other platforms, turning storage into a commodity.

Aug 12, 2012 3:55 PM in response to Christopher Murphy

...But it's not on the Apple Store anymore either so I think, poof, it's gone. Do you already have a copy of XSan?...


I just purchased Snow Leopard Server and XSan from an online dealer. $369 for SLS and $549 for XSan. You can still get it, just not from Apple.


...What were you planning on using as MDCs for XSan?...


If you had read my post you would see that I have 4 XServes ready to go. I also have three additional Intel XServes serving other duties that are available. The G5 will be retired as soon as I can get its duties migrated to one of the other machines.


...why not build 10GigE-based network storage now...


Trying to do this on the cheap. If I were starting from scratch I wouldn't be using Apple (or Apple-spec) hardware. For about $3,000 I can get all the XServe RAIDs and XServes plus the VTrak into a 24+TB SAN. I can't touch an enterprise-class NAS for twice that. For another $2,000 I can upgrade the XServe RAIDs with SATA/ATA adapters and get the SAN up to 42TB, but again, that's not really my goal.

Aug 13, 2012 12:21 AM in response to Brian Dieckman

If you had read my post you would see that I have 4 XServes ready to go.


I did read it; I just spaced out the two Xserves slated as MDCs and counted only four Xserves in total.


Anyway, back to your original question: Xsan does not use HFSJ/HFSX at all; it's its own file system. So you will need to start with at least one clean LUN - there is no way around it - and it's not ideal for that first LUN to be used for both metadata/journal and data.


Xserve RAIDs are slow, like 180MB/s in RAID 5, while 4Gb/s FC is 500MB/s. A major plus of Xsan is striping LUNs within a storage pool, which you can only do if they're added at the same time. (And obviously the LUNs must be the same size, or you will waste space.) So ideally you'd have three clean LUNs to start out with.
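
To illustrate the layout, this is roughly how the underlying StorNext-style volume config expresses the metadata/data split - a sketch from memory, with all names, sizes, and disk labels hypothetical:

    # dedicated metadata/journal stripe group on its own clean LUN
    StripeGroup MetadataAndJournal
    {
        Status     Up
        MetaData   Yes
        Journal    Yes
        Exclusive  Yes           # no user data lands on this LUN
        Node       CvfsDisk0 0
    }

    # three equal-sized clean LUNs striped together for user data
    StripeGroup Data
    {
        Status         Up
        StripeBreadth  256K
        Node           CvfsDisk1 0
        Node           CvfsDisk2 1
        Node           CvfsDisk3 2
    }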


While it's straightforward to add storage pools, it's not as easy to grow a storage pool by adding LUNs, as they all need to be the same size.


The list of XSan restrictions for your situation is growing. Why did you disqualify metaSAN in your research?


Trying to do this on the cheap. If I were starting from scratch I wouldn't be using Apple (or Apple-spec) hardware. For about $3,000 I can get all the XServe RAIDs and XServes plus the VTrak into a 24+TB SAN. I can't touch an enterprise-class NAS for twice that.


You said future growth was important. I see this as expensive, complicated, non-transitional, and boxing yourself in. How are you going to go non-Apple with replacement hardware, unless you replace everything all at once on the same weekend and buy 4Gb/s FC cards for the new servers plus a StorNext license?

Aug 13, 2012 4:51 AM in response to Christopher Murphy

metaSAN looks interesting. I'll have to do some research on that one.


This whole project grew out of the previous admin's notes and purchases over a couple of years. He recently left the company, so I'm picking up the pieces. If XSan isn't the way to go, then fine. It seems overly complicated to me too.


All the servers and storage devices are FC: 2Gb/s in the case of the XServe RAIDs and 4Gb/s on the VTrak. The XServe RAIDs have decent speeds; I get a little over 200MB/s regularly with big files. Their biggest drawback is, of course, the ATA interface to the drives. They're just old, that's all. In our environment we are used to supporting older equipment - I have workstations running Tiger!


You've given me some food for thought. Thanks, Christopher.

Aug 13, 2012 10:31 AM in response to Brian Dieckman

If you aren't going to stripe the LUNs to get better performance than 2Gb/s, you need to evaluate whether you need MPIO or whether LACP is adequate, because you already have 2Gb/s in the form of the two Ethernet ports on each Xserve.


E.g., if you have a single huge file to push over the network, MPIO will split that stream between the two Ethernet ports. LACP will saturate one port, leaving the other unused but available for other data. So if you have a lot of files being pushed simultaneously, LACP will help keep both ports busy and may be adequate. If you have a fairly singular-sourced, steady stream of data, you need MPIO.
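
On the OS X side, bonding the two built-in ports takes a few commands, provided the switch ports are configured for LACP - a sketch (interface names may differ on your machines):

    # create an LACP bond from the two built-in Ethernet ports
    sudo ifconfig bond0 create
    sudo ifconfig bond0 bonddev en0
    sudo ifconfig bond0 bonddev en1
    ifconfig bond0    # verify both members joined the bond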


Next, I'd check whether the Xserve built-in Ethernet ports can do MPIO with the Small Tree iSCSI initiator. I know the Xserve Ethernet ports can do LACP, but I don't know if they can do MPIO. If so, you get 2Gbps iSCSI over Ethernet and you don't have to build a full SAN fabric - just a baby one, so that you can attach the existing FC LUNs to a Linux box and use LVM to aggregate all of that storage. Plus, LVM volume groups accept arbitrarily sized block-level devices, so new storage doesn't have to be FC; it can be anything. Then you slice that storage up into logical volumes, which can be exported to an Xserve via iSCSI and formatted HFSJ. This is not a shared-disk SAN like Xsan; each Xserve gets dedicated storage from an aggregated pool. New servers can of course be configured with 10GigE. So this is a transitional solution that would serve existing Xserves and new hardware, existing storage and new storage.
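
The Linux-side plumbing might look like this with scsi-target-utils on RHEL 6 - device names, volume sizes, and the IQN are all hypothetical:

    # aggregate the FC LUNs into one volume group
    pvcreate /dev/sdb /dev/sdc /dev/sdd
    vgcreate sanvg /dev/sdb /dev/sdc /dev/sdd

    # carve out a dedicated volume for one Xserve
    lvcreate -L 4T -n xserve1 sanvg

    # export it as an iSCSI LUN with tgtd
    tgtadm --lld iscsi --mode target --op new --tid 1 \
           --targetname iqn.2012-08.com.example:xserve1
    tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
           -b /dev/sanvg/xserve1
    tgtadm --lld iscsi --mode target --op bind --tid 1 -I ALL

The Xserve then logs in with its initiator and formats that LUN as HFSJ like any local disk.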


If LACP is adequate, then you have other options. You can format each LUN as XFS and then stripe across them, even if they're arbitrarily sized, using GlusterFS, and serve the storage via async NFS. Again, you can add arbitrary storage; it does not need to be FC. And you can obviously serve NFS over 10GigE. With GlusterFS you can also incorporate replication to ease your very understandable concern about RAID 5.
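
Roughly like this with GlusterFS 3.x - the server name and brick paths are hypothetical:

    # format each LUN as XFS and mount it as a brick
    mkfs.xfs /dev/sdb1 && mount /dev/sdb1 /bricks/b1

    # stripe across three bricks; brick sizes need not match
    gluster volume create sanvol stripe 3 \
        server1:/bricks/b1 server1:/bricks/b2 server1:/bricks/b3
    gluster volume start sanvol

    # or "replica 2" instead of "stripe 3" for the RAID 5 insurance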


The software is free. It's not expensive to do enterprise-class NAS. GlusterFS is what Red Hat uses for the Red Hat Storage Appliance on RHEL 6. And LVM has been around for almost two decades for aggregating storage and exposing it as a block-level logical device; the current, roughly six-year-old version (LVM2) hilariously supports a maximum of 4.2 billion physical volumes (LUNs, arrays, individual disks, whatever), as well as snapshots.
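
The snapshots are one command, e.g. (names hypothetical):

    # copy-on-write snapshot of a logical volume, handy before risky changes
    lvcreate -s -L 100G -n nightly_snap /dev/sanvg/xserve1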


I don't think the Xserve RAIDs can be put into a true JBOD mode. But if I'm wrong, then I'd consider an OpenIndiana or FreeBSD 9 approach, because you can have ZFS and RAIDZ2 (double parity) and likewise reduce the anxiety level. Just let ZFS exclusively manage the disks. And serve the storage either with iSCSI (yes, you can format ZFS logical volumes as HFSJ or NTFS or whatever you want), or NFS for speed, or CIFS/AFP if you prefer.
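
The ZFS side would look something like this on FreeBSD 9 - the disk and pool names are hypothetical:

    # double-parity pool; ZFS owns the raw disks
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5

    # a file system served over NFS
    zfs create tank/shared
    zfs set sharenfs=on tank/shared

    # or a block device (zvol) to export via iSCSI and format HFSJ
    zfs create -V 2T tank/xserve1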


Xsan honestly gives me a headache in comparison. But it does work, and what it does, it does well. But the admin guide reads like the rules of hockey: a surprising number of fundamental caveats reside in exceptions to the rules.
