Migrating from DAS to Xsan
Hello!
I am looking for some advice regarding migration from Direct Attached Storage to an Xsan setup.
Our current Infrastructure:
An Xserve (G5) attached to an Xserve RAID configured as two RAID 5 LUNs
An Xserve Intel (Late 2008) attached to an Xserve RAID (SFP) configured as two RAID 5 LUNs
An Xserve Intel (Late 2008) attached to an Xserve RAID (SFP) configured as two RAID 5 LUNs
An Xserve Intel (Late 2008) attached to a Promise VTrak configured as two RAID 6 LUNs (this one is mostly empty... it could be reconfigured with zero heartache)
We additionally have two Xserve Intel (Late 2008) machines I intend to set up as metadata and Open Directory master and slave, respectively.
Finally, we have access to Brocade Fibre Channel switches, along with QLogic switches if we choose.
Ideally, I would like to create a SAN using Xsan that allows for future data growth without having to add a server to host the RAID hardware each time.
I am familiar with the fundamentals of Xsan, but all of the setup documentation and guides I've found online cover new installations, i.e., all the drives are empty. The units I'm dealing with, with the exception of the VTrak, have live data on them. Some LUNs are used in a backup capacity; some are truly shared volumes.
My question is this: is it possible to add LUNs to an Xsan storage pool if they already have data on them? I would prefer to start with a clean setup, but that's not really possible... I could potentially start fresh with the VTrak, configuring it with the scripts in the Knowledge Base, but how would I go from there?
Any insight you can provide would be wonderful.
On a side note, why the obsession with RAID 5 + hot spare for these huge logical drives? I realize the automatic failover is convenient, but for the same number of drives, RAID 5 + hot spare provides the same usable storage as RAID 6 (no spare), and RAID 6 is quite a bit safer, provided you have notification of a failed drive. Twice now I've had drive errors during rebuilds, one of which resulted in permanent data loss; that would not have happened with RAID 6. (Sorry, I digress...)
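To make the capacity comparison concrete, here's a quick back-of-the-envelope sketch. The drive count and size are illustrative only (not specific to any of the arrays above), and the helper function is my own, not from any vendor tool:

```python
def usable_tb(total_drives, drive_tb, parity_drives, hot_spares=0):
    """Usable capacity of an array: total drives minus parity drives
    minus dedicated hot spares, times per-drive capacity."""
    return (total_drives - parity_drives - hot_spares) * drive_tb

# Hypothetical 14-bay array with 1 TB drives:
raid5_with_spare = usable_tb(14, 1.0, parity_drives=1, hot_spares=1)  # 12.0 TB
raid6_no_spare   = usable_tb(14, 1.0, parity_drives=2)                # 12.0 TB

print(raid5_with_spare, raid6_no_spare)
```

Same usable capacity either way, but the RAID 6 set can lose any two drives at once, whereas RAID 5 + hot spare is vulnerable to a second failure during the rebuild window.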