This discussion is archived

Eliminating points of failure...

1222 Views 11 Replies Latest reply: Feb 20, 2012 9:23 AM by Geff Kunert RSS
nei0angei0 Level 1 Level 1 (5 points)
Dec 20, 2011 4:40 PM

Hello Everyone,

I am looking into a SAN system for a client and wanted to get some feedback regarding the current status of Xsan.


1. First and foremost, Lion Server and Xsan 2: is this combination production ready? Should I present such a system to a client, or should I consider an alternative software or hardware solution?


If we are production ready, then would the following be possible?


I would like to eliminate all single points of failure. In a SAN system I can identify several key components, and that is where my confusion lies.


1. The Fibre Channel switch creates the Fibre Channel fabric - CAN use a secondary switch, which would connect to the second port on my Fibre Channel cards.

2. The metadata controller - CAN use a standby secondary backup metadata controller (or, I imagine, as many as I want). How many can I use?

3. Client machines - these will actually be running file sharing services and allowing my users to connect. CAN use two more servers connected to the Fibre Channel switch to serve AFP and SMB users. I could load balance by splitting up the users, or make only one available at a time.

4. The private switch - CAN use another private switch with a THIRD Ethernet card, but Mac minis cannot take a third 1Gb card, so what do I do? Mac Pros only?


Now confusion sets in...

5. Metadata RAID array - How can I make this redundant? Obviously it would be RAID 1 if a drive failed, but what if the RAID unit physically fails, like blows up? Can I have a second physical RAID unit acting as a backup metadata RAID array? Also, in the setup scripts the Promise RAIDs are built with the metadata RAID LUN and the storage RAID LUN in the same box. Do people use a dedicated RAID unit just for the metadata LUN, or are they combined into the same box in an ideal setup?

6. That leads me to the second part, the storage pool. Can I spread the data redundantly across two RAID units but keep it one volume for my client servers, so that if the same thing happens and one RAID unit blows up with my whole storage pool, the second one already has the data redundantly written to it?


I am still reading up and would appreciate anyone's help in this matter. Thank you,
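
For reference on point 2: Xsan and StorNext keep the list of candidate metadata controllers in a plain-text fsnameservers file that every member of the SAN reads over the private metadata network. A minimal sketch, with placeholder addresses (one metadata-network IP per line; the machines listed are the ones eligible to host or fail over the metadata service):

```
# /Library/Preferences/Xsan/fsnameservers  (sketch; addresses are placeholders)
# Each line is the metadata-network IP of a machine allowed to act as a
# metadata controller. All SAN members should share the same file.
10.0.1.1
10.0.1.2
```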

  • Christopher Murphy Level 2 Level 2 (470 points)
    Dec 27, 2011 4:34 PM (in response to nei0angei0)

    I can't tell from the documentation, so I think what you want to do is not possible. I think the reply you marked correct is right: Xsan will stripe data across the LUNs in a storage pool. But that does seem risky.
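
    To make the stripe-group idea concrete: in StorNext-style volume configs (which Xsan generates behind the scenes), metadata and data live in separate stripe groups, which is how a metadata LUN can be kept on its own RAID unit. An illustrative fragment, with made-up LUN labels:

    ```
    # StorNext-style volume .cfg fragment (illustrative; LUN labels are made up).
    # "Exclusive Yes" keeps user data off the metadata stripe group.
    [StripeGroup MetadataAndJournal]
    Status Up
    MetaData Yes
    Journal Yes
    Exclusive Yes
    Node MetaLUN 0

    [StripeGroup Data]
    Status Up
    MetaData No
    Journal No
    Node DataLUN1 0
    Node DataLUN2 1
    ```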


    I think Xsan is a constrained implementation of Quantum's StorNext, which should be able to do what you want. The examples I've seen of the full StorNext clustered file system include the ability to put one of their tape libraries into the storage pool, so you automatically get (near) live, or delayed (configurable?), backups to tape. It can even dedup off of the faster storage if those files aren't being used: they stay in the tape library as archive, while the fast storage is kept clear of files that haven't been accessed in months or a year or whatever. But if a user tries to retrieve them, my understanding is the system will pull them off tape automagically - because that's how the system was designed. Or I have a very vivid imagination...


    What you're talking about might be better suited to going directly to Quantum. Or possibly look at a RHEL 6 based high-availability server solution with GFS2 (shared disk) or Lustre (distributed storage). Then you're not stuck with the Fibre Channel requirement that Xsan has. You could do mixed fibre for the machines that really need it, or 10Gb iSCSI if that's less expensive (and the ~100m distance is workable), or 1Gb iSCSI with conventional cards and cables for everyone else. I'm pretty sure there is Mac support for Lustre; not sure about GFS2, that might be Linux only. But Blue Whale exists for all three platforms.


    Another option is to export the SAN storage from the Linux server as NFSv4 shares; if your Mac clients are on Lion, they now have NFSv4 support.
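
    A sketch of what that export could look like (the path, subnet, and hostname below are placeholders, not anything from this thread):

    ```
    # /etc/exports on the Linux server (path and subnet are placeholders)
    /export/san  192.168.1.0/24(rw,sync,fsid=0,no_subtree_check)

    # On a Lion client, the NFSv4 mount would look something like:
    #   sudo mount -t nfs -o vers=4 server.example.com:/export/san /Volumes/san
    ```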

  • Blaidd Drwg Level 1 Level 1 (70 points)
    Dec 27, 2011 10:46 PM (in response to nei0angei0)

    nei0angei0 wrote:


    And on top of that, NO SAN mirroring, therefore if a Backplane fails on a RAID/LUN you are so SOL!!!


    Vicom Vmirror?

  • Christopher Murphy Level 2 Level 2 (470 points)
    Dec 27, 2011 10:55 PM (in response to Christopher Murphy)

    FWIW, CentOS is a free variant of RHEL that is nearly binary-identical, as they try to stay as close to the source as possible and spend a lot of time on that aspect. So if you need to set something up for R&D, CentOS is the way to go. I have a number of colleagues using CentOS as the base for their VoIP servers; they stay up and running for years, and they just forget about them.

  • Geff Kunert Level 1 Level 1 (0 points)
    Feb 20, 2012 6:21 AM (in response to nei0angei0)

    I agree with Chris and yes, you would reduce your costs quite a lot with iSCSI.


    And take a look at this site


    They have some cheap storage solutions, and they sell their software for you to build your own protected storage.


    I don't know what your final goal is here, but I actually wouldn't use RAID 5 if you're going to replicate the LUNs on the same site (as opposed to external replication for disaster recovery). I would go for RAID 0 or simply a pool.

  • Geff Kunert Level 1 Level 1 (0 points)
    Feb 20, 2012 6:36 AM (in response to Geff Kunert)

    PS: forget about the lime-tech site.


    Look for

  • Christopher Murphy Level 2 Level 2 (470 points)
    Feb 20, 2012 7:40 AM (in response to Geff Kunert)

    Not really a big fan of Drobo; I've just seen too many people have problems. While their tech support is good, the functionality of their technology is sufficiently obscured to make me question it. I'd look at FreeNAS 8 in this category. For more significant requirements, I'd look at GlusterFS.

  • Geff Kunert Level 1 Level 1 (0 points)
    Feb 20, 2012 9:23 AM (in response to nei0angei0)

    I have deployed Drobo before. Setting aside its obscured technology, it works well. I never had problems. I used a Drobo Pro direct-attached to a Windows server.


    And I had a FreeNAS 7 server at home. I must say I didn't trust its ability to handle drive failures, or to grow a storage pool the way the Drobo can (I never managed to do that; I always had to create a separate volume, which is not really what I wanted). But, that aside, it was a really inexpensive iSCSI storage.


    I have heard about GlusterFS and ZFS as well. ZFS seems good for my storage needs, but I have to study further how to properly deploy it to handle hard drive failures. I use it for video editing and post-production work, and I need iSCSI solutions that can be accessed through multiple hosts.
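
    On the ZFS point: protection against a whole-drive failure comes from building the pool out of mirror (or raidz) vdevs rather than single disks. A sketch with placeholder device names, assuming ZFS is installed:

    ```
    # Create a pool whose single vdev is a two-way mirror; either disk can
    # fail without data loss (device names are placeholders).
    zpool create tank mirror /dev/disk1 /dev/disk2

    # Inspect pool health and resilvering state:
    zpool status tank

    # Swap a failed member for a new disk:
    zpool replace tank /dev/disk1 /dev/disk3
    ```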


    If anyone has any hints, feel free to drop a message!

