Dec 20, 2011 4:54 PM (in response to nei0angei0)
Can I answer my own question?? Maybe!
It seems that I can configure the RAID units as I see fit; then when I go to create the SAN volume, it will allow me to combine the 2 separate RAID units together to make 1 metadata volume. The same goes for the file pool: I can combine 2 more RAID units into 1 Xsan volume, which I can then use to host files for users.
Am I right or am I right?
Dec 20, 2011 5:22 PM (in response to nei0angei0)
Actually, it looks like I was wrong: when I combine multiple RAID 5 arrays together to make a single SAN volume, it stripes the data across them with no redundancy. So while a DRIVE may fail and be replaced in the RAID 5 array, if a complete LUN goes offline I assume I lose all my data... is that correct? Is there any way to create a RAID 5 SAN, so to speak, or a mirrored SAN? Or do I need to sync my data hourly across two separate Xsan volumes made up of RAID 5 LUNs...
Dec 22, 2011 12:42 PM (in response to nei0angei0)
I love this little conversation I am having with myself. My end goal is to create an AFP/SMB high availability cluster or system.
Basically what I have discovered is the following:
SAN - The SAN system I envisioned is going to cost well into the $35,000-and-above range. The problem is the manuals give you examples of systems that still have points of failure (you need 2 metadata Ethernet switches and 2 metadata Fibre Channel controllers). If you want to build a true high availability system it needs a lot of hardware, which makes me wonder: why even TEASE people by including Xsan, knowing it's a whole bag of hurt if set up incorrectly? And on top of that, there is NO SAN mirroring, so if a backplane fails on a RAID/LUN you are so SOL!!!
Poor Man's System - On the other side of the spectrum, I envision building a system containing TWO Mac Pro servers handling AFP, but not simultaneously. Each would connect to a RAID unit using Fibre Channel, but the storage would only ever be mounted on ONE of the systems, never on both simultaneously. Wow, seems like a great solution; I could even drop $6,000 for a Fibre Channel switch to make it a bit cleaner. BUT wait... what happened to IP failover in Lion? That's right, it has been removed, silently assassinated in the night without a sound or cry.
Failover replacement software - My only option was to create a process made up of 3 scripts that fully simulate what heartbeatd and failoverd provided on previous systems. They check for constant availability and monitor services. When issues are detected they try to repair the issue, then fail over if unsuccessful. Right now I am using this with two servers that do not share the same file pool. They sync nightly, so if a failover occurs the data is from the night before, until we can bring the main server up and sync it.
I would like to incorporate my Poor Man's System into my custom failover software so that the data is current up to the moment before the failover.
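For anyone attempting the same heartbeatd/failoverd replacement, the core monitor loop can be sketched in a few lines of shell. Everything here is illustrative: the addresses, AFP port, threshold and poll interval are assumptions, and a real standby's promote() would actually claim the service IP and start AFP rather than just print:

```shell
#!/bin/sh
# Sketch of a heartbeat/failover loop. PRIMARY_IP, AFP_PORT,
# SERVICE_IP, FAIL_LIMIT and POLL are illustrative defaults,
# overridable via the environment.

check_afp() {
    # nc -z exits 0 if the AFP port accepts a TCP connection
    nc -z -w 2 "${PRIMARY_IP:-10.0.0.10}" "${AFP_PORT:-548}" >/dev/null 2>&1
}

promote() {
    # A real standby would claim the service IP and start AFP here,
    # e.g. via an ifconfig alias and starting the AFP service.
    echo "FAILOVER: claiming ${SERVICE_IP:-10.0.0.100}"
}

monitor() {
    fails=0
    while :; do
        if check_afp; then
            fails=0                 # healthy: reset the counter
        else
            fails=$((fails + 1))    # count consecutive failures only
            if [ "$fails" -ge "${FAIL_LIMIT:-3}" ]; then
                promote
                return
            fi
        fi
        sleep "${POLL:-5}"
    done
}
```

Requiring several consecutive failed checks before promoting avoids flapping on a single dropped connection; the failure repair step described above would slot in before promote().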
Dec 27, 2011 4:34 PM (in response to nei0angei0)
I can't tell from the documentation, so I think what you want to do is not possible. I think your post marked correct is, in fact, correct: it will stripe data across LUNs in a storage pool. But that does seem risky.
I think Xsan is a constrained implementation of Quantum's StorNext, which should be able to do what you want. The examples I've seen of the full StorNext clustered file system show an ability to put one of their tape libraries into the storage pool, and you automatically get (near) live, or delayed (configurable?), backups to tape. It can even dedup off of faster storage if those files aren't being used. They stay in the tape library as archive, while the fast storage is kept clear of files that haven't been accessed in months or a year or whatever. But if a user tries to retrieve them, my understanding is the system will pull them off tape automagically - because that's how the system was designed. Or I have a very vivid imagination...
What you're talking about might be better suited to going directly to Quantum. Or possibly look at a RHEL 6-based high availability server solution with GFS2 (shared disk) or Lustre (distributed storage). Then you're not stuck with the Fibre Channel requirement that Xsan has. You could do mixed fibre for the machines that really need it, or 10Gb iSCSI if that's less expensive (and the ~100m distance is workable), or 1Gb iSCSI with conventional cards and cables for everyone else. I'm pretty sure there is Mac support for Lustre; not sure about GFS2, that might be Linux only. But Blue Whale exists for all three platforms.
Another option is to export the SAN storage from the Linux server as NFSv4 shares; if your Mac clients are on Lion, they now have NFSv4 support.
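As a sketch of that last option (the path and subnet below are assumptions), on the Linux server the export might look like:

```
# /etc/exports on the Linux server (hypothetical path and client subnet)
/srv/san  192.168.1.0/24(rw,sync,fsid=0,no_subtree_check)
```

Then reload the export table with `exportfs -ra`, and on a Lion client mount with something like `sudo mount -t nfs -o vers=4 server:/ /Volumes/san` - with `fsid=0`, the export becomes the NFSv4 pseudo-root, so the client mounts relative to `/` rather than the full server-side path.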
Dec 27, 2011 10:55 PM (in response to Christopher Murphy)
FWIW, CentOS is a free variant of RHEL that is nearly binary-identical, as they try to stay as close to the source as possible and spend a lot of time on that aspect. So if you need to set something up for R&D, CentOS is the way to go. And I have a number of colleagues using CentOS as the base for their VoIP servers. They stay up and running for years - they just forget about them.
Feb 1, 2012 9:48 PM (in response to Christopher Murphy)
Thanks man, appreciate everyone's input. Will look into CentOS. Looks like I'm going to test a system with two RAIDs and that hardware replicator, or get the pro replicator software and write to two servers or two RAID units.
Feb 20, 2012 6:21 AM (in response to nei0angei0)
I agree with Chris and yes, you would reduce your costs quite a lot with iSCSI.
And take a look at this site http://lime-technology.com/
They have some cheap storage solutions, and they sell their software so you can build your own protected storage.
I don't know what your final goal is here, but I actually wouldn't use RAID 5 if you're going to replicate the LUNs on the same site (as opposed to external replication for disaster prevention). I would go for RAID 0 or simply a pool.
Feb 20, 2012 7:40 AM (in response to Geff Kunert)
Not really a big fan of Drobo; I've just seen too many people have problems. While their tech support is good, the functionality of their technology is sufficiently obscured to make me question it. I'd look at FreeNAS 8 in this category. For more significant requirements, I'd look at GlusterFS.
Feb 20, 2012 9:23 AM (in response to nei0angei0)
I have deployed Drobo before. Setting aside its obscured technology, it works well. Never had problems. Used the Drobo Pro direct-attached to a Windows server.
And I had a FreeNAS 7 server at home. I must say that I didn't trust its ability to handle drive failures and to grow a pool of storage as much as the Drobo's (I never managed to do that; I always had to create a separate volume... not quite what I wanted). But, that aside, it was a really inexpensive iSCSI storage.
I have heard about GlusterFS and ZFS as well. ZFS seems good for my storage needs, but I have to study further how to properly deploy it to handle hard drive failures. I use it for video editing and post-production related stuff and need solutions with iSCSI that can be accessible through multiple hosts.
If anyone has any hints here too, feel free to drop a message!