RX178

Q: Max 4 LUNs per storage pool?

Hi,

I saw in the Xsan 2 Admin Guide v2.2 ("Xsan Capacities", p. 20) that the maximum "Number of LUNs in a storage pool" is 32, but I have never been able to reach that maximum. No matter how many LUNs I drag onto a single affinity tag during "Configure Volume Affinities", the volume always ends up split into a number of storage pools with at most 4 LUNs each. The problem is that the maximum bandwidth is then capped at the speed of 4 LUNs, which *****. Compared to StorNext, where I can really add more storage RAIDs and aggregate both capacity and bandwidth, my Xsan storage, despite a stunning 13 Promise RAIDs in a single rack (it looks really awesome), performs like just 2 RAIDs. In my tests I can only play about 30 concurrent streams of ProRes HQ before it starts to drop frames.

The admin guide says that a storage pool can support 32 LUNs, and that "Xsan distributes file data in parallel across the LUNs in a storage pool using a RAID 0 (striping) scheme" (p. 17). Either I'm misunderstanding what that means or I've done something wrong in the setup.
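
Rough numbers, just to show my thinking (the ~200 MB/s per LUN figure is my own guess for these Promise LUNs): a 4-LUN stripe tops out around 4 x 200 = 800 MB/s, while 30 streams of ProRes HQ at roughly 27.5 MB/s each add up to about 825 MB/s, which is right where the frames start dropping. If all 13 RAIDs' LUNs were striped together I would expect several GB/s in aggregate instead.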

Can any guru help me?

Xserve 2009, Apple Promise 16TB, MacPro Early 2008, Mac OS X (10.5.8)

Posted on Mar 4, 2010 7:32 PM

  • by Strontium90,

    Strontium90 Mar 6, 2010 3:50 AM in response to RX178
    Level 5 (4,087 points)
    Servers Enterprise
    Silly question but gut response... When setting up a new volume with the setup wizard and you are prompted to choose the volume type, have you tried Custom? In a quick test, it appears you can use nearly any number of LUNs under Custom.

    Hope this helps
  • by jmyres,

    jmyres Mar 6, 2010 9:31 AM in response to RX178
    Level 1 (80 points)
    You're right... each individual Xsan client will only write to 4 LUNs at a time, which basically means your maximum per-client bandwidth will equal the bandwidth of two RAID shelves. But even if you could write to more LUNs at once, the fibre channel card in your client would become the bottleneck, so you wouldn't see an improvement in speed anyway.
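
    Rough numbers, assuming a 4Gb fibre channel port can move about 400 MB/s of real data: even a dual-port card tops out around 800 MB/s, which is roughly what a 4-LUN stripe can already deliver, so striping one client's traffic across more LUNs wouldn't buy you much.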

    "I tested I can only play concurrently 30 streams of ProRes HQ and it started to drop frame."

    So, you're saying you're upset at this result?

    JM
  • by RX178,

    RX178 Mar 8, 2010 9:09 PM in response to Strontium90
    Level 1 (0 points)
    Hi Strontium90,

    Yes, I did try Custom. You can specify ANY NUMBER, but in the end the system will HELP you by breaking them into storage pools of at most 4 LUNs each. Say you select 10 LUNs: it breaks them into 3 storage pools of 4/4/2 LUNs, all under the same affinity tag, and the worst performance comes when allocation loops around to the 3rd storage pool (I selected round-robin), where I only get 2-LUN performance.
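
    To illustrate (assuming round-robin simply hands each new file to the next pool in turn): file 1 lands on pool A (4 LUNs), file 2 on pool B (4 LUNs), file 3 on pool C (2 LUNs), file 4 back on pool A, and so on, so roughly every third file is stuck with 2-LUN performance.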

    Message was edited by: RX178
  • by RX178,

    RX178 Mar 8, 2010 9:20 PM in response to jmyres
    Level 1 (0 points)
    Hi jmyres,

    Thanks for your confirmation. I was hoping someone would tell me I was wrong. For the record, I have 40 Mac Pros, 13 Apple Promise RAIDs, and a few Xserves in this facility. Imagine each user editing 2 concurrent streams... No, I'm not upset with the result, I'm DEPRESSED by the result. And why would I need 13 RAIDs when they can only give me the performance of 2?

    Do appreciate your reply.

    RX178
  • by jmyres,

    jmyres Mar 8, 2010 11:11 PM in response to RX178
    Level 1 (80 points)
    You may have 13 arrays, but you also have 40 clients. If all of your clients were reading from your Xsan at the same time, the maximum throughput you would see per client would be about 200 MB/s. The extra arrays are what allow all of your clients to reach their maximum bandwidth at the same time.

    So, if you had 40 video editors all working in uncompressed HD (~170 MB/s) you'd see the limitations of your Xsan pretty quickly. But if you're working in ProRes HQ (27.5 MB/s), each editor would easily be able to work with several streams at once.
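
    Rough numbers from those figures: 200 MB/s per client divided by ~170 MB/s is barely one stream of uncompressed HD, while 200 divided by 27.5 is about 7 streams of ProRes HQ per client.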

    The beauty of an Xsan isn't really in its raw speed, though; it's the collaborative environment it provides and the sheer scalability. There are a lot of "shared" storage solutions out there, but very few of them offer concurrent, file-level access to 2 petabytes of storage for up to 64 clients at 800 MB/s each. The speed is just one aspect of it.

    JM
  • by RX178,

    RX178 Mar 8, 2010 11:21 PM in response to RX178
    Level 1 (0 points)
    Let me expand a bit with an example:

    Yes, for a single client the bottleneck is the FC card, but I wouldn't go for a SAN if I only had one client, right? An important reason to go for a SAN is that it enables collaborative work among a number of clients, and since in round-robin mode data is written to the different storage pools in a loop, the real challenge is multiple clients reading the same file.

    Imagine this is a teaching facility with 40 students. When the teacher creates teaching material for 40 students to read/edit, he has to create two affinity tags for the two groups of students attending the same class, and if each student needs to read 2 streams and write 1 stream, the teacher has to create 4 affinity tags and copy the material 4 times. There is no collaboration between groups because the files actually live in different locations, so you end up doing a lot of file copying, which is not really much of a collaborative work environment.
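
    Rough numbers (using ~27.5 MB/s per ProRes HQ stream and guessing ~200 MB/s per LUN): 40 students x 3 streams is about 120 streams, or roughly 3.3 GB/s in aggregate, while a single 4-LUN pool delivers maybe 800 MB/s, which is exactly why the same material ends up copied across 4 affinity tags.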
  • by Janemann,

    Janemann Mar 12, 2010 5:19 PM in response to RX178
    Level 1 (5 points)
    Even in Xsan 2.2 you are able to use more than 4 LUNs in one pool, but you have to tweak your .cfg file.

    Begin with just one metadata controller:

    1. Label the LUNs with unique names (lun1-x).
    2. Set up a volume with a metadata pool and one data pool containing just 4 of your LUNs.
    3. Stop the volume.
    4. In Terminal, use pico to edit the volume's .cfg file in /Library/Filesystems/Xsan/config, adding the remaining LUNs to the LUN list, and set "ForceStripeAlignment" to "No". You will find it (see the sketch after this list).
    5. Edit aux-data.plist in /Library/Filesystems/Xsan/config and change the value of "OptimalLunCount" (or similarly named) to your desired count.
    6. Run cvupdatefs on your volume via Terminal (it takes a few minutes); you will be told if you made a mistake editing the .cfg file, and if so, go back to step 4.
    7. Start the volume.
    8. Restart all failover controllers and clients.
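
    Roughly what my data stripe group ends up looking like in the volume's .cfg after step 4 (this is a sketch from memory, so the exact directive names and values may differ on your Xsan version; the "lun" labels are just the example names from step 1):

        # global setting near the top of the file
        ForceStripeAlignment No

        [StripeGroup DataPool1]
        Status Up
        MetaData No
        Journal No
        Read Enabled
        Write Enabled
        Node lun1 0
        Node lun2 1
        Node lun3 2
        Node lun4 3
        # extra LUNs appended here, index keeps counting up
        Node lun5 4
        Node lun6 5
        Node lun7 6
        Node lun8 7

    And step 6 is something like this (the volume name is just a placeholder):

        sudo /Library/Filesystems/Xsan/bin/cvupdatefs MyVolume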

    Hope that helps...

    I am currently testing with 18 data LUNs. If anyone has any experience with tweaking the inode and cache settings, please let me know!

    Greetings Jan
  • by jmyres,

    jmyres Mar 14, 2010 10:36 PM in response to Janemann
    Level 1 (80 points)
    A couple of thoughts...

    1) Awesome.

    2) What you describe here is not common knowledge. While it seems you've found a way to address more than 4 LUNs at once, I doubt many admins would be willing to try something like this on a production volume, with real data that has real value. The command line is great, but until there is at least some reference to this in Xsan Admin, I'm going to stick with the idea that Xsan only supports 4 LUNs at a time.

    3) We'll be trying this on our Xsan as soon as possible

    JM
  • by RobertKite,

    RobertKite Apr 7, 2010 1:56 PM in response to RX178
    Level 1 (120 points)
    Strontium90 is correct.