
iSCSI, AFP, SMB, and NFS performance with Mac OS X 10.5.5 clients

Been doing some performance testing with various protocols related to shared storage...

Client: iMac 24 (Intel), Mac OS X 10.5.5 w/globalSAN iSCSI Initiator version 3.3.0.43
NAS/Target: Thecus N5200 Pro w/firmware 2.00.14 (Linux-based, 5 x 500 GB SATA II, RAID 6, all volumes XFS except iSCSI which was Mac OS Extended (Journaled))

Because my NAS/target supports iSCSI, AFP, SMB, and NFS, I was able to run tests that show some interesting performance differences. Because the Thecus N5200 Pro is a closed appliance, no performance tuning could be done on the server side.

Here are the results of running the following command from the Terminal (where test is the name of the appropriately mounted volume on the NAS) on a gigabit LAN with one subnet (jumbo frames not turned on):
time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
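
For anyone wanting to reproduce this, the volumes were mounted roughly like so (going from memory, so server and share names below are placeholders and the exact options may have differed):

mkdir /Volumes/test
# AFP (Finder's Connect to Server does the same thing)
mount_afp "afp://user:pass@nas/test" /Volumes/test
# SMB/CIFS
mount_smbfs //user@nas/test /Volumes/test
# NFS
sudo mount -t nfs nas:/raid/data/test /Volumes/test
# iSCSI: log in via the globalSAN preference pane; the LUN then shows up
# as a local disk and mounts like any other volume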

In seconds:
iSCSI 134.267530
AFP 140.285572
SMB 159.061026
NFSv3 (w/o tuning) 477.432503
NFSv3 (w/tuning) 293.994605

Here's what I put in /etc/nfs.conf to tune the NFS performance:
nfs.client.allow_async = 1
nfs.client.mount.options = rsize=32768,wsize=32768,vers=3
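
One caveat if you try this: /etc/nfs.conf options only take effect when a volume is mounted, so unmount and remount after editing it (server and export path here are placeholders):

sudo umount /Volumes/test
sudo mount -t nfs nas:/raid/data/test /Volumes/test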

Note: I also tried forcing TCP and doubling the rsize and wsize values above. It didn't help.

I was surprised to see how close AFP performance came to iSCSI. NFS was a huge disappointment, but that could be down to server settings that can't be changed on a closed appliance. I'll be getting a Sun Ultra 24 Workstation soon and will retry the tests (and add NFSv4).

If you have any suggestions for performance tuning Mac OS X 10.5.5 clients with any of these protocols (beyond using jumbo frames), please share your results here. I'd be especially interested to know whether anyone has found a situation where Mac clients using NFS have an advantage.

iMac 24 (Intel), Mac OS X (10.5.5)

Posted on Dec 8, 2008 3:27 PM


Dec 11, 2008 3:30 PM in response to natdev

Here's what I put in /etc/nfs.conf to tune the NFS performance:
nfs.client.allow_async = 1


That line won't make any difference if you're not using "-o async" as described in the nfs.conf(5) man page:

nfs.client.allow_async
Allow the use of the -o async mount option. This option must be
enabled in order for the async mount option to be honored
because (accidental) use of the async mount option may result in
data loss if the server crashes. The default is 0 (off).

However, as the man page notes, use of this is discouraged because of the risk of data loss.
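
If you want to experiment with it anyway (on data you can afford to lose), both pieces need to be in place; something like this, with a made-up server and export:

# /etc/nfs.conf
nfs.client.allow_async = 1

# then mount with the async option:
sudo mount -t nfs -o async,vers=3 server:/export/test /Volumes/test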

nfs.client.mount.options = rsize=32768,wsize=32768,vers=3
Note: I also tried forcing TCP and doubling the rsize and wsize values above. It didn't help.


These are actually all the default values (TCP too), so I'm not sure how you got different numbers with these changes. In my experience, AFP and NFS should be in about the same ballpark. I just did a quick test using the default mount options against a single-disk NFS server and it showed 5x the performance you report.

Perhaps something about the server's configuration is inhibiting its NFS performance?

HTH
--macko

Dec 13, 2008 4:19 AM in response to Mike Mackovitch

Thanks for the reply. It could very well be the NFS service configuration on the Thecus N5200 Pro. I don't have a way to check its settings since Thecus doesn't grant root access to the appliance. If you're getting comparable performance between NFS and AFP, then it does sound like the problem is on the server side. Are you using NFS and AFP against another Mac, or is your server something else? Also, are you using NFSv3 or Mac OS X's experimental NFSv4 support?

Dec 15, 2008 11:38 AM in response to natdev

The server I happened to run that test against was a Mac server (a Mac Pro serving a single disk). And I was using the default mount options, so it was NFSv3 (the NFSv4 stuff probably isn't going to be ready for prime time for another release or two - plus NFSv4 is unlikely to help raw I/O throughput).

In general, the Mac NFS client should be able to stream data almost as fast as the backing storage can source/sink it.

One typical issue with NFS write performance is that the protocol requires relatively fine-grained guarantees that data has been committed to stable storage. If the server's storage takes a relatively long time to commit data, that can significantly hurt performance. I've seen a hardware RAID (in a Linux server) get only a few MB/s of NFS write throughput even though its NFS read performance was 100 MB/s.
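
A crude way to check whether that's what's happening is to compare buffered vs. synchronous writes locally on the server; if the synchronous numbers fall off a cliff, NFS commits will suffer the same way. With GNU dd on a Linux box, something like (paths are placeholders):

# buffered writes -- mostly measures the cache path
time dd if=/dev/zero of=/raid/testfile bs=1M count=1024
# synchronous writes -- closer to what each NFS commit has to pay for
time dd if=/dev/zero of=/raid/testfile bs=1M count=1024 oflag=sync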

HTH
--macko

Dec 18, 2008 4:38 AM in response to natdev

Greetings.

There are always two sides to these 'benchmark' claims. The one that bugs me the most is vendors who 'claim' native AFP support but will NOT show benchmarks or explain how it's implemented.

For small offices that use a NAS as a working device (NOT just a backup), AFP and HFS+ volumes mean MUCH FASTER seek/search abilities.

The lists of AFP-supported devices out there, such as the one at:
http://www.smallnetbuilder.com/component/option,com_nas/Itemid,190/
don't test AFP/HFS+.

If anyone knows of an accurate test of a 2- or 4-drive NAS using AFP/HFS+, it would really be appreciated if they made it available/public for us.

Thank you

Dec 23, 2008 11:17 AM in response to D R

Pretty sure most of these Linux NAS devices are just using Netatalk. It would be nice to know which version, though. When I did my simplistic time test against the volumes, I did format the Thecus N5200 Pro's iSCSI target as HFS+. As I mentioned, its performance was a little better than AFP to the same unit from the same client (which in turn was faster than CIFS/SMB), and way faster than NFS. I think Thecus may have made some bad configuration choices for NFS.
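
If anyone wants to find out what a given box is running, netatalk ships a small script (asip-status.pl) that queries the server over the wire; the "Machine type:" line it prints usually includes the Netatalk version (hostname below is a placeholder):

asip-status.pl nas.local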

Dec 26, 2008 10:11 PM in response to natdev

With fully functional ZFS expected in Snow Leopard Server, I thought I'd do some performance testing using a few different zpool configurations and post the results.

Client:
- iMac 24 (Intel), 2 GB of RAM, 2.3 GHz dual core
- Mac OS X 10.5.6
- globalSAN iSCSI Initiator 3.3.0.43

NAS/Target:
- Sun Ultra 24 Workstation, 8 GB of RAM, 2.2 GHz quad core
- OpenSolaris 2008.11
- 4 x 1.5 TB Seagate Barracuda SATA II in ZFS zpools (see below)
- For the iSCSI test, I created a 200 GB zvol shared as an iSCSI target (formatted as Mac OS Extended (Journaled))

Network:
- Gigabit with MTU of 1500 (performance should be better with jumbo frames).

Average of 3 tests of:
# time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4

# zpool create vault raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0
# zfs create -o shareiscsi=on -V 200g vault/iscsi
iSCSI with RAIDZ2: 148.98 seconds

# zpool create vault raidz c4t1d0 c4t2d0 c4t3d0 c4t4d0
# zfs create -o shareiscsi=on -V 200g vault/iscsi
iSCSI with RAIDZ: 123.68 seconds

# zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
# zfs create -o shareiscsi=on -V 200g vault/iscsi
iSCSI with two mirrors: 117.57 seconds

# zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
# zfs create -o shareiscsi=on -V 200g vault/iscsi
# zfs set compression=lzjb vault
iSCSI with two mirrors and compression: 112.99 seconds
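
For anyone repeating this on OpenSolaris, you can sanity-check that the zvol is actually being exported before pointing the initiator at it:

# confirm the target exists and check the zvol properties
iscsitadm list target -v
zfs get shareiscsi,volsize vault/iscsi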

Compared with my earlier testing against the Thecus N5200 Pro as an iSCSI target, I got roughly 16% better performance from the best-performing Sun Ultra 24 configuration (two mirrors with compression), with one fewer SATA II drive in the array.

Jan 3, 2009 2:10 PM in response to natdev

Just started playing with my N5200B Pro (5 x 1.5 TB, RAID 5) & Mac Pro.

The array is still building, so performance isn't optimal, but iozone already reports ~35 MB/s write and ~40 MB/s read via AFP.
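
The iozone run was something along these lines (from memory, so the exact flags and paths may have differed):

# -e: include flush in timing; -i 0/-i 1: write and read tests
iozone -e -i 0 -i 1 -r 128k -s 2g -f /Volumes/raid/iozone.tmp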

Not sure what the deal is yet, but it seemed like I had to disable SMB in order to connect via AFP.

Haven't tried jumbo-frames at all yet.

Mike

