Check the specs on whichever NetGear switch you have and see what its maximum backplane bandwidth rating is, realizing that your actual speed will probably be lower if you've enabled any of the various advanced functions a managed switch can offer.
The link from the server to the switch is probably going to be a limit, whether due to the server or the link itself. With OS X, you'll need either a Mac with expansion capabilities or a Thunderbolt network adapter (I'd look at a pre-2013 Mac Pro, an external PCIe Thunderbolt cage, or a Thunderbolt 10 GbE adapter to get faster than the standard 1 GbE connection), or link aggregation on a Mac Pro or another box with multiple NICs. I'd steer clear of USB GbE adapters. Or you'll need to spread your data around your switch, or some combination of these.
I'd be seriously tempted to investigate and maybe install a higher-end Network Attached Storage device with native 10 GbE copper or fiber networking support, if you find you're really running out of bandwidth on your storage-to-switch link. (If the client-to-switch links are running out of bandwidth, then the server link is definitely going to be overloaded.) This gets the Mac server out of the file-serving business, unless you're wed to OS X Server here, in which case you'll need a pretty serious Mac to get 10 GbE or fiber.
If it's your disk storage that's the bottleneck here, then you're headed toward a Fibre Channel SAN, PCIe SAN or Thunderbolt SAN adapters, and some low- to mid-range FC SAN storage (e.g., Promise VTrak), quite possibly with Apple Xsan software to manage it. (If you're going SAN, then it's very likely you'll need upgrades to your network switching, too.)
Run some speed tests on the disks and on the network (disks can range from ~4200 RPM with low transfer rates to 15K RPM with fairly substantial transfer rates) and see what the bottleneck is. SSD will get you a whole lot more performance here. (Hybrid SSD-HDD storage might fall back to reading or writing from rotating rust for your usage here, so I'd wonder about its performance benefits for your case. Plus it's potentially two drives and two disk bays, depending on the implementation.) Gigabit links, and particularly that server-to-switch link, are a decent candidate for the bottleneck, as is a RAID 5 configuration (RAID 5 is usually decent at read I/O but often very slow at handling write I/O; RAID 10 or more advanced controller-level RAID would be a more typical choice here), as are a bunch of cheap and slow disks, as is a Mac that's just not very fast at serving its storage.
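If you want a quick ballpark number before reaching for a real benchmarking tool, a few lines of Python will do for a rough sequential-write test. This is a sketch, not a proper benchmark: the path and sizes below are placeholders, so point it at the volume you actually care about and treat the result as a sanity check.

```python
import os
import time

def sequential_write_mbps(path, total_mb=256, block_kb=1024):
    """Write total_mb of zeros in block_kb chunks and report MB/s."""
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data to disk, not just the page cache
    elapsed = time.perf_counter() - start
    os.remove(path)  # clean up the scratch file
    return total_mb / elapsed

# Placeholder path; run this against the array you want to measure.
print(f"~{sequential_write_mbps('/tmp/bench.tmp'):.0f} MBps sequential write")
```

Run it once against local disk and once against the network mount, and the difference tells you which leg is costing you.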
For performance-comparison purposes, a Mac Mini (previous generation) with a Thunderbolt Promise Pegasus (RAID 6) configuration can stream at least four full-bandwidth over-the-air HDTV streams from switched GbE receivers to disk while playing back one stream, without performance issues.
A far more creative approach than an FC SAN for I/O bandwidth might be the installation and operation of torrent software used entirely internally; that spreads the load across more systems, but I don't know if I'd fully trust it in production without a whole lot of testing.
In short: you'll want to measure the various components in this configuration, and find and remove the bottlenecks, starting with the slowest component.
I am running a small (post)production company.
Mac Mini running OS X Server (latest software versions).
Areca 8050 Thunderbolt RAID (RAID 5 + spare).
6 Mac clients and 1 Windows 7 client.
Network read and write speed is about 100-110 MB per second on every client.
Runs very stable.
Hope this helps.
100-110 MBps is ~GbE speed.
You'll want to measure the file server throughput, if you've not already done so.
Then I'd measure that RAID 5 performance without the network file server in the way.
Read and write I/O speeds depend on how fast the hardware is. 15,000 RPM (15K) disks can be better here than slower HDDs. SSD can obviously be better still, but tends to be far more expensive.
After measurement, and as a first step not involving new hardware, I'd consider migrating to RAID 10 for performance, or to RAID 6 if you really need the storage and write performance is less of a factor. But that depends on whether that array is fast enough for your needs, and on whether the bottleneck here is the Mac Mini, the file server within the Mac Mini, the RAID hardware, or something else.
The Mac Mini doesn't have a secondary NIC available (and USB NICs tend to be slow), so there's no link aggregation option there.
Full disclosure: RAID 5 is not my favorite RAID configuration in general. That particular RAID level incurs a huge I/O load during failure recovery. A RAID 5 array rebuild is usually I/O-saturated for most of a day, and empirical data shows an above-trend-average likelihood of a secondary and catastrophic disk failure during recovery.
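The RAID 5 write penalty is easy to put rough numbers on. As a back-of-envelope sketch (the per-disk IOPS figure below is illustrative, not measured): each small RAID 5 write costs roughly four disk operations (read data, read parity, write data, write parity), while RAID 10 mirroring costs two.

```python
# Back-of-envelope random-write capacity for an array of identical disks.
# RAID 5 write penalty: 4 disk ops per logical write (read data, read
# parity, write data, write parity). RAID 10 penalty: 2 (mirrored write).
def array_write_iops(disks, iops_per_disk, penalty):
    return disks * iops_per_disk / penalty

# Illustrative figures: eight 7200 RPM disks at ~75 random IOPS each.
print("RAID 5 :", array_write_iops(8, 75, 4))   # 150.0 logical write IOPS
print("RAID 10:", array_write_iops(8, 75, 2))   # 300.0 logical write IOPS
```

Same spindles, roughly double the small-write capacity with RAID 10, which is why it's the more typical choice for write-heavy work.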
MrHoffman, I am not related to cmscss. It seems he/she wants to speed up his/her company network. Just thought it would be helpful to know what configuration can make that possible. Besides the fact that you have to be a magician to make or keep OS X Server stable, the network setup is actually pretty simple. I move huge files over my network all the time and I am not experiencing any problems (knock on wood).
Hi All - thanks for the replies, I've been traveling so sorry for the late response.
I should've let you know what we currently have:
- 2012 Mac Pro
- 8-drive RAID box
- RocketRAID card running RAID 10
- miniSAS connection
- Single gigabit connection
- NetGear Prosafe GS724TP switch (clients connected via gigabit)
When copying large amounts of data to and from the server, we see around 7-12 MB per second read/write/verify on the client machines.
After reading some of the replies I'm starting to wonder if we should be seeing faster transfer rates?
Anyway, I'll get a chance to properly read the replies later tonight - thanks heaps.
There's no magic answer here... I'd expect a bit more than that from SAS/SATA storage, but then I haven't worked with that combination and most of the storage stuff I deal with tends to be older gonzo-class big iron. You're going to want to benchmark the individual pieces of the chain and find the slower bits. Check the GbE connections from client to server, and check the path from the Mac Pro server to the RAID storage, in particular.
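For the network leg, iperf or a similar tool is the usual choice. As a rough sketch of the same idea, here's a minimal Python TCP throughput test; as written it runs over loopback in one process, so it only exercises the local network stack, and you'd split the sender and receiver across two hosts to measure the actual GbE link.

```python
import socket
import threading
import time

def loopback_throughput_mbps(total_mb=64, chunk=65536):
    """Stream total_mb over a loopback TCP socket and report MBps."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # any free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def sink():
        # Receiver: drain the socket until the sender closes it.
        conn, _ = srv.accept()
        while conn.recv(chunk):
            pass
        conn.close()

    t = threading.Thread(target=sink)
    t.start()
    payload = b"\0" * chunk
    sends = (total_mb * 1024 * 1024) // chunk
    cli = socket.create_connection(("127.0.0.1", port))
    start = time.perf_counter()
    for _ in range(sends):
        cli.sendall(payload)
    cli.close()
    t.join()
    srv.close()
    return total_mb / (time.perf_counter() - start)

print(f"~{loopback_throughput_mbps():.0f} MBps over loopback")
```

A healthy GbE link should land near ~110 MBps with a test like this; numbers in the single digits point at cabling, duplex, or a link stuck at 100 Mbps.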
Also check the access time and transfer speed specs for the individual disks in the array, as those determine how fast the disks can get to the data and the sustained maximum transfer rate; that's the limit on the fastest you can go with this configuration. Also check the specs on the specific RocketRAID controller involved. (This is where pepmachine's post had confused me; I was looking up the specs on that gear.)
Read-verify-write from a client is two passes over the network and through the file server, which means ~14 to ~24 MBps in aggregate (nb: I'm using MBps and Mbps for bytes and bits, respectively, and that's roughly ~120 to ~240 Mbps once protocol overhead is included), plus CPU overhead for the verify and the file services. A four-member RAID 10 volume writes each block to a mirrored pair, so a large transfer keeps all four disks busy; the controller is either waiting on those writes to complete, or it's spoofing completion by caching the data (hopefully battery-backed), and a big transfer can potentially blow out that cache and put you back at the speed of the disks.
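To make the arithmetic explicit (using a plain ×8 for bytes to bits; the real wire rate is somewhat higher once you add TCP/IP and Ethernet framing overhead):

```python
# A copy-and-verify pass moves each byte over the wire twice
# (write it out, then read it back), so the wire sees double
# the rate the client reports.
for client_mbps in (7, 12):          # observed client rates, MBps (bytes)
    wire_mbytes = client_mbps * 2    # two passes over the network
    wire_mbits = wire_mbytes * 8     # convert to Mbps (bits), pre-overhead
    print(f"{client_mbps} MBps at the client -> "
          f"~{wire_mbytes} MBps, ~{wire_mbits} Mbps on the wire")
```

Even the high end of that range is well under what a single GbE link can carry, which is a hint that the network itself isn't the limit here.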
The 2012 Mac Pro can run link aggregation, so two or potentially more network uplinks are possible.
The 2012 Mac Pro cannot run Thunderbolt, so you'll be using PCIe controllers there. The next step up for that box would be a Fibre Channel Storage Area Network (FC SAN) Host Bus Adapter (HBA) and a SAN storage controller, or just an in-board RAID controller; that is, if it's the storage path that's slow.
We went round and started benchmarking individual components/machines and discovered that the wall sockets/internal cables were faulty on 2 machines, causing 100BASE-T speeds a lot of the time!
For the record, we see 230 MBps write and 310 MBps read between the server and the miniSAS RAID.
Amazingly, I see 830 MBps on my new MacBook Pro's internal SSD. I guess an upgrade to SSDs will give us another performance boost when prices come down and we need more speed.
Thanks heaps for your suggestions.