Check the specs on whichever NetGear switch you're using and see what its maximum backplane bandwidth rating is, realizing that your actual throughput will probably be lower if you've enabled any of the various advanced functions a managed switch can offer.
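As a rough back-of-the-envelope check against that data-sheet number, something like this will do; the port count and backplane rating here are made-up placeholders, not your switch's:

```
# Rough sanity check: can worst-case port traffic exceed the switch backplane?
# The port count and backplane rating below are hypothetical; plug in the
# numbers from your own switch's data sheet.
gbe_ports = 24            # 1 GbE ports in use
backplane_gbps = 48.0     # non-blocking fabric rating from the data sheet, Gbit/s

# Full duplex, so each port can in theory demand 2 Gbit/s of fabric capacity.
worst_case_gbps = gbe_ports * 1.0 * 2

print(f"worst-case demand: {worst_case_gbps} Gbit/s vs backplane {backplane_gbps} Gbit/s")
if worst_case_gbps <= backplane_gbps:
    print("fabric is nominally non-blocking; look elsewhere for the bottleneck")
else:
    print("fabric could be oversubscribed under full load")
```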
The link from the server to the switch is probably going to be a limit, whether due to the server or the link itself. With OS X, getting faster than the standard 1 GbE connection means a Mac with expansion capabilities or a Thunderbolt network adapter (I'd look at a pre-2013 Mac Pro, an external PCIe Thunderbolt cage, or a Thunderbolt 10 GbE adapter), or link aggregation on a Mac Pro or another box with multiple NICs. I'd steer clear of USB GbE adapters. Or you'll need to spread your data across more boxes and ports on your switch. Or some combination of these.
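If you want a quick read on what that server-to-switch path actually delivers before buying hardware, a stripped-down throughput probe along these lines works; it's a crude stand-in for iperf, and the address and port are placeholders for your own LAN:

```
# Minimal TCP throughput probe: run with "server" on one box, nothing on the other.
# A crude iperf stand-in; HOST/PORT are placeholders for your network.
import socket, sys, time

HOST, PORT = "192.168.1.10", 5201   # placeholder address of the receiving box
CHUNK = 1 << 20                     # 1 MiB per send
SECONDS = 10

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while (data := conn.recv(CHUNK)):
            total += len(data)
        secs = time.time() - start
        print(f"received {total / 1e6:.0f} MB in {secs:.1f} s "
              f"= {total * 8 / secs / 1e6:.0f} Mbit/s")

def client():
    payload = b"\0" * CHUNK
    with socket.create_connection((HOST, PORT)) as conn:
        end = time.time() + SECONDS
        while time.time() < end:
            conn.sendall(payload)

server() if "server" in sys.argv else client()
```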
If you find you're really running out of bandwidth on your storage-to-switch link, I'd be seriously tempted to investigate and maybe install a higher-end Network Attached Storage device with native 10 GbE copper support, or maybe fiber networking support. (If the client-to-switch links are running out of bandwidth, then the server link is definitely going to be overloaded.) This gets the Mac server out of the file-serving business, unless you're wedded to OS X Server here; in that case you'll need a pretty serious Mac to get to 10 GbE or fiber.
If it's your disk storage that's the bottleneck here, then you're headed toward a Fibre Channel SAN: PCIe or Thunderbolt FC adapters, some low- to mid-range FC storage arrays (e.g., Promise VTrak), and quite possibly Apple Xsan software to manage it all. (If you're going SAN, then it's very likely you'll need upgrades to your network switching, too.)
Run some speed tests on the disks and on the network, and see what the bottleneck is; disks can range from ~4200 RPM units with low transfer rates to 15K RPM units with fairly substantial transfer rates. SSD will get you a whole lot more performance here. (Hybrid SSD-HDD storage might fall back to reading or writing from rotating rust for your usage, so I'd wonder about its performance benefits for your case. Plus it's potentially two drives and two disk bays, depending on the implementation.) Gigabit links, and particularly that server-to-switch link, are a likely bottleneck, as is a RAID 5 configuration (RAID 5 is usually decent at read I/O but often very slow at handling write I/O; RAID 10 or more advanced controller-level RAID would be a more typical choice here), as are a bunch of cheap and slow disks, as is a Mac that's just not very fast at serving its storage.
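For the disk side, a crude sequential-throughput sketch like this gives ballpark numbers; the test path and file size are placeholders, and a purpose-built tool (or plain dd) will be more rigorous:

```
# Crude sequential disk throughput test: write a large file, then read it back.
# TEST_PATH and SIZE_MB are placeholders; point it at the volume under test,
# and use a file larger than RAM or the read pass mostly measures the OS cache.
import os, time

TEST_PATH = "/Volumes/ServerRAID/throughput.tmp"   # placeholder path on the array
SIZE_MB = 4096
CHUNK = b"\0" * (1 << 20)                          # 1 MiB writes

start = time.time()
with open(TEST_PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                           # force data out to the disk
write_secs = time.time() - start

start = time.time()
with open(TEST_PATH, "rb") as f:
    while f.read(1 << 20):
        pass
read_secs = time.time() - start

os.remove(TEST_PATH)
print(f"write: {SIZE_MB / write_secs:.0f} MB/s, read: {SIZE_MB / read_secs:.0f} MB/s")
```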
For performance-comparison purposes, a Mac Mini (previous generation) with a Thunderbolt Promise Pegasus (RAID 6) configuration can record at least 4 full-bandwidth over-the-air HDTV streams from switched GbE receivers to disk while playing back one stream, without performance issues.
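For scale, assuming the nominal ~19.4 Mbit/s of an ATSC over-the-air transport stream, that workload is a small fraction of a single GbE link:

```
# Back-of-the-envelope: ATSC over-the-air HDTV transport streams run at roughly
# 19.4 Mbit/s each (nominal rate), so 4 recordings plus 1 playback is nowhere
# near GbE wire speed.
streams_recording = 4
streams_playing = 1
mbps_per_stream = 19.4

total_mbps = (streams_recording + streams_playing) * mbps_per_stream
print(f"~{total_mbps:.0f} Mbit/s total, vs ~940 Mbit/s usable on a GbE link")
```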
A far more creative approach than an FC SAN for I/O bandwidth might be to install and run torrent software used entirely internally, which spreads the load across more systems, but I don't know that I'd fully trust that in production without a whole lot of testing.
In short: you'll want to measure the various components in this configuration, and find and remove the bottlenecks starting with the slowest piece and working up.