Host-local interconnects will generally be faster than network interconnects. FireWire 800 has a theoretical bandwidth of ~786.432 megabits per second. Gigabit Ethernet is theoretically slightly faster, but you won't get anywhere near that maximum once the usual protocol overhead and baggage are involved.
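For a rough sense of the ceilings involved, here's a quick sketch converting both raw bit rates to megabytes per second, ignoring the protocol overhead that (as noted above) hits the network path harder:

```shell
# Theoretical raw ceilings, before any protocol overhead (decimal units).
fw800_bps=786432000        # FireWire 800
gbe_bps=1000000000         # Gigabit Ethernet

echo "FireWire 800: $((fw800_bps / 8 / 1000000)) MB/s"
echo "GbE:          $((gbe_bps / 8 / 1000000)) MB/s"
```

That's roughly 98 MB/s versus 125 MB/s on paper; real-world results for either will be lower, and lower still for the network path.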
Unlike the local path, your remote copy test runs through an IP network stack (contending with a potentially lossy network), then a file-server client, then a remote file server, and only then reaches the disk. The network stack and the remote file server are extra layers that slow down the aggregate performance.
On balance, I'd expect better performance from FireWire 800 than Gigabit Ethernet, if similar disks are used. I'd definitely expect better performance from Thunderbolt, too.
BobHarris is also correct: IP networks encountering Ethernet-level errors or configuration problems can see degraded performance, sometimes massively. This shows up as duplex mismatches, as connections negotiating 100 megabits per second or slower, and as bad cables and cabling faults.
One option for troubleshooting some of this, if your present router/switch lacks status LEDs or a management interface that can show the negotiated link settings, is to swap in an unmanaged switch, or (for better visibility) a managed switch. The latter are more expensive, of course.
Run some benchmarks here, and see what you're getting for performance. Post the details, along with a general description of the Mac and storage hardware involved. (A Mac with an internal SSD will have very different I/O performance than one with an old USB 2 disk, for instance.)
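One simple way to get comparable numbers is a crude dd-based sequential test; TARGET and the file name here are placeholders, and when pointed at a network mount this measures the whole path (client, network, server, disk) rather than the disk alone:

```shell
#!/bin/sh
# Crude sequential write/read benchmark. Point TARGET at the volume
# under test (a FireWire disk mount or an AFP/SMB mount, say).
TARGET=${TARGET:-/tmp}
TESTFILE="$TARGET/ddbench.$$"

# Write 64 MB of zeros; dd reports elapsed time and throughput.
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=64

# Read it back; on a remote mount this exercises the network path
# (though client-side caching can inflate the read number).
dd if="$TESTFILE" of=/dev/null bs=1048576

rm -f "$TESTFILE"
```

Run it a few times and ignore the first pass, for the cache-warming reasons mentioned elsewhere in this thread.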
The fastest network drives around are usually part of a Fibre Channel Storage Area Network. The FC SAN eliminates the IP network stack and related parts, and the FC SAN is also spec'd with much lower bit error rates than are normally found on Ethernet networks. (Part of why IP goes slower is due to contending with those higher network error rates.)
Neville Hillyer wrote:
I don't have the equipment to try this but my understanding is that if 2 computers are connected to a switch which is in turn connected to a modem/router then initial connection will be at router speed but subsequent file transfer should be at switch speed.
Resolving the path to the destination address, via ARP (or via the router, for off-subnet traffic), is going to be negligible in a data transfer of any size, and ARP caching means subsequent connections operate at full speed. The overhead of shoveling the bits of any decent-sized file is far larger than the few packets needed to figure out the network path. Typical benchmarking practice also deliberately "warms up the caches" with a few transfers, but the ARP caches would completely mask even an underperforming IP router here.
FireWire will be fast because it's a short bus and shorter buses can be fast, and because FireWire has a low error rate, which means the software doesn't have to clean up after errors as often as a network stack does. The network copy also traverses the I/O stacks multiple times: disk to host memory, down through the network stack, across the network (slow, physically long, shared), up the remote network stack into the file server, and back down to the remote disk. Compare that with the local case: disk I/O (fast) to host memory to disk I/O (fast).
A Terminal.app command such as system_profiler SPNetworkDataType will tell you whether your systems are both configured for and running GbE; based on the calculations below, I'd suspect they are. Here's an example from a box that's negotiated and is running GbE, extracted from the output of that command:
MAC Address: aa:bb:cc:dd:ee:ff
Media Options: Full Duplex
Media Subtype: 1000baseT
820 megabytes is roughly 6.6 gigabits (820 × 8 = 6,560 megabits). Over 30 seconds, that works out to about 0.2 Gbps for the transfer, which isn't particularly out of the question for GbE performance involving a host-to-host (presumably) AFP file transfer.
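The arithmetic can be sanity-checked in the shell, using the transfer size and elapsed time described above (decimal units throughout):

```shell
# Sanity-check the transfer arithmetic: 820 megabytes in 30 seconds.
megabytes=820
seconds=30

megabits=$((megabytes * 8))      # total megabits transferred
mbps=$((megabits / seconds))     # average rate in megabits per second

echo "total: ${megabits} megabits"
echo "rate:  ${mbps} Mbps (~0.2 Gbps)"
```

That average also folds in file-sharing protocol overhead, so the wire rate was somewhat higher than the payload rate.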