SCP is rather slow
I'm trying to set up an XServe and I've noticed that scp (copy over ssh) is really pretty slow. On a gigabit network I get 15 MB/s between a Linux machine and the XServe, while I get about 32 MB/s between two Linux machines. This is a copy of a single very large file (> 300MB) on otherwise quiescent systems. The "control" Linux machine is a 1.8GHz Athlon64, the XServe is a single-CPU 2.0GHz, and the "server" Linux machine is a 2.4GHz Xeon. I would expect some differences, since ssh tends to be CPU-bound and the performance of these machines is somewhat mismatched, but the XServe isn't half as fast as the Xeon, so the transfer shouldn't be either.
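To separate the network from ssh itself, one comparison I can make is pushing raw bytes over a bare TCP socket. A rough sketch (the host name is a placeholder, and netcat flag syntax varies between builds; some want "nc -l -p 5000"):

# On the XServe, listen and throw the data away:
nc -l 5000 > /dev/null

# On the Linux sender, push ~300MB of zeros through bare TCP:
dd if=/dev/zero bs=1M count=300 | nc xserve.example.com 5000

If that runs near wire speed, the network and TCP stack are fine and the slowdown is in ssh/scp.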
A quick check with top shows that a huge amount of time is being spent in the kernel: a whopping 43%. I don't see anything like this on the Linux machines. As you can see, there is plenty of memory on this machine, so the VM system should have plenty of room. I checked with vm_stat and the page fault numbers look normal. I don't know what's going on; anyone have any thoughts?
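To see where that kernel time actually goes, OS X ships sc_usage, which shows per-process syscall activity; it may also be worth checking the default TCP buffer sizes, since small socket buffers can inflate the syscall count. A sketch (the PID is the busy sshd from the top output below):

# Watch which syscalls the busy sshd is making:
sudo sc_usage 5653

# Check the default TCP socket buffer sizes:
sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace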
Here is the output from top:
Processes: 78 total, 3 running, 75 sleeping... 248 threads 19:47:11
Load Avg: 0.54, 0.37, 0.21 CPU usage: 55.5% user, 43.7% sys, 0.8% idle
SharedLibs: num = 164, resident = 31.7M code, 4.23M data, 10.8M LinkEdit
MemRegions: num = 11182, resident = 84.2M + 32.0M private, 50.2M shared
PhysMem: 202M wired, 160M active, 1.17G inactive, 1.53G used, 1.47G free
VM: 4.82G + 105M 27406(0) pageins, 501(0) pageouts
PID COMMAND %CPU TIME #TH #PRTS #MREGS RPRVT RSHRD RSIZE VSIZE
5657 top 9.1% 0:01.05 1 19 22 240K 368K 696K 27.1M
5656 scp 8.4% 0:01.05 1 14 17 140K 324K 812K 26.6M
5653 sshd 64.8% 0:08.03 1 11 48 580K 1.37M 924K 30.3M
5651 sshd 0.0% 0:00.06 1 34 41 28K 1.37M 1.47M 30.0M
5592 bash 0.0% 0:00.03 1 14 16 204K 820K 848K 27.1M
5588 sshd 0.4% 0:00.12 1 11 41 104K 1.37M 516K 29.9M
5586 sshd 0.0% 0:00.06 1 34 42 148K 1.37M 1.62M 30.0M
5432 named 0.0% 0:00.22 1 14 20 624K 1.24M 1.78M 83.9M
5362 lookupd 0.1% 0:00.73 5 44 44 516K+ 832K 1.29M+ 30.0M+
5119 pickup 0.0% 0:00.02 1 16 19 180K 420K 928K 26.7M
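Since the sshd handling the transfer is pegged at 64.8% CPU on top of the 43.7% sys time, one quick experiment is to see how much of that is cipher cost by forcing cheaper ciphers and timing the same file (file name and host are placeholders; available ciphers depend on the OpenSSH build):

# Default cipher, then two cheaper ones, same 300MB file each time:
time scp bigfile user@xserve:/tmp/
time scp -c arcfour bigfile user@xserve:/tmp/
time scp -c blowfish-cbc bigfile user@xserve:/tmp/

If the arcfour run is much faster, the bottleneck is crypto; if all three crawl equally, that points back at the kernel/TCP side.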