Let me explain what is going on with sysctl.conf. I'm writing mostly from memory so I don't spend all day on this, so forgive me if there are a couple of minor mistakes.
You normally don't need to adjust sysctl variables. Therefore, not having a sysctl.conf is not unusual. The kernel is configured 'out of the box' with the correct settings for 'most' users.
That is why a clean install does not have this file and an upgraded box does. The Apple Broadband Tuner does some measurements and sets those values for you based on what it finds.
What window sizes you use, buffer sizes, etc. are largely dependent upon what kind of network you sit on and more importantly, what sort of latencies you have between you and what you're testing against.
This is typically called the 'Long Fat Pipe' problem and has become more of an issue over the last few years. As network engineers, we routinely have to 'tune out' latency to improve the efficiency of TCP. You have to remember that when this protocol was written, we didn't have Gbit pipes spanning 40+ ms (or at all), or people wanting to actually send a TB of data across them at full Gbit speeds. In addition, there have always been folks wanting to tune out every spare bit over their 225ms PPP link on a 33.6 kbps modem.
Another description of the 'problem' and a simple calculator to help calculate what to set these values to is here:
http://en.wikipedia.org/wiki/Bandwidth-delay_product
Without going into an entire seminar on TCP/IP, the major factors that you are dealing with are:
Latency (time between point A and point B)
Bandwidth (how much you want each individual transfer to use - you might not want a single TCP session to be able to eat up an entire network pipe - for reasons I'll explain in a minute)
Window size (Goes by many names, but RWIN is one of the values that impacts this)
The math works out as:
Bandwidth * Delay = amount of data to keep 'in flight' (the window size)
So if I want to fill a 15 Mbit pipe (my Verizon FiOS speed) and I have 40ms of latency between me and the end node, I need to have 75,000 bytes of data 'in flight' at any one time (15,000,000 bits/s * 0.040 s = 600,000 bits = 75,000 bytes).
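That bandwidth-delay arithmetic is easy to check with a few lines of Python (a sketch; the 15 Mbit / 40 ms numbers are the FiOS example above):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bytes that must be 'in flight' to fill the pipe."""
    return bandwidth_bps * rtt_seconds / 8  # bits -> bytes

# 15 Mbit/s FiOS link with 40 ms of latency
print(bdp_bytes(15_000_000, 0.040))  # -> 75000.0
```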
So what do I mean by 'in flight'? One of the reasons why TCP/IP (as opposed to UDP/IP) is reliable is that it has a method to acknowledge receipt of packets. The window size is one of the things used to determine how often data needs to be acknowledged.
If you have an 8KB window (which is typically the effective window for NetBIOS when you talk to a Windows server, for instance), then every 8KB the received data has to be acknowledged or the sender will quit sending data.
Therefore, if there is even 1ms of latency between me and the server, I'm not going to be able to fill that 100Mbit link I have, since I have to stop and wait for the other side to ack the data I sent - I can't keep the pipe full. This only gets worse when you get up to Gbit and higher speeds.
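To see why a small window caps throughput, invert the formula: a single TCP session can move at most one window of data per round trip. A quick sketch with the 8 KB / 1 ms numbers above:

```python
def max_throughput_bps(window_bytes, rtt_seconds):
    """Upper bound: one window of data per round trip, converted to bits/s."""
    return window_bytes * 8 / rtt_seconds

# 8 KB NetBIOS-style window with just 1 ms of round-trip latency
print(round(max_throughput_bps(8 * 1024, 0.001) / 1e6, 3))  # -> 65.536 (Mbit/s, under 100)
```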
What's worse is that the 'window' value in a TCP/IP packet can only be a 16-bit number, which means it can only be as high as 65,535.
So let's go back to that FiOS problem of 15 Mbps and 40ms latency. We need 75,000 bytes of data in flight and we can only do 65,535. So without enabling a new 'feature' of TCP, we're not going to get that 15 Mbps (in a single TCP session).
That's where window scaling comes into play - RFC 1323 (http://www.ietf.org/rfc/rfc1323.txt). This is controlled by the sysctl variable net.inet.tcp.rfc1323. You can enable it with sysctl -w net.inet.tcp.rfc1323=1 and/or set the value you want in /etc/sysctl.conf.
Window scaling adds a multiplier (a power-of-two scale factor negotiated when the connection is set up) to the value in the TCP window field. So now we can get past 65,535 and fill that pipe - but only if we ask for the right size value.
The value of 358,400 provided earlier should be good up to what... about 72 Mbps over 40ms. So you should be good there.
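Window scaling works by shifting the advertised 16-bit window left by a negotiated number of bits. A sketch of the arithmetic (the 358,400-byte value is the one from earlier in the thread):

```python
def scale_shift_needed(window_bytes):
    """Smallest RFC 1323 shift count such that 65535 << shift covers the window."""
    shift = 0
    while (65535 << shift) < window_bytes:
        shift += 1
    return shift

def max_throughput_bps(window_bytes, rtt_seconds):
    """One window of data per round trip, in bits/s."""
    return window_bytes * 8 / rtt_seconds

print(scale_shift_needed(358_400))                           # -> 3
print(round(max_throughput_bps(358_400, 0.040) / 1e6, 2))    # -> 71.68 (Mbit/s over 40 ms)
```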
So this is great - we can now get a pile of data UNACKNOWLEDGED out on the wire at once and we're keeping our pipe full.
Then a packet gets dropped.
Without yet another feature, the receiver is only able to acknowledge the last bit of data that is received. This means that the entire window size (70KB+ of data in this case) has to be RE-SENT.
What's worse is that we no longer trust that the network has the carrying capacity to support this massive amount of data we're shoving down the pipe, so as a good network citizen, the window size gets auto-magically cut in half and we (the sender machine) then have to 'slide' it back up to where it was until we're comfortable that we're not going to cause more congestion problems.
That's where SACK (Selective Acknowledgements) comes into play, along with the sysctl value net.inet.tcp.sack.
As long as both sides of the connection support SACK (you'd be surprised at how many do not), the receiver can now specify both the start and end of the data that it received. So the sender can re-transmit only the data that was lost rather than the entire window.
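The difference is easy to see with a toy model: send a ~70 KB window as 1,460-byte segments and drop one in the middle (segment size and loss position here are made up for illustration):

```python
SEGMENT = 1460              # typical TCP payload per segment (illustrative)
window = [SEGMENT] * 48     # roughly 70 KB of data in flight
lost_index = 10             # one segment in the middle is dropped

# Without SACK: the receiver can only ACK data up to the lost segment,
# so everything from the loss onward gets resent.
resend_cumulative = sum(window[lost_index:])

# With SACK: the receiver reports exactly which range is missing,
# so only the lost segment gets resent.
resend_sack = window[lost_index]

print(resend_cumulative)  # -> 55480 (bytes resent without SACK)
print(resend_sack)        # -> 1460 (bytes resent with SACK)
```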
That said, other behaviors such as congestion avoidance (the reduction and re-growth of the window size) will still take place.
There are a number of extensions that have been enabled that try to 'self tune' these parameters and help recover more gracefully from packet loss on these long fat links.
One of them that you'll also see OS X now doing is adding a timestamp option to its packets (also part of RFC 1323). With this, the sender puts a timestamp in each packet it sends, and the other side echoes that timestamp back when it acknowledges the data. Now we know how much time it takes to get from point A to point B (without having to ping it and manually adjust things) and can adjust ourselves.
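The timestamp mechanism boils down to simple subtraction: the ACK echoes the sender's original timestamp, so the sender can measure the round trip without a separate ping. A hypothetical sketch (real TCP timestamps use an opaque clock unit, not seconds):

```python
def rtt_from_echo(send_timestamp, ack_arrival_time):
    """Round-trip time: when the ACK arrived minus the timestamp it echoed back."""
    return ack_arrival_time - send_timestamp

# Segment stamped at t=100.000 s; the ACK echoing that stamp arrives at t=100.042 s
print(round(rtt_from_echo(100.000, 100.042), 3))  # -> 0.042 (a 42 ms round trip)
```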
So this gets to the root of what I think you may be seeing here.
If you are not able to get full speed out of the self-tuning protocols, AND your transfer rates fall back down when you force-set the values (basically kick-starting the self-tuning), you may have packet loss issues.
If you are losing packets, the default behavior is the correct one, since it does not make things worse: the window grows up to the point where the drops start, then backs off, maintaining a balance. Kick-starting, on the other hand, may cause a large number of drops, drastically reducing your throughput and delaying that growth back up to where it optimally should be.
You may want to do a network trace and see if you have packets getting lost.
sudo tcpdump -ni en0 host x.x.x.x
... where x.x.x.x is the host you are having issues with. You may also need to change en0 to en1 if you're using AirPort instead of a wired network. netstat -in will show you which interface to use for sure - check for your current IP address.
If SACK is supported (you'll see 'sackOK' in the connection-initiating packets from both sides - the ones marked with an S for SYN), it'll be pretty easy to detect packet loss.
For more info on how to read these traces, you can check out this thread:
http://discussions.apple.com/message.jspa?messageID=5994347#5994347
I hope this helps.