Gary and Andy - Thanks for the advice but I'm still looking...
I suppose I should have explained what I was looking for in a bit more detail with regard to the FAQ. It stands to reason that there is one program (or two) responsible for enforcing these limits.
Let's take open files or max user processes. I would expect the parent spawning the process (or opening the file) that breaks the camel's back to report the error to stderr, but also logging it to /var/log/system.log or /Library/Logs/Console/$UID/console.log would be of practical use to me. Then one could look over these logs and know that a limit had been breached (rather than guessing). Likewise, when we try to set a limit higher than the system supports, feedback in the logs would be nice.
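Absent that logging, the current per-process limits and the kernel-wide ceilings can at least be compared by hand. A minimal sketch (the kern.* names are the Darwin sysctls discussed here and won't exist on other systems, hence the guard):

```shell
# Soft and hard per-process limits for open files and user processes.
echo "open files (soft/hard): $(ulimit -Sn) / $(ulimit -Hn)"
echo "user procs (soft/hard): $(ulimit -Su 2>/dev/null) / $(ulimit -Hu 2>/dev/null)"
# Kernel-wide ceilings on Darwin (assumed names; silently skipped elsewhere).
sysctl kern.maxfiles kern.maxfilesperproc kern.maxproc kern.maxprocperuid 2>/dev/null \
  || echo "kern.* sysctls not available on this system"
```

A soft limit sitting right at the kernel ceiling is a hint of where the hard stop will come from.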
Even better would be some consistency:
ulimit -n unlimited reports success ($? is 0), but instead sets a high limit that seems to match the value of kern.maxfilesperproc (currently 10240 for me), which is certainly not unlimited. On the other hand,
ulimit -u unlimited reports an error:
bash: ulimit: open files: cannot modify limit: Operation not permitted
and $? is 1 - yet there is still room to increase the max user processes, since
ulimit -u 150 works. I can see that some of the hard limits come from where sysctl reads/sets them, and others (like the 532 max user processes) might be a bash limitation rather than a sysctl one?
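The mismatch is easy to demonstrate side by side. A sketch, running each call in a subshell so the current shell's limits are left untouched (exact exit codes and error text vary by shell and OS):

```shell
# Each ulimit runs in a subshell; the parent shell's limits are unchanged.
(ulimit -n unlimited 2>/dev/null) \
  && echo "-n unlimited: exit 0 (but verify the actual value with ulimit -n)" \
  || echo "-n unlimited: exit $?"
(ulimit -u unlimited 2>/dev/null) \
  && echo "-u unlimited: exit 0" \
  || echo "-u unlimited: exit $?"
(ulimit -u 150 2>/dev/null && echo "-u 150: accepted, now $(ulimit -u)") \
  || echo "-u 150: exit $?"
```

The first case is the troubling one: a zero exit status even though the requested value was silently clamped.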
I'm lucky enough to have a dual G5 with 2 GB of RAM, and lately I've really been cramped, with processes not starting and perhaps getting killed when too many processes/memory pages are in use (I use X11 a lot and ssh onto many servers to keep tabs on a lot of systems, so that's two processes per window). At first, when I open a few new xterms, they simply paint briefly on the screen and go away. Later, as the system fills, I sometimes get fork/exec errors saying that system resources are temporarily unavailable. Again, it would be nice to know exactly which one: file, system process count, user process count, etc.
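When forks start failing, it would at least help to see which ceiling is closest. A rough check is to compare live usage against the soft limits; this is a sketch assuming POSIX ps options, and it only covers the per-user process count and this shell's file limit:

```shell
# Compare this user's live process count against the per-user soft limit.
used=$(ps -u "${USER:-$(id -un)}" -o pid= | wc -l | tr -d ' ')
limit=$(ulimit -u 2>/dev/null || echo "unknown")
echo "processes in use: $used (soft limit: $limit)"
# Open-file soft limit for this shell; per-process file usage would need lsof.
echo "open-files soft limit: $(ulimit -n)"
```

If the process count is within a handful of the limit, the disappearing xterms are no mystery.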
I'm also a bit spoiled from using other UNIXes, where we have uptimes of hundreds of days and 7- and 8-digit PID numbers (note that those UNIXes are not as deterministic as Darwin, and skip around a bit to make it harder to guess the next PID). Am I the only one who finds 2k processes per UID (with 532 per shell) a bit measly?