6 Replies Latest reply: Apr 15, 2006 7:56 AM by Michael Bradshaw
Michael Bradshaw Level 5 Level 5 (4,135 points)
Does anyone have a good pointer to a FAQ/HOWTO for setting ulimit on the Mac? Something that covers where to look for signs that these limits are being hit (/var/log/???), which have to be set via sysctl and which in the shell...

TIA,
Mike

Mac OS X (10.4.3)
  • Gary Kerbaugh Level 6 Level 6 (18,040 points)
    Hi Mike,
   I've never seen much on it. It's not a topic that lends itself to extensive discussion because each of the variables is different. There's very little one can say that applies to all of them, except to give the syntax of the command and to tell you how to set them persistently using a sysctl.conf file. I did a Google search for "sysctl.conf" and all of the hits on the first page were short. The best one looked like "Tuning with sysctl" from the FreeBSD Handbook, but it is also short.
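
   For the record, the sysctl.conf format is just one variable=value pair per line; on 10.4 it's read at boot (I believe by /etc/rc). The variable names below are real kern.* sysctls, but the values are purely illustrative, not recommendations - and as noted, some have compiled-in ceilings:

```
# /etc/sysctl.conf -- read at boot; one variable=value per line.
# For a one-off change at runtime: sudo sysctl -w kern.maxproc=2048
kern.maxproc=2048
kern.maxprocperuid=1024
kern.maxfiles=12288
kern.maxfilesperproc=10240
```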

   You can read a variable back to see if your setting "took", but seeing whether it had any effect is quite dependent on the variable. For instance, to test the effect of maxproc, someone here once wrote a script that simply spawned new processes until it failed.
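
   The same kind of probe can be written for the open-files limit, which is safer to demonstrate than forking in a loop (a hypothetical sketch, assuming bash; the cap of 20 is arbitrary):

```shell
# Lower the soft open-files limit in a throwaway shell, then open
# descriptors until the kernel refuses, counting how many succeeded.
count=$(
  bash -c '
    ulimit -n 20
    i=0
    while :; do
      eval "exec $((i + 10))</dev/null" || exit 0
      i=$((i + 1))
      echo "$i"
    done
  ' 2>/dev/null | tail -n 1
)
echo "opened $count extra descriptors before hitting the 20-fd cap"
```

   Running the probe in a child shell means the lowered limit dies with it, so your login shell is untouched.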

   Also, there are compiled-in limits on some of the variables that I haven't seen documented. For instance, I think the absolute maxproc limit is just over 2000, because one time I tried setting it higher and the setting didn't "take" until I brought it down to about that range.

       Hopefully that gives you some idea why a lengthy discussion would be almost impossible. I'm sorry my news isn't better. Maybe someone has seen a really detailed discussion. I'd like to see it myself.
    --
    Gary
    ~~~~
       Steal this tagline. I did.
  • Nils C. Anderson Level 4 Level 4 (3,495 points)
    Michael,

    While not a FAQ, perhaps this man page will give you a better idea about just what's getting adjusted by ulimit:

    getrlimit, setrlimit -- control maximum system resource consumption

    Andy
  • Michael Bradshaw Level 5 Level 5 (4,135 points)
    Gary and Andy - Thanks for the advice but I'm still looking...

    I suppose I should have explained in a bit more detail what I was looking for in a FAQ. It stands to reason that there is one program (or two) responsible for enforcing these limits.

    Let's take open files or max user processes. I would expect the parent that spawns the process (or opens the file) that breaks the camel's back to report the error to stderr, but logging it to /var/log/system.log or /Library/Logs/Console/$UID/console.log would also seem of practical use to me. Then one could look over those logs and know that a limit had been hit (rather than guessing). Likewise, when we try to set a limit higher than the system supports, feedback in the logs would be nice.

    Even better would be some consistency - ulimit -n unlimited reports success ($? is 0), but actually sets a high limit that seems to match the value of kern.maxfilesperproc (currently 10240 for me), which is certainly not unlimited. On the other hand, ulimit -u unlimited reports an error:

    bash: ulimit: open files: cannot modify limit: Operation not permitted

    and $? is 1 - yet there is still room to increase the max user processes, as ulimit -u 150 works. I can see that some of the hard limits come from where sysctl reads/sets them, and others (like 532 max user processes) might be a bash limitation rather than sysctl??
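
    Part of what's going on is the soft/hard split from getrlimit/setrlimit: an unprivileged user can always raise a soft limit up to the hard cap, but not past it (a generic sketch, not Mac-specific; assuming bash):

```shell
# Inspect the soft and hard caps for open files, then raise the soft
# limit to the hard cap -- the one raise an unprivileged user is
# always allowed to make. Exact numbers are system-dependent.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "open files: soft=$soft hard=$hard"
if ulimit -Sn "$hard" 2>/dev/null; then
  echo "soft limit raised to $hard"
else
  echo "could not raise soft limit past $soft"
fi
```

    So a numeric request below the hard cap succeeds where "unlimited" (i.e. above the cap) fails, which matches the ulimit -u 150 behavior above.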

    I'm lucky enough to have a dual G5 with 2 GB of RAM, and lately I've really been cramped, with processes not starting and perhaps getting killed when too many processes/memory pages are in use. (I use X11 a lot and ssh into many servers to keep tabs on a lot of systems - so that is two processes per window.) At first, when I open a few new xterms, they simply paint briefly on the screen and go away. Later, as the system fills, I sometimes get fork errors saying system resources are temporarily unavailable - again, it would be nice to know exactly which limit: file count, system process count, user process count, etc...

    I'm also a bit spoiled from using other UNIX systems, where we have uptimes of hundreds of days and 7- and 8-digit PID numbers (note those systems are not as deterministic as Darwin and skip around a bit to make it harder to guess the next PID). Am I the only one who finds 2k processes per UID (with 532 per shell) a bit measly?
  • Nils C. Anderson Level 4 Level 4 (3,495 points)
    Michael,

    just my 2 cents.

    To get an idea of how the various resource limits are set, take a look at
    the kernel source - specifically the file xnu-792.6.22/bsd/kern/kern_resource.c and the function dosetrlimit().

    You might also be able to figure out the problem using ktrace/kdump. They will show you which system calls are being made, and from looking over the output it should be possible to get an idea of which resource limit you are slamming into, based on what the application was attempting to do at the time (open, creat, brk, etc...).

    Andy
  • Michael Bradshaw Level 5 Level 5 (4,135 points)
    Well - I never found a concise reference and didn't resort to ktrace, but I'm unwilling to do that work myself. Thanks, Andy and others, for the help on this.