There is no reason to have more than 256 files open. Whatever you are doing, you are doing it wrong.
A setrlimit() call takes effect in the calling process, and the limits are inherited across fork() and exec(). So you can either run "ulimit -n 10000" in the shell (the shell raises its own limit and your program inherits it) and then start your program, or raise the limit from within the program itself, as in this two-part demonstration:
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
    struct rlimit rlp;
    int r;

    r = getrlimit(RLIMIT_NOFILE, &rlp);
    if (r == -1)
    {
        perror("getrlimit()");
        exit(1);
    }
    printf("before %llu %llu\n",
           (unsigned long long) rlp.rlim_cur,
           (unsigned long long) rlp.rlim_max);

    rlp.rlim_cur = 10000;
    /* I'm curious how anyone ever expected it to work or got it to work
       without setting the max, because it sure seems to be required. */
    rlp.rlim_max = 10000;
    r = setrlimit(RLIMIT_NOFILE, &rlp);
    if (r == -1)
    {
        perror("setrlimit()");
        exit(1);
    }

    r = getrlimit(RLIMIT_NOFILE, &rlp);
    if (r == -1)
    {
        perror("getrlimit()");
        exit(1);
    }
    printf("after %llu %llu\n",
           (unsigned long long) rlp.rlim_cur,
           (unsigned long long) rlp.rlim_max);

    /* execl() returns only on failure */
    return execl("/tmp/f", "/tmp/f", (char *) NULL);
}
And the source to the "f" program is:
#include <stdio.h>

int main(void)
{
    FILE *fp[10000];
    int i;

    for (i = 0; i < 10000; i++)
    {
        fp[i] = fopen("a.out", "r");
        if (fp[i] == NULL)
        {
            perror("fopen()");
            fprintf(stderr, "i == %d\n", i);
            return 1;
        }
    }
    return 0;
}
[jdaniel@Pele:584] /tmp $ a.out
before 256 256
setrlimit(): Operation not permitted
[jdaniel@Pele:585] /tmp $ sudo a.out
before 256 256
after 10000 10000
fopen(): Too many open files
i == 9997
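For completeness, the shell route mentioned above looks like this (a sketch; the 10000 figure and the program name are placeholders, and raising the limit past the hard limit still needs privileges):

```shell
# show the current soft and hard limits on open files
ulimit -Sn
ulimit -Hn

# raise the soft limit up to the hard limit for this shell
# (e.g. `ulimit -n 10000` instead, if the hard limit allows it);
# child processes started from this shell inherit it
ulimit -Sn "$(ulimit -Hn)"

# then start the program under the raised limit, e.g.:
# ./myprog        # placeholder for your own binary
```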
As you can see, it is convoluted and a real hassle, but it isn't a bug in Mac OS X 10.6 or any other version.
So, I repeat my question yet again. Why do you want to open more than 256 files? It is, of course, a rhetorical question. Don't do it. If you feel you need more than 256 open files, review your architecture and correct it.