Well, the solution for our setup turned out to be disabling 'Allow Host Cache Flushing' in the RAID Admin software. Might be worth looking into for anyone else seeing this. I'm not taking credit for it, since I found it mentioned in another thread on this forum about CPU hogging. Thanks for that, BTW!
Anyway, our CPUs maxed out after a clean install this summer to 10.5.8 from 10.4.11 on our four G5 Xserves with a 14 TB Apple RAID setup. Apparently that cache setting had no effect under 10.4, but under 10.5 it sure messed things up. I had simply set up 10.5 with the same basic settings I had under 10.4 (which had worked quite well, apart from grinding to a halt with catsearch whenever users searched the server; that was also the main reason we moved to 10.5 Server).
Our load was alright up to ~80-90 connected AFP roaming user homes (per server), at which point the CPU maxed out at 100% and didn't drop until the connected user count was back down to ~40. The I/O was just terrible as well, no matter how many users were connected.
Now, with Host Cache Flushing off, we're back to a normal 30–40% load with 120–140 users.
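If you want to keep an eye on this on your own boxes, something like the rough sketch below (run with sudo) logs the AFP connection count and the load averages once a minute. It's not what we actually run, and the "currentConnections" key is just what I recall serveradmin reporting on 10.5, so check "serveradmin fullstatus afp" yourself for the exact name.

    #!/usr/bin/env python
    # Rough monitoring sketch: log AFP connection count and load averages
    # once a minute so you can see when the CPU starts to climb.
    import subprocess, time

    LOG = "/var/log/afp_load_watch.log"    # example log location

    def afp_connections():
        # serveradmin prints "key = value" lines; grab the connection count.
        # Key name from memory; verify with "serveradmin fullstatus afp".
        out = subprocess.Popen(["/usr/sbin/serveradmin", "fullstatus", "afp"],
                               stdout=subprocess.PIPE).communicate()[0].decode()
        for line in out.splitlines():
            if "currentConnections" in line:
                return line.split("=")[-1].strip()
        return "?"

    def load_averages():
        # On Mac OS X, uptime ends with "load averages: x y z".
        out = subprocess.Popen(["/usr/bin/uptime"],
                               stdout=subprocess.PIPE).communicate()[0].decode()
        return out.strip().split("load averages:")[-1].strip()

    while True:
        entry = "%s  connections=%s  load=%s\n" % (
            time.strftime("%Y-%m-%d %H:%M:%S"), afp_connections(), load_averages())
        open(LOG, "a").write(entry)
        time.sleep(60)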
All clients are tweaked with the wan/quantum setting mentioned elsewhere, and the cache folder is always redirected to the local client. We have about 300 clients on 10.4 and 250 on 10.5, a mix of PPC and Intel. (BTW, for home folder mounting, safekeeping and application distribution we use MacAdministrator, plus ARD for OS updates.) The servers connect to the backbone with gigabit link aggregation, and the clients are on 100Base-T.
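For anyone wondering what the client side roughly amounts to, something along these lines (run as root, e.g. as a login hook) would do both parts. Treat it as a sketch only: the afp_wan_quantum key and value are from memory of that other thread, and the local cache path is just an example, so adjust before rolling anything out.

    #!/usr/bin/env python
    # Sketch of the client-side tweaks, run as root.
    import os, pwd, subprocess, sys

    # 1) The AFP wan/quantum tweak (system-wide). Key name and value are my
    #    recollection of the tweak discussed elsewhere in this forum.
    subprocess.call(["defaults", "write",
                     "/Library/Preferences/com.apple.AppleShareClientCore",
                     "afp_wan_quantum", "-int", "262144"])

    # 2) Point the user's Caches folder at the local disk. Meant to run as a
    #    login hook, where the logging-in user's short name arrives as argv[1].
    user = sys.argv[1]
    rec = pwd.getpwnam(user)
    local = "/Library/Caches/net_users/%s" % user            # example local spot
    remote = os.path.join(rec.pw_dir, "Library", "Caches")   # in the network home

    if not os.path.isdir(local):
        os.makedirs(local)
        os.chown(local, rec.pw_uid, rec.pw_gid)

    if not os.path.islink(remote):
        if os.path.isdir(remote):
            os.rename(remote, remote + ".offline")   # keep the old cache around
        os.symlink(local, remote)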
Now I just wish I could redirect more of the Adobe CS3 suite as well, since most CS apps won't work redirected. And properly working server-side save support from Adobe would be nice indeed, some 20 years later or so.
later,
Jesper