
kernel.log rotation? and runaway process "mount_hfs"

803 Views 3 Replies Latest reply: Apr 22, 2012 8:09 AM by MrHoffman

Jason Buecker
Apr 21, 2012 5:16 PM

Hi folks,

I have 2 serious issues.


The server is a mid-2011 Mac Mini running OS X Server 10.6.8.

The server is in a different city than I am, but I do have SSH and ARD access.



(1) The server is spewing tons of errors to kernel.log (I believe), and it is consuming massive amounts of HD space...


Do I control log rotation for kernel.log in the same manner that I control the various other logs, via /etc/newsyslog.conf?


I did not see a default entry there for kernel.log.

- If it isn't in /etc/newsyslog.conf, how do I govern its size and when it rotates?
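For reference, a newsyslog rule along these lines could govern kernel.log. This is a hedged sketch: the mode, archive count, 100 MB size cap, and bzip2 flag are illustrative assumptions, not stock values (the path /var/log/kernel.log is the usual 10.6 location). The rule is written to a temp file here so it can be reviewed before being appended to /etc/newsyslog.conf for real.

```shell
#!/bin/sh
# Fields: logfilename          mode count size(KB) when flags
cat <<'EOF' > /tmp/kernel-rotate.conf
/var/log/kernel.log    640  5    102400   *    J
EOF
cat /tmp/kernel-rotate.conf
# To activate, append the line to /etc/newsyslog.conf, then dry-run:
#   sudo newsyslog -nvv /var/log/kernel.log
```

newsyslog's `-n` flag prints what a rotation pass would do without touching anything, which is a safe first check over SSH.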




(2) I have a process that is killing my CPU (and probably causing the log bloat problem): "mount_hfs". I cannot for the life of me kill it; I have tried kill -9 <pid> as root.

Nothing happens.


I believe this process is responsible for filling my HD with log errors, and it's happening FAST: 4 GB in 3 hours.

Inspection shows that this process is trying to mount a failed mirrored RAID set. I have since formatted BOTH internal drives, so that RAID no longer exists!


How can I kill this process without a restart? (I tried restarting earlier and the server hung, and I am not at the physical location.)
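For what it's worth, the usual reason kill -9 "does nothing" is that the process is stuck in uninterruptible disk wait (STAT "U" in ps output on OS X), where even SIGKILL is deferred until the blocked I/O returns. A minimal sketch of checking for that, plus a harmless demo of what a SIGKILL that actually lands looks like (the stuck mount_hfs itself can't be reproduced here):

```shell
#!/bin/sh
# Checking whether mount_hfs is wedged in the kernel (sketched only):
#   pgrep -lx mount_hfs               # find its PID
#   ps -o pid,stat,command -p <pid>   # STAT "U" = uninterruptible wait
#   sudo kill -9 <pid>                # ignored while in state U
#
# Demo with a harmless stand-in process:
sleep 300 &
pid=$!
kill -9 "$pid"
wait "$pid" 2>/dev/null
echo "exit status: $?"    # 128 + 9 = 137 means killed by SIGKILL
```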



Thanks to all in advance for helping me out....

I have had a bad day: my server's mirrored RAID set first degraded, then the other, healthy drive failed to boot my server! (I had to resort to an external nightly clone.)

No joy,


Any help is SERIOUSLY appreciated.



  • MrHoffman Level 6 Level 6 (11,700 points)

    First, do you have complete, full and off-this-box disk backups?    (If not, please stop reading now, and go get those created.  Though your comments imply there might not be any data currently on these disks.)


    "mount_hfs" is the command used to mount HFS+ disks.  And given a choice, I generally wouldn't aim kill commands at parts of the file system.


    Log rotations and related are automatic; if they're not working or if you're seeing a blizzard of errors, then there's likely something (else?) going on.


    What are the errors that are (presumably) repeating in the log(s)?  Those log entries will hopefully point to some of what's being encountered.  Given what you're reporting, I'd tend to guess that there was a failed disk here.
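Over SSH, something like this can show which log is ballooning and what it keeps repeating (the stock /var/log locations are assumed here; adjust for your install):

```shell
#!/bin/sh
# List the largest entries under /var/log, sizes in KB, biggest last:
du -sk /var/log/* 2>/dev/null | sort -n | tail -5
# Then read the tail of the suspect log, e.g. kernel.log if present:
if [ -f /var/log/kernel.log ]; then
    tail -20 /var/log/kernel.log
fi
```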


    Given the vintage of this system, it may be under warranty or it will be under AppleCare if that was purchased.  If it is covered, call Apple.  If not, then swapping a disk is probably going to be on the agenda.

  • MrHoffman Level 6 Level 6 (11,700 points)

    Did you wipe the disks, reinstall, and migrate in the old data, or did you restore from backup?  (I'd tend to guess the latter, based on what's in that log.)


    If that guess is correct, see whether Disk Utility, booted from an installation DVD, can clear the RAID1 settings from the configuration.  Wipe all the disks inside the box, reinstall OS X (or OS X Server) from the distribution media, and migrate your old RAID disk data back in from your backups.  (Apple's RAID1 recovery sequence is HT2559, but that doesn't look applicable to the current state of the RAID1 volume set here.)  You might choose to re-form the RAID using Disk Utility, of course.
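Since the box is reachable over SSH, diskutil can also inspect and clear a stale AppleRAID set without booting from the DVD. A sketch, with the UUID as a placeholder; the delete step is destructive, so check the list output first:

```shell
#!/bin/sh
# List any AppleRAID sets the OS still knows about; a broken mirror
# would show up here even after its member disks were reformatted.
if command -v diskutil >/dev/null 2>&1; then
    diskutil appleRAID list
    # Destructive: delete the stale set by its RAID Set UUID.
    #   sudo diskutil appleRAID delete <RAID-set-UUID>
else
    echo "diskutil not present (not an OS X system)"
fi
```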


    In the near- to mid-term, I view RAID5 as a "doomed" RAID format.  It was a reasonable choice when disks were small and very expensive.  But the bigger and cheaper disks get, the more likely a secondary (and catastrophic) error arises during the massive, massive, massive I/O overhead of a RAID5 recovery from that initial unrecoverable disk error.  At some capacity, that likelihood will approach a certainty, and the RAID5 format will be dead.  Given that risk, my preference here would be RAID6 or RAID10; not RAID5, if I had that choice.  (I have some more reading on disk error rates and RAID recovery overhead and related on the HoffmanLabs web site.)


    Local preference is RAID6 or RAID10, and backups, as all RAID levels are vulnerable to corruptions and to deletions and to security breaches which modify or delete the contents of the disks.


    The other (good) option for a dual-spindle Mac Mini Server is a single disk with a backup disk: dump zips of files as they change, and regularly dump database backups of your production data onto the secondary volume.  Saving all of the OS X Server bits isn't as critical or as necessary as saving your application files when they change, plus regular or nightly mysqldump dumps of your databases (for instance), so it's possible to get decent coverage with some effort on your part, with just two disks.  Or yes, go to an external array, or mirror the databases to a second Mac Mini Server, etc.
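As a concrete example of that nightly-dump approach, something like this could run from root's crontab. The database name "mydb", the script path, and the /Volumes/Backup destination are placeholders for illustration, not anything from this thread:

```shell
#!/bin/sh
# crontab entry (run at 02:30 nightly):
#   30 2 * * * /usr/local/sbin/nightly-dump.sh
#
# nightly-dump.sh would do, roughly:
#   mysqldump --single-transaction mydb | gzip \
#       > "/Volumes/Backup/db/mydb-$(date +%Y%m%d).sql.gz"
#   # keep two weeks of dumps, prune the rest:
#   find /Volumes/Backup/db -name 'mydb-*.sql.gz' -mtime +14 -delete
#
# The date-stamped filename such a run would create tonight:
echo "mydb-$(date +%Y%m%d).sql.gz"
```

`--single-transaction` gets a consistent dump of InnoDB tables without locking the server for the duration.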


    I've worked with some very solid software-based RAID1 implementations, and the Apple software unfortunately isn't quite as solid as that stuff yet.  That other and more solid RAID software took probably ten years of design and development and heavy use and engineering remedial fixes to get as solid as it was, and that stuff still had the occasional glitch.  This RAID recovery processing is not an easy problem; there are all manner of weird failures that disks can toss.


    And FWIW, I've worked with hardware-implemented RAID1 and RAID5 that was flaky.


    There's no panacea here.



