6 Replies Latest reply: Apr 26, 2014 4:22 AM by mark-in-seattle
sungshik Level 1

Does anyone partition their internal hard drive? What are the pros and cons?

MacBook Pro, Mac OS X (10.6.8)
  • Pondini Level 8

    There are some circumstances where it makes sense, but in most cases, no.  OSX runs best, by far, if OSX, apps, and data are all on the same partition.


    If you want to be able to run two different versions of OSX, such as Snow Leopard and Lion, you can either partition your internal HD or put one on an external HD, then start up from the one you want to use.


    Some power users make a "repair and recovery" partition, where they install various tools that can fix things on an OSX volume, but not while running from it.

  • mark-in-seattle Level 1

    Yes, I definitely partition my internal hard drive.  Main reason: the fragility of the HFS+ filesystem, Apple's horrible, antiquated embarrassment.   It has no real error correction and lacks the most rudimentary modern filesystem features found in NTFS, ext3, ext4, ZFS, btrfs, etc.  Appalling.  Mountain Lion was supposed to have introduced a new (maybe ZFS-variant) filesystem, but Apple got cold feet and limped along with creaky 15-year-old HFS+... ugh.  How they can build such advanced hardware as the new Mac Pro and let the OS fly without any error-correction net, on HFS+, is beyond comprehension.


    By partitioning I can store important data on a REAL filesystem, in my case ZFS.  Also, HFS+ suffers from creeping fragmentation, more slowly than some Microsoft filesystems, but don't believe the Mac OS X fanboys who claim Mac OS X and HFS+ never need defragmentation: they do, and defragging will rejuvenate a tired Mac.  I've saved many friends' Macs with an overnight defrag using iDefrag, an inexpensive Mac OS X app.  Defragging a system with the OS on one partition, the swapfile on another (a small first visible partition), and data on a third partition is much easier and more productive, in my opinion.


    I also format the data partition with at least a 16K, and sometimes 32K, cluster size instead of the HFS+ default of 4K.  That means more efficient data transfers when the data is mostly large-ish video and audio files, etc.  You do this from the command line after creating a blank partition with Disk Utility; Google "Mac OS X large cluster size" for more info.  It works great for me on several systems, but be careful with the commands, because if done incorrectly you can damage your OS installation.  The performance gain is real, especially for video editors.
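    For anyone trying this, a hedged sketch: the reformat itself is done from the command line with a tool like newfs_hfs (the exact command and device name depend on your setup, and pointing it at the wrong device can wipe a volume, so double-check). Afterwards you can verify what allocation block size a mounted volume actually has from Python:

```python
import os

def allocation_block_size(mount_point: str) -> int:
    # f_frsize is the filesystem's fundamental (allocation) block size.
    # A stock HFS+ volume reports 4096; a volume formatted with a larger
    # cluster size should report 16384 or 32768 instead.
    return os.statvfs(mount_point).f_frsize

size = allocation_block_size("/")
print(size)                      # e.g. 4096 on a default volume
```

    Checking this after formatting is a cheap way to confirm the larger cluster size actually took effect before copying data over.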

  • Barry Hemphill Level 8

    While interesting, your post responds to one that is about three years old.



  • PlotinusVeritas Level 6


    mark-in-seattle wrote:


    Main reason: the fragility of the HFS+ filesystem, Apple's horrible, antiquated embarrassment.

    Nonsense and absurd



    And you're 3 YEARS LATE on the answer.





    Giving a +1 to the late great Pondini



    Homage, O wise captain

  • mark-in-seattle Level 1

    I posted because users doing Google searches for an answer to this question don't care when a useful (I hope) piece of information was added.  They just want an answer, and since no one else had made a positive suggestion about the value of multi-partitioning, I did, based on my own direct experience.


    Trying to pay it forward since 1977 on the "internets" (Tektronix DARPA net node - Cyber70 mainframe)

  • mark-in-seattle Level 1

    Just Google "What's wrong with HFS+" or go directly to the Ars Technica review of Lion from 2011, which delves into some of the hard facts about the back-to-the-future design of HFS+.




    Here is a sampling from that excellent article; read carefully the section on the lack of data integrity in HFS+:


    When searching for unused nodes in a b-tree file, Apple's HFS+ implementation processes the data 16 bits at a time. Why? Presumably because Motorola's 68000 processor natively supports 16-bit operations. Modern Mac CPUs have registers that are up to 256 bits wide.
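    To make the 16-bit point concrete, here's a hedged sketch (my own illustration, not Apple's actual code) of scanning a node-allocation bitmap one 16-bit word at a time:

```python
def first_free_node(bitmap: bytes) -> int:
    """Return the index of the first clear bit, scanning 16 bits at a time
    the way the excerpt describes HFS+ doing it; -1 if every bit is set."""
    for i in range(0, len(bitmap), 2):
        word = int.from_bytes(bitmap[i:i + 2], "big")  # HFS+ metadata is big-endian
        if word != 0xFFFF:  # at least one clear bit somewhere in this word
            for bit in range(16):
                if not word & (0x8000 >> bit):
                    return i * 8 + bit
    return -1

# Nodes 0-24 in use, node 25 free:
print(first_free_node(b"\xff\xff\xff\xbf"))  # 25
```

    A modern implementation would scan in 64-bit (or wider) chunks and cover four times as much bitmap per loop iteration; the logic is otherwise identical.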


    All HFS+ file system metadata read from the disk must be byte swapped because it's stored in big-endian form. The Intel CPUs that Macs use today are little-endian; Motorola 68K and PowerPC processors are big-endian. (The performance cost of this is negligible; it's mostly just silly.)
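    The byte swap in question is trivial; this hedged sketch (my own, with made-up example bytes) shows the same four on-disk bytes interpreted both ways:

```python
import struct

raw = bytes.fromhex("0001e240")  # four metadata bytes as stored on disk, big-endian

(correct,) = struct.unpack(">I", raw)  # swapped for a little-endian CPU
(naive,)   = struct.unpack("<I", raw)  # what a raw little-endian read would see
print(correct, naive)  # 123456 1088553216
```

    The swap itself costs almost nothing on modern CPUs, which is why the excerpt calls it silly rather than slow.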

    The time resolution for HFS+ file dates is only one second. That may have been sufficient a few decades ago when computers and disks were slower, but today, many thousands of file system operations (and many billions of CPU cycles) can be executed in a second. Modern file systems have up to nanosecond precision on their file dates.
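    You can see the difference yourself; on a filesystem with sub-second timestamps the nanosecond field carries extra digits, while an HFS+ date would always land on a whole second (a hedged sketch, and the result depends on the volume you run it on):

```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

mtime_ns = os.stat(path).st_mtime_ns   # nanosecond-resolution modification time
sub_second = mtime_ns % 1_000_000_000  # stays 0 on a 1-second-resolution filesystem
print(mtime_ns, sub_second)
os.unlink(path)
```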


    File system metadata structures in HFS+ have global locks. Only one process can update the file system at a time. This is an embarrassment in an age of preemptive multitasking and 16-core CPUs. Modern file systems like ZFS allow multiple simultaneous updates, even to files that are in the same directory.


    HFS+ lacks sparse file support, which allows space to be allocated only as needed in large files. Think about an application that creates a 1GB database file, then writes a few bytes at the start as a header and a few bytes at the end as a footer. On HFS+, slightly less than a gigabyte of zeros would have to be written to disk to make that happen. On a modern file system with sparse file support, only a few bytes would be written to disk.
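    The database example can be reproduced directly (a hedged sketch: on a filesystem with sparse support the physical usage stays tiny, while on HFS+ it would balloon to roughly a gigabyte of zeros):

```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"HDR")            # a few header bytes at the start
    f.seek(1_000_000_000)      # jump ~1 GB forward without writing anything
    f.write(b"FTR")            # a few footer bytes at the end
    path = f.name

logical = os.path.getsize(path)           # ~1 GB logical size either way
physical = os.stat(path).st_blocks * 512  # blocks actually allocated on disk
print(logical, physical)
os.unlink(path)
```

    On a sparse-capable filesystem, physical comes back a few kilobytes; the gap between the two numbers is exactly the space sparse files save.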


    Some of those features were an easy fit, but others were very difficult to add to the file system without breaking backwards compatibility. One particularly scary example is the implementation of hard links on HFS+. To keep track of hard links, HFS+ creates a separate file for each hard link inside a hidden directory at the root level of the volume. Hidden directories are kind of creepy to begin with, but the real scare comes when you remember that Time Machine is implemented using hard links to avoid unnecessary data duplication.


    Listing the contents of this hidden directory (named "HFS+ Private Data", but with a bunch of non-printing characters preceding the "H") on my Time Machine backup volume reveals that it contains 573,127 files. B-trees or no b-trees, over half a million files in a single directory makes me nervous.
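    Hard links themselves are easy to observe (a hedged sketch of the generic POSIX mechanism, not of HFS+'s hidden-directory bookkeeping):

```python
import os
import tempfile

d = tempfile.mkdtemp()
original = os.path.join(d, "original")
link = os.path.join(d, "link")

with open(original, "w") as f:
    f.write("payload")

os.link(original, link)  # create a second directory entry for the same file

same_inode = os.stat(original).st_ino == os.stat(link).st_ino
nlink = os.stat(original).st_nlink  # link count now reflects both names
print(same_inode, nlink)  # True 2
```

    On most filesystems both names point at one shared inode; HFS+ instead fakes this with per-link files in that hidden root-level directory, which is what makes the Time Machine case so hair-raising.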


    That feeling is compounded by the most glaring omission in HFS+—and, to be fair, many other file systems as well. HFS+ does not concern itself with data integrity. The underlying hardware is trusted implicitly. If a few bits or bytes get flipped one way or the other by the hardware, HFS+ won't notice. This applies to both metadata and the file data itself.


    Data corruption in file system metadata structures can render a directory or an entire disk unreadable. (For a double-whammy, think about corruption that affects the "HFS+ Private Data" directory where every single hard link file on a Time Machine volume is stored.) Corruption in file data is arguably worse because it's much more likely to go undetected. Over time, it can propagate into all your backups. When it's finally discovered, perhaps years later when looking at old baby pictures, it's too late to do anything about it.