The relevant part is down near the bottom, "hlink.c". I haven't tested it extensively, but it seems to work.
Or did I misunderstand the question? If by asking "how" you meant how it is possible at all, given that on Unix systems you aren't supposed to be able to, then I don't know...
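For what it's worth, the essential part of a tool like that is presumably nothing more than a link(2) call on a directory; the sketch below is my own guess at it (the usage and error handling are assumptions, not the actual contents of hlink.c):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s existing_dir new_link\n", argv[0]);
        return 1;
    }
    /* Ask the kernel for a second name for an existing directory.  Whether
     * this succeeds depends on the filesystem (see the VFC_VFSDIRLINKS
     * check in the kernel source below); on most filesystems it fails
     * with EPERM. */
    if (link(argv[1], argv[2]) == -1) {
        fprintf(stderr, "link: %s\n", strerror(errno));
        return 1;
    }
    return 0;
}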
Look at the following kernel source code (xnu-1228)
from: bsd/vfs/vfs_syscalls.c:
int
link(__unused proc_t p, struct link_args *uap, __unused register_t *retval)
{
.
.
.
/*
* Normally, linking to directories is not supported.
* However, some file systems may have limited support.
*/
if (vp->v_type == VDIR) {
if (!(vp->v_mount->mnt_vtable->vfc_vfsflags & VFC_VFSDIRLINKS)) {
error = EPERM; /* POSIX */
goto out;
}
Check the HFS code:
from: "bsd/hfs/hfs_vfsops.c"
if (!(hfsmp->hfs_flags & HFS_STANDARD)) {
/* Tell VFS that we support directory hard links. */ <<<<<
mp->mnt_vtable->vfc_vfsflags |= VFC_VFSDIRLINKS;
There is nothing that would stop a filesystem implementation from supporting directory hard-links. However, this feature can easily lead to undetectable loops that could send path look-ups into infinite spins, and is therefore considered harmful and usually forbidden in general purpose filesystems.
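To make the loop problem concrete, here is a sketch of the kind of cycle that warning is about. It assumes a filesystem that would actually honor the second link(2) call; real implementations add extra restrictions precisely to prevent this:

#include <sys/stat.h>
#include <unistd.h>

int
main(void)
{
    mkdir("loop", 0755);
    /* A second name for "loop", placed inside "loop" itself.  If this were
     * allowed, any naive recursive walk (find, a backup tool, ...) would
     * descend loop/self/self/self/... forever. */
    link("loop", "loop/self");
    return 0;
}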
Time Machine makes use of directory hard-links for space-efficiency reasons. Remember, it brings unchanged files forward into the most recent backup by hard-linking them. If you have large directories full of unchanged files, creating a new directory for every hourly backup and populating it with hard-links to all the files in it is vastly less efficient than simply creating a single hard-link to the directory.
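As a rough illustration of that trade-off (not Time Machine's actual code, and the snapshot paths are made up), compare one link(2) call per file against a single call for the whole directory, which is only possible when the filesystem advertises VFC_VFSDIRLINKS:

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* One link(2) call and one new directory entry per file; subdirectories
 * are ignored here for brevity. */
static void
carry_forward_per_file(const char *prev, const char *next)
{
    char from[PATH_MAX], to[PATH_MAX];
    struct dirent *e;
    DIR *d;

    mkdir(next, 0755);
    if ((d = opendir(prev)) == NULL)
        return;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;
        snprintf(from, sizeof(from), "%s/%s", prev, e->d_name);
        snprintf(to, sizeof(to), "%s/%s", next, e->d_name);
        link(from, to);   /* ordinary file hard-link */
    }
    closedir(d);
}

/* One link(2) call total -- a directory hard-link. */
static void
carry_forward_whole_dir(const char *prev, const char *next)
{
    link(prev, next);
}

int
main(void)
{
    /* Placeholder snapshot paths, purely illustrative. */
    carry_forward_per_file("Backups/10am/Documents", "Backups/11am/Documents");
    carry_forward_whole_dir("Backups/10am/Photos", "Backups/11am/Photos");
    return 0;
}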
In general, directory hard-links are a dangerous feature, but under controlled circumstances (like Time Machine's use of them) lifting the ban can prove beneficial.