
Time Machine always requires deep traversal?

Hi all-


For the tl;dr crowd: every time my Mac Pro starts a backup, it does a deep traversal (this has gone on for about two months since I got the drive). I'm backing up to a 2TB USB external drive that is always connected. I've added the backup drive to Spotlight's exclude/privacy list. Yesterday I rebooted the machine using the OSX installation disk and performed disk repairs on both the system drive and the backup drive (neither was found to have any errors). I don't have clamXav or any other AV installed. What else can I try to avoid the deep traversal? Here are the console logs for two back-to-back backups (note that these are just before I got into the office, so I don't think user interaction is forcing the deep traversal):


5/31/12 6:53:32 AM
com.apple.backupd[5425]
Starting standard backup
5/31/12 6:53:33 AM
com.apple.backupd[5425]
Backing up to: /Volumes/Time Machine Backups/Backups.backupdb
5/31/12 6:55:00 AM
com.apple.backupd[5425]
Node requires deep traversal:/ reason:contains changes|must scan subdirs|fsevent|
5/31/12 6:57:11 AM
_spotlight[5455]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 6:57:11 AM
_spotlight[5460]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 6:57:21 AM
com.apple.backupd[5425]
No pre-backup thinning needed: 1.87 GB requested (including padding), 1.36 TB available
5/31/12 6:57:23 AM
_spotlight[5466]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 6:57:23 AM
_spotlight[5471]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 6:57:23 AM
_spotlight[5476]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 6:57:23 AM
_spotlight[5481]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 6:57:34 AM
com.apple.backupd[5425]
Copied 2085 files (17.3 MB) from volume Snow Leopard.
5/31/12 6:59:02 AM
com.apple.backupd[5425]
Node requires deep traversal:/ reason:contains changes|must scan subdirs|fsevent|
5/31/12 7:01:05 AM
_spotlight[5498]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 7:01:05 AM
_spotlight[5503]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 7:01:13 AM
com.apple.backupd[5425]
No pre-backup thinning needed: 1.87 GB requested (including padding), 1.36 TB available
5/31/12 7:01:15 AM
_spotlight[5508]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 7:01:15 AM
_spotlight[5513]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 7:01:15 AM
_spotlight[5518]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 7:01:15 AM
_spotlight[5523]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 7:01:26 AM
com.apple.backupd[5425]
Copied 2085 files (16.9 MB) from volume Snow Leopard.
5/31/12 7:02:57 AM
com.apple.backupd[5425]
Starting post-backup thinning
5/31/12 7:32:41 AM
com.apple.backupd[5425]
Deleted backup /Volumes/Time Machine Backups/Backups.backupdb/warp/2012-05-29-143904: 1.36 TB now available
5/31/12 7:32:41 AM
com.apple.backupd[5425]
Post-back up thinning complete: 1 expired backups removed
5/31/12 7:32:42 AM
com.apple.backupd[5425]
Backup completed successfully.
5/31/12 7:53:33 AM
com.apple.backupd[5706]
Starting standard backup
5/31/12 7:53:33 AM
com.apple.backupd[5706]
Backing up to: /Volumes/Time Machine Backups/Backups.backupdb
5/31/12 7:55:13 AM
com.apple.backupd[5706]
Node requires deep traversal:/ reason:contains changes|must scan subdirs|fsevent|
5/31/12 7:57:26 AM
_spotlight[5735]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 7:57:26 AM
_spotlight[5740]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 7:57:35 AM
com.apple.backupd[5706]
No pre-backup thinning needed: 1.87 GB requested (including padding), 1.36 TB available
5/31/12 7:57:37 AM
_spotlight[5745]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 7:57:37 AM
_spotlight[5750]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 7:57:37 AM
_spotlight[5755]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 7:57:37 AM
_spotlight[5760]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 7:57:48 AM
com.apple.backupd[5706]
Copied 2085 files (18.9 MB) from volume Snow Leopard.
5/31/12 7:59:15 AM
com.apple.backupd[5706]
Node requires deep traversal:/ reason:contains changes|must scan subdirs|fsevent|
5/31/12 8:01:24 AM
_spotlight[5778]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 8:01:24 AM
_spotlight[5783]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 8:01:32 AM
com.apple.backupd[5706]
No pre-backup thinning needed: 1.87 GB requested (including padding), 1.36 TB available
5/31/12 8:01:34 AM
_spotlight[5788]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 8:01:34 AM
_spotlight[5793]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 8:01:34 AM
_spotlight[5798]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 8:01:34 AM
_spotlight[5803]
vol.notice /usr/fl/etc/volume.cfg not found
5/31/12 8:01:43 AM
com.apple.backupd[5706]
Copied 2085 files (18.9 MB) from volume Snow Leopard.
5/31/12 8:03:15 AM
com.apple.backupd[5706]
Starting post-backup thinning
5/31/12 8:31:33 AM
com.apple.backupd[5706]
Deleted backup /Volumes/Time Machine Backups/Backups.backupdb/warp/2012-05-29-123915: 1.36 TB now available
5/31/12 8:31:33 AM
com.apple.backupd[5706]
Post-back up thinning complete: 1 expired backups removed
5/31/12 8:31:35 AM
com.apple.backupd[5706]
Backup completed successfully.


The long version: in March I got a 2TB USB external drive to back up the Mac Pro at my work. I've had good success with Time Machine at home, backing up two MacBooks and a Mac Mini with no problems whatsoever (it really "just works"), so I was surprised when Time Machine struggled with the Mac Pro. I did a lot of research on the web and followed as much advice as I could (excluding the backup drive from Spotlight, making sure backups completed uninterrupted, repairing drives, etc.), but I continue to have backups that take almost a full hour (and thus Time Machine is almost constantly running). The actual quantity of data backed up is pretty small (similar to the log above, it's usually 20-50 MB).


During my initial backup I ran into a problem with Git which may be pertinent, but I'm not sure. I was in charge of my company's conversion from Subversion to Git, and in the process I locally cloned a bunch of Git repositories (on the order of 300-400 repos). Since Git uses hard links between local clones, this all fit on my 500GB drive no problem. But Time Machine attempted to back up each directory independently, and the backup ballooned to over 2.5TB and didn't fit on the 2TB external drive. After I excluded the Git repos from the backup, the initial backup completed without any visible problems.


Otherwise the machine is used for pretty standard software development (python, gcc, gdb, git), plus standard office email/web browsing.


I've been digging into this on and off since I first noticed the problem shortly after getting the drive and I'm running out of ideas. Anyone have additional suggestions on how to avoid the deep traversal? Let me know if there's additional useful information I've left out.


Thanks!

Stephen

Mac Pro (Mid 2010), Mac OS X (10.6.8)

Posted on May 31, 2012 6:23 AM

Question marked as Top-ranking reply

Posted on May 31, 2012 6:03 PM

Stephen Bash wrote:

. . .

5/31/12 6:55:00 AM com.apple.backupd[5425] Node requires deep traversal:/ reason:contains changes|must scan subdirs|fsevent|
...
5/31/12 6:59:02 AM com.apple.backupd[5425] Node requires deep traversal:/ reason:contains changes|must scan subdirs|fsevent|

That's curious. I've rarely seen those reason codes; it looks like there was a very large volume of file system changes, but that usually sends a different message.


Even stranger, it's doing a deep traversal on both passes of the same backup. I don't recall seeing that before, either, so I suspect a problem with the File System Event Store, the hidden log of file changes that OS X keeps on each volume. TM normally uses it to see what needs to be backed up, instead of doing a deep traversal.
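As a rough sketch of that fallback logic (an illustration of the general technique, not Apple's actual implementation): a backup tool keeps a per-volume event-store UUID and a "last event ID" bookmark with each snapshot, and falls back to a full scan whenever that bookmark can't be trusted:

```python
# Sketch of the deep-traversal fallback decision. The 2^63-1 sentinel is the
# "bad" bookmark value that turns up later in this thread; treating any value
# at or above it as unusable is an assumption of this sketch.
INVALID_BOOKMARK = 2**63 - 1

def plan_scan(saved_uuid, saved_event_id, current_uuid, oldest_available_id):
    """Return 'deep-traversal' or 'event-replay' for the next backup."""
    if saved_uuid != current_uuid:
        # Event store was rebuilt (crash, unsafe unmount): history is gone.
        return "deep-traversal"
    if saved_event_id is None or saved_event_id >= INVALID_BOOKMARK:
        # No bookmark, or a nonsense one.
        return "deep-traversal"
    if saved_event_id < oldest_available_id:
        # The log was pruned past our bookmark, so changes were dropped.
        return "deep-traversal"
    return "event-replay"
```

Under this model, a deep traversal on every run means one of those three checks is failing every time, even though nothing on the volume actually changed that much.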


Are you running any apps that routinely make many, many changes to the file system (files or folders added, changed, moved, renamed, etc.)?


5/31/12 7:01:05 AM _spotlight[5503] vol.notice /usr/fl/etc/volume.cfg not found

Spotlight is also very unhappy about whatever that is. I don't have an fl folder in my /usr folder, so I have no idea what it is, or whether it's connected to the Time Machine problem, but it sure looks suspicious.


Any idea what it is?

174 replies

Jun 27, 2012 6:01 PM in response to Pondini

Pondini wrote:


If you've tried everything he or I listed above, all I can suggest now is, something damaged in your installation of OSX. You might try installing the "combo" update, per Installing the ''combo'' update and/or Reinstalling OSX. If that doesn't help, installing a fresh version of OSX (that won't disturb anything else), per the same article, might do the trick.


(Since Stephen hasn't posted back with his results of that, we don't have a very good idea whether that may fix it or not.)


Sorry for the radio silence. It's been a busy couple of weeks at work (and it's my office machine that is misbehaving). Hopefully next week with the holiday (and a lot of the office taking time off), I can find a bit of time to try the combo update.


Thanks,

Stephen

Jun 28, 2012 7:28 AM in response to Pondini

Hello,

I did try the combo install. No changes.

I think I'll do a full reinstallation next week, including formatting the HDD... This will clear everything and let me discard old applications I don't use anymore. 😉

I'll tell you if it changes something.

Just a note: I also do a bit of programming on my Mac, using Lazarus installed with Fink. I noticed that Stephen is also doing programming; maybe there is something there, I don't know...

Cheers


Pierrick

Jun 28, 2012 7:36 AM in response to PierrickL

PierrickL wrote:


I did try the combo install. No changes.


Hm. That's unfortunate. I'll give it a run just in case, but I'm pessimistic I'll get a different result.


PierrickL wrote:


Just a note: I also do a bit of programming on my Mac, using Lazarus installed with Fink. I noticed that Stephen is also doing programming; maybe there is something there, I don't know...


Interesting. For the record, my build directories are excluded from TM (I thought they would generate the most churn), though the actual source code directory is not (I've got some unversioned/untracked files that I want TM to back up).


Looking for other parallels: What version of MacBook Pro do you have? (The model identifier in System Profiler.) I'm assuming you're running Snow Leopard? Do you have any Lion partitions in your system?


Thanks,

Stephen

Jul 4, 2012 2:33 AM in response to PierrickL

Hi Stephen, hi Pondini,


So, I reformatted my Mac and reinstalled everything. On the plus side, the computer is working much faster and I got rid of plenty of unused apps! TM is also working faster, and I don't get the "Node requires deep traversal:/ reason:contains changes|must scan subdirs|fsevent|" message anymore...


However, TM is still worrisome: it still takes 15 minutes to back up about 1 GB, and 28 minutes for 8 GB...

Everything goes smoothly until it starts copying the files, then it takes forever (or seems to, anyway), with a lot of pauses in the transfer...


So all in all, it's better but still needs improvement! Any idea would be most welcome!


Cheers


Pierrick

Jul 4, 2012 4:57 AM in response to PierrickL

PS: funny thing is that it always seems to back up approximately the same number of files; see the TMBuddy message below (backup done in 16 min 50 sec):

"Starting standard backup

Backing up to: /Volumes/TMSauv/Backups.backupdb

No pre-backup thinning needed: 2.65 GB requested (including padding), 125.74 GB available

Waiting for index to be ready (100)

Copied 99997 files (3.3 GB) from volume HDDmac.

No pre-backup thinning needed: 1.59 GB requested (including padding), 122.37 GB available

Copied 99996 files (37.2 MB) from volume HDDmac.

Starting post-backup thinning

No post-back up thinning needed: no expired backups exist

Backup completed successfully."

And it is quite similar for the previous backups...


Cheers

Jul 4, 2012 7:59 AM in response to PierrickL

PierrickL wrote:


PS: funny thing is that it always seems to back up approximately the same number of files; see the TMBuddy message below (backup done in 16 min 50 sec):

Now that's something we've seen on a very few occasions: extremely slow backups, with very large numbers of files but a small total of data backed up.


Are you running 10.6.8? If so, in some of those cases, reinstalling Snow Leopard (OSX only; not erasing anything), and installing the 10.6.7 "combo" (not going back to 10.6.8) seemed to solve it.


Very odd, but might be worth a try.

Jul 9, 2012 11:35 AM in response to Pondini

I am also having a problem with Time Machine backups taking way too long. Running 10.6.8. Incremental backups with no applications running take anywhere from 10 to 25 minutes. My guess is there is a problem with FSEvents, as numerous directories are being backed up on each increment for no reason (folders/files not modified).



For instance, the following directory is backed up every hour even though neither the directory nor the single file in it shows as modified. The file itself is not re-backed up, only the directory:



/Volumes/MacintoshHD/Applications/iTunes.app/Contents/Resources/English.lproj/AppleTVSetup.nib



ls -le@ shows



drwxrwxr-x 3 root admin 102 Jun 22 23:55 AppleTVSetup.nib



Today is July 9 and it backs up this directory every hour.



Here are some other facts I have gathered while trying to figure out what is going on:



• Incremental backup took 11 minutes

• No applications were running

• load average .18 (before backupd started running)

• Spotlight disabled on TM backup drive

• Only 3 other backup sets existed on backup drive

• Total new items backed up: 7911
  - 7136 directories
  - 70 actual files
  - 705 symlinks

• Total MB backed up: 7.43



• Lots of directories being backed up incrementally even though their modification dates are much earlier than even the first backup.



Are there any command line tools to query FSEvents?



B

Jul 9, 2012 11:49 AM in response to bernuli

bernuli wrote:


I am also having a problem with Time Machine backups taking way too long...


Thanks for contributing your experience!

bernuli wrote:


Are there any command line tools to query FSEvents?


I've been experimenting with fs_usage, but it's quite the firehose (see also sc_usage). This got me thinking, though: the FSEvent API isn't that complicated, so it wouldn't be that hard to write a little monitoring app (if an example doesn't already exist). But something else caught my eye as I was skimming that page:

Note: Because disks can be modified by computers running earlier versions of Mac OS X (or potentially other operating systems), you should treat the events list as advisory rather than a definitive list of all changes to the volume. If a disk is modified by a computer running a previous version of Mac OS X, the historical log is discarded.

For example, backup software should still periodically perform a full sweep of any volume to ensure that no changes fall through the cracks.


I do have AFP sharing turned on for a few directories, so it's possible some of our older Macs are touching something (or at least convincing my system that it needs to check). Turning that off now to see if anything changes...


Thanks,

Stephen

Jul 9, 2012 12:16 PM in response to Stephen Bash

I hacked up a perl script to analyze the TM backup sets. Outputs the following:


rowCounter	inode	type	mode	mtime	ctime	bytes	MBytes	filenameOnBackup	originalMtime	originalCtime	originalB	originalMB	originalFile
8492038483	d	6893	2012-06-22-2355	2012-07-09-1124	102	0.000102	/Volumes/gtech/Backups.backupdb/mbp/2012-07-09-112715/MacintoshHD/Applications/iTunes.app/Contents/Resources/English.lproj/AppleTVSetup.nib	2012-06-22-2355	2012-06-22-2355	102	0.000102	/Volumes/MacintoshHD/Applications/iTunes.app/Contents/Resources/English.lproj/AppleTVSetup.nib



The perl code is a bit of a mess but here it is:


#! /usr/bin/perl -w
use strict;

#### 1.1 Now using perl's lstat instead of backticks.
#### 1.2 Will compare all directories in Backups.backupdb.
#### 1.3 Trim down @laterListing after building to save memory.
#### 1.4 Store each index in a zip file on disk to save RAM and have something to reference later.
#### 1.5 Added support for spaces in file paths.
#### 1.6 Added columns for file type, mode, and ctime of the backup file, plus ORIGINAL mtime, ctime, bytes, and file name.

#### Define global variables.
my $scriptName      = "compareTMdirectories-1.6.pl";
my $outputDirectory = "$ENV{HOME}/Desktop/FolderCatalogs";

print "Running $scriptName at " . `date`;

die "Specify root folder. IE '/Volumes/LaCie1TB/Backups.backupdb/rmbp'\n" unless $ARGV[0];
my $rootFolder = $ARGV[0];
die "That is not a folder.\n" unless (-d $rootFolder);
$rootFolder =~ s/ /\\ /g;    ## Escape spaces in the path.
$rootFolder =~ s/\/$//;      ## Drop any trailing slash.

#### Create the output directory. Here is where we will write text files.
unless (-d $outputDirectory) {
    mkdir $outputDirectory;
}
sleep 1;
die "Could not create output directory '$outputDirectory'.\n" unless (-d $outputDirectory);

#### Look in the directory specified at run time and get the list of dirs to index.
my @dirList = glob "$rootFolder/*";

#### Build an index of each TM backup set: loop (in reverse order) through
#### the glob list, writing a gzipped "find -ls" index of each directory.
my $i = scalar(@dirList) - 1;
while ($i >= 0) {
    my $dirToIndex = $dirList[$i];
    $i = $i - 1;
    next if ($dirToIndex =~ /Latest/);    ## Ignore the Latest symlink.

    #### Use the top-level directory name for the title of the .Z output file,
    #### prepending the volume name to differentiate between TM volumes.
    my @pathSegments = split /\//, $dirToIndex;
    my $dirIndex = $pathSegments[2] . "-" . returnFileName($dirToIndex) . ".Z";

    if (-e "$outputDirectory/$dirIndex") {
        ## Index already exists; do not reindex.
        print "Output file '$outputDirectory/$dirIndex' already exists. Skipping find on $dirToIndex\n";
    } else {
        my $findExec = "find -x \"$dirToIndex\" -ls | gzip --best - > \"$outputDirectory/$dirIndex\"";
        print "Indexing using '$findExec'\n";
        my $timerStart = time;
        system($findExec) == 0 || die "Could not find.\n";
        my $elapsedTime = sprintf "%.2f", (time - $timerStart) / 60;
        print "Index of $dirToIndex took $elapsedTime minutes.\n";
    }
}

#### Get the list of all index files and compare each later one to the one before it.
my @zList = glob "$outputDirectory/*.Z";
## Escape spaces in file names.
for (@zList) { s/ /\\ /g; }

$i = scalar(@zList) - 1;
while ($i > 0) {
    my $later = $zList[$i];
    $i = $i - 1;
    my $earlier = $zList[$i];

    my $compareFileName = returnFileName($later) . "_" . returnFileName($earlier) . ".txt";
    next if (-e $compareFileName);

    print "Comparing\n";
    my $timerStart = time;
    $compareFileName =~ s/\\ / /g;
    compareFolderIndexes($later, $earlier, $compareFileName);
    my $elapsedTime = sprintf "%.2f", (time - $timerStart) / 60;
    print "Compare time was $elapsedTime minutes.\n\n";
}

exit;

###########################################################################
## Subroutines below, used in above.

sub returnFileName {
    #### Return the last path component, used for the title of the output file.
    my $fullPath = reverse($_[0]);
    $fullPath =~ s/\/.+$//;
    return reverse($fullPath);
}

sub formatStamp {
    #### Zero-padded YYYY-MM-DD-HHMM from an epoch time.
    my ($sec, $min, $hour, $mday, $mon, $year) = localtime($_[0]);
    return sprintf "%04d-%02d-%02d-%02d%02d", $year + 1900, $mon + 1, $mday, $hour, $min;
}

sub typeAndMode {
    #### Split lstat's numeric mode into a type letter and the permission digits.
    #### The leading digit of the decimal mode distinguishes the type:
    #### 1xxxx = directory, 3xxxx = regular file, 4xxxx = symlink.
    my $mode = $_[0];
    my $type = substr $mode, 0, 1;
    $mode =~ s/^\d//;
    $type = "d" if $type eq "1";
    $type = "f" if $type eq "3";
    $type = "l" if $type eq "4";
    return ($type, $mode);
}

sub compareFolderIndexes {
    my ($later, $earlier, $bupSetName) = @_;

    return if (-e "$outputDirectory/$bupSetName");    ## Already compared; skip.

    print "Comparing $later to $earlier.\n";
    open(COMPAREHANDLE, ">", "$outputDirectory/$bupSetName") or die "Could not open output file '$outputDirectory/$bupSetName'\n";
    print COMPAREHANDLE "rowCounter\tinode\ttype\tmode\tmtime\tctime\tbytes\tMBytes\tfilenameOnBackup\toriginalMtime\toriginalCtime\toriginalB\toriginalMB\toriginalFile\n";

    #### Build a hash of the inodes in the earlier index.
    my %earlierHash;
    open(EARLIERHANDLE, "-|", "zcat $earlier") or die "Could not open $earlier.\n";
    while (<EARLIERHANDLE>) {
        $earlierHash{$1} = 1 if m/(\d+)/;
    }
    close(EARLIERHANDLE);
    print "hash built\n";

    #### Read through the later index; any inode missing from %earlierHash
    #### is new in the later backup set.
    open(LATERHANDLE, "-|", "zcat $later") or die "Could not open $later.\n";
    my $rowCounter = 0;
    while (<LATERHANDLE>) {
        s/^\s+//;
        next unless m/^(\d+).+? (\/.+$)/;
        my $laterInode = $1;
        my $laterFileName = $2;
        $laterFileName =~ s/( -> .+$)//;    ## Strip symlink targets from find -ls output.

        next if ($earlierHash{$laterInode});
        $rowCounter++;

        my (undef, $ino, $mode, undef, undef, undef, undef, $size, undef, $mtime, $ctime) = lstat($laterFileName);
        my ($type, $perms) = typeAndMode($mode);
        my $laterFileStat = join("\t", $rowCounter, $ino, $type, $perms,
            formatStamp($mtime), formatStamp($ctime), $size, $size / 1000000, $laterFileName);

        ## Repeat the stat on the original file, found by stripping the
        ## /Volumes/<TMvolume>/Backups.backupdb/<machine>/<date>/ prefix.
        my $originalFileName = $laterFileName;
        $originalFileName =~ s/^\/Volumes\/(.*?)\/(.*?)\/(.*?)\/(.*?)\//\/Volumes\//;

        ## First check to see if the original file still exists.
        if (-e $originalFileName) {
            (undef, $ino, $mode, undef, undef, undef, undef, $size, undef, $mtime, $ctime) = lstat($originalFileName);
            ($type, $perms) = typeAndMode($mode);
            $laterFileStat .= "\t" . join("\t", formatStamp($mtime), formatStamp($ctime),
                $size, $size / 1000000, $originalFileName);
        }
        print COMPAREHANDLE "$laterFileStat\n";
    }
    print COMPAREHANDLE "Completed compare of $later to $earlier at " . `date`;
    print COMPAREHANDLE "EOF";
    close(COMPAREHANDLE);
    close(LATERHANDLE);
}

Jul 12, 2012 7:35 PM in response to Stephen Bash

Stephen Bash wrote:

bernuli wrote:


Are there any command line tools to query FSEvents?


I've been experimenting with fs_usage, but it's quite the firehose (see also sc_usage). This got me thinking though and the FSEvent API isn't that complicated -- it wouldn't be that hard to write a little monitoring app...


I finally got around to piecing together a little tool that listens to FSEvents (I also discovered others have written wrappers in Python and Ruby for those interested). In the process of researching the FSEvent API I discovered that Time Machine records the FSEvent IDs in its log file (.Backup.log in each dated directory). So I modified my program to take a first and last event ID (based on two consecutive backups) to see if any incriminating events existed between backups (sorry, I should mention that the IDs are meant to be persistent, so you can ask the system for events in the past).
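For anyone who wants to pull the same bookmark out of their own logs: the event ID can be extracted from each .Backup.log with a few lines of Python (a hypothetical helper; the "Gathering events since N" line format is taken from the logs quoted in this thread):

```python
import re

# Extract the FSEvent bookmark from the text of a Time Machine .Backup.log.
# The line format is the one seen in this thread's logs.
EVENT_LINE = re.compile(r"Gathering events since (\d+)\.")

def last_event_id(log_text):
    """Return the event ID as an int, or None if the line isn't present."""
    m = EVENT_LINE.search(log_text)
    return int(m.group(1)) if m else None
```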


Well... On my home machines (where TM has always worked and not been a hassle) my program works great and I can see all kinds of events. On my office machine (which started this discussion), it doesn't work because EVERY backup log has the line:


Gathering events since 9223372036854775807.


which, if you look closely, is 2^63-1. Now, in actual fact the FSEvent IDs are defined as "UInt64" (that is, unsigned 64-bit integers), so they should be able to go to 2^64-1, but it's still an interesting number...
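Spelling out the arithmetic (easy to check in any language with big integers):

```python
logged = 9223372036854775807               # the "Gathering events since" value
assert logged == 2**63 - 1                 # the signed 64-bit maximum
assert 2**64 - 1 == 18446744073709551615   # the true UInt64 ceiling
assert logged < 2**64 - 1                  # so the logged ID is one bit shy of it
```

That the value lands exactly on the signed maximum, not the unsigned one, suggests a signed 64-bit value crept in somewhere along the way.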


That being said, my program does work for monitoring live events, so I may set it up and let it run for a while to see if I can observe any "EventIdsWrapped" events or other incriminating behavior.


Thanks,

Stephen

Jul 16, 2012 6:02 AM in response to Stephen Bash

Stephen Bash wrote:


That being said, my program does work for monitoring live events, so I may set it up and let it run for a while to see if I can observe any "EventIdsWrapped" events or other incriminating behavior.


So I left my monitor running all weekend and it didn't detect any incriminating events (actually no "flagged" events at all). That leads me to believe Time Machine has some bad state stored (hence the 2^63-1 "last event"), but I'm not quite sure what to do about that. I can format the backup drive (not my first choice), but I'm guessing TM stores some state on the system drive. I'll keep poking around.


Stephen

Jul 16, 2012 7:51 AM in response to Stephen Bash

Still no answers, but a little more digging shows most of the Time Machine metadata is stored in extended file attributes on the backup volume (I was writing this when Pondini replied! Thanks!):


$ xattr -l /Volumes/Time\ Machine\ Backups/Backups.backupdb/warp/2012-07-13-165249
com.apple.backup.SnapshotNumber:
00000000  32 32 30 36 00                                   |2206.|
00000005
com.apple.backup.SnapshotVersion:
00000000  31 00                                            |1.|
00000002
com.apple.backupd.SnapshotCompletionDate:
00000000  31 33 34 32 32 31 32 37 36 39 32 32 36 30 30 36  |1342212769226006|
00000010  00                                               |.|
00000011
com.apple.backupd.SnapshotStartDate:
00000000  31 33 34 32 32 31 32 30 39 35 36 34 31 38 36 35  |1342212095641865|
00000010  00                                               |.|
00000011
com.apple.backupd.SnapshotState:
00000000  34 00                                            |4.|
00000002
com.apple.backupd.SnapshotType:
00000000  31 00                                            |1.|
00000002
$
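The attribute values above are NUL-terminated ASCII decimal strings, so they are easy to decode by hand. For SnapshotCompletionDate, treating the number as Unix epoch microseconds is my assumption, but the result matches this snapshot folder's name:

```python
import datetime

# Decode the SnapshotCompletionDate hex dump shown above.
raw = bytes.fromhex("31 33 34 32 32 31 32 37 36 39 32 32 36 30 30 36 00")
value = int(raw.rstrip(b"\x00"))      # 1342212769226006
completed = datetime.datetime.fromtimestamp(value / 1_000_000,
                                            tz=datetime.timezone.utc)
# -> 2012-07-13 20:52:49 UTC, i.e. 16:52:49 EDT: the 2012-07-13-165249 folder.
```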


I can confirm the SnapshotCompletionDate is the exact value that shows up in the backup log as "Date of Previous snapshot". But what I'm really interested in is the last FSEvent ID, which is stored the next level down:


$ xattr -l /Volumes/Time\ Machine\ Backups/Backups.backupdb/warp/2012-07-13-165249/Snow\ Leopard/
com.apple.backupd.SnapshotVolumeLastFSEventID:
00000000  31 38 31 35 38 36 34 31 37 34 33 32 32 31 34 32  |1815864174322142|
00000010  35 32 37 33 00                                   |5273.|
00000015
$


To my surprise, that FSEvent ID looks valid (i.e. not 2^63-1; FWIW, the most recent events as of this morning are about 18158641743223610342). So I guess something in the TM logic is creating the "bad" 2^63-1 number. For the gory details, here's dtruss output from backupd in the region of interest:


open("/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-16-090407.inProgress/.dat45aa.003\0", 0xA02, 0x1B6)   = 4 0
close(0x4)   = 0 0
rename("/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-16-090407.inProgress/.dat45aa.003\0", "/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-16-090407.inProgress/.Backup.364137318.114736.log\0" = 0 0
chmod(0x1006EFDB0, 0x180, 0x0)   = 0 0
open("/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-16-090407.inProgress/.Backup.364137318.114736.log\0", 0x1, 0x0)   = 4 0
fstat64(0x4, 0x1006EFDF0, 0x0)   = 0 0
write(0x4, "2012-07-16-09:15:18 - Starting backup\n\nPrevious snapshot:\n\t/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-13-165249\n\n\0", 0x82)   = 130 0
write(0x4, "Date of Previous snapshot: 1342212769226006\n\n\0", 0x2D)   = 45 0
getdirentriesattr(0x5, 0x1006E4EA0, 0x102000000)   = 1 0
geteuid(0x1006EB620, 0x1006EBBD8, 0x0)   = 0 0
close(0x5)   = 0 0
lstat64("/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-13-165249/.Backup.log\0", 0x1006F0120, 0xFC080)   = 0 0
lstat64("/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-13-165249/.exclusions.plist\0", 0x1006F0120, 0x7FFF71289650)   = 0 0
lstat64("/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-13-165249/Snow Leopard\0", 0x1006F0120, 0x7FFF71289650)   = 0 0
getxattr("/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-13-165249/Snow Leopard\0", 0x10017CB00, 0x0)   = 37 0
getxattr("/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-13-165249/Snow Leopard\0", 0x10017CB00, 0x10017C630)   = 37 0
lstat64("/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-13-165249/Snow Leopard\0", 0x1006F02B0, 0x7FFF71289650)   = 0 0
lstat64("/private/var/db/.TimeMachine.NeedsFullScan\0", 0x1006F0300, 0xFC080)   = -1 Err#2
getxattr("/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-13-165249/Snow Leopard\0", 0x10017D060, 0x0)   = 21 0
getxattr("/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-13-165249/Snow Leopard\0", 0x10017D060, 0x10017CDA0)   = 21 0
getxattr("/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-13-165249/Snow Leopard\0", 0x10017D340, 0x0)   = 37 0
getxattr("/Volumes/Time Machine Backups/Backups.backupdb/warp/2012-07-13-165249/Snow Leopard\0", 0x10017D340, 0x10017CD70)   = 37 0
geteuid(0x7FFF709E8260, 0x0, 0x0)   = 0 0
write(0x4, "Gathering events since 9223372036854775807.\n\0", 0x2C)   = 44 0


I can confirm the /private/var/db/.TimeMachine.NeedsFullScan file does not exist on my system. Based on the requested sizes, the last few getxattr calls are fetching SnapshotVolumeLastFSEventID and SnapshotVolumeFSEventStoreUUID respectively (each called first with a NULL target to get the size, then with a correctly sized buffer passed as the target). I have the full dtruss output, but prior to this TM just appears to be looking through the backup volume for snapshots.


Stephen
