
Terminal Shell Task/Coding Issue

Since El Capitan no longer has Secure Empty Trash in Finder, I sought out an alternative via Terminal. However, once the task has been initiated in the Terminal shell, it deletes my 200 MB file so fast (6-10 seconds) that I don't believe it's actually performing a secure empty trash with a 7-pass overwrite, nor does it make the delete sound when the deletion completes.


1: My hard drive is a conventional spinning disk (not an SSD or a Fusion/hybrid drive)


2: Latest OS X


3: Please don't explain SSDs to me (I did my research prior to El Capitan)


4: I don't want a single pass - I want a 7- or 35-pass Secure Empty Trash.


5: I am not going to use FileVault lol.



Here are variations of what I've tried - why isn't srm working for me?


srm -m draggedfile


srm -rfv -z


srm -rf -m


srm draggedfile


srm -rfv -m draggedvolume


srm -rfv -v draggedvolume


srm -rfv -m /path/to/file-or-folder


rm -rfv -m (says: rm: illegal option -- m)
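For reference, this is how I understand the srm modes are meant to be combined (the paths below are placeholders, not my actual files):

srm -v /path/to/file          # default: 35-pass Gutmann overwrite, then remove
srm -m -v /path/to/file       # -m: 7-pass US DoD-style overwrite
srm -s -v /path/to/file       # -s: single-pass overwrite
srm -r -m -v /path/to/folder  # -r: recurse into a folder

Note that -m belongs to srm only; plain rm has no -m option, which is why the last attempt reports "illegal option -- m".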


Should I try sudo?


Why is it deleting so fast????


Is the Terminal app corrupted? Disk Utility doesn't report any issues.

MacBook Pro, OS X El Capitan (10.11.1)

Posted on Nov 2, 2015 7:17 PM


Nov 2, 2015 8:11 PM in response to hiccup

Try

/usr/bin/time -pl srm ...


change the srm options and observe which options seem to do the most work. "voluntary context switches" seems to be proportional to the amount of work. I suspect this is because each time srm does a write, it goes into a wait state until the I/O is complete, which is essentially a voluntary context switch.


For me the most work happened with neither -s nor -m specified.


I suggest testing with a large file that you keep making a copy of so that you are testing erasing the same sized file each time.
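Something along these lines, assuming a large test file named big.file (the name is just an example):

cp big.file test.copy
/usr/bin/time -pl srm -s test.copy    # single overwrite pass
cp big.file test.copy
/usr/bin/time -pl srm -m test.copy    # 7 passes
cp big.file test.copy
/usr/bin/time -pl srm test.copy       # 35 passes (the default)

Comparing the "real" time and the "voluntary context switches" lines across the runs gives a feel for how much extra work each mode does.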


I guess I should also mention that srm is only going to erase the information in the file. It will not erase any information left on a remapped sector, should the disk decide a sector was no longer reliable. It will not erase any scratch file a program used while processing the information. srm will not erase any cache information created while using the information in the file. srm will not erase any information that may have been written to page/swap files because the OS needed to move information out of RAM to disk under memory pressure. And if an editor you used to work on the file decides to write a new file and then rename the new file to the original name, the rename just releases the original file's storage to the file system free list, and srm will never touch that.


You already mentioned you are not using an SSD, so you are not subject to the fact that srm on an SSD just shortens the life of the SSD without touching any of the original data, which is still sitting in the SSD hardware.


srm is not a fully reliable way to protect information. Depending on how paranoid you are about your information, you should really reconsider FileVault, so that deleted files (even scratch, cache, and renamed ones) are just random bits.

Nov 2, 2015 9:07 PM in response to BobHarris

Thank you for offering a potential solution. Sadly, it still goes alarmingly fast, about 5 seconds at most regardless of which command variation I use.


I'm aware srm doesn't provide the most secure form of file erasure; that would be smashing the hard drive to fine dust with a hammer.


I really want to avoid FileVault. If srm is working for others at very slow speeds, why isn't it working for me at all? I won't stop till I find a solution for Secure Empty Trash via Terminal. Dropping -m and -s is problematic, no? They both instruct specific tasks that ensure the file is securely erased with a 7-pass overwrite.


Any other command ideas? How about sudo srm -rf -m or sudo srm -m?

Nov 3, 2015 4:49 AM in response to hiccup

hiccup wrote:


Any other command ideas? How about sudo srm -rf -m or sudo srm -m?


As Bob suggested, test on a large file (i.e. a GB or so). You don't need sudo for that; you can use sudo later, after you find out why it's erasing so fast.

Also, srm will print a percent-completed figure as it erases. Are you seeing this?




Try using srm -m first (you can use the recursive option later.)
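Roughly like this, with a placeholder file name; -v makes srm report its progress as it works:

srm -m -v test.copy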

Nov 3, 2015 6:29 AM in response to hiccup

Well, /usr/bin/time -pl srm ... should have given you some comparison information between a simple rm vs an srm vs an srm -m vs an srm -s, which will give you an indication of how much work was being done.


You can try the following against rm, srm, srm -s, srm -m


In one Terminal session issue the following command:

sudo fs_usage | grep rm >tmp.rm

In another Terminal session issue the command

cp test.file tmp.tmp

rm tmp.tmp

Go back to the first terminal session and Control-C to abort the fs_usage.


Now start a new

sudo fs_usage | grep srm >tmp.srm

in the other Terminal session

cp test.file tmp.tmp

srm tmp.tmp

go back to the first and Control-C the fs_usage


Next

sudo fs_usage | grep srm >tmp.srm_m

in the other session

cp test.file tmp.tmp

srm -m tmp.tmp

Control-C the fs_usage


Finally,

sudo fs_usage | grep srm >tmp.srm_s

cp test.file tmp.tmp

srm -s tmp.tmp

Control-C fs_usage


Now you should have 4 files of system call trace data for the rm command, the srm command (35 pass), the srm -m (7 pass), the srm -s (simple overwrite).


Now compare the 4 files to see how much extra work is done for each kind of delete.
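A crude way to compare them, assuming the temporary file names used above: the line count of each trace is a rough proxy for how many file system operations each kind of delete generated.

wc -l tmp.rm tmp.srm tmp.srm_m tmp.srm_s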

Nov 3, 2015 11:54 AM in response to hiccup

If you need to zero unused space in your filesystem, you can create a large file and then erase it. That's how Apple used to do it anyway.


Here is one way, from terminal.

dd if=/dev/zero of=zero bs=1024k count=100


and


dd if=/dev/random of=zero bs=1024k count=100


count is the number of input blocks

bs is input and output block size
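Put together, the whole sequence would look something like this (count=100 only writes 100 MB; scale it up to cover more of your free space), with the filler file removed afterwards to give the space back:

dd if=/dev/zero of=zero bs=1024k count=100    # fill free space with zeros
rm zero                                       # release the space again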

Nov 8, 2015 5:25 PM in response to BobHarris

Hi guys, I've tried a bunch of file sizes and different commands, and it's still suspiciously fast. For example, a 1 GB zip file using /usr/bin/time -pl srm (35-pass) takes 5 minutes; I recall it taking a lot longer when it was still in Finder in the previous OS X. /usr/bin/time -pl srm -m goes so fast it finishes in just under 2 minutes.

For a 1 GB file, how long does it take you? (I know it depends on how many files and what kind of files; I just want to know if there's an average.)

Nov 8, 2015 7:00 PM in response to hiccup

For a 1 GB file, how long does it take you? (I know it depends on how many files and what kind of files; I just want to know if there's an average.)

A) I have an SSD, so

A.1) the srm command will not securely erase a file on an SSD; it just keeps writing to whatever physical sectors the SSD has currently mapped to my logical sectors

A.2) it would needlessly shorten the life of my SSD.


B) I use FileVault, so when I delete a file, it is just a bunch of random bits to begin with.


C) I understand why secure erase does not guarantee the data is really erased.

Nov 8, 2015 7:57 PM in response to hiccup

It's a lot faster than I'd have imagined. I did only 100 MB. Got these numbers.


mac $ dd if=/dev/random  of=zero  bs=1024k  count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 7.336857 secs (14291897 bytes/sec)
mac $ ls -l zero 
-rw-r--r--  1 mac  staff   100M Nov  8 22:52 zero
mac $ \time -lp  srm -siv  zero 
Remove zero? y
removing zero
done
real         6.56
user         0.45
sys          0.09
   1609728  maximum resident set size
         0  average shared memory size
         0  average unshared data size
         0  average unshared stack size
       402  page reclaims
         0  page faults
         0  swaps
         0  block input operations
        97  block output operations
         0  messages sent
         0  messages received
         0  signals received
       119  voluntary context switches
       179  involuntary context switches
mac $ dd if=/dev/random  of=zero  bs=1024k  count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 7.368458 secs (14230603 bytes/sec)
mac $ \time -lp  srm -miv  zero 
Remove zero? y
removing zero
done
real        18.17
user         0.91
sys          0.20
   1609728  maximum resident set size
         0  average shared memory size
         0  average unshared data size
         0  average unshared stack size
       402  page reclaims
         0  page faults
         0  swaps
         0  block input operations
        87  block output operations
         0  messages sent
         0  messages received
         0  signals received
       725  voluntary context switches
       652  involuntary context switches
mac $

Nov 8, 2015 9:08 PM in response to rccharles

Looks about the same.


When Secure Empty Trash was on the Finder menu it usually took about 15 minutes or so, not 5 minutes, so something odd is going on. I mean, it's simply too fast to write 35 passes over a 1 GB file, and even the 7-pass /usr/bin/time -pl srm -m should take longer than just under two minutes.


I don't know what other command options there are; maybe I need to reinstall El Capitan and see if Terminal was corrupted in the original install. Seems a bit drastic for such a tiny app.

Nov 9, 2015 6:15 AM in response to hiccup

hiccup wrote:


When Secure Empty Trash was on the Finder menu it usually took about 15 minutes or so, not 5 minutes, so something odd is going on. I mean, it's simply too fast to write 35 passes over a 1 GB file, and even the 7-pass /usr/bin/time -pl srm -m should take longer than just under two minutes.



I wouldn't compare the time to Finder.

Try copying the 1 GB file with cp, multiply that time by 7, and then compare it to the srm -m time.
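Something along these lines, assuming a 1 GB test file named big.file (the name is just an example):

/usr/bin/time -pl cp big.file copy.file   # time one full 1 GB write
rm copy.file
cp big.file test.copy
/usr/bin/time -pl srm -m test.copy        # should be very roughly 7x the cp time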

Nov 9, 2015 7:15 AM in response to rccharles

rccharles

  1. 119 voluntary context switches
  2. 725 voluntary context switches

A voluntary context switch can be caused by making a system call that results in waiting for I/O to complete.


The first erase was a single pass of zeros (119) context switches

The second erase was a medium 7 passes of random data (725) context switches, which is 6 times as many.


In other words, it took about 100 context switches to write a single pass of zeros, and 700 context switches to write 7 passes of random data. Some of the context switches may have been for getting the random data, but I'm guessing most were for writing over the file 7 times.


However, the bottom line is that srm is not ensuring your data is securely erased. It is just doing a lot of I/O and hoping it catches all of your bits.


And if you get a new Mac with an SSD, you will need to find some other way to ensure your data is secure, because srm is not going to do anything good for an SSD, except shorten its life and get you to spend more money when it needs to be replaced early (well, good for someone else, such as OWC or Apple, who will get your money 🙂).

Nov 9, 2015 7:42 AM in response to hiccup

I don't know what other command options there are; maybe I need to reinstall El Capitan and see if Terminal was corrupted in the original install. Seems a bit drastic for such a tiny app.

It is unlikely anything is corrupt. And the srm command is totally independent of the Terminal app. The Terminal app just draws a window, passes keyboard input to the shell or the program running under the shell, and displays the output from the shell or that program.


And while it is possible the Finder used srm under the covers, it is also possible the Finder rolled its own secure erase, and that srm is either more efficient, or less effective (for example, not forcing the random data out of the operating system's file system cache between each pass, so that it ends up just overwriting the memory cache and not the file on disk).
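As a purely conceptual sketch (this is not what Finder or srm actually does), a careful overwrite pass would force its data out of the file system cache before starting the next pass, something like:

dd if=/dev/urandom of=/path/to/file bs=1m count=FILE_SIZE_IN_MB conv=notrunc   # overwrite in place; count is a placeholder for the file's size
sync                                                                           # flush the cached writes to disk before the next pass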


You can download the open source 'srm' command and study how it works.

Nov 14, 2015 5:53 PM in response to BobHarris

How can I use the second command after the first command, when the first command deletes and removes the file? Is there a way to execute both commands at once? I have a feeling that Apple's Finder executed more than one task per deletion, hence the time difference.


/usr/bin/time -pl srm -m


followed by


rm -rP /path/to/file-or-folder


Overwrite the contents before the deletion (from Terminal):

rm -rP /path/to/file-or-folder

Where -r recurses over the folders and -P overwrites their contents before removing them


OR should I do this:


/usr/bin/time -pl srm -m


diskutil secureErase freespace 2 /Volumes/DRIVENAME
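If the idea is just to run the second command after the first one finishes, I assume they could be chained with && (the path and volume name are placeholders):

/usr/bin/time -pl srm -r -m /path/to/file-or-folder && diskutil secureErase freespace 2 /Volumes/DRIVENAME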

I really wonder just what Apple used! It's sad they've started crippling Disk Utility (they first removed the 35-pass option in Mavericks - no one seemed to notice, and now it's gone completely) and Finder, even though there are literally millions still using hard drives. When SSDs become as cheap as a 2 TB hard drive I'll adopt them.

