R C-R wrote:
As has been mentioned several times, the backing store is the set of swapfile* files, by default in /private/var/vm/, just as the dynamic_pager man page says. But it should be obvious to anyone that (just as you noted) the "Page outs" and "Swap used" numbers are not the same. Thus "Swap used" can't possibly be a measure of the amount of data paged out, if for no other reason than that "Swap used" is much larger than "Page outs," which is a cumulative count of all the 4 KB page outs since startup. Even if "Swap used" measured just the amount of changed data in each 4 KB page that was paged out (say, just 20 changed bytes in some 4 KB page), it would still have to be smaller than the page-out measure if it really represented paged-out data.
In fact, if you check /private/var/vm/ with the Finder or any other directory tool you will see that "Swap used" is not even normally the same as the total size of all the swapfile* files.
You want it smaller? I make it smaller! Et Voilà!
14,843 page outs (64 MB), 832 KB of swap used. How did I do it? I started a virtual machine, waited for it to raise the "Swap used" value to 130 MB, and then logged out. After that, the cumulative page outs obviously didn't change, and the swap used went back to less than 1 MB. I did it without even thinking!
This is what is stored in /var/vm:
$ ls -l /var/vm
total 6815744
-rw------T 1 root wheel 3221225472 20 Sep 10:12 sleepimage
-rw------- 1 root wheel 67108864 20 Sep 14:31 swapfile0
-rw------- 1 root wheel 67108864 20 Sep 18:22 swapfile1
-rw------- 1 root wheel 134217728 20 Sep 18:22 swapfile2
Which matches the 256 MB total size reported in iStat Menus.
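For anyone who wants to double-check that arithmetic, here is a quick sketch (my own, just for illustration; the byte counts are copied from the ls output above):

```python
# Swap file sizes in bytes, copied from the ls output above.
swapfiles = {
    "swapfile0": 67108864,   # 64 MB
    "swapfile1": 67108864,   # 64 MB
    "swapfile2": 134217728,  # 128 MB
}

total_bytes = sum(swapfiles.values())
total_mb = total_bytes // (1024 * 1024)
print(total_mb, "MB")  # → 256 MB
```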
How is it that every time I try something it fits my analysis, and it always conflicts with everything you say?
You miss the fact that pages are not written one by one to the backing store: when possible, they are first clustered in groups of up to 4 adjacent pages. So a single page out can actually write up to 4 pages to the backing store. This is what the source code (yes, I can read the source code; it is the only real documentation) says:
/*
* vm_pageout_cluster:
*
* Given a page, queue it to the appropriate I/O thread,
* which will page it out and attempt to clean adjacent pages
* in the same operation.
*/
The 4-pages-max-per-cluster limit comes from a header file that I can't find right now.
As we know, "cleaning" a page also means paging it out to the backing store if it is dirty. It's interesting to note how a very similar concept (and function) was implemented in BSD Lite:
/*
* Attempt to pageout as many contiguous (to ``m'') dirty pages as possible
* from ``object''. Using information returned from the pager, we assemble
* a sorted list of contiguous dirty pages and feed them to the pager in one
* chunk. Called with paging queues and object locked. Also, object must
* already have a pager.
*/
void
vm_pageout_cluster(m, object)
When there are no more adjacent dirty pages, it stops. That's why, after the first page outs, "Swap used" is lower than 4 × (page outs × 4 KB).
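To make the clustering argument concrete, here is a toy simulation (mine, not XNU code, and the uniform 1-to-4 distribution is an assumption): each page-out operation writes between 1 and 4 pages, fewer when the neighbouring pages are already clean, so the swap traffic always lands between 1× and 4× the naive "page outs × 4 KB" figure.

```python
import random

PAGE_SIZE = 4096   # bytes per page
MAX_CLUSTER = 4    # assumed cluster limit, as discussed above

def simulate_page_outs(num_ops, rng):
    """Each page-out operation writes between 1 and MAX_CLUSTER
    adjacent pages; fewer when the neighbours are already clean."""
    pages = sum(rng.randint(1, MAX_CLUSTER) for _ in range(num_ops))
    return pages * PAGE_SIZE

rng = random.Random(0)
ops = 100_000
swap_bytes = simulate_page_outs(ops, rng)

# The bytes written to swap are bounded by the cluster size.
assert ops * PAGE_SIZE <= swap_bytes <= MAX_CLUSTER * ops * PAGE_SIZE
```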
That is why my original numbers, which were:
Page outs: 255,707 (about 900 MB)
Swap used: 1.84 GB
make sense, since I took the snapshot just after the kernel had started paging out and it was still paging. Not like now, to be clear. The swap used is:
page outs × 4 KB < swap used < 4 × page outs × 4 KB
The total size of the swap files is 3 GB, which is greater than the 1.84 GB of swap used. It all makes sense.
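Plugging my original numbers into that inequality, as a quick sanity check (a sketch of mine, using GiB = 1024³ bytes):

```python
GiB = 1024 ** 3
PAGE_SIZE = 4096

page_outs = 255_707
swap_used = 1.84 * GiB

lower = page_outs * PAGE_SIZE       # every page out wrote a single page
upper = 4 * page_outs * PAGE_SIZE   # every page out wrote a full 4-page cluster

# lower is roughly 0.98 GiB, upper roughly 3.9 GiB,
# and the observed 1.84 GiB sits between the two bounds.
assert lower < swap_used < upper
```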
Honestly, is it so hard to consider that it may not work the way you believe?