R C-R wrote:
Consider that it isn't as simple as just Active Memory -> Inactive Memory -> Free Memory. It usually (we hope) is Active Memory -> Inactive Memory -> (soft fault) -> Active Memory -> Inactive memory, perhaps repeating several times before a page must be freed up on a hard fault to make room for a new active page.
Sorry, but Active Memory -> Inactive Memory -> Free Memory is what the pageout daemon does; Inactive Memory -> (soft fault) -> Active Memory is the pagein path. The first is well described in the source code I checked. It also works in two steps: first it balances the Active and Inactive memory, making the Inactive memory about half of the Active memory (it's a bit more complex than that: the Inactive memory target is also a function of the Speculative memory), and then it goes on freeing pages from the Inactive queue. The condition that triggers it is in the piece of code I posted:
if ((vm_page_free_count < vm_page_free_min) ||
    ((vm_page_free_count < vm_page_free_target) &&
     ((vm_page_inactive_count + vm_page_speculative_count) < vm_page_inactive_min)))
        thread_wakeup((event_t) &vm_page_free_wanted);
Even more interesting, there is a section handling the case where the dynamic pager is not running. In that case, when an Active memory page is a candidate to become inactive but the page is dirty, it gets put back into the Active queue, and the pageout daemon keeps scanning the Active memory until it finds a page that is not dirty. Which explains the ugly performance degradation when the dynamic pager (the virtual memory backing store) is disabled.
Another thing that finally explains why OS X may write to the backing store while we still see plenty of Free memory in Activity Monitor or the like: the free memory relevant to the kernel is not the one we are used to, i.e. the Free memory reported in AM, but the one reported by vm_stat. This is the formula:
Free Memory (AM) = Free Memory (vm_stat) + Speculative Memory
The maximum value of the lower limit on free memory before extra pages are requested, i.e. the threshold above which the pageout algorithm will NOT be invoked, is 2000 pages (8 MB).
I made a few tests, and for the first time all the numbers started to make sense. After the free memory in vm_stat went below 2000 pages, the Active and Inactive memory were rebalanced in the proportion mentioned (from 3-4:1 to about 2:1) and some pageouts occurred, growing the backing store from 0. But AM was still showing 800 MB or so of Free memory; obviously the Speculative memory accounted for the rest of the RAM shown as free in AM. This is shown here:
 free  active    spec  inactive    wire  faults  copy  0fill  reactive  pageins  pageout
23171  266678  274528     72453  149456    2587     1   1993         0    12658        0
 1744  247058  268286     70677  198294    1395     2    361         0    48222        0
 1744  246783  217320     70677  249657    2656     0   1564       186    46901        0
 1744  245288  171390     71071  296880    1923     0    222       318    43037       13
 1744  246029  118976     74631  344752    2660     1   1497         1    43615        0
 2000  246549   78129     81932  377369    4136     0   1464       505    29601      107
[...]
 1920  247681   18846    125092  392628     348     0    283         0       79        0
Where the last line shows the status after the pageout daemon stopped.
In other words, the Speculative memory, which can (and does) hold data read from disk in advance that may never be needed, can cause the kernel to page out. All of this happens concurrently: while the kernel is paging out, another process can request (or release) memory, invoking the pagein path.
Again, this is what the source code says, and sadly none of it is shown in Activity Monitor. The Free memory it reports can be considered a fake number at this point. It is not documented either: the Speculative memory, which is a big player in the memory management module, is not mentioned anywhere. Which, as we know, has already led to far too much confusion.