2010.11.03 - funny memory

today at work i was testing our application for memory leaks and other memory-related issues and came across an interesting side effect of the memory handling mechanism in linux. though it's correct, it looks at least surprising when you haven't seen it before. it appears that the kernel nowadays performs lazy memory deallocation. consider the following code:

#include <vector>
#include <iostream>
#include <algorithm>
#include <cstring>
#include <boost/shared_array.hpp>

using namespace std;
using namespace boost;

int main(void)
{
  vector< shared_array<char> > v;
  for(;;)
  {
    // (1)
    cout<<"size: ";
    size_t size;
    if( !(cin>>size) )
      break;
    cout<<"(re)allocating to "<<size<<"MB"<<endl;
    size*=1024;                 // MB -> number of 1kB chunks
    // (2)
    for(size_t i=v.size(); i<size; ++i)
    {
      shared_array<char> tmp(new char[1024]);
      memset( tmp.get(), 0xF0, 1024 );  // touch the memory, so pages get really mapped
      v.push_back(tmp);
    }
    // (3)
    random_shuffle( v.begin(), v.end() );
    // (4)
    if( size<v.size() )
      v.resize(size);
    // (5)
    if( size==0 )
    {
      vector< shared_array<char> > tmp;
      v.swap(tmp);              // swap with an empty vector, to release capacity too
    }
    // (6)
    cout<<"size is "<<v.size()/1024<<"MB"<<endl<<endl;
  }
  return 0;
}

this is a sample program that loops forever, allocating a given amount of memory (in MB) in 1kB chunks. the user gives the amount of memory to allocate interactively. it goes like this:

  1. get the size of memory to use from the user (this is the explicitly allocated amount - remember that the reference-counting pointers take some memory in the background as well!)
  2. expand the vector, if it is too small
  3. set the order of elements to be random (this introduces extra entropy when deallocating only part of the memory); this step is optional
  4. truncate the vector, if too much memory is allocated
  5. special case - if the size is zero, ensure the vector is really empty (swapping with an empty vector releases the capacity too)
  6. display current vector's size (to see if it works)

now compile and run this application in one terminal and open (h)top in a second one, so that you can see them both. assuming you have 1GB of memory, with ~200MB taken by the system (say: 800MB is free), try the following sequence of allocations, taking a look at (h)top after each step:

  1. 50
  2. 500
  3. 5
  4. 500
  5. 0

in the first run memory usage increases - good, we expected that. in the second the same happens - so far, so good. the surprise comes in the 3rd step, when memory usage usually does NOT drop! when you set it back to 500MB it stays on the same level. now decrease it to 0 (all memory is freed in the program - remember our 'if')… and usually the memory is still not released. after some time, and some more (de)allocations, memory usage will stabilize at the "real" level. confusing, huh? wait - there's more. now open one more terminal and run our application there too. allocate 500MB and then go back to 0MB on the first terminal ((h)top still shows huge memory usage) and now do the same thing on the second terminal. since the previous instance freed its memory, it should be available, right? in my case - on my machines - the kernel killed the application because memory ran out!

this is a visible side effect of the memory handling mechanism. funny, though quite surprising at first. keep in mind that what (h)top shows is not how much memory the application uses at that very moment, but instead how much memory the system has currently assigned to it.

blog/2010/11/03.txt · Last modified: 2013/05/17 19:08 (external edit)