Best way to measure running time of serial vs. parallel code?
I am comparing serial and parallel solutions of the same algorithms and I am wondering what the best way to measure the running times is. I have been using something like:
$ time ./helloworld
but I am not sure this is the best way. Would it be more useful to use `<ctime>` and measure only the running time of the loops? I know there is overhead in memory allocation/transfers and in thread creation, so what would you consider the most meaningful (and fair) way to measure?