Simulation Performance Metrics with time

I was trying to get a metric for the performance penalty of simulating with and without a model parameter.  The parameter was believed to be responsible for a performance difference of 10% to 15%, but after looking at the sim data more closely I became intrigued.  Since no one had really dug into this and shown more than a few data points on performance, it was time to get serious about simulation performance comparisons.

The first thing to do is not to use a simulation farm.  The variation between farm runs was far too great: different machines, different loads.  You have to pick a particular machine and do everything from there to reduce variability.

First Try Using the "date" Command

The next problem was how to measure the time.  In previous quick experiments I would write a simple script like the one below.

date
COMMAND > /dev/null
date

This would yield something like the following on the command line:

Sun Mar  2 13:43:24 CST 2014
Sun Mar  2 13:44:26 CST 2014

Then, I would get out my calculator and find that it took 62 seconds to run the command.  This was okay for quick profiling, but I needed to run a lot more simulations and compare dozens of data points; doing the above would take a long time and possibly introduce error.  And since there is so much variability in runtime, even on a single workstation, I wanted to run each simulation four times with the same parameters to get a more conclusive number.
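For what it is worth, the subtraction itself can be scripted with "date +%s", which prints seconds since the epoch.  A rough sketch is below, with COMMAND again standing in for the simulation invocation; it saves the calculator step but still only gives whole-second resolution and still has to be wrapped around every run.

#!/bin/sh
# Sketch: capture start and end times in epoch seconds and subtract.
# COMMAND is a placeholder for the real simulation command line.
START=`date +%s`
COMMAND > /dev/null
END=`date +%s`
echo "elapsed: `expr $END - $START` seconds"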

Better Approach Using the "time" Command

A coworker recommended the "time" command.  "time" is a standard UNIX command (and shell built-in) made to measure the wall-clock time, among other metrics, of the commands it is given.  The simplest use case is below.

> time sleep 1

real 0m1.017s
user 0m0.001s
sys 0m0.002s

This tells us that the "sleep 1" command took 1.017 seconds of wall-clock time to execute, which is just about perfect.  The other two metrics report how much CPU time was spent on the command.

But we can get even more advanced than that: we can tell "time" to format the data in a way that is more useful for collecting metrics.  The example below will print the word "real: " followed by the number of seconds the command took.  (The default is a time format that includes hours, minutes, and seconds, which is harder to convert for use in a spreadsheet program.)

/usr/bin/time --format="real: %e" sleep 1

real: 1.00

We can then expand on that idea and also tell the command to use an output file "t" and to append the results.

/usr/bin/time --format="real: %e" --output=t --append sleep 1

In the last two examples, you might notice that I have to use the version of "time" located under "/usr/bin".  That is because "csh" has its own internal version of "time" that does not have the formatting options.  Also, the version of "time" that ships with Apple Mac OS X doesn't have the formatting options, so there you are stuck with the simpler use of "time".  Ubuntu and Red Hat use the GNU version of "time", which has a lot more options than the alternatives.
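If you are not sure which "time" you are getting, asking for its version is a quick (if rough) check: the GNU version under "/usr/bin" understands a "--version" flag, while the shell built-ins and the simpler BSD-style versions generally do not.

/usr/bin/time --version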

From here, you can write more complicated scripts to queue up runs and create a CSV file like the one below, by filling the values of $PARM and $NCMDS that are passed into the simulation into the format field (a sketch of such a script follows the sample data).

PARM, NCMDS, SEC
0, 10, 193.50
0, 10, 174.96
0, 10, 173.46
0, 10, 170.84
1, 10, 208.40
1, 10, 207.62
1, 10, 209.10
1, 10, 208.77
...
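Below is a minimal sh sketch of that kind of wrapper script.  The simulator invocation "run_sim" and its "-parm" and "-ncmds" flags are placeholders for whatever your real command line looks like; only the /usr/bin/time usage and the format string carry over from the examples above.

#!/bin/sh
# Sketch: sweep the parameter, repeat each configuration four times,
# and append one CSV row per run.  "run_sim", "-parm", and "-ncmds"
# are stand-ins for the real simulator command line.
OUT=results.csv
echo "PARM, NCMDS, SEC" > $OUT
NCMDS=10
for PARM in 0 1
do
    for RUN in 1 2 3 4
    do
        /usr/bin/time --format="$PARM, $NCMDS, %e" --output=$OUT --append \
            run_sim -parm $PARM -ncmds $NCMDS > /dev/null
    done
done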

Final Analysis

The final data, after being collected across a number of runs, averaged, and plotted, looks like the figure below.  It shows that there is a performance benefit for shorter transaction tests, but longer tests don't see a benefit, which is a more nuanced answer than a single percentage.
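Whether you do the averaging in a spreadsheet or on the command line is a matter of taste.  As one sketch, the awk snippet below averages the repeats for each PARM/NCMDS pair and prints the percentage improvement of PARM=0 over PARM=1 for each NCMDS, assuming the CSV from the wrapper sketch above is named "results.csv" and PARM only takes the values 0 and 1.

# Sketch: average repeats per (PARM, NCMDS), then report the
# percentage improvement of PARM=0 over PARM=1 for each NCMDS.
awk -F', *' 'NR > 1 { sum[$1,$2] += $3; n[$1,$2]++ }
    END {
        for (key in n) { split(key, k, SUBSEP); seen[k[2]] = 1 }
        for (c in seen) {
            avg0 = sum[0,c] / n[0,c]
            avg1 = sum[1,c] / n[1,c]
            printf "%s, %.2f\n", c, 100 * (avg1 - avg0) / avg1
        }
    }' results.csv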

Doing this with "date", or trying to work out the numbers by hand, would have taken far more time.

Percentage Improvement of PARM=0 over PARM=1 Versus Number of Transactions