
There is one last change we must make. By default, all Rails environments except production have a log level of :debug. We want to set the benchmarking environment’s log level to :info, so that we don’t confound the benchmarking results with I/O issues from heavy log traffic. Add the following line to config/environments/benchmarking.rb:


config.log_level = :info
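
For context, here is a sketch of what the benchmarking environment file might contain. Only the log_level line is the change we just described; the other settings are assumptions, since a benchmarking environment is typically created by copying config/environments/production.rb.

# config/environments/benchmarking.rb (illustrative sketch)
config.cache_classes = true                      # behave like production
config.action_controller.perform_caching = true  # keep caching on, as in production
config.log_level = :info                         # avoid heavy :debug log I/O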

Running the benchmark


Now we can run the benchmarks. We use the Railsbench perf_run command, which
takes as an argument the number of requests to make on each trial. We specify the
benchmark from benchmarks.yml with the -bm= option, and we specify 20 trials with
the RAILS_PERF_RUNS variable:


$ RAILS_PERF_RUNS=20 perf_run 100 -bm=searches_create
benchmarking 20 runs with options 100 -bm=searches_create

perf data file: ./perf_run.searches_create.txt
requests=100, options=-bm=searches_create

loading environment 2.43529

page              request total  stddev%    r/s     ms/r
searches_create        44.70490   0.4030   2.24   447.05

all requests           44.70490   0.4030   2.24   447.05

Railsbench can benchmark multiple actions during the same run. We
are only benchmarking one action here, so we can just look at the
searches_create line in this table for our information. The last line is
just a summary.

The data shows us that we averaged 2.24 requests per second (447.05 milliseconds
per request). Each of 20 trials involves 100 requests, so the mean runtime for each
trial was 44.705 seconds. The standard deviation for that figure was 0.4030% of the
mean, or 0.180 seconds. (Thus, 95% of the trials should fall between 44.34 seconds
and 45.06 seconds, two standard deviations away from the mean.)
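
As a quick sanity check on those figures, here is the same arithmetic in plain Ruby (our own illustration, not Railsbench code):

requests  = 100       # requests per trial
total     = 44.70490  # mean runtime of one trial, in seconds
stddev_pc = 0.4030    # standard deviation as a percentage of the mean

puts total / requests * 1000          # milliseconds per request (~447.05)
puts requests / total                 # requests per second (~2.24)
stddev = total * (stddev_pc / 100)    # standard deviation in seconds (~0.180)
puts "#{total - 2 * stddev} .. #{total + 2 * stddev}"  # the two-sigma range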


perf_run saves its raw data for this run in perf_run.searches_create.txt, and we will
feed that file into other utilities for analysis. Between benchmarks, we store this
file away for comparison across the different versions under test.
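
A simple copy with a descriptive name is enough; the control suffix below is our own naming convention, not something Railsbench requires:

$ cp perf_run.searches_create.txt perf_run.searches_create.control.txt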


Interpreting the results


Railsbench includes a utility called perf_comp that will compare results between
different runs of perf_run. When run with two arguments (the two data files to be
compared), it will give a summary of each and a comparison between them.
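
For example, using the control file we saved above and a hypothetical data file from the second run (both names are our own convention), the invocation would be:

$ perf_comp perf_run.searches_create.control.txt perf_run.searches_create.location_cache.txt
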
Here is the comparison between the first benchmark (the control) and the second (with the
Listing#location cache improvement):
