How to test web server performance

During the development of a web application, it's useful to know what kind of load it can handle. An application's speed is one of its most important quality indicators and is worth taking seriously: a slow application can push your users away.

To check how ready your application is for production, you can use load-testing utilities to measure your server's performance. Slow request processing can also point to incorrect server settings or inefficient scripts.

We'll cover two tools for understanding server-side performance: Apache Benchmark (ab) and Siege.

Apache Benchmark

Apache Benchmark (ab) is a single-threaded command-line tool for measuring the performance of HTTP web servers. Originally developed for testing the Apache HTTP server, it is generic enough to test any web server. For example, you can compare the performance of Apache and Nginx and decide which one performs better in your case.

ApacheBench is preinstalled in many modern distributions. To check whether it is available, print its version:

ab -V

Otherwise, install the apache2-utils package, which includes ab:

sudo apt-get install apache2-utils

The syntax of the command is:

ab [options] [http[s]://]hostname[:port]/path

Let's send 200 requests to the target URL, 20 of them concurrently (example.com is used as a stand-in here; substitute your own server):

ab -c 20 -n 200 https://example.com/

-c - number of concurrent requests
-n - total number of requests

After some time we'll get results:

Server Software:        ECS
Server Hostname:
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256,2048,128
TLS Server Name:

Document Path:          /
Document Length:        1270 bytes

Concurrency Level:      20
Time taken for tests:   5.148 seconds
Complete requests:      200
Failed requests:        0
Total transferred:      324345 bytes
HTML transferred:       254000 bytes
Requests per second:    38.85 [#/sec] (mean)
Time per request:       514.812 [ms] (mean)
Time per request:       25.741 [ms] (mean, across all concurrent requests)
Transfer rate:          61.53 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      345  379  14.8    381     419
Processing:   112  125   4.7    125     137
Waiting:      112  125   4.7    125     137
Total:        460  504  19.0    508     552

Percentage of the requests served within a certain time (ms)
  50%    508
  66%    515
  75%    517
  80%    521
  90%    526
  95%    531
  98%    537
  99%    541
 100%    552 (longest request)

Note that ab sends requests as fast as possible.

The main indicators of site performance are Requests per second, Time per request, and the fastest and slowest response times (the min and max columns in the Connection Times table). Also keep in mind the geographic distance to the tested web server.
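The summary numbers in the report are related in a simple way, which is handy for sanity-checking a run. A quick check using the figures from the report above:

```shell
# How ab derives its summary numbers (values copied from the report above):
#   Requests per second       = Complete requests / Time taken
#   Time per request (mean)   = Concurrency * 1000 / Requests per second
#   ... across all concurrent = 1000 / Requests per second
awk 'BEGIN {
  requests = 200; seconds = 5.148; concurrency = 20
  rps = requests / seconds
  printf "Requests per second: %.2f\n", rps                       # 38.85
  printf "Time per request (mean): %.1f ms\n", concurrency * 1000 / rps
  printf "Across all concurrent: %.1f ms\n", 1000 / rps
}'
```

The tiny difference from the report's 514.812 ms comes from Time taken being rounded to three decimals in the printed output.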

Apache Benchmark supports many options.

For example, you can send POST requests with a specific payload file and headers (again, replace the URL with your own server):

ab -p file.json -T application/json -H 'Authorization: PASS' -c 20 -n 200 https://example.com/

-p - file containing the POST data
-H - adds an arbitrary header (here, authorization)
-T - the Content-Type header for the data
-c - number of concurrent requests
-n - total number of requests

Or set multiple cookies:

ab -c 20 -n 200 -H "Cookie: one_cookie=cookie_value1; two_cookie=cookie_value2" https://example.com/

All available options are listed in the built-in help (ab -h):

Options are:
    -n requests     Number of requests to perform
    -c concurrency  Number of multiple requests to make at a time
    -t timelimit    Seconds to max. to spend on benchmarking
                    This implies -n 50000
    -s timeout      Seconds to max. wait for each response
                    Default is 30 seconds
    -b windowsize   Size of TCP send/receive buffer, in bytes
    -B address      Address to bind to when making outgoing connections
    -p postfile     File containing data to POST. Remember also to set -T
    -u putfile      File containing data to PUT. Remember also to set -T
    -T content-type Content-type header to use for POST/PUT data, eg.
                    'application/x-www-form-urlencoded'
                    Default is 'text/plain'
    -v verbosity    How much troubleshooting info to print
    -w              Print out results in HTML tables
    -i              Use HEAD instead of GET
    -x attributes   String to insert as table attributes
    -y attributes   String to insert as tr attributes
    -z attributes   String to insert as td or th attributes
    -C attribute    Add cookie, eg. 'Apache=1234'. (repeatable)
    -H attribute    Add Arbitrary header line, eg. 'Accept-Encoding: gzip'
                    Inserted after all normal header lines. (repeatable)
    -A attribute    Add Basic WWW Authentication, the attributes
                    are a colon separated username and password.
    -P attribute    Add Basic Proxy Authentication, the attributes
                    are a colon separated username and password.
    -X proxy:port   Proxyserver and port number to use
    -V              Print version number and exit
    -k              Use HTTP KeepAlive feature
    -d              Do not show percentiles served table.
    -S              Do not show confidence estimators and warnings.
    -q              Do not show progress when doing more than 150 requests
    -l              Accept variable document length (use this for dynamic pages)
    -g filename     Output collected data to gnuplot format file.
    -e filename     Output CSV file with percentages served
    -r              Don't exit on socket receive errors.
    -m method       Method name
    -h              Display usage information (this message)
    -I              Disable TLS Server Name Indication (SNI) extension
    -Z ciphersuite  Specify SSL/TLS cipher suite (See openssl ciphers)
    -f protocol     Specify SSL/TLS protocol
                    (SSL2, TLS1, TLS1.1, TLS1.2 or ALL)
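Some of these flags pair nicely for post-processing: -e writes the percentile table as CSV. A sketch, assuming a run like ab -e percentiles.csv -c 20 -n 200 <url> has produced the file (the rows below are a hand-made stand-in mirroring the earlier report, so the snippet runs without a live server):

```shell
# ab -e writes two columns: percentage served, time in ms.
# Stand-in data taken from the percentile table in the report above:
cat > percentiles.csv <<'EOF'
Percentage served,Time in ms
50,508
90,526
95,531
99,541
EOF

# Extract the 95th-percentile latency:
awk -F, '$1 == 95 { print $2 " ms" }' percentiles.csv
```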


Siege

Siege is similar to ab in many ways, but it also has a number of interesting features. We will look at three examples.

First, install the utility:

sudo apt-get install siege

A simple example:

siege -b -c 10 -r 20 https://example.com/

-c - number of concurrent users (here 10)
-r - number of repetitions of the test per user
-b - benchmark mode: run the test with no delay between requests

In other words, each of the 10 simulated users sends 20 requests to the web server, one after another, with no pause between them.

The results of the test:

** SIEGE 4.0.4
** Preparing 10 concurrent users for battle.
The server is now under siege...
Transactions:		         200 hits
Availability:		      100.00 %
Elapsed time:		       12.78 secs
Data transferred:	        0.12 MB
Response time:		        0.63 secs
Transaction rate:	       15.65 trans/sec
Throughput:		        0.01 MB/sec
Concurrency:		        9.92
Successful transactions:         200
Failed transactions:	           0
Longest transaction:	        0.68
Shortest transaction:	        0.57

Transactions - total number of requests from all users.
Elapsed time - total duration of the test.
Data transferred - total amount of data transferred to all simulated users, including headers.
Response time - the average time the server took to respond to a request.
Transaction rate - the average number of requests the server processed per second.
Throughput - the average amount of data transferred per second from the server to the users.
Concurrency - the average number of simultaneous connections during the test.
Successful transactions - the number of requests that completed with a non-error HTTP status.
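As with ab, the summary lines are related, which makes it easy to sanity-check a run. Using the numbers from the output above:

```shell
# Derive Siege's rate figures from the raw counts in the report above:
#   Transaction rate = Transactions / Elapsed time
#   Throughput       = Data transferred / Elapsed time
awk 'BEGIN {
  transactions = 200; elapsed = 12.78; data_mb = 0.12
  printf "Transaction rate: %.2f trans/sec\n", transactions / elapsed  # 15.65
  printf "Throughput: %.2f MB/sec\n", data_mb / elapsed                # 0.01
}'
```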

Let's look at another example:

siege -d 5 -c 10 -t 60s https://example.com/

-d - maximum delay in seconds between requests. The default is 3, meaning each user pauses for a random interval between 1 and 3 seconds; with 5, requests are sent at random intervals between 1 and 5 seconds.
-t - limits the duration of the test and takes priority over the -r option. The value accepts the suffixes 's', 'm' and 'h' for seconds, minutes and hours.

Finally, the most interesting example, which roughly simulates the behavior of a real user by requesting different pages at varying intervals:

siege -d 5 -c 10 -t 60s -i -f ~/urls.txt

-f - path to a text file with the list of URLs to visit during the test
-i - pick URLs from the file at random
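The URLs file is plain text with one URL per line. A minimal example (the hosts are placeholders; use the pages of your own site):

```shell
# Create a sample URL list for siege -f; one URL per line.
cat > urls.txt <<'EOF'
https://example.com/
https://example.com/about
https://example.com/search?q=test
EOF

# Confirm the list:
wc -l < urls.txt
```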

All available Siege options (siege -h):

SIEGE 4.0.4
Usage: siege [options]
       siege [options] URL
       siege -g URL
  -V, --version             VERSION, prints the version number.
  -h, --help                HELP, prints this section.
  -C, --config              CONFIGURATION, show the current config.
  -v, --verbose             VERBOSE, prints notification to screen.
  -q, --quiet               QUIET turns verbose off and suppresses output.
  -g, --get                 GET, pull down HTTP headers and display the
                            transaction. Great for application debugging.
  -p, --print               PRINT, like GET only it prints the entire page.
  -c, --concurrent=NUM      CONCURRENT users, default is 10
  -r, --reps=NUM            REPS, number of times to run the test.
  -t, --time=NUMm           TIMED testing where "m" is modifier S, M, or H
                            ex: --time=1H, one hour test.
  -d, --delay=NUM           Time DELAY, random delay before each request
  -b, --benchmark           BENCHMARK: no delays between requests.
  -i, --internet            INTERNET user simulation, hits URLs randomly.
  -f, --file=FILE           FILE, select a specific URLS FILE.
  -R, --rc=FILE             RC, specify an siegerc file
  -l, --log[=FILE]          LOG to FILE. If FILE is not specified, the
                            default is used: /var/log/siege.log
  -m, --mark="text"         MARK, mark the log file with a string.
                            between .001 and NUM. (NOT COUNTED IN STATS)
  -H, --header="text"       Add a header to request (can be many)
  -A, --user-agent="text"   Sets User-Agent in request
  -T, --content-type="text" Sets Content-Type in request
      --no-parser           NO PARSER, turn off the HTML page parser
      --no-follow           NO FOLLOW, do not follow HTTP redirects


Based on the test results, you can roughly evaluate the server's performance. Keep in mind that a lot depends on the pages being tested, their content, the geographic distance to the tested web server, and other factors.