
By Dmitri Tikhanski

Is your application, server or service fast enough? How do you know? Can you be 100% sure that your latest feature hasn't triggered a performance degradation or memory leak?

The only way to be sure is by regularly checking the performance of your website or app. But which tool should you use for this?

In this article, I’m going to review the pros and cons of the most popular open source solutions for load and performance testing.

Chances are that most of you have already seen this page: it's a great list of 53 of the most commonly used open source performance testing tools. However, some of these tools are limited to the HTTP protocol only, some haven't been updated for years, and most aren't flexible enough to provide parametrization, correlation, assertions and distributed testing capabilities.

Given the challenges that most of us are facing today, out of this list of 53, I would only consider using the following four:

The Grinder

Gatling

Tsung

Apache JMeter
So these are the four that I'm going to review here. In this article, I'll cover the main features of each tool, show a simple load test scenario and an example of the reports. I've also put together a comparison matrix at the end of this report – to help you decide 'at a glance' which tool is best for your project.

The Test Scenario and Infrastructure

For the comparison demo, I'll be using a simple HTTP GET request, executed by 20 threads with 100,000 iterations per thread. Each tool will be sending requests as fast as it can.

The server (application under test) side:

CPU: 4x Xeon L5520 @ 2.27 GHz
RAM: 8 GB
OS: Windows Server 2008 R2 x64
Application Server: IIS 7.5.7600.16385

The client (load generator) side:

CPU: 4x Xeon L5520 @ 2.27 GHz
RAM: 4 GB
OS: Ubuntu Server 12.04 64-bit
Load Test Tools:

Grinder 3.11

Gatling 2.0.0.M3a

Tsung 1.5.1

JMeter 2.11

1. The Grinder

The Grinder is a free Java-based load testing framework available under a BSD-style open source license. It was developed by Paco Gomez and is maintained by Philip Aston. Over the years, the community has also contributed many improvements, fixes and translations.

The Grinder consists of two main parts:

The Grinder Console – This is a GUI application which controls the various Grinder agents and monitors results in real time. The console can also be used as a basic IDE for editing or developing test suites.

Grinder Agents – These are headless load generators; each can have a number of workers to create the load.

Key Features of the Grinder:

TCP proxy – records network activity into the Grinder test script
Distributed testing – can scale with the increasing number of agent instances
The power of Python or Clojure combined with any Java API for test script creation or modification
Flexible parameterization which includes creating test data on-the-fly and the capability to use external data sources like files, databases, etc.
Post processing and assertion – full access to test results for correlation and content verification
Support of multiple protocols

The Grinder Console Running a Sample Test

Grinder Test Results:


2. Gatling

The Gatling Project is another free and open source performance testing tool, primarily developed and maintained by Stephane Landelle. Like The Grinder, Gatling has a basic GUI – but it is limited to a test recorder. However, tests can be developed in an easily readable/writable domain-specific language (DSL).

Key Features of Gatling:

HTTP Recorder

An expressive self-explanatory DSL for test development


Produces higher load by using an asynchronous non-blocking approach

Full support of HTTP(S) protocols & can also be used for JDBC and JMS load testing

Multiple input sources for data-driven tests

Powerful and flexible validation and assertions system

Comprehensive informative load reports
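
To show what that DSL looks like, here is a sketch of a simulation for our demo scenario. It uses the Gatling 2 Scala DSL; note that the injection API changed slightly between the 2.0 milestone builds, and the host address is a placeholder:

```scala
// BasicSimulation.scala - a Gatling 2 DSL sketch for the demo scenario
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class BasicSimulation extends Simulation {

  // Placeholder base URL for the application under test
  val httpConf = http.baseURL("http://192.168.0.1")

  // Each virtual user repeats a simple GET 100,000 times
  val scn = scenario("Simple GET")
    .repeat(100000) {
      exec(http("home").get("/"))
    }

  // Start all 20 users at once, as in the demo scenario
  setUp(scn.inject(atOnceUsers(20)).protocols(httpConf))
}
```
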

The Gatling Recorder Window:

An Example of a Gatling Report for a Load Scenario


3. Tsung

Tsung (previously known as IDX-Tsunami) is the only non-Java-based open source performance testing tool in today's review. Tsung relies on Erlang, so you'll need to have it installed (for Debian/Ubuntu, it's as simple as "apt-get install erlang"). The development of Tsung was started in 2001 by Nicolas Niclausse, who originally implemented a distributed load testing solution for Jabber (XMPP). Several months later, support for more protocols was added, and in 2003 Tsung became able to perform HTTP load testing.

It is currently a fully functional performance testing solution with support for modern protocols like WebSocket, authentication systems, databases, etc.

Key Features of Tsung:

Distributed by design

High performance. The underlying Erlang architecture, built on lightweight processes, enables the simulation of thousands of virtual users on mid-range developer machines

Support of multiple protocols

A test recorder which supports HTTP and Postgres

OS monitoring. Operating system metrics of both the load generator and the application under test can be collected via several protocols

Dynamic scenarios and mixed behaviours. The flexible load scenarios definition mechanism allows for any number of load patterns to be combined in a single test

Post processing and correlation

External data sources for data driven testing

Embedded easy-readable load reports which can be collected and visualized during load
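
Tsung tests are described in an XML configuration file. Here is a sketch of a config approximating our demo scenario; note that Tsung defines load in terms of user arrival rates rather than a fixed thread count, and the server address is a placeholder:

```xml
<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice">
  <!-- load generator(s) -->
  <clients>
    <client host="localhost" use_controller_vm="true"/>
  </clients>
  <!-- application under test (placeholder address) -->
  <servers>
    <server host="192.168.0.1" port="80" type="tcp"/>
  </servers>
  <!-- 20 new users arrive per second during a 1-minute phase -->
  <load>
    <arrivalphase phase="1" duration="1" unit="minute">
      <users arrivalrate="20" unit="second"/>
    </arrivalphase>
  </load>
  <!-- each user session issues a simple GET -->
  <sessions>
    <session name="http-get" probability="100" type="ts_http">
      <request><http url="/" method="GET"/></request>
    </session>
  </sessions>
</tsung>
```
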

Tsung doesn't provide a GUI for test development or execution, so you'll have to live with the shell scripts, which are:

tsung-recorder – a bash script wrapping a recording utility capable of capturing HTTP and Postgres requests and creating a Tsung config file from them

tsung – the main bash control script, used to start/stop/debug and view the status of your test

tsung_stats.pl – a Perl script to generate HTML statistical and graphical reports. It requires gnuplot and the Perl Template library to work. For Debian/Ubuntu, the commands are:
– apt-get install gnuplot
– apt-get install libtemplate-perl
The main tsung script invocation produces the following output:

Running the test:

Querying the current test status:

Generating the statistics report with graphs can be done via the tsung_stats.pl script:

Open report.html with your favorite browser to get the load report. A sample report for a demo scenario is provided below:
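
Putting the workflow together, a typical Tsung session looks roughly like this (exact paths may vary by distribution; the log directory name is a timestamp generated at run time):

```shell
tsung -f scenario.xml start        # run the test described in scenario.xml
tsung status                       # query the progress of the running test

cd ~/.tsung/log/<timestamp>        # Tsung writes results under ~/.tsung/log
/usr/lib/tsung/bin/tsung_stats.pl  # generate report.html with stats and graphs
```
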

A Tsung Statistical Report

A Tsung Graphical Report

4. Apache JMeter

Apache JMeter is the only desktop application from today’s list. It has a user-friendly GUI, making test development and debugging processes much easier.

The earliest version of JMeter available for download is dated the 9th of March, 2001. Since that date, JMeter has been widely adopted and is now a popular open-source alternative to proprietary solutions like Silk Performer and LoadRunner. JMeter has a modular structure, in which the core is extended by plugins. This basically means that all the implemented protocols and features are plugins that have been developed by the Apache Software Foundation or online contributors.

Key Features of JMeter:

Cross-platform. JMeter can be run on any operating system with Java

Scalable. When you need to create a higher load than a single machine can create, JMeter can be executed in a distributed mode – meaning one master JMeter machine will control a number of remote hosts.

Multi-protocol support. The following protocols are all supported ‘out-of-the-box’: HTTP, SMTP, POP3, LDAP, JDBC, FTP, JMS, SOAP, TCP

Multiple implementations of pre- and post-processors around samplers. These provide advanced setup, teardown, parametrization and correlation capabilities

Various assertions to define criteria

Multiple built-in and external listeners to visualize and analyze performance test results

Integration with major build and continuous integration systems – making JMeter performance tests part of the full software development life cycle
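
For load generation, JMeter is usually run from the command line rather than from the GUI. A sketch of typical invocations (the .jmx and .jtl file names are placeholders):

```shell
# Non-GUI run: -n (no GUI), -t (test plan), -l (results log)
jmeter -n -t test_plan.jmx -l results.jtl

# Distributed run: the master drives remote JMeter servers listed with -R
jmeter -n -t test_plan.jmx -l results.jtl -R 192.168.0.10,192.168.0.11
```
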

The JMeter Application With an Aggregated Report on the Load Scenario:

The Grinder, Gatling, Tsung & JMeter Put to the Test

Let’s compare the load test results of these tools with the following metrics:

Average Response Time (ms)

Average Throughput (requests/second)

Total Test Execution Time (minutes)
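
Before looking at the numbers, it helps to recall how these metrics relate in a closed-loop test with a fixed thread pool: by Little's law, concurrency equals throughput multiplied by response time. A small illustrative sketch (the 10 ms response time is an arbitrary example, not a measured value):

```python
def expected_throughput(threads: int, avg_response_s: float) -> float:
    """Approximate steady-state throughput of a fixed pool of threads,
    via Little's law: concurrency = throughput * response time."""
    return threads / avg_response_s

def total_test_time_s(iterations: int, avg_response_s: float) -> float:
    """Approximate wall-clock duration: each thread performs
    `iterations` sequential requests, so duration is independent
    of the thread count."""
    return iterations * avg_response_s

# 20 threads at a 10 ms average response time sustain about 2,000 req/s,
# and 100,000 iterations per thread take about 1,000 s (~17 minutes)
print(expected_throughput(20, 0.010))
print(total_test_time_s(100_000, 0.010))
```

This is why, with the thread count and iterations fixed, the tool with the lowest average response time also finishes the test fastest and shows the highest throughput.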

First, let’s look at the average response and total test execution times:

Now, let’s see the average throughput:

As you can see, JMeter has the fastest response times with the highest average throughput, followed by Tsung and Gatling. The Grinder has the slowest times with the lowest average throughput.

Features Comparison

And finally, here's a comparison of the key features offered by each testing tool:

The Grinder
– GUI: console only
– Test recorder: TCP (including HTTP)
– Test language: Python, Clojure
– Extension language: Python, Clojure
– Host monitoring: no
– Cons: Python knowledge required for test development & editing; reports are very plain and brief

Gatling
– GUI: recorder only
– Test recorder: HTTP
– Test language: Scala-based DSL
– Extension language: Scala
– Load reports: HTML
– Host monitoring: no
– Cons: limited support of protocols; Scala-based DSL language knowledge required; does not scale

Tsung
– GUI: none
– Test recorder: HTTP, Postgres
– Test language: XML
– Extension language: Erlang
– Load reports: HTML
– Host monitoring: yes
– Cons: tested and supported only on Linux systems; bundled reporting isn't easy to interpret

JMeter
– GUI: full desktop application
– Test recorder: HTTP
– Test language: XML (JMX test plans, usually built via the GUI)
– Extension language: Java, Beanshell, Javascript, Jexl
– Load reports: CSV, XML, embedded tables, graphs, plugins
– Host monitoring: yes, with the PerfMon plugin

More About Each Testing Tool

Want to find out more about these tools? Check out the websites below – or post a comment here and I'll do my best to answer!

The Grinder – http://grinder.sourceforge.net

Gatling – http://gatling.io

Tsung – http://tsung.erlang-projects.org

JMeter:

Home Page: http://jmeter.apache.org

JMeter Plugins: http://jmeter-plugins.org

Blazemeter's Plugin for JMeter: https://www.blazemeter.com

On a Final Note…

I truly hope that you’ve found this comparison review useful and that it’s helped you decide which open source performance testing tool to opt for. Out of all these tools, my personal recommendation has to be JMeter. This is what I use myself – along with BlazeMeter’s Load Testing Cloud because of its support for different JMeter versions, plugins and extensions.

