Hotwire Tech Blog

Scribes from Hotwire Engineering


At Hotwire we have a long history of using performance testing tools to certify production release candidates. LoadRunner was used for more than 10 years to run 50+ complex scenarios and sign off release candidates.

In 2015, the Performance Testing team migrated all performance test scripts to Gatling.

This migration brought multiple benefits:

  • Started using Scala DSL instead of LoadRunner UI
  • Implemented real-time monitoring with Graphite and Grafana
  • Replaced Windows load generators with RHEL7 and moved them to AWS
  • Increased number of load generators and enabled CI for performance testing of services
  • Saved $$$ on LoadRunner licenses ($200k+ in estimated savings)

However, with the shift toward a microservices architecture and CI, demand for continuous performance testing increased, and the limited number of load generators on EC2 nodes became a bottleneck:

  • Several dedicated load generators were not enough during peak hours, when multiple pipelines run at once
  • When demand is low (say, overnight), the nodes sit idle and we keep paying for them
  • We have to pick a load generator manually each time or create a script to do it
  • Network ACLs have to be configured the same way on all load generators


To alleviate these bottlenecks, we looked into a containerization approach to create a scalable load-generation farm, where we can start many tests simultaneously and free resources once tests are complete. The solution was to run the Gatling tests in Docker containers on the AWS EC2 Container Service.

This article describes the technologies that were part of our solution.


“Gatling is an open-source load testing framework based on Scala, Akka and Netty. The software is designed to be used as a load testing tool for analyzing and measuring the performance of a variety of services, with a focus on web applications.” – Wikipedia

The Gatling Maven plugin is used to run the Gatling scenarios. Let’s have a look at how the plugin has been configured (in pom.xml) for one of our Gatling projects:

  <dependencies>
    <dependency>
      <groupId>io.gatling.highcharts</groupId>
      <artifactId>gatling-charts-highcharts</artifactId>
      <version>X.Y.Z</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>io.gatling</groupId>
      <artifactId>gatling-core</artifactId>
      <version>X.Y.Z</version>
    </dependency>
    <dependency>
      <groupId>io.gatling</groupId>
      <artifactId>gatling-http</artifactId>
      <version>X.Y.Z</version>
    </dependency>
  </dependencies>
  <build>
    <sourceDirectory>${project.basedir}/user-files/</sourceDirectory>
    <plugins>
      <plugin>
        <groupId>net.alchim31.maven</groupId>
        <artifactId>scala-maven-plugin</artifactId>
        <version>3.2.2</version>
      </plugin>
      <plugin>
        <groupId>io.gatling</groupId>
        <artifactId>gatling-maven-plugin</artifactId>
        <version>2.1.7</version>
        <configuration>
          <simulationsFolder>${project.basedir}/user-files/simulations/</simulationsFolder>
          <simulationClass>simulations.Me</simulationClass>
        </configuration>
      </plugin>
    </plugins>
  </build>



Simply put, “Docker is open source software that makes using Linux containers easier by automating the deployment of applications inside software containers.” In contrast to bare-metal servers and virtual machines, which each run a full operating system, containers share the host kernel and isolate applications at the process level.

Docker Definitions

Docker Container: Docker containers wrap a piece of software in a complete filesystem that contains everything the software needs to run: code, runtime, system tools, and system libraries. This guarantees that it will always run the same, irrespective of the environment it is running in.

Docker Image: A read-only template created during the build phase of Docker. Docker containers are created from Docker images. Each image consists of a series of layers; Docker uses union file systems to combine these layers into a single image. Every image starts from a base image, for example a base Ubuntu image. You can also use your own images as the basis for a new image.

Docker Repository: A namespace that is used to store a Docker image.

Docker Registry: Where Docker images can be stored and eventually accessed.

Dockerfile: A text-based script that contains the instructions and commands for building an image from the base image. Each instruction creates a new layer in the image.


Example: Dockerfile

Here is an example Dockerfile from one of the Gatling projects:

FROM maven:3.2-jdk-7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD . /usr/src/app
ENTRYPOINT mvn -B gatling:execute

The above code sets the base image to maven, tagged 3.2-jdk-7. This image is downloaded and used to run the subsequent instructions. The RUN step runs the mkdir command in a new layer on top of the current image and commits the result. The image resulting from this step is used as the basis for the next instruction.

WORKDIR sets the working directory for the RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. Next, the ADD instruction copies the files and directories from the current project directory to /usr/src/app on the container filesystem.

The final ENTRYPOINT instruction configures the container to run as an executable: this command runs when the container starts. If there are multiple ENTRYPOINT instructions, only the last one in the file takes effect.

Building the Docker Image:

To build a new Docker image we use the docker build command, which needs a Dockerfile and a context. The context refers to the files located at the given PATH or URL, and docker build expects the Dockerfile to be present in the directory from which the command is run.

docker build --tag="<your_docker_registry>/gatling:latest" .

The context for this command is the current directory. Here the repository name is <your_docker_registry>/gatling and the tag is latest. We can attach multiple tags to an image. For example:

docker build --tag="<your_docker_registry>/gatling:latest" --tag="<your_docker_registry>/gatling:v1.0" .


Pushing the image to the registry:

To push a newly built image to the Docker registry we use the docker push command. At Hotwire we use an internally hosted Docker registry.

docker push <your_docker_registry>/gatling:latest



EC2 Container Service (ECS) is a highly scalable container management service provided by Amazon, with full support for Docker containers. You run your application containers on a managed cluster of Amazon EC2 instances.

AWS Definitions

Cluster: A pool of AWS EC2 instances in a particular region that are available to run tasks. Each cluster can have multiple instance types and sizes, and every EC2 instance that is part of the cluster runs an Amazon ECS agent.

Task Definition: An application definition in JSON. Its fields define the image for each container, convey memory and CPU requirements, and specify the port mappings needed for the containers in the task to communicate with each other.

Service: An Amazon ECS service allows you to run and maintain a specified number of instances of a task definition simultaneously.

Container: A container that was created as part of a task.


For our implementation of the performance team’s Docker containers on ECS, we used tasks instead of services, as the performance tests are short-lived processes that end once the test is over.

Create Task Definition

The task definition can be created using the web UI or the AWS ECS CLI. We use the register-task-definition command to create a new task definition. The syntax is as follows:

aws ecs register-task-definition --cli-input-json file://<path_to_json_file>/pt-v2-me.json

The JSON has the following format:

{
  "containerDefinitions": [
    {
      "name": "pt-v2-me",
      "image": "<your_docker_registry>/pt-v2-me:latest",
      "cpu": 200,
      "environment": [
        {
          "name": "BASE_URL",
          "value": ""
        },
        {
          "name": "TOTAL_DURATION",
          "value": "10"
        }
      ],
      "memory": 512,
      "essential": true
    }
  ],
  "family": "pt-v2-me"
}
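For projects with several task definitions, this JSON can also be generated rather than hand-edited. A minimal sketch; the registry host, cpu/memory values, environment defaults, and file name here are placeholders, not our actual configuration:

```python
import json

# Sketch: generate the task definition JSON for `aws ecs register-task-definition`.
# Registry host, resource values, and environment defaults are placeholders.
task_definition = {
    "containerDefinitions": [
        {
            "name": "pt-v2-me",
            "image": "<your_docker_registry>/pt-v2-me:latest",
            "cpu": 200,
            "environment": [
                {"name": "BASE_URL", "value": ""},
                {"name": "TOTAL_DURATION", "value": "10"},
            ],
            "memory": 512,
            "essential": True,
        }
    ],
    "family": "pt-v2-me",
}

# Write the file that is then passed via --cli-input-json file://...
with open("pt-v2-me.json", "w") as f:
    json.dump(task_definition, f, indent=2)
```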


Update Task Definition

Once the Docker image is updated and pushed to the Docker registry, the next step in our implementation is to update the task definition with the new version of the Docker image. Whenever a task definition is updated, a new revision is created for it in AWS ECS. The update is done using the AWS ECS CLI. A code snippet from our script:

# Parameters to the script

# Download the current task definition in JSON format and store it in a temporary file
aws ecs describe-task-definition --task-definition $task_name > /tmp/${task_name}.json

existing_image=`cat /tmp/${task_name}.json | jq -r '.taskDefinition.containerDefinitions[0].image'`

if [ "$deploy_image" != "$existing_image" ]; then

  # Update the task json file with the new image name
  sed -i '' "s/$(echo $existing_image | sed -e 's/[\/&]/\\&/g')/$(echo $deploy_image | sed -e 's/[\/&]/\\&/g')/g" /tmp/${task_name}.json

  # Extract the information from the json file and save it into variables.
  # We need this info to make the CLI request to update the task definition.
  # We are using jq to parse the json file.
  family=`cat /tmp/${task_name}.json | jq '.taskDefinition.family'`
  volumes=`cat /tmp/${task_name}.json | jq '.taskDefinition.volumes'`
  container_definitions=`cat /tmp/${task_name}.json | jq '.taskDefinition.containerDefinitions'`

  # Create the json request for the update
  req_json="{\"volumes\":$volumes,\"containerDefinitions\":$container_definitions,\"family\":$family}"
  echo $req_json > /tmp/$task_name.json.req

  # Update the task definition
  echo "Updating Task Definition"
  aws ecs register-task-definition --cli-input-json file:///tmp/$task_name.json.req > /tmp/$task_name.json.resp
fi



Run the task

We use the AWS ECS CLI to run the task:

aws ecs run-task --cluster <cluster_name> --task-definition <task_definition_name> --overrides <variables_to_override> --output text

The --overrides parameter supplies override values for variables that have been set up as part of the task definition, for example BASE_URL and TOTAL_DURATION defined in the snippet above. Its value takes the form:

overrides="\"{\\\"containerOverrides\\\": [{\\\"name\\\": \\\"<task_definition_name>\\\",\\\"environment\\\": ["$options_list"]}]}\""

where options_list is set to key-value pairs:

options_list="{\\\"name\\\": \\\"key\\\",\\\"value\\\": \\\"value\\\"}"
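The layers of backslash escaping come from pushing JSON through the shell; the same payload can be built with json.dumps and handed to the CLI. A sketch, where the container name and the example variable values are placeholders:

```python
import json

def build_overrides(container_name: str, env: dict) -> str:
    """Build the JSON string passed to `aws ecs run-task --overrides`.

    `env` maps environment variable names to override values; the
    structure follows ECS's containerOverrides schema."""
    return json.dumps({
        "containerOverrides": [
            {
                "name": container_name,
                "environment": [{"name": k, "value": v} for k, v in env.items()],
            }
        ]
    })

# Placeholder values matching the task definition shown earlier:
overrides = build_overrides(
    "pt-v2-me", {"BASE_URL": "http://example.com", "TOTAL_DURATION": "10"}
)
print(overrides)
```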



Jenkins jobs have been created to call the above functionality, allowing the Gatling tests to be run on the AWS EC2 Container Service; a Jenkins job kicks off the task on the ECS cluster.





This approach gave us the following benefits:

  1. Better utilization of compute resources, resulting in significant cost savings.
  2. High performance, as we can start, stop, and create thousands of containers in seconds.
  3. Using the AWS EC2 Container Service adds the advantage of integration with other AWS services.
  4. The solution is scalable: we can add instances to the cluster in times of high demand and shrink it as demand reduces.
  5. The same containers can be run on AWS or on premises, making the solution highly portable.


