
New Performance Testing Tool added to the Cloud Foundry Incubator

At the April Cloud Foundry Advisory Board meeting, the Performance Acceptance Test (PAT) project, contributed by IBM, was accepted as a new Cloud Foundry incubator project. The idea behind PAT is that there should be a super easy way to test the performance of Cloud Foundry installations, so that any intended improvements can be proven and regressions can be caught before the new code goes live.

High level overview of the PAT tool

PAT was originally created at the start of 2014, when it was noticed that a previous load-testing tool project called Stac2 had gone stale, leaving an important gap in the Cloud Foundry CI/CD story that needed filling. IBM shared an early preview of the PAT tool through the cloudfoundry-community repository and has been developing it with input from the performance architects working on IBM's Cloud Foundry-based Bluemix project and other interested parties in the CF community.

So let’s take a closer look at what is in PAT today, and what is planned for the future.

PAT can be run in three modes (command line, server and browser-based UI), although not all functions are available in every mode just yet. Command-line mode is designed for use as part of a CI/CD process. Server mode runs PAT as a web server, allowing the user to interact with it through a browser-based UI and see graphical displays of the results. If required, PAT can also be run as a Cloud Foundry application.

The main concept of PAT is the experiment. An experiment defines the load to generate against the Cloud Foundry installation: the set of CF commands to execute (e.g. log in, target a space, push an application), how many times they should be run, and how much of that should be done concurrently. Another choice is whether to execute the commands via the CF CLI or via the Cloud Controller REST API.

Longer-running experiments can be created by specifying a repetition interval and a time after which to stop the experiment. Results are displayed in the console or the web UI, depending on which mode you are running in, and can be written out either to a CSV file or to a Redis database.

Parameters can be passed in by specifying them directly on the command line, or by putting them in a YAML file and pointing to that instead. This is a convenient way to capture the set up of multiple experiments that you may want to use over and over as part of your benchmarking work.
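As a purely hypothetical sketch of such a parameter file, the key names below simply mirror the command-line flags shown elsewhere in this post; they are an assumption on our part, not the confirmed PAT schema, so consult the project's README for the real format.

```yaml
# Hypothetical PAT experiment definition.
# Key names mirror the CLI flags and are assumptions, not the confirmed schema.
workload: gcf:push   # CF commands to execute
iterations: 100      # how many times to run the workload
concurrency: 10      # number of concurrent workers
```

Keeping one such file per experiment makes it easy to re-run the same benchmark consistently over time.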

The figure below shows the typical console output when running in CLI mode. The command used to kick this off was:

pat -workload=gcf:push -iterations=5 -concurrency=1

Experiment results in the PAT tool console

This screen was captured after two of the five iterations had completed. The output CSV at the end of this run looked as follows:

Experiment results can be written out as a CSV

This is just an extract; the full CSV also contains breakdown statistics for each individual operation in the workload. The times are stated in nanoseconds.
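Since the times are in nanoseconds, a quick post-processing step can make them easier to read. The snippet below is a minimal sketch, assuming a CSV with a nanosecond duration column named TotalTime; the actual column names in PAT's output may differ.

```python
import csv
import io

# Hypothetical sample of PAT-style CSV output; real column names may differ.
sample = """Iteration,TotalTime
1,2184382110
2,1973518442
"""

NS_PER_MS = 1_000_000  # nanoseconds per millisecond

def durations_ms(csv_text):
    """Parse the CSV text and convert nanosecond durations to milliseconds."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [int(row["TotalTime"]) / NS_PER_MS for row in reader]

print(durations_ms(sample))  # durations in milliseconds
```

The same approach works on a results file by replacing the in-memory string with an open file handle.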

When running in server mode, the UI displays a bar graph of workload durations by iteration, and a throughput line graph showing the number of operations per second completing, for each operation type. In the graph shown below, the workload was specified as a REST login, a target and then a push. A throughput line is displayed for each operation.

PAT server mode allows experiment results to be viewed graphically

We are actively enhancing PAT, and work underway includes:

  • making the full range of PAT functions available across all modes of operation
  • simulating peaks and troughs of usage by varying the number of concurrent workers in an experiment
  • sharing and comparing experiment results
  • improving error handling.

We would love as many people as possible to get hold of PAT, try it out, and start giving us feedback and code contributions. With this in mind, here is our call to action:

  1. Download the code from GitHub and install according to the instructions.
  2. Target a new space called ‘pats’ in your environment using CF.
  3. Run this standard experiment:
    • -workload=gcf:push -concurrency=10 -iterations=100
  4. Share your results here

By sharing the observations on CF performance that we gather from running PAT against different installations, we can work together to make PAT the performance tool that the whole community wants, and hopefully accelerate it towards becoming a CF core project.

Who knows – we may even uncover some insights that will be helpful in making improvements to Cloud Foundry performance!