Creating Benchmark

egor-romanov edited this page Sep 20, 2022 · 3 revisions

This wiki page shows how to write Terraform and k6 scripts to set up infrastructure and run a benchmark.

Analyzing your app usage

First of all, you should understand what the typical load profile for your app is.

For example, suppose your app is an auth service and you know you have to handle spikes in which:

  • a ton of users sign in,
  • each visits two pages on the web app, triggering a few more requests to your auth service within 15 seconds,
  • and the whole spike may last up to 30 minutes.

Then you should mimic this load profile in your benchmark: have the k6 script run the exact requests a single user makes, then tweak the number of virtual users, a duration long enough to reveal spikes in resource consumption, and so on.
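In k6, such a spike profile can be declared up front as ramping stages. A sketch (the stage durations and targets below are assumptions, not measured numbers; this runs under the k6 CLI, not Node):

```javascript
// Ramp up to the spike, hold it for the bulk of the 30 minutes,
// then ramp back down.
export const options = {
  stages: [
    { duration: '2m', target: 200 },  // ramp up to 200 virtual users
    { duration: '26m', target: 200 }, // hold the spike
    { duration: '2m', target: 0 },    // ramp down
  ],
}
```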

import http from 'k6/http'
import encoding from 'k6/encoding'
import { check } from 'k6'

export default function () {
  // username and password are assumed to be defined elsewhere,
  // e.g. read from __ENV
  const credentials = `${username}:${password}`

  const encodedCredentials = encoding.b64encode(credentials)
  const options = {
    headers: {
      Authorization: `Basic ${encodedCredentials}`,
    },
  }

  const loginRes = http.get(`http://example.com/basic-auth/`, options)

  const authToken = loginRes.json('access')
  check(authToken, { 'logged in successfully': () => authToken !== '' })

  const responses = http.batch([
    ['GET', 'https://example.com/public/link/1/', null, { tags: { name: 'PublicLink' } }],
    ['GET', 'https://example.com/public/link/2/', null, { tags: { name: 'PublicLink' } }],
    ['GET', 'https://example.com/public/link/3/', null, { tags: { name: 'PublicLink' } }],
  ])

  const url = 'https://example.com/public/link/my'
  const payload = JSON.stringify({ name: 'New name' })
  const res = http.patch(url, payload, {
    headers: {
      Authorization: `Bearer ${authToken}`,
      'Content-Type': 'application/json',
    },
  })

  const isSuccessfulUpdate = check(res, {
    'Update worked': () => res.status === 200,
    'Updated correctly': () => res.json('name') === 'New name',
  })
}

Running locally

After you have determined the load profile and created a benchmark script or scripts, it is time to run the benchmark locally to ensure everything works as expected. You may want to add a few debug messages to check that everything is correct.
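A local run is just the k6 CLI pointed at your script; the flags and values below are illustrative, and the k6 binary must be installed locally:

```shell
# quick local smoke test: few virtual users, short duration
# --http-debug prints request/response headers, handy for debugging
k6 run --vus 5 --duration 1m --http-debug load.js
```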

If everything is fine, you can now prepare everything to automate the running of your benchmark.

Automate everything

Let's automate the job of running the benchmark.

Starting from the template

Let's start with the template:

  • clone the latest Supabench version from GitHub
  • go to the supabench repo: cd ./supabench
  • copy the template directory: mkdir -p ./examples/my-service && cp -r ./examples/template ./examples/my-service/my-benchmark
  • cd to your benchmark folder: cd ./examples/my-service/my-benchmark

Template structure

The template has the following major parts:

  • terraform main to run terraform modules
  • terraform setup module to run the System Under Test (SUT) infrastructure
  • terraform script module to run loader infrastructure and k6 load scripts
  • k6 folder with load scripts
template
│   main.tf
│   variables.tf
│
└───modules
│   │
│   └───setup
│   │   │   main.tf
│   │   │   variables.tf
│   │
│   └───script
│       │   main.tf
│       │   variables.tf
│       │   entrypoint.sh.tpl
│
└───k6
    │   common.js
    │   summary.js
    │   load.js
    │   Makefile

Main

The Main is the entry point at which Terraform is applied. It simply runs the modules in the modules folder.

Add here any variables that the modules require: for the SUT infrastructure, for the loader infrastructure, or for the k6 scripts themselves.

Some variables are predefined in the template: those provided by Supabench, plus some you can reuse if you keep the same loader provisioning as the template. There are also a few example variables that you should replace with ones related to your service and benchmark.
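For instance, wiring a service-specific variable from the main configuration into a module can look like this (using sut_url from this template as the example):

```hcl
# variables.tf: declare the variable so Supabench can pass it in
variable "sut_url" {
  description = "Base URL of the system under test"
  type        = string
}

# main.tf: forward it to the script module
module "script" {
  source  = "./modules/script"
  sut_url = var.sut_url
}
```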


Setup module

The Setup module runs the SUT infrastructure. The provisioning may vary depending on how your service runs in production. If your service runs on Fly, look at the Realtime example under the examples folder. If your app runs on an EC2 instance, try the AWS Terraform provider.
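As a sketch, an EC2-hosted SUT's setup module could provision the instance and expose its address for the load scripts (the resource and output names here are assumptions):

```hcl
# modules/setup/main.tf: provision the SUT instance
resource "aws_instance" "sut" {
  ami           = var.ami_id
  instance_type = var.instance_type
}

# expose the SUT address so it can be passed on to the k6 scripts
output "sut_url" {
  value = "http://${aws_instance.sut.public_ip}"
}
```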


Script module

The Script module provisions the loader infrastructure and runs the k6 load scripts.

  • First, it provisions the loader instance in AWS.
# creating ec2 instance that will be used to generate load
# Most likely, you will not need to change it
resource "aws_instance" "k6" {
  ami                    = var.ami_id
  instance_type          = var.instance_type
  vpc_security_group_ids = [var.security_group_id]
  subnet_id              = var.subnet_id

  key_name = var.key_name

  tags = {
    terraform   = "true"
    environment = "qa"
    app         = var.sut_name
    creator     = "supabench"
  }
}
  • Then, it will ssh into the provisioned instance.
# uploading k6 scripts and running k6 load test
resource "null_resource" "remote" {
  # ssh into the instance, you likely won't need to change this part
  connection {
    type        = "ssh"
    user        = var.instance_user
    host        = aws_instance.k6.public_ip
    private_key = file(var.private_key_location) # key contents, not a path
    timeout     = "1m"
  }
  • Next, it uploads the k6 scripts to the instance.
    • You will also need to specify custom variables for the entrypoint.sh script.
  # upload k6 scripts to remote instance; you likely won't need to change this part
  provisioner "file" {
    source      = "${path.root}/k6"
    destination = "/tmp"
  }

  # upload entrypoint script to a remote instance
  # specify your custom variables here
  provisioner "file" {
    destination = "/tmp/k6/entrypoint.sh"

    content = templatefile(
      "${path.module}/entrypoint.sh.tpl",
      {
        # add your custom variables here
        some_var        = var.some_var
        sut_token       = var.sut_token
        sut_url         = var.sut_url

        duration        = var.duration

        # don't change these
        testrun_id      = var.testrun_id
        benchmark_id    = var.benchmark_id
        testrun_name    = var.testrun_name
        test_origin     = var.test_origin
        supabench_token = var.supabench_token
        supabench_uri   = var.supabench_uri
      }
    )
  }
  • Also, we need to set up some environment variables:
  # set env vars
  provisioner "remote-exec" {
    inline = [
      "#!/bin/bash",
      # add your env vars here:
      "echo \"export SOME_VAR='${var.some_var}'\" >> ~/.bashrc",
      "echo \"export SUT_TOKEN='${var.sut_token}'\" >> ~/.bashrc",
      "echo \"export SUT_URL='${var.sut_url}'\" >> ~/.bashrc",
      # don't change these:
      "echo \"export RUN_ID='${var.testrun_id}'\" >> ~/.bashrc",
      "echo \"export BENCHMARK_ID='${var.benchmark_id}'\" >> ~/.bashrc",
      "echo \"export TEST_RUN='${var.testrun_name}'\" >> ~/.bashrc",
      "echo \"export TEST_ORIGIN='${var.test_origin}'\" >> ~/.bashrc",
      "echo \"export SUPABENCH_TOKEN='${var.supabench_token}'\" >> ~/.bashrc",
      "echo \"export SUPABENCH_URI='${var.supabench_uri}'\" >> ~/.bashrc",
    ]
  }
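What those inline commands do can be reproduced locally: each echo appends an export line to a profile file that later shells read. A self-contained sketch with placeholder values:

```shell
# simulate the provisioner: append export lines to a profile file,
# then source it, as the final remote-exec step does with ~/.bashrc
profile="$(mktemp)"
echo "export SUT_URL='https://sut.example.com'" >> "$profile"
echo "export RUN_ID='run-123'" >> "$profile"
. "$profile"
echo "$SUT_URL"  # -> https://sut.example.com
```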
  • Finally, it will run the k6 load scripts.
  # run k6 load test, you likely won't need to change this part
  provisioner "remote-exec" {
    inline = [
      "#!/bin/bash",
      "source ~/.bashrc",
      "sudo chown -R ubuntu:ubuntu /tmp/k6",
      "sudo chmod +x /tmp/k6/entrypoint.sh",
      "/tmp/k6/entrypoint.sh",
    ]
  }
  • Let's also have a quick overview of the entrypoint.sh script. It will:
    • download Go 1.19 and add it to PATH,
    • build k6 with xk6 extensions (you can add extra extensions here if you want),
    • run telegraf to collect metrics from the k6 load test and push them to Prometheus,
    • export the required environment variables,
    • run the k6 load test via the make command.
#!/bin/bash

# update golang and make sure go is in path
wget https://golang.org/dl/go1.19.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.19.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin

# build k6 with xk6 plugins; you may add some extra plugins here if needed
export K6_VERSION='v0.37.0'
~/go/bin/xk6 build --output /tmp/k6/k6 \
  --with github.com/jdheyburn/[email protected] \
  --with github.com/grafana/xk6-sql@659485a

# run telegraf to collect metrics from k6 and host and push them to prometheus
telegraf --config telegraf.conf &>/dev/null &

# go to k6 dir and run k6
cd /tmp/k6 || exit 1

# leave these as is. Supabench passes them, and they are needed to upload the report.
export RUN_ID="${testrun_id}"
export BENCHMARK_ID="${benchmark_id}"
export TEST_RUN="${testrun_name}"
export TEST_ORIGIN="${test_origin}"
export SUPABENCH_TOKEN="${supabench_token}"
export SUPABENCH_URI="${supabench_uri}"

# this is the place to add your variables required by the benchmark.
export SOME_VAR="${some_var}"
export SUT_TOKEN="${sut_token}"
export SUT_URL="${sut_url}"

# make command from the k6 folder to run the k6 benchmark; you can add some extra vars here if needed
# Leave testrun as is: it is passed to the k6 command to add a global tag to all metrics for Grafana!
make run \
  duration="${duration}" \
  testrun="${testrun_name}"
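The make run call above assumes a run target in the k6 folder's Makefile roughly shaped like this (a sketch; the real Makefile in the template is the source of truth):

```makefile
# hypothetical shape of the `run` target: forward the duration and
# attach the testrun name as a global tag so Grafana can split reports
run:
	./k6 run \
	  --tag testrun="$(testrun)" \
	  --env DURATION="$(duration)" \
	  load.js
```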

k6 folder

The k6 folder has a few general-purpose files:

  • common.js with some helpers you may or may not need while building your scripts.
  • summary.js with a function to upload the summary to Supabench.
  • Makefile to run the k6 load test; you may add some custom vars here. Do not remove this file or parts of it, because it helps split reports in Grafana later; extend it instead.

And the load.js file, where you put your script.

Take a look at the load.js file to see how to build your scripts, and read the comments there, as there are some mandatory parts like:

// export handleSummary from summary.js to upload the report to Supabench
export { handleSummary } from './summary.js'

And some optional parts that you may find helpful.
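Putting the mandatory and optional parts together, a minimal load.js could look like this (it runs under the k6 CLI, not Node; the endpoint and stage values are placeholders):

```javascript
import http from 'k6/http'
import { check } from 'k6'

// mandatory: upload the report to Supabench when the test ends
export { handleSummary } from './summary.js'

export const options = {
  stages: [{ duration: __ENV.DURATION || '1m', target: 10 }],
}

export default function () {
  const res = http.get(`${__ENV.SUT_URL}/health`) // placeholder endpoint
  check(res, { 'status is 200': () => res.status === 200 })
}
```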

Testing terraform

After you have finished configuring your terraform, you can run everything together locally by running the following command:

terraform init
terraform apply
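If you don't want to hardcode values while testing, Terraform variables can be passed on the command line (the values here are placeholders):

```shell
terraform apply \
  -var "sut_url=https://my-service.com" \
  -var "duration=30m"
```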

Don't forget to run destroy afterward to remove everything you created to run the benchmark.

terraform destroy

If everything is good and working as expected, you may now upload the benchmark files to Supabench.

Uploading to Supabench

  • If you haven't created a benchmark yet, you should do so. Please refer to the Running Benchmark doc.
  • Create an archive of your benchmark folder ./examples/my-service/my-benchmark.
  • Then go to Collections / Secrets in the Admin UI and upload the archive to the Secrets related to my-benchmark.
  • Specify your variables under the Vars field in JSON format.
{
  "some_var": "some value",
  "sut_url": "https://my-service.com",
  "sut_token": "my-token",

  "instance_type": "t2.micro",

  "some_sut_related_var": "some value",
  ...
}

Now you can run your benchmark via the API or the user UI.

Next steps

If you want to set up Supabench locally, go to Setup.

If you have any issues understanding the terms, you may refer to Terminology.
