Hypertester: Running Automated Tests Against Dynamic Environments

February 14, 2022

3 min read

Automation is in Hyperscience’s DNA, and automated testing has been integrated into our Engineering testing processes for years. Simply put, automated testing compares actual outcomes with expected outcomes. This can be achieved by writing test scripts or by using various automation testing tools.

Generally speaking, automation testing is used to automate repetitive tasks and/or other testing tasks that are tedious to perform manually. It is also used for scalability: we want to be able to test a multitude of scenarios that would be impossible to cover manually.

How did tests run in the past?

For multiple tests, we relied solely on manually created environments in the AWS cloud. As the diagram below shows, a tester had to interact with AWX (the open-source version of Ansible Tower), which executed tests on a fleet of dedicated EC2 instances. In contrast, we wanted to use generic continuous integration tooling to manage environments (creation/deletion) and to execute tests iteratively. It’s also worth noting that we want to replace AWX with Gitlab wherever possible.

From a design perspective, all of our environments have a test runner: an instance provisioned specifically to execute particular tests against a particular environment.

More details on functional tests

Functional test environments have been organized in pools. The functional test execution parallelization approach (see the diagram below) relies on a service running on a dedicated server, which dispatches test suites among the environments within the pool. An internally developed web app facilitates user interaction. Some notable drawbacks of this approach include:

  • Environment management happens manually, and the pool of environments often doesn’t have enough workers (requiring ad-hoc spin-up of environments through IaC (Terraform))
  • Similarly, we have been performing environment maintenance by hand (i.e. when an environment needs to be upgraded, we run our toolset to do so)
  • The work is split between engineering tiers: DevOps creates the environments and QA manages the tests and the testing pipeline
  • QA’s main focus should be test execution and results, rather than maintenance of the testing pipeline
  • Heavy dependency on a continuous deployment tool (AWX) for launching the tests, as well as for parameterizing scheduled runs on it
  • This is a mono-testing-framework pipeline (i.e. the pipeline addresses functional test cases but requires reimplementation for other types of tests – load/capacity, upgrade, performance, etc.)

More details on load/capacity tests

For load/capacity tests in particular, we identified some notable drawbacks in how we had been calculating CPU and storage requirements. There was an urgent need to improve the accuracy of our formulas and calculation mechanics.

Some notable drawbacks of manually managing the infrastructure include:

  • Solving load issues for large scale environments was a cumbersome job as it had to be done manually
  • Load testing was enormously time-consuming (at least three different people were engaged in different activities – one with the infra management, one with the tests execution, one with reading the results and taking valuable actions)

The need for E2E automation

E2E automated infrastructure management for testing had been missing for some time. So we had to come up with a way not to create testing environments ourselves, but rather to have something manage this for us.

This was how Hypertester was born.

Hypertester

We developed Hypertester in order to facilitate automated tests against dynamically created environments.

We saw value in developing and maintaining our own tool, as it provided us with the flexibility of adjusting it to our needs.

For us, Hypertester is the tool that glues together environment creation and provisioning, test execution, and (eventually) environment destruction.

How does it work?

Hypertester works with specification files: YAML documents that define all the details about a test run. Every Hypertester testing framework tightly couples this spec file with a Terraform stack that manages the environment to spin up.

Logically speaking, a Hypertester user defines a specification file. This file describes the user’s intent: what type of environments to run tests on, markers to keep or delete the environment after the test execution, which branch to run the tests from, which version of the product to deploy on the environment, which tests to execute, and any extra scripts to run.

Here is a rough sketch of a spec file for on-prem functional tests:
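
The sketch below is hypothetical: the top-level sections match the ones elaborated on after the snippet, but every nested key and value is an assumption rather than the actual format.

```yaml
# Hypothetical spec file sketch; top-level sections are real, nested keys and values are assumptions.
control_plane:
  lifecycle_policy: destroy_on_expire     # whether to keep or destroy the environment after the run
variables:
  IN_PRODUCT_VERSION: "36.0"              # input variables are forwarded to the spun-up environment
environments:
  - name: functional-debian               # environment definitions that are part of this test run
    platform: debian
test_execution:
  scripts:
    - run_functional_tests.sh             # pushed to and executed on the test-runner instance
environment_template:
  terraform_stack: stacks/onprem-functional   # integration point with the Terraform stack
test_runner_provisioning:
  playbook: provision_test_runner.yml     # Ansible playbook used to provision the test runner
  extra_vars: {}
```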

To elaborate on the above:

  • control_plane – manages the lifecycle of the environment
  • variables – a globally defined dictionary of variables; variables can also be defined elsewhere
  • environments – specifies the environment definitions that are part of this test run
  • test_execution – identifies the scripts to push onto the environment’s test-runner instance and execute there
  • environment_template – describes the integration with Terraform; without this part, manipulating the environment through the Terraform stack won’t work
  • test_runner_provisioning – describes which Ansible playbook to use, as well as any extra variables to apply

As a follow-up, Hypertester in conjunction with Gitlab takes care of managing the environments and running the tests independently from one another (cf. the image below).

One of the strengths of Hypertester is that it forwards all input and output variables to the spun-up environments. This way, a user can natively specify any variable and it will be propagated to the dynamically spun-up environment. To distinguish these, they carry the IN_ prefix. Similarly, output variables from Terraform are also forwarded to the environment and carry the OUT_ prefix. There is also a third set of variables: those that are internal to Hypertester and which it makes no sense to let the user change. These are prefixed with HT_.
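
As a small, hypothetical illustration of the three prefixes (the variable names themselves are invented):

```yaml
# Invented variable names, for illustration only.
variables:
  IN_PRODUCT_VERSION: "36.0"   # user input, forwarded as-is to the spun-up environment
  IN_EXTRA_WORKERS: "2"
# Terraform outputs (e.g. the environment's address) surface as OUT_* variables on the test runner,
# while values managed internally by Hypertester carry the HT_ prefix and are not user-configurable.
```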

How do we trigger it?

As Hypertester “lives” inside our Gitlab, it was implemented with this in mind.

We use the Hypertester Docker image to spawn multiple child pipeline jobs; their number is based on the number of environment definitions found in the testing framework’s specification file.

All testing framework definitions (specification file, Terraform code, test-execution scripts) are made available in a dedicated Gitlab repository. It’s important to note that Hypertester’s source code does not live in this same repo; rather, its built image is used in the repo’s Gitlab CI definition.
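
A minimal sketch of how this could look in a .gitlab-ci.yml, using Gitlab’s dynamic child pipelines; the image path and the generate command are assumptions, not Hypertester’s actual interface:

```yaml
# Minimal sketch; the image path and the "hypertester generate" command are assumptions.
stages:
  - generate
  - test

generate-child-pipelines:
  stage: generate
  image: registry.example.com/devops/hypertester:latest
  script:
    # Render one child pipeline job per environment definition found in the spec file
    - hypertester generate --spec spec.yml --output child-pipeline.yml
  artifacts:
    paths:
      - child-pipeline.yml

run-tests:
  stage: test
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-child-pipelines
    strategy: depend   # the parent pipeline waits for the child pipelines to finish
```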

There are two ways to trigger a pipeline that we have put in place:

  1. Through the use of a python script which wraps the Gitlab CI functionality and takes as input only the arguments that tell Hypertester which particular test type to launch. The script lives in the dedicated Gitlab repo, alongside all the testing framework definitions.
    Using the script is as simple as copy-pasting an example invocation and adjusting it to one’s needs; the arguments it takes are described further below.

  2. Through scheduled runs, for which we rely on the Gitlab built-in schedules feature.

To supply multiple test suites in our Gitlab repo, we have also introduced a test_suites directory. This helps us source-control the various tests we have in place per testing framework, and it makes triggering the tests easier for our users.

From the specification file snippet in the previous section, one can extract the environment component into a specific test suite and refer to it when calling the trigger script (via an argument). The test suite definition file can either be pushed to, stored in, and used from the repo, or be provided as a local file, for example:

  • my_file.yml test suite file’s content (a hypothetical sketch follows this list):

  • Triggering the pipeline from the user’s terminal would look like:

  • --testrun-spec-file – specifies the relative path to the spec.yml file from the my-test-branch branch
  • --local-test-spec-extension-file – specifies the local my_file.yml that has the env definitions (and possibly extra variables)
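
A hypothetical sketch of such a my_file.yml; the keys mirror the spec file sections described earlier, and the concrete values are invented:

```yaml
# Hypothetical my_file.yml: extracted environment definitions plus optional extra variables.
environments:
  - name: functional-debian
    platform: debian
variables:
  IN_PRODUCT_VERSION: "36.0"
```

When triggering from the terminal, this local file would be passed via --local-test-spec-extension-file, while --testrun-spec-file points at the spec.yml in the repo.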

Integrating Hypertester with Gitlab has allowed us to merge these YAML files into a final one, from which Gitlab launches the child jobs.

What types of tests have been integrated?

Hypertester was built with modularity and flexibility in mind. For example, if we want to test the functionality of product X today and of product Y tomorrow, Hypertester can do so. The only extra preparation this requires is adjusting the tests to product Y.

The number of automation tests that Hypertester can deal with is virtually limitless as long as the testing framework integration with Hypertester is in place.

What this means is that no matter which tests we want to execute, as long as we have a strict implementation of the required IaC integrations for a predefined set of environments, Hypertester will provide results. Regardless of whether the tests are functional, load/capacity, performance, or upgrade tests, to name but a few, Hypertester will be able to execute them.

In the previous sections, we discussed our biggest testing framework implementations (functional and load/capacity tests) before and after Hypertester. This is to say that running automated tests was possible before Hypertester; however, maintaining the testing environments’ lifecycle was a tedious job.

Through the test_execution element in the specification file, Hypertester provides its users with the ability to execute not only tests but also a multitude of other scripts. Owing to this, users can specify what results to collect and where to store them, automatically scale an autoscaling group (ASG) up and down, or poke RDS instances for detailed information about specific queries, to name but a few.
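
As a hypothetical sketch (the script names are invented), a richer test_execution section might look like:

```yaml
# Hypothetical test_execution section; script names are invented for illustration.
test_execution:
  scripts:
    - run_load_tests.sh            # the actual test suite
    - collect_results.sh           # gather results and push them to the chosen storage location
    - scale_asg.sh                 # scale an autoscaling group up or down around the run
    - inspect_rds_queries.sh       # pull detailed information about specific queries from RDS
```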

In short, Hypertester enabled full E2E automated testing by filling in the missing puzzle pieces: complete, dynamic management of the environments.

What types of platforms can the tests be run on?

Hypertester is flexible on many levels. Not only can one test multiple products, but one can also target environments running on different platforms.

For instance, supporting a product on Debian and RHEL requires that the product is tested on both types of environments. Hypertester can do so. As the tool is quite extensible, provided that the Terraform stack has the required definitions in place, it only takes a few small changes in the environment definitions in the spec file to test for this:

  • Specification file changes (a hypothetical sketch follows this list)

  • Required implementations in the Terraform stack
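
On the specification file side, a hypothetical sketch of environment definitions covering both platforms (the key names are assumptions) could look like:

```yaml
# Hypothetical environment definitions for both platforms; key names are assumptions.
environments:
  - name: functional-debian
    platform: debian
  - name: functional-rhel
    platform: rhel
```

The Terraform stack would then need to map such a platform value to the corresponding AMI and any platform-specific provisioning, which is what the second bullet refers to.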

For larger changes, such as manipulating Kubernetes-managed environments, more than just the above has to be done. We manipulate on-prem-like infrastructures through the Terraform AWS provider.

However, Hypertester is not limited to on-prem-like platforms. Its extensibility allows it to target infrastructure based in various cloud providers or Kubernetes. For the latter, more development has to be done using the existing respective Terraform providers.

What about the costs?

A serious drawback of manually maintaining the pool of environments in AWS, not discussed so far, was that we kept paying for them even while they were idle (i.e. no tests were scheduled to run on them). And the more tests we had to run, against more platforms, the larger the AWS bill.

Ultimately, introducing an automated way to spin up environments meant quicker resolution times for us, the DevOps team. However, in some particular cases, engineers required the testing environments to be left up and running (collecting results, manual interaction with the environment for debugging). Through the control plane defined in the specification file, we allow them to keep an environment “alive”.

The problem with the above is obvious: the more engineers onboard with Hypertester, the more environments get spun up, and the higher the chance that engineers ask Hypertester to keep their environments rather than destroy them. Hence, a larger AWS bill.

To overcome this, we had to come up with a mechanism that keeps us from leaving stale environments’ resources in place and involuntarily paying for assets with no added value. Our mechanics were simple: we decided to isolate all automation tests within the same AWS account and label all resources in this account with expiration_date tags. This tag is constructed automatically from the value of the retention_period variable defined in the specification file and comes into play when lifecycle_policy is set to destroy_on_expire. In parallel, we schedule a nuke pipeline to run regularly and check whether any stale resources are present.
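
As a minimal, hypothetical sketch of the lifecycle-related part of a spec file (placing these keys under control_plane is an assumption):

```yaml
# Hypothetical lifecycle configuration; nesting under control_plane is an assumption.
control_plane:
  lifecycle_policy: destroy_on_expire   # destroy the environment once it has expired
  retention_period: 48h                 # used to compute the expiration_date tag on AWS resources
```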

The nuke pipeline that we have in place relies mainly on the aws-nuke project. In its configuration file, we specify the target resource types for nuking in our account dedicated to automation testing. What is key in our implementation based on the expiration_date tag is the filter configuration below.
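
A minimal sketch of such a filter in an aws-nuke config, assuming EC2 instances as the resource type and a placeholder account ID:

```yaml
# Minimal sketch of an expiration_date-based filter; account ID and resource type are placeholders.
accounts:
  "000000000000":
    filters:
      EC2Instance:
        - property: "tag:expiration_date"
          type: dateOlderThan
          value: "0"   # resources whose expiration_date is still in the future are filtered (kept)
```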

For a set of resources we make use of the dateOlderThan type, which the official documentation defines as:

The identifier is parsed as a timestamp. After the offset is added to it (specified in the value field), the resulting timestamp must be AFTER the current time.

However, as aws-nuke does have its limitations (execution time, missing tags support, …), for a particular list of resources we have put in place our own nuke script to target their removal. We have made some contributions to the public aws-nuke project as well.

We have also “given more power” to Hypertester users by allowing them to forcefully destroy an environment. As we trust our developers and QA users, we know that they actually own the testing environments they spin up. For faster resolution and ease of management, we also allow them to trigger per-environment destructions in order to lower the AWS bill we all share.

Summing it all up

In this blog post, we covered how the need for Hypertester arose at our company. We explored how automation testing was done in the past and the disadvantages this had, and we showed how Hypertester solved them: in particular, the way it dynamically manages environments, its flexibility, and its ease of operation for end users.

Hypertester set in motion E2E automated infrastructure management for testing. Over the past five years, there has been a major increase in demand for roles such as DevOps engineers and automation testers, and developing Hypertester has combined skills from both fields.

Hypertester wouldn’t be a reality if it hadn’t been for the support and guidance of Vitali Baruh, whose seniority has been of great help not only for this project but for many others, too.

If maintaining and extending Hypertester’s capabilities sounds like fun, you might be a good fit for one of our departments, so it might be worth checking our open Engineering positions here.

Stoimen is a DevOps Engineer located in our Sofia office. Connect with him on LinkedIn.