Running Docker Compose with Testcontainers for Spock Tests

Improving team collaboration by eliminating the toil of manually executing scripts before and after integration tests run.

Introduction.

In this blog post, I share how we eliminated the toil of manually starting and stopping the integration test resources (databases and message queues) exposed by Docker Compose: Testcontainers now starts and stops Docker Compose automatically whenever the tests run.

Overview of the Docker Compose Approach.

Our microservices are based on Spring Boot and use either Amazon DynamoDB or PostgreSQL as a datastore, with a combination of AWS SNS and SQS for messaging.

For tests, we use the Spock Framework in conjunction with Docker Compose for the integration resources. Below is a typical docker-compose.yml as defined in one of our projects.

services:
  localstack:
    image: localstack/localstack:0.14
    environment:
      - SERVICES=dynamodb,sqs,sns
      - DEFAULT_REGION=eu-central-1
    volumes:
      - ${PWD:-.}/infra/local:/docker-entrypoint-initaws.d
    ports:
      - "4566:4566"

Here LocalStack exposes emulations of Amazon DynamoDB, SNS and SQS, all reachable on port 4566. To start it, one executes the following docker-compose command.

docker-compose -f docker-compose.yml up localstack &

With the above setup, when the integration tests execute, the services exposed by Compose are reachable on port 4566. These services run indefinitely until they are explicitly and manually stopped with the following docker-compose command.

docker-compose down

Between the docker-compose up and down commands above, one can run the integration tests as many times as needed without having to start or stop these services manually.

Problem.

Our different microservices have different needs for the integration resources:

  • They use different databases: Amazon DynamoDB or PostgreSQL.

  • They have different database schemas.

  • They have bespoke dependencies not needed in other projects, e.g. one project needs Elasticsearch while another does not.

  • They even use different "emulators" for integration resources, e.g. one project makes use of GoAws instead of LocalStack for AWS resources.

As a result, each project has a different docker-compose setup, which makes switching between projects during the day difficult. Whenever one switches, one needs to remember to stop the dependencies of the previous project and start the ones the new project needs. In most cases, it takes a dependency-related failure in the new project to remind one that the wrong dependencies are still running. This discourages team members from switching projects during the day.

Solution.

To overcome the cognitive load associated with manually starting and stopping Docker Compose, we turned to Testcontainers' Docker Compose Module.

This module is intended to be useful on projects where Docker Compose is already used in dev or other environments to define services that an application may be dependent upon.
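
As a rough sketch of the wiring, assuming a Gradle build: the Docker Compose support ships with the Testcontainers core artifact, so a single test dependency is enough (the version below is illustrative).

dependencies {
    // Core Testcontainers artifact; includes DockerComposeContainer
    testImplementation 'org.testcontainers:testcontainers:1.19.7'
}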

Approach.

The biggest consideration for the chosen approach was: how can we introduce the Testcontainers Docker Compose Module with minimal effort?

  • Reduce the blast radius of the changes that introduce Testcontainers. Is it possible to achieve this capability without updating every test spec?

  • Simplify the implementation to promote the reusability of the solution. Can we port the implementation to other projects quickly and with minimal disruption?

  • Easy to understand. Can one find the implementation in one file and understand the gist of the solution?

Implementation.

The Spock Framework allows writing custom extensions. In this case, we implemented a global extension that starts the bespoke set of services specified in a docker-compose.yml file.

import org.spockframework.runtime.extension.IGlobalExtension
import org.spockframework.runtime.model.SpecInfo
import org.testcontainers.containers.DockerComposeContainer
import org.testcontainers.containers.wait.strategy.Wait

class BootstrapDockerComposeTestcontainer implements IGlobalExtension {

    @Override
    void visitSpec(SpecInfo spec) {
        if (spec.isAnnotationPresent(IntegrationSpec.class)) {
            // Referencing the constant loads the enum, which starts Compose exactly once
            BootstrapDockerCompose.RUN
        }
    }

    private enum BootstrapDockerCompose {
        RUN;

        // Runs only once, when the enum is first loaded
        BootstrapDockerCompose() {
            new DockerComposeContainer(new File("docker-compose.yml"))
                .withExposedService("localstack_1", 4566, Wait.forListeningPort())
                .withLocalCompose(true)
                .start()
        }
    }
}

The above snippet translates to: whenever a running spec is annotated with @IntegrationSpec, start and expose the services defined in docker-compose.yml. @IntegrationSpec is an existing custom annotation we were already using to identify such tests.
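
For readers without such an annotation, here is a minimal sketch of what a runtime-retained marker annotation and an annotated spec might look like; the names are illustrative and our actual annotation is not shown here.

import java.lang.annotation.ElementType
import java.lang.annotation.Retention
import java.lang.annotation.RetentionPolicy
import java.lang.annotation.Target

import spock.lang.Specification

// Runtime-retained marker annotation (illustrative)
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface IntegrationSpec {}

// Any spec carrying the marker gets the Compose services started before it runs
@IntegrationSpec
class OrderRepositoryIntegrationSpec extends Specification {

    def "talks to the LocalStack-backed services"() {
        expect:
        true // real assertions would exercise DynamoDB, SNS and SQS on localhost:4566
    }
}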

To ensure that the Docker Compose container is started only once, it is started in the constructor of a private enum. This is a well-known approach for implementing thread-safe singletons in Java, from Joshua Bloch's seminal book Effective Java. Also note that the containers are only started when the first @IntegrationSpec-annotated spec is encountered.
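
One wiring detail worth noting: Spock discovers global extensions through the standard Java service-loader mechanism, so the extension class still needs to be registered in a provider-configuration file on the test classpath, typically under src/test/resources (the package name below is illustrative).

# META-INF/services/org.spockframework.runtime.extension.IGlobalExtension
com.example.testing.BootstrapDockerComposeTestcontainer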

What's New?

What does this afford us? Let me answer from my personal experience of working on projects based on this new approach.

It has now become extremely cheap to switch projects during the day and experiment. When I am reviewing a valuable PR, I can now provide informed feedback because I can:

  • Run "what-if" scenarios on the branches concerned, using the tests. For example:

    • Deliberately comment out lines of code to understand the effect.

    • Change input values to tests to understand the effect.

    • Re-implement logic of interest to test or confirm my understanding. This is mostly throwaway code, written purely for my own education.

  • Write test cases on the branch that prove or disprove my hypothesis, before providing feedback.

The capability of running a different project on my local machine with minimal disruption has drastically improved the quality of the feedback I provide to my colleagues. I can now easily provide informed, actionable and relevant feedback.

Conclusion.

This was not a step-by-step guide on how to orchestrate Docker Compose services using Testcontainers. Instead, this post shares (i) a high-level description of a common problem and how it was solved considering our specific needs, and (ii) how a seemingly trivial improvement had a compound effect on how I collaborate.