Modern Web Development Setup – The Testing Environment (1/2)

Testing and Deployment Overview

Before setting up the needed environments, it’s good to define the high-level targets.

The general idea in this setup is to automate deployments and testing where feasible, while keeping the system simple enough to implement and maintain. The target is to set up the system so that acceptance testing and production environment updates are triggered manually even though the deployments themselves are automated.

Development Environment

The local development environment is where the development is done. Pull-requests, merges and other actions trigger jobs in the CI system.

Version Control System

The version control system keeps track of the source code and other content changes and triggers CI jobs based on configured rules.

Continuous Integration

Component-specific unit and API tests are run by the continuous integration service. If the tests pass, the CI also builds and pushes container images to a container registry and deploys the new components to the testing environment. The target is to also trigger the automated acceptance tests to see whether the changes affect the system as a whole.

Container Registry

The container registry stores and shares the container images built by the CI service.

Testing Environment

The testing environment is the first environment where the whole system runs in a similar manner as in production. The environment is meant for various testing activities and the system state isn’t automatically reset when the components are updated.

Acceptance Testing Continuous Integration

This environment is meant for running automated acceptance tests. The system starts from the initial state for every test round.

Acceptance Testing Environment

The environment for performing additional manual acceptance tests before the service is accepted for the production deployment.

Production Environment

The environment where the service is available for the end users.

The target for the development and deployment architecture is set for now. Later in the acceptance test related articles we’ll see how close to the target we actually get ;).

Testing Environment Setup

In the following chapters we’ll take a few new web services into use. Now is a good time to create accounts for Kontena, UpCloud, Docker Hub, GitLab and Shippable before continuing forward.

UpCloud and Kontena Setup

The setup will be based on the Kontena container and microservices platform and hosted on the UpCloud IaaS cloud platform.

So let’s install the Kontena CLI application. For macOS the installation procedure is very straightforward: we’ll download the latest stable Kontena CLI release, open the downloaded .pkg file, and follow the installer guidelines to complete the installation.

After the installer program completes, we check the installed Kontena version and log in to Kontena Cloud:
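
Something like the following should do it (the exact subcommands may vary slightly between CLI versions):

    # verify the CLI installation
    kontena version
    # sign in with the Kontena Cloud account created earlier
    kontena cloud login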

It’s possible to use already existing ssh keys, but here we create a new key pair to be used with the UpCloud servers and set the key file name to id_rsa_upcloud:
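
A standard ssh-keygen run does the trick:

    # create a new RSA key pair for the UpCloud servers
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_upcloud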

The two commands below do quite a lot for us; we’ll end up with two CoreOS-based servers in UpCloud: one is the master and the other is a node. The master is used for controlling the node(s), and all of our application containers will run on the node server.

With the used parameters we’ll create the servers in Frankfurt with the default pre-configured option (1 CPU, 1 GB RAM & 30 GB storage at the time of writing), but these parameters can be modified.

Let’s venture forth by creating the master server:
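
With the UpCloud plugin installed, the master creation looks roughly like this (the exact flag names may differ between plugin versions):

    # install the UpCloud provisioning plugin
    kontena plugin install upcloud
    # create the master server into the Frankfurt (de-fra1) zone
    kontena upcloud master create \
      --username <upcloud-api-username> \
      --password <upcloud-api-password> \
      --ssh-key ~/.ssh/id_rsa_upcloud.pub \
      --zone de-fra1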

And then the node server:
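
And roughly like this for the node (again, the flags may vary by plugin version):

    # create a node server and attach it to the grid
    kontena upcloud node create \
      --username <upcloud-api-username> \
      --password <upcloud-api-password> \
      --ssh-key ~/.ssh/id_rsa_upcloud.pub \
      --zone de-fra1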

Initially Kontena CLI will keep reminding us that the server is using a certificate signed by an unknown authority. It’s possible to set the SSL_IGNORE_ERRORS=true environment variable, set a proper certificate for the Kontena master, or copy the default self-signed server certificate from the master into a ~/.kontena/certs/<master_ip_address>.pem file. We’ll go with the last option:
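
One way to capture the certificate is to read it straight off the master with openssl, assuming the master answers on the default HTTPS port 443:

    # store the master's self-signed certificate for the Kontena CLI
    openssl s_client -connect <master_ip_address>:443 -showcerts </dev/null 2>/dev/null \
      | openssl x509 > ~/.kontena/certs/<master_ip_address>.pem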

Just to be sure Kontena CLI is working now properly, let’s give it a go:
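
Listing the known masters and the grid’s nodes is a simple enough check:

    # list the master(s) and the node(s) of the current grid
    kontena master list
    kontena node list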

After the servers have been created, we can access them via ssh as the root user if needed:
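
For example, using the key pair created earlier:

    ssh -i ~/.ssh/id_rsa_upcloud root@<server_ip_address>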

Docker Image Registry Config

Docker Hub is used for hosting the Docker images created in these articles. For them we created an articleprojects organisation with front-service, app-service and auth-service repositories under it.

Once the setup is done, let’s log in to Docker:
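
A plain docker login with the Docker Hub account credentials is enough:

    docker login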

Dockerfiles

The Dockerfiles we defined in the previous article won’t work outside the local development environment. We need to define new files for the production environment.

Dockerfile.prod for the Front Service:
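
The complete file is in the repository; a minimal sketch of its shape could look like the following (the base image, port and start command here are assumptions):

    FROM node:6.10.2
    WORKDIR /usr/src/app
    # install only the production module dependencies
    COPY package.json .
    RUN npm install --production
    # copy the sources into the image instead of mounting them like in development
    COPY . .
    # run as the unprivileged node user provided by the official image
    USER node
    EXPOSE 3000
    CMD ["npm", "start"]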

Compared to the development version, the major differences are:

  • Source files are copied into the container
  • Only production module dependencies are installed
  • Containers are run by the node user instead of root

The Dockerfile.prod files for the App and Auth Services are almost identical and available in GitLab.

Smoke Test Deployment

We’ll do a smoke test run to make sure the basic scripts are working and we’re able to run our containers in the testing environment.

Let’s begin by defining the smoke test deployment descriptor for Kontena:
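
A trimmed sketch of what such a descriptor can look like is shown below; the stack, service, volume and vault key names are illustrative, and the App and Auth Services would follow the same pattern as the Front Service:

    stack: articleprojects/testing-stack
    version: '0.1.0'
    services:
      internet_lb:
        image: kontena/lb:latest
        ports:
          - 80:80
      front:
        image: articleprojects/front-service:1
        environment:
          # values needed by the Kontena load balancer
          - KONTENA_LB_INTERNAL_PORT=3000
          - KONTENA_LB_VIRTUAL_HOSTS=example.com,www.example.com
        links:
          - internet_lb
      app_db:
        image: postgres:9.6
        environment:
          - POSTGRES_USER=postgres
          - POSTGRES_DB=database_prod
        secrets:
          # read the database password from the Kontena Vault
          - secret: APP_DB_PASSWORD
            name: POSTGRES_PASSWORD
            type: env
        volumes:
          - app-db-data:/var/lib/postgresql/data
    volumes:
      app-db-data:
        external: true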

With the file above we defined the following things:

  • The service stack including the used services
  • Environment variables read from the Kontena Vault
  • Load balancer connections: to connect a service to the Kontena load balancer, we pass some values via environment variables and link the service with the load balancer service
  • Volumes for the database services: volumes aren’t strictly needed in the testing environment, but we use them here too to keep the environments consistent

Speaking of the volumes, we need to define them before the stack can be installed:
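
With kontena volume create this is a one-liner per volume (the volume names are illustrative; driver and scope depend on the needs):

    kontena volume create --driver local --scope instance app-db-data
    kontena volume create --driver local --scope instance auth-db-data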

In the testing, acceptance testing and production environments we’re going to refer to the images by their tagged version numbers instead of tags like latest. So let’s build the components and tag them with version 1:
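
For example for the Front Service, and similarly for the App and Auth Services:

    docker build -f Dockerfile.prod -t articleprojects/front-service:1 .
    docker push articleprojects/front-service:1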

Let’s also write the used image version numbers and domain names into Kontena Vault to make them available for the deployment descriptor:
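
Something along these lines; the key names just need to match the ones used in the deployment descriptor:

    kontena vault write FRONT_IMAGE_VERSION 1
    kontena vault write APP_IMAGE_VERSION 1
    kontena vault write AUTH_IMAGE_VERSION 1
    kontena vault write FRONT_DOMAIN example.com
    kontena vault write APP_DOMAIN app.example.com
    kontena vault write AUTH_DOMAIN auth.example.com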

We’ll use example.com here for presentation purposes, but any domain will do as long as you have access to its DNS records. We set A records for the app.example.com, auth.example.com and example.com names to point to our node server’s IP address, and a CNAME record to direct www.example.com to example.com. Depending on the TTL and other factors, it usually takes from a few minutes to a few hours for the changes to take effect.
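
In zone file terms the records look roughly like this:

    example.com.       A      <node_ip_address>
    app.example.com.   A      <node_ip_address>
    auth.example.com.  A      <node_ip_address>
    www.example.com.   CNAME  example.com.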

We also need to set some configuration for the PostgreSQL database services. Here we’ll use postgres as the username, database_prod as the database name and unique strong passwords for both services:
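
For example, with illustrative vault key names:

    # generate a unique strong password for each database service
    kontena vault write APP_DB_PASSWORD "$(openssl rand -base64 24)"
    kontena vault write AUTH_DB_PASSWORD "$(openssl rand -base64 24)"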

Now, let’s give it a try:
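
Installing the stack from the descriptor should bring everything up:

    kontena stack install kontena.yml
    # check that all the services reach the running state
    kontena service list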

In case everything went fine and the DNS changes are in place, all the services should be available via their respective domains. It’s good to make sure everything is working well at this point, before continuing to the SSL/TLS certificate configuration.

SSL/TLS Certificates via Let’s Encrypt

Kontena provides built-in integration with Let’s Encrypt for easy certificate management. Let’s start by registering an email address. Let’s Encrypt will connect the given email to the domain and send reminders about expiring domain certificates to the address:
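
Registration is a one-liner (the email address is a placeholder, and the exact subcommand may vary by CLI version):

    kontena certificate register <email_address>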

The domain authorisation is DNS-based, so we need to tweak the DNS records once again. Since Let’s Encrypt doesn’t support wildcard certificates, we need to do the authorisation for all four domains (example.com, www.example.com, app.example.com and auth.example.com).

We’ll repeat the following authorisation part for each domain and add a TXT record using the name and content information the command returns:
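
For example for the naked domain (again, the subcommand may vary by CLI version):

    kontena certificate authorize example.com
    # => add a TXT record with the name and content the command prints out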

After completing the above steps for all four domains we’ll request a single combined certificate:
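
With the DNS authorisations in place, something like this requests one certificate covering all four domains:

    kontena certificate get --secret-name SSL_LOAD_BALANCER_CERT \
      example.com www.example.com app.example.com auth.example.com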

Kontena automatically creates the following vault keys with the cert-related content so we don’t need to handle the cert using any files:

  • SSL_LOAD_BALANCER_CERT_PRIVATE_KEY
  • SSL_LOAD_BALANCER_CERT_CERTIFICATE
  • SSL_LOAD_BALANCER_CERT_BUNDLE

For our Kontena load balancer, SSL_LOAD_BALANCER_CERT_BUNDLE is the vault key we need. To enable the SSL certificate we pass the bundle to the load balancer as an SSL_CERTS secret. To force the http -> https redirect we use the KONTENA_LB_CUSTOM_SETTINGS environment variable, and we also open port 443 on the load balancer. Here is the config at this point in GitLab:
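
The load balancer part of the descriptor might then look roughly like this:

    internet_lb:
      image: kontena/lb:latest
      ports:
        - 80:80
        - 443:443
      secrets:
        # pass the certificate bundle to the load balancer
        - secret: SSL_LOAD_BALANCER_CERT_BUNDLE
          name: SSL_CERTS
          type: env
      environment:
        # a raw HAProxy directive forcing the http -> https redirect
        - KONTENA_LB_CUSTOM_SETTINGS=redirect scheme https code 301 if !{ ssl_fc }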

So let’s give it a try:
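
An upgrade of the existing stack applies the changes (the stack name comes from the descriptor):

    kontena stack upgrade testing-stack kontena.yml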

If things went as they should, our services are now served over HTTPS. We need to check that all three services are available via their https addresses, and also make sure the http -> https redirection works by trying out the unsecured http addresses. Good!

Let’s Encrypt certificates expire after three months, and at the moment Kontena doesn’t handle the renewal automatically. However, I’ve understood that the people at Kontena are considering support for the http-based domain validation model, which would allow automatic certificate renewals.

Anyway, the certificate renewal can be done by redoing the authorisation via DNS and then running kontena certificate get --secret-name… again to update the vault variables with the new certificates. Once the certificates are updated, the Kontena load balancer should take them into use automatically.

Basic Auth

Since the testing environment is essentially available to any internet user, we’re going to set up basic auth to give it some protection against unwanted visitors. Depending on the situation better protection may be needed, but this is a good start.

In this example we set user1:pass1234 credentials for the Front, App and Auth Services:
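
The credentials go into the vault in the userlist format the load balancer understands (the vault key name is illustrative):

    kontena vault write BASIC_AUTH_CREDENTIALS "user user1 insecure-password pass1234"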

We enable the basic auth feature by passing the credentials to the services as a KONTENA_LB_BASIC_AUTH_SECRETS secret. Encrypted passwords are also supported; more information about those is available in the Kontena load balancer documentation.
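
In the descriptor the vault value is mapped to that secret for each service behind the load balancer, roughly like this:

    front:
      # ...
      secrets:
        - secret: BASIC_AUTH_CREDENTIALS
          name: KONTENA_LB_BASIC_AUTH_SECRETS
          type: env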

Our Kontena configuration for the testing environment is now ready – and available in GitLab – so let’s give it a shot and make sure the basic auth is working as expected:
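
After another stack upgrade, curl is a handy way to verify the responses (expect 401 without credentials and 200 with them):

    kontena stack upgrade testing-stack kontena.yml
    curl -I https://example.com
    curl -I -u user1:pass1234 https://example.com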

Continuous Integration Service Setup

Continuous Integration (CI) Overview

The idea is to run component-specific tests for every master branch change in the front and back end service code repositories. If the code passes the tests and other requirements, the system builds new images, pushes them to Docker Hub, and commands Kontena to update the testing environment service stack with the latest containers.

Depending on the component, the CI tests will include unit tests, API tests, test coverage checks and ESLint coding convention checks.

GitLab Integration

A GitLab access token is needed to grant Shippable access to the GitLab repositories and to automatically trigger CI builds when merges, commits and pull-requests happen. The access token can be created in the User Settings -> Access Tokens menu in GitLab. With the access token it’s possible to enable the integration in Shippable via Account Settings -> Integrations.

After creating the integration, the synced projects and repositories have to be selected in Shippable. Clicking the Subscriptions hamburger menu in the top left corner reveals the list of available Git repo groups/projects/repos. After selecting a group from the list, the related repositories are shown on the screen. By clicking Enable for a repository, the integration between Shippable and GitLab is enabled for it. If the expected Git projects aren’t visible in the Subscriptions list, the Sync button in Account Settings may solve the issue.

Docker Hub Integration

Docker Hub credentials are needed for enabling the Shippable integration. The integration can be added via Account Settings -> Integrations by creating a new Docker integration. It must also be enabled for each repository group or for specific repositories.

Shippable Configurations

Let’s start by creating a never-expiring token for accessing the Kontena master via the CLI, since we’ll need it in the following shippable.yml scripts:
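
Something like this should do it (the flag names may vary by CLI version):

    # 0 = the token never expires
    kontena master token create --expires-in 0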

Next we define the shippable.yml configuration for the Front Service:
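
The complete file lives in the repository; a sketch of its shape could look like the following, where the repo, version, stack and integration names are placeholders and the encrypted values would be generated with Shippable’s Encrypt tool:

    language: node_js

    node_js:
      - 6.10.2

    branches:
      only:
        - master

    env:
      global:
        - DOCKER_REPO=articleprojects/front-service
        - VERSION=1
        # encrypted variables, e.g. the Kontena master address and the
        # access token created above
        - secure: <encrypted-blob>

    build:
      ci:
        - npm install
        - npm test
      post_ci:
        # Kontena CLI is a Ruby gem (Ruby installation omitted here)
        - gem install kontena-cli
        - >
          if [ "$IS_PULL_REQUEST" != true ]; then
            docker build -f Dockerfile.prod -t $DOCKER_REPO:$VERSION . &&
            docker push $DOCKER_REPO:$VERSION &&
            kontena service update --image $DOCKER_REPO:$VERSION testing-stack/front &&
            kontena service deploy testing-stack/front;
          fi

    integrations:
      hub:
        - integrationName: docker-hub-integration
          type: docker
      notifications:
        - integrationName: email-updates
          type: email
          on_success: never
          on_failure: never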

The configuration file defines the following things:

  • The CI environment is a Node.js environment with Node version 6.10.2
  • The script is run only for master branch related activities
  • A few visible environment variables as well as secret environment variables are set
  • npm test runs the tests for the Front Service
  • Ruby (for Kontena CLI) and Kontena CLI are installed
  • If the tests pass and the CI run isn’t a pull-request related run, the container is built and pushed to Docker Hub, and the new container is deployed to the testing environment
  • The integrations section defines the integration with Docker Hub so that Shippable knows to use the Docker Hub credentials automatically
  • Notification e-mails are disabled

Thanks to the secret environment variables, it’s quite safe to store Shippable secrets in Git, since the encryption key is specific to a Shippable user or project group. It seems to be possible to use the same secrets in projects belonging to the same repository group, but not in projects belonging to another group. The secrets can be set via the repository group’s Settings -> Encrypt menu.

For the App and Auth Services we’ll create very similar shippable.yml configurations:

The only major difference compared to the Front Service is the added PostgreSQL service, which is needed for the API tests.
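
In shippable.yml terms the addition itself is small:

    services:
      - postgres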

Kontena also provides an HTTP API, but based on the discussion on the Kontena Community Slack channel the CLI is the recommended way to access the master from CI environments; in the end the API calls would only replicate the already existing CLI functionality in a more complicated way.

Everything should now be in place for trying out the Shippable builds, so let’s commit and push those shippable.yml files into the Git repositories, and Shippable should start running CI for them. We don’t have much to test just yet, but we’ll add more code and test tools in the coming articles.

Git Strategy and Setup

There are various strategies and processes for managing branching and merging in Git repositories. I’m not going very deep into the subject here, but in my experience something along the lines of Git Flow, GitHub Flow or GitLab Flow is the most common choice.

In the article setup we’re going to keep things simple and configure a lightweight workflow resembling GitHub Flow, with a few key principles in mind:

  • One constant branch: master
  • Content that is merged to master is meant to be released immediately or very soon
  • Direct pushes into master are forbidden, only merges via pull-requests are allowed
  • Merges to master are possible only after successful CI builds
  • Master is automatically tested when a merge operation is triggered

In coming articles we may complement the workflow with acceptance testing and production environment related details.

Enforcing Workflow Rules

It’s very straightforward to configure simple workflow rules in GitLab. The following settings are set for Front, App and Auth Service repositories via Settings:

General tab:

  • Activate merge request approvals is checked; the details like approvers and required approval count can be set to meet different needs
  • Only allow merge requests to be merged if the pipeline succeeds is checked

Repository tab:

  • The master branch is set to be protected; merge permissions are given to Developers + Masters and push permissions to No one

Trial Run

The CI should now be in place, so it’s time to see if the system is working as expected. Let’s create a new feature branch in the Front Service repository and change the Howdy there! text a bit:
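
For example (the branch name is just illustrative):

    git checkout -b feature/new-greeting
    # edit the greeting in the source, e.g. "Howdy there!" -> "Hi there!"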

Then we’ll commit the change and make a pull-request to master in GitLab. At this point we’ll see that a Shippable CI job is triggered to make sure the proposed PR won’t break the tests. We don’t have any tests in place yet, so the CI job should run through successfully.

Once the pull-request related CI job has finished, we’ll merge the PR into master. This triggers another CI run, which should also pass, push the Docker image to Docker Hub and update the testing environment.

Once that CI job is finished, we’ll check that the Hi there! text is also visible in the browser.

Summary

We’ve now set up a testing environment together with the CI integration. We’ve also set some Git rules to guide us in the upcoming development endeavours. There are still quite a few things to be done before we’re ready to deploy the services into the production environment, but step by step we’re getting there.

In the next Testing Environment (2/2) article we’ll take a closer look into component and system level testing and do some test planning.
