Modern Web Development Setup – The Local Development Environment

Alrighty, time to get something concrete done! But let’s check a few things before creating the first versions of the service components.

Conventions, Practices & Configurations

Coding Conventions

I’m using the Atom code editor and ESLint with Airbnb’s React coding conventions. There are other convention alternatives also available, but I’ve been happy with these after a few minor modifications.

The setup is needed for the Front, App and Auth Services. On macOS, the following commands in the terminal will do the magic and install the modules along with the needed dependencies:
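As a sketch, something like the following (the module list is assumed from the peer dependencies of Airbnb’s ESLint config at the time):

```shell
npm install --save-dev eslint eslint-config-airbnb \
  eslint-plugin-import eslint-plugin-react eslint-plugin-jsx-a11y
```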

For Atom, linter, linter-eslint and language-babel packages provide the needed features – and the default language-javascript package can be disabled – but then again it’s possible to spend days just configuring Atom.

Configs for Next.js Services

For the Next.js-based Front Service, the following ESLint configurations are used:
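A minimal sketch of such a .eslintrc – the Airbnb base plus the jest env and a couple of hypothetical relaxations:

```json
{
  "extends": "airbnb",
  "env": {
    "browser": true,
    "jest": true
  },
  "rules": {
    "react/jsx-filename-extension": ["warn", { "extensions": [".js", ".jsx"] }],
    "semi": ["error", "always"]
  }
}
```

The exact rules to relax are a matter of taste; the two above are only examples.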

The .eslintrc above will take Airbnb’s coding conventions as a base, but relax them slightly. Jest env is also set here for upcoming Jest test suite needs.

With the .eslintignore below, the soon-to-be-needed Sequelize-related files and folders, as well as the configuration and Shippable-related folders, are ignored:
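For example (the folder names here are illustrative, not the project’s actual ones):

```
# Sequelize-related files and folders
migrations/
seeders/
# configuration and Shippable-related folders
config/
shippable/
```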

Configs for Node.js Services

For the App and Auth Services, the ESLint .eslintrc & .eslintignore configs are slightly different. For example, console print commands are allowed, since at this point a proper error and log management solution isn’t in place yet. These details will be addressed in the coming articles.
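A sketch of such a .eslintrc, assuming the airbnb-base variant for the non-React services and the no-console relaxation mentioned above:

```json
{
  "extends": "airbnb-base",
  "env": {
    "node": true,
    "jest": true
  },
  "rules": {
    "no-console": "off"
  }
}
```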

Project Structures

This belongs to a similar category with coding conventions; it’s just good to agree on a project structure and use it with all your projects to make DevOps and CI work smoothly in the future.

I’ve used role-based project structures in the past, but after trying out feature-based project structures while implementing these example applications, I found that they feel more natural. Therefore the example services are made using feature-oriented project structures.

Next.js Front Service:
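A hypothetical illustration of what a feature-oriented Next.js project can look like (not necessarily the exact structure of the example repo):

```
front-service/
├── pages/
│   └── index.js
├── components/
├── static/
├── Dockerfile
├── .dockerignore
└── package.json
```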

Node.js App and Auth Services:
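And a hypothetical feature-oriented layout for the Node.js services, grouping files by feature instead of by role:

```
app-service/
├── features/
│   └── status/
│       ├── index.js
│       └── status.test.js
├── index.js
├── Dockerfile
├── .dockerignore
└── package.json
```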

It’s also natural for the project structure to evolve during the project, especially if your service ends up becoming large-scale and popular. But then again, if the individual services are kept relatively simple, the project structure may not become an issue.

Component Configuration

As the third rule of The Twelve-Factor App states, the target is to use environment variables for configuring the containers in various environments, to allow changes to be made quickly. In the end, it’s easier to change an environment variable and redeploy than to build new Docker images, for instance. As an exception to this, config files will be used for setting up the local development environment.
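The idea in a minimal Node.js sketch – environment variables first, development defaults as a fallback (the variable names are illustrative, not the services’ actual ones):

```javascript
// Twelve-factor style configuration: read values from environment
// variables and fall back to local development defaults when unset.
// The variable names below are examples only.
const config = {
  port: parseInt(process.env.PORT || '3000', 10),
  databaseUrl: process.env.DATABASE_URL || 'postgres://localhost:5432/dev'
};

console.log(config);
```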

Domain Use

Instead of using IP addresses directly, the system is set up in such a way that domain names can be used in all environments, to keep the environments consistent. Domains are also required for proper SSL certificates. In this case, self-signed certificates will be used in the local development environment.

In these articles, a free DNS service is used for making the local environment setup a bit more straightforward: it can be utilised to access proper(ish) domain names which resolve to local addresses. This way it’s possible to avoid the need to set up special domain names for development environments. Such domains also work with self-signed certificates. In cases where it’s not possible – for security or other reasons – to rely on external services, the preferred host names can be set via the /etc/hosts file.
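In the /etc/hosts case, the entries could look like this (the domain names are placeholders):

```
127.0.0.1  dev.example.test
127.0.0.1  app.dev.example.test
127.0.0.1  auth.dev.example.test
```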

Service Components Implementation

Sources for the components are available in their repositories and updated together with the released articles. The projects must be cloned so that they sit parallel to each other in the file system, to make the deployment scripts in the Deployment-Scripts project work in the local development environment:

For the database components, the environment variables are enough for configuring the official PostgreSQL Docker images.

Alpine-based Docker images are used where possible, mostly due to their smaller size. Personally, I also like the idea of simple and minimised containers, but my knowledge of Alpine images is quite superficial. I haven’t run into any trouble so far, so I’ve been happy to use them when available.

Requirements for the development environment:

Later on, a plethora of node modules will be installed via npm. Most likely, yarn would work as well; the CI scripts would need to be modified accordingly, in that case.

Disclaimer: I’m using macOS Sierra and Docker for Mac and it’s possible some things won’t work out-of-the-box with other operating systems.

Reverse Proxy and Load Balancer

Let’s start with the Proxy and Load Balancer component, which will be based on HAProxy. There is no need to do any coding for it, but the configs have to be customised for the development environment’s needs. For consistency, it’s good to configure SSL termination for the local development environment as well. The docker-compose system will be configured in such a way that the individual containers can also be connected to directly, without a proxy and without a secure connection, but all connections via the proxy will function in a similar way as in production.

We’ll start by adding a Dockerfile into the root folder of an empty project for the Proxy Service:
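A sketch of such a Dockerfile, assuming the config and the certificate bundle live under ./docker/haproxy/ (the paths are assumptions):

```dockerfile
FROM haproxy:alpine

# Custom config and the self-signed certificate used for SSL termination
COPY docker/haproxy/haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
COPY docker/haproxy/dev.pem /etc/ssl/private/dev.pem
```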

The created image is based on the official HAProxy Alpine build, and the only changes are the config file changes and an added self-signed certificate.

To create the needed HAProxy config, we’ll need the following ./docker/haproxy/haproxy.cfg file:
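A sketch of such a config – the host names, backend names, and ports are assumptions chosen to match the rest of this setup:

```
global
    maxconn 256

defaults
    mode http
    timeout connect 5s
    timeout client  50s
    timeout server  50s

# Redirect all plain HTTP to HTTPS
frontend http-in
    bind *:80
    redirect scheme https code 301

# SSL termination: traffic continues unencrypted to the backends
frontend https-in
    bind *:443 ssl crt /etc/ssl/private/dev.pem
    acl host_app  hdr_beg(host) -i app.
    acl host_auth hdr_beg(host) -i auth.
    use_backend app-backend  if host_app
    use_backend auth-backend if host_auth
    default_backend front-backend

backend front-backend
    server front1 front:3300

backend app-backend
    server app1 app:3000

backend auth-backend
    server auth1 auth:3100
```

The server host names (front, app, auth) assume matching service names in the docker-compose configuration.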

Skipping over the details, the config defines following things:

  • SSL termination: traffic from HAProxy to the backend servers continues unencrypted
  • Traffic to the app and auth subdomains is directed to the respective App and Auth Services, while the rest goes to the default Front Service
  • HTTP traffic is redirected to HTTPS for all services

To generate certificates acceptable to the latest Chrome versions, subject alternative name fields have to be used. For compatibility reasons, it’s also good to use one of your DNS names as the CN:
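One way to do this with a reasonably recent OpenSSL (1.1.1 or newer, which supports -addext; all domain names below are placeholders):

```shell
# Self-signed cert with subjectAltName entries for every served domain;
# the CN is set to one of the same DNS names for compatibility.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout dev.key -out dev.crt \
  -subj "/CN=dev.example.test" \
  -addext "subjectAltName=DNS:dev.example.test,DNS:app.dev.example.test,DNS:auth.dev.example.test"

# HAProxy expects the certificate and the private key in a single pem file
cat dev.crt dev.key > dev.pem
```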

In addition to creating the required certificate, the certificate has to be set as trusted for the system in the macOS Keychain to make Chrome happy:

  • Open the Keychain Access
  • Drag and drop the previously created cert file into Keychains->System category
  • Once the cert is in the Keychain, double click it and in the Trust section, set the cert to be always trusted

Front Service

For the Front Service, we use npm, so let’s start by initialising the project:
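The init itself:

```shell
npm init
```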

Default values are fine, but it’s good to write something for the description and author fields.

To set up Next.js, we need a few modules:
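Presumably along these lines:

```shell
npm install --save next react react-dom
```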

For showing some text, we create the pages folder and add index.js into it:
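A minimal sketch of such a page – Next.js maps pages/index.js automatically to the / route (the text is just a placeholder):

```js
export default () => <div>Front Service</div>;
```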

To run the Next.js service properly, we need to modify the package.json file’s scripts section to contain the following:
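For example (the exact flags are assumptions; port 3300 matches the Docker section below):

```json
"scripts": {
  "start": "next start -p 3300",
  "start:dev": "next -p 3300",
  "build": "next build"
}
```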

Docker related files

For development purposes, let’s make a simple container. Next.js automatically provides hot code reloading when the local project folder is later linked into the container’s src folder using Docker Compose. With the following file we install the node modules, expose port 3300 and start the server with npm run start:dev.
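A sketch of such a development Dockerfile (the Node base image and paths are assumptions):

```dockerfile
FROM node:alpine

WORKDIR /src

# Install dependencies first to make use of the Docker build cache
COPY package.json ./
RUN npm install

COPY . .

EXPOSE 3300

CMD ["npm", "run", "start:dev"]
```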

And .dockerignore:
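For example:

```
node_modules
.next
.git
*.log
```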

And the rest

Let’s add a .gitignore file into the project root:
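For example:

```
node_modules/
.next/
npm-debug.log
```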

We also set up the .eslintrc and .eslintignore files according to the Coding Conventions chapter guidelines.

With Shrinkwrap, we can avoid possible issues regarding modules changing dependencies. Shrinkwrap will create npm-shrinkwrap.json and lock down the versions of the modules’ dependencies, so that later on in different environments, the result of npm install should stay the same. So let’s take the npm shrinkwrap into use:
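The command itself:

```shell
npm shrinkwrap
```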

With the following command, we should be able to start the service and access it locally:
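For example, without Docker for now (assuming the start:dev script from the scripts section):

```shell
npm run start:dev
```

The service should then respond on port 3300.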

App Service

With the App Service, we start with npm init, like with the Front Service project. After that, we create an index.js which starts a restify server without too many bells and whistles.

Let’s also install the node-dev module as a dev dependency. With node-dev, source code changes will restart the server and load the changes, which is especially useful when running the code in containers:
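For example:

```shell
npm install --save-dev node-dev
```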

To create a simple server, we’ll add the following index.js file into the project root folder:
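A minimal sketch of such a server – the restify calls are the library’s standard ones, and the code is laid out so that the port sits on line 3 and the service name on line 5, matching the references in the Auth Service chapter:

```js
const restify = require('restify');

const port = 3000; // line 3: the port of the service

const server = restify.createServer({ name: 'App Service' }); // line 5: the name

server.get('/', (req, res, next) => {
  res.send({ name: server.name });
  return next();
});

server.listen(port, () => {
  console.log('%s listening at %s', server.name, server.url);
});
```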

Let’s also slightly edit the scripts part of package.json:
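Something along these lines (the test command is an assumption; it only needs to exit successfully while there are no real tests yet):

```json
"scripts": {
  "start": "node index.js",
  "start:dev": "node-dev index.js",
  "test": "eslint ."
}
```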

After these modifications, we should be able to use npm test to run the tests successfully, even if there are no real tests yet. We can also try out the server with the npm run start:dev or npm start commands:

Docker related files

The development Dockerfile for this service is also quite simple. With it, we install the node modules, expose port 3000, and start the server with npm run start:dev. Node-dev provides automatic service restarting when the code changes; related to this, we’ll later link the local project folder into the container’s src folder using Docker Compose.
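A sketch of it, similar to the Front Service’s Dockerfile but with port 3000 (the base image and paths are assumptions):

```dockerfile
FROM node:alpine

WORKDIR /src

COPY package.json ./
RUN npm install

COPY . .

EXPOSE 3000

CMD ["npm", "run", "start:dev"]
```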

And .dockerignore:
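For example:

```
node_modules
.git
*.log
```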

And the rest

Used .gitignore file:
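For example:

```
node_modules/
npm-debug.log
```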

We also set up the .eslintrc and .eslintignore files according to the Coding Conventions chapter guidelines.

And let’s not forget npm shrinkwrap:

Auth Service

At this point the Auth Service is almost the same as App Service. So we create the Auth Service just like the App Service with the following differences:

  • The name of the service in index.js:5 to Auth Service
  • The port of the service in index.js:3 and in the Dockerfile to 3100

Just to confirm the setup went fine, we also try to run the service:

Deployment Scripts

This repository contains the scripts for deploying the system in various environments.

Docker Compose is used for running the multi-container service in the local environment. The following docker-compose.yml configures the needed details for building and running the containers:
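A sketch of such a compose file – the service names, volume names, and database credentials here are illustrative, not the actual project’s values:

```yaml
version: '2'

services:
  proxy:
    build: ../proxy-service
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - front
      - app
      - auth

  front:
    build: ../front-service
    ports:
      - "3300:3300"
    volumes:
      - ../front-service:/src

  app:
    build: ../app-service
    ports:
      - "3000:3000"
    volumes:
      - ../app-service:/src
    depends_on:
      - app-db

  auth:
    build: ../auth-service
    ports:
      - "3100:3100"
    volumes:
      - ../auth-service:/src
    depends_on:
      - auth-db

  app-db:
    image: postgres:alpine
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
    volumes:
      - app-db-data:/var/lib/postgresql/data

  auth-db:
    image: postgres:alpine
    environment:
      POSTGRES_USER: auth
      POSTGRES_PASSWORD: auth
    volumes:
      - auth-db-data:/var/lib/postgresql/data

volumes:
  app-db-data:
  auth-db-data:
```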

Without going into details, the file defines the following things:

  • Volumes needed for the database services to persist the data and make it available to other services, for back-up purposes, for example
  • All the started services are defined
  • Dockerfile configs are set to build the services. Note: all the component repositories are expected to be located parallel to the deployment-scripts repo.
  • Open ports are defined for most of the services to allow direct access to them
  • Local project directories are mapped into the containers’ src directories for the Front, Auth & App Services to give them access to the local files and let them restart or reload on file changes automatically
  • A few environment variables are passed to configure the database containers

Now everything should be in place to try out the basic development environment as a whole. The container setup can be built and run with the following command:
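For example:

```shell
docker-compose up --build
```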

All the configured domains should now respond properly, and the HTTP -> HTTPS redirection should also be in place.


During development, docker-compose can be left running in one terminal window, but with certain parameters it’s also possible to run the containers in the background.
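For example, the -d flag starts the containers in detached mode:

```shell
docker-compose up -d
```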


Yay, the basic project skeleton is finally ready! Quite a lot of details were skipped on purpose in this article to keep the focus on the high level development environment idea and to reduce the length of the articles. This style will also continue in the upcoming articles. If you have any questions about the content, just post a message and I’ll do my best to answer.

The next part of the series will be about setting up a Kontena-based testing environment on UpCloud, together with CI integration. The big picture of testing, development and deployment will also be defined.

