Alrighty, time to get something concrete done! But let’s check a few things before creating the first versions of the service components.
Conventions, Practices & Configurations
Coding Conventions
I’m using the Atom code editor and ESLint with Airbnb’s React coding conventions. Other convention alternatives are also available, but I’ve been happy with these after a few minor modifications.
The setup is needed for the Front, App and Auth Services. On macOS, the following command in the terminal will do the magic and install the module along with the needed dependencies:
```shell
(
  export PKG=eslint-config-airbnb
  npm info "$PKG@latest" peerDependencies --json \
    | command sed 's/[\{\},]//g ; s/: /@/g' \
    | xargs npm install --save-dev "$PKG@latest"
)
```
For Atom, linter, linter-eslint and language-babel packages provide the needed features – and the default language-javascript package can be disabled – but then again it’s possible to spend days just configuring Atom.
Configs for Next.js Services
For the Next.js-based Front Service, the following ESLint configurations are used:
```json
{
  "extends": "airbnb",
  "rules": {
    "func-names": ["error", "never"],
    "comma-dangle": ["error", {
      "arrays": "always-multiline",
      "objects": "always-multiline",
      "imports": "always-multiline",
      "exports": "always-multiline",
      "functions": "never"
    }],
    "react/jsx-filename-extension": [1, { "extensions": [".js"] }],
    "import/no-extraneous-dependencies": ["error", { "devDependencies": ["**/*.test.js"] }]
  },
  "env": {
    "jest": true
  }
}
```
The .eslintrc above will take Airbnb’s coding conventions as a base, but relax them slightly. Jest env is also set here for upcoming Jest test suite needs.
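As a quick illustration, here’s a hypothetical snippet (not from the actual services) showing the two relaxations in isolation: `"func-names": ["error", "never"]` permits unnamed function expressions, and the `comma-dangle` setting requires trailing commas in multiline literals while forbidding them in function argument lists.

```javascript
// Unnamed function expression: fine under "func-names": ["error", "never"]
const onSave = function () {
  return 'saved';
};

// Multiline object literal: "comma-dangle" requires the trailing comma here
const config = {
  retries: 3,
  verbose: true,
};

console.log(onSave(), config.retries);
```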
With the .eslintignore below, the soon-to-be-needed Sequelize-related files and folders, as well as the configuration and Shippable folders, are ignored:
```
# /node_modules/* and /bower_components/* ignored by default

# Ignore files
models/index.js
shippable/*
**/*.json
config/*
db/*
```
Configs for Node.js Services
For the App and Auth Services, the ESLint .eslintrc & .eslintignore configs are slightly different. For example, console print commands are allowed, since at this point a proper error and log management solution isn’t in place. These details will be addressed in the coming articles.
```json
{
  "extends": "airbnb",
  "rules": {
    "func-names": ["error", "never"],
    "no-console": ["error", { "allow": ["log", "warn", "error"] }],
    "comma-dangle": ["error", {
      "arrays": "always-multiline",
      "objects": "always-multiline",
      "imports": "always-multiline",
      "exports": "always-multiline",
      "functions": "never"
    }],
    "import/no-extraneous-dependencies": ["error", { "devDependencies": ["**/*.test.js"] }]
  },
  "env": {
    "jest": true
  }
}
```
```
# /node_modules/* and /bower_components/* ignored by default

# Ignore files
models/index.js
shippable/*
**/*.json
config/*
db/*
```
Project Structures
This belongs to a similar category as coding conventions; it’s good to agree on a project structure and use it in all your projects to make DevOps and CI work smoothly in the future.
I’ve used role-based project structures in the past, but after trying out feature-based project structures while implementing these example applications, I found that they felt more natural. Therefore, the example services use feature-oriented project structures.
Next.js Front Service:
```
.
|-- config
|   |-- development.json
|-- pages
|   |-- index.js
|   |-- ...
|-- src
|   |-- feature
|   |   |-- FeatureActions.js
|   |   |-- FeatureActions.test.js
|   |   |-- FeatureActionTypes.js
|   |   |-- FeatureActionTypes.test.js
|   |   |-- FeatureContainer.js
|   |   |-- FeatureContainer.test.js
|   |   |-- FeatureReducer.js
|   |   |-- FeatureReducer.test.js
|   |   |-- ...
|   |-- another-feature
|   |   |-- AnotherFeatureActions.js
|   |   |-- ...
|-- package.json
```
Node.js App and Auth Services:
```
.
|-- app
|   |-- lib
|   |   |-- helper-module
|   |   |   |-- index.js
|   |   |   |-- index.test.js
|   |-- routes
|   |   |-- index.js
|   |   |-- login
|   |   |   |-- index.js
|   |   |   |-- post.js
|   |   |   |-- post.test.js
|   |   |   |-- ...
|   |   |-- protected
|   |   |   |-- index.js
|   |   |   |-- logout
|   |   |   |   |-- post.js
|   |   |   |   |-- post.test.js
|   |   |   |   |-- index.js
|   |   |   |   |-- ...
|-- config
|   |-- development.json
|   |-- component1
|   |   |-- component1.json
|-- db
|   |-- sequelize
|   |   |-- migrations
|   |   |-- seeders
|-- models
|   |-- sequelize
|   |   |-- index.js
|   |   |-- model1.js
|-- tests
|   |-- api.test.js
|-- index.js
|-- package.json
```
It’s also natural for the project structure to evolve during the project, especially if your service ends up becoming large-scale and popular. But then again, if the individual services are kept relatively simple, the project structure may not become an issue.
Component Configuration
As the third rule of The Twelve-Factor App states, the target is to use environment variables for configuring the containers in the various environments, allowing changes to be made quickly. In the end, it’s easier to change an environment variable and redeploy than to build new Docker images, for instance. As an exception to this, config files will be used for setting up the local development environment.
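The env-first idea can be sketched in a few lines. The names (`DB_HOST`, `dbHost`) and the inline defaults below are illustrative only; in the real services the local fallback would come from a file like config/development.json:

```javascript
// Minimal sketch of env-first configuration. The inline defaults stand in
// for values that would be read from config/development.json locally.
const defaults = {
  dbHost: 'db_app',
  dbPort: 5432,
};

function getConfig(key, envName) {
  // Container/production path: a plain environment variable always wins...
  if (process.env[envName] !== undefined) {
    return process.env[envName];
  }
  // ...and the config-file value is only a local development fallback.
  return defaults[key];
}

console.log(getConfig('dbHost', 'DB_HOST'));
```

This keeps the deployed containers configurable without rebuilding images, while local development stays convenient.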
Domain Use
Instead of using direct IP addresses, the system is set up in such a way that domain names can be used in all environments, keeping the environments consistent. Domains are also required for proper SSL certificates. In this case, self-signed certificates will be used in the local development environment.
In these articles, xip.io is used for making the local environment setup a bit more straightforward. Xip.io is a free DNS solution which can be utilised to access proper(ish) domain names like 127.0.0.1.xip.io, which resolves to 127.0.0.1. This way it’s possible to avoid the need to set up special domain names for development environments. Xip.io domains also work with self-signed certificates. In cases where it’s not possible – for security or other reasons – to rely on external services, the preferred host names can be set via the /etc/hosts file.
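For reference, a hosts-file based setup is just a few lines; the names below are illustrative, not the ones used in this series:

```
# /etc/hosts additions when external DNS like xip.io can't be used
127.0.0.1   www.myproject.local
127.0.0.1   app.myproject.local
127.0.0.1   auth.myproject.local
```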
Service Components Implementation
Sources for the components are available in the following repositories and are updated together with the released articles. The projects must be cloned so that they sit parallel to each other in the file system, to make the deployment scripts in the Deployment-Scripts project work in the local development environment:
- https://gitlab.com/article-projects/proxy-service
- https://gitlab.com/article-projects/front-service
- https://gitlab.com/article-projects/app-service
- https://gitlab.com/article-projects/auth-service
- https://gitlab.com/article-projects/deployment-scripts
For the database components, the environment variables are enough for configuring the official PostgreSQL Docker images.
Alpine based Docker images are used where possible mostly due to their smaller size. Personally, I also like the idea of simple and minimised containers, but my knowledge of Alpine images is quite superficial. I haven’t run into any troubles so far, so I’ve been happy to use them when available.
Requirements for the development environment:
- Node and node package manager, LTS (6.10.2) is used in the described setup
- Docker for the used OS
Later on, a plethora of node modules will be installed via npm. Most likely, yarn would work as well; the CI scripts would need to be modified accordingly, in that case.
Disclaimer: I’m using macOS Sierra and Docker for Mac and it’s possible some things won’t work out-of-the-box with other operating systems.
Reverse Proxy and Load Balancer
Let’s start with the Proxy and Load Balancer component, which will be based on HAProxy. There’s no need to do any coding for it, but the configs have to be customised for the development environment’s needs. For consistency reasons, it’s good to configure SSL termination for the local development environment as well. The docker-compose system will be configured in such a way that the individual containers can also be connected to directly, without the proxy and without a secure connection, but all connections via the proxy will function in a similar way as in production.
We’ll start by adding the Dockerfile.dev into the root folder of an empty project for the Proxy Service:
```dockerfile
# Image based on official haproxy alpine image
FROM haproxy:1.7.5-alpine

# Copy customized config file on top of the default config file
COPY ./docker/haproxy/haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# Copy ssl cert file into the container
COPY ./docker/haproxy/cert.pem /usr/local/etc/haproxy/cert.pem

# Haproxy starts properly without CMD via docker-compose
```
The created image is based on the official HAProxy Alpine build, and the only changes are the customised config file and the added self-signed certificate.
To create the needed HAProxy config we’ll need the following ./docker/haproxy/haproxy.cfg file:
```
global
    daemon
    maxconn 512
    tune.ssl.default-dh-param 2048

defaults
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    option forwardfor
    option http-server-close
    stats enable
    stats uri /stats
    stats realm Haproxy\ Statistics

frontend www-in
    bind *:80
    bind *:443 ssl crt /usr/local/etc/haproxy/cert.pem
    mode http
    redirect scheme https if !{ ssl_fc }
    default_backend server-www
    acl sub_domain_1 hdr_end(host) -i app.127.0.0.1.xip.io
    acl sub_domain_2 hdr_end(host) -i auth.127.0.0.1.xip.io
    use_backend sub-server-app if sub_domain_1
    use_backend sub-server-auth if sub_domain_2

backend server-www
    mode http
    server nginx1 front_service:3300 check
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }

backend sub-server-app
    mode http
    server node1 app_service:3000 check
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }

backend sub-server-auth
    mode http
    server node2 auth_service:3100 check
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
```
Skipping over the details, the config defines the following things:
- SSL termination: traffic from HAProxy to the back-end servers continues unencrypted
- Traffic to the app and auth subdomains is directed to the respective App and Auth Services, while the rest goes to the default Front Service
- HTTP traffic is redirected to HTTPS for all services
To generate certificates acceptable to the latest Chrome versions, subject alternative name fields have to be used. For compatibility reasons, it’s also good to use one of your DNS names as the CN:
```shell
proxy-service$ mkdir -p docker/haproxy/
proxy-service$ openssl req \
    -newkey rsa:2048 \
    -x509 \
    -nodes \
    -keyout proxy_key.pem \
    -new \
    -out proxy_ca.pem \
    -subj '/CN=127.0.0.1.xip.io/O=Basic Project/C=FI' \
    -reqexts SAN \
    -extensions SAN \
    -config <(cat /System/Library/OpenSSL/openssl.cnf \
        <(printf '[SAN]\nsubjectAltName=@alt_names') \
        <(printf '\n[alt_names]') \
        <(printf '\nDNS.1=127.0.0.1.xip.io') \
        <(printf '\nDNS.2=*.127.0.0.1.xip.io')) \
    -sha256 \
    -days 3650
proxy-service$ cat proxy_ca.pem proxy_key.pem > docker/haproxy/cert.pem
```
In addition to creating the required certificate, it has to be set as trusted for the system in the macOS Keychain to make Chrome happy:
- Open the Keychain Access
- Drag and drop the previously created cert file into Keychains->System category
- Once the cert is in the Keychain, double click it and in the Trust section, set the cert to be always trusted
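If clicking through Keychain Access feels laborious, the same trust setting can usually be applied from the terminal with macOS’s security tool; the cert path below assumes the certificate generated in the previous step:

```shell
# Mark the generated certificate as trusted system-wide (prompts for sudo)
sudo security add-trusted-cert -d -r trustRoot \
    -k /Library/Keychains/System.keychain \
    docker/haproxy/cert.pem
```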
Front Service
For the Front Service we use npm, so let’s start by initialising the project:
```shell
front-service$ npm init
```
Default values are fine, but it’s good to write something for the description and author fields.
To set up Next.js, we need a few modules:
```shell
front-service$ npm install --save next react react-dom
```
For showing some text we create the pages folder and add index.js into it:
```js
// ./pages/index.js

import React from 'react';

export default () => (
  <h1>Howdy there!</h1>
);
```
To run the Next.js service properly, we need to modify the package.json file’s scripts section to contain the following:
```json
"scripts": {
  "test": "echo \"Success: tests are passing\" && exit 0",
  "build": "next build",
  "start": "next start -p 3300",
  "start:dev": "next -p 3300"
},
```
Docker related files
For development purposes, let’s make a simple container. Next.js automatically provides hot code reloading when the local project folder is later linked into the container’s src folder using Docker Compose. With the following Dockerfile.dev file we install the node modules, expose port 3300 and start the server with npm run start:dev.
```dockerfile
FROM node:6.10.2-alpine

# Create app directory
RUN mkdir /src
WORKDIR /src

# Install app dependencies
ADD package.json /src/package.json
RUN npm install --loglevel warn

# Define an open port for the container
EXPOSE 3300

# Defined in package.json
CMD [ "npm", "run", "start:dev" ]
```
And .dockerignore:
```
node_modules
npm-debug.log
.git
.gitignore
```
And the rest
Let’s add a .gitignore file into the project root:
```
# See https://help.github.com/ignore-files/ for more about ignoring files.

# dependencies
/node_modules

# test and coverage reports
/shippable

# possible production build
/.next

# misc
.DS_Store
.env
npm-debug.log*
yarn-debug.log*
yarn-error.log*
```
We also set up the .eslintrc and .eslintignore files according to the Coding Conventions chapter guidelines.
With shrinkwrap, we can avoid possible issues caused by modules’ changing dependencies. Shrinkwrap creates npm-shrinkwrap.json and locks down the versions of the modules’ dependencies, so that later on, in different environments, the result of npm install should stay the same. So let’s take npm shrinkwrap into use:
```shell
front-service$ npm shrinkwrap
```
With the following command, we should be able to start the service and access it via http://127.0.0.1.xip.io:3300:
```shell
front-service$ npm run start:dev
```
App Service
With the App Service, we start with npm init like with the Front Service project. After that, we’ll create an index.js which starts a restify server without too many bells or whistles. First, let’s install restify:
```shell
app-service$ npm install restify --save
```
Let’s also install the node-dev module as a dev dependency. With node-dev, source code changes will restart the server and load the changes, which is especially useful when running the code in containers:
```shell
app-service$ npm install node-dev --save-dev
```
To create a simple server, we’ll add the following index.js file into the project root folder:
```js
const restify = require('restify');

const SERVER_PORT = 3000;

const server = restify.createServer({
  name: 'App Service'
});

server.listen(SERVER_PORT, () => {
  console.log(`Server running on: ${JSON.stringify(server.address(), undefined, 2)}`);
});
```
Let’s also slightly edit the scripts part of the package.json:
```json
"scripts": {
  "start": "node index.js",
  "start:dev": "node-dev index.js",
  "test": "echo \"Success: tests are passing\" && exit 0"
},
```
After these modifications, we should be able to run npm test successfully, even if there are no real tests yet. We can also try out the server with the npm run start:dev or npm start commands:
```shell
app-service$ npm start
Server running on: {
  "address": "::",
  "family": "IPv6",
  "port": 3000
}
```
Docker related files
The development Dockerfile.dev is quite simple for this service, too. With it, we install the node modules, expose port 3000, and start the server with npm run start:dev. Node-dev provides automatic service restarting when the code is changed; related to this, we’ll later link the local project folder into the container’s src folder using Docker Compose.
```dockerfile
FROM node:6.10.2-alpine

# Create app directory
RUN mkdir /src

# Install app dependencies
ADD package.json npm-shrinkwrap.json /src/
RUN cd /src && npm install --loglevel warn

WORKDIR /src

# Define an open port for the container
EXPOSE 3000

# Defined in package.json
CMD [ "npm", "run", "start:dev" ]
```
And .dockerignore:
```
node_modules
npm-debug.log
.git
.gitignore
```
And the rest
The .gitignore file used:
```
# See https://help.github.com/ignore-files/ for more about ignoring files.

# dependencies
/node_modules

# test and coverage reports
/shippable

# local config
/config/development.json

# misc
.DS_Store
.env
npm-debug.log*
yarn-debug.log*
yarn-error.log*
```
We also set up the .eslintrc and .eslintignore files according to the Coding Conventions chapter guidelines.
And let’s not forget npm shrinkwrap:
```shell
app-service$ npm shrinkwrap
```
Auth Service
At this point, the Auth Service is almost the same as the App Service. So we create the Auth Service just like the App Service, with the following differences:
- The name of the service in index.js:5 to Auth Service
- The ports of the service in index.js:3 and Dockerfile.dev:15 to 3100
Just to confirm the setup went fine we also try to run the service:
```shell
auth-service$ npm run start:dev
```
Deployment Scripts
This repository contains the scripts for deploying the system in various environments.
Docker Compose is used for running the multi-container service in the local environment. The following docker-compose.yml configures the needed details for building and running the containers:
```yaml
version: '2'

volumes:
  db_volume_app:
    driver: local
  db_volume_auth:
    driver: local

services:
  proxy_service:
    build:
      context: ../proxy-service
      dockerfile: Dockerfile.dev
    ports:
      - 80:80
      - 443:443
    depends_on:
      - app_service
      - auth_service
  front_service:
    build:
      context: ../front-service
      dockerfile: Dockerfile.dev
    ports:
      - 3300:3300
    volumes:
      - ../front-service:/src
  app_service:
    build:
      context: ../app-service
      dockerfile: Dockerfile.dev
    ports:
      - 3000:3000
    volumes:
      - ../app-service:/src
    depends_on:
      - db_app
  auth_service:
    build:
      context: ../auth-service
      dockerfile: Dockerfile.dev
    ports:
      - 3100:3100
    volumes:
      - ../auth-service:/src
    depends_on:
      - db_auth
  db_app:
    image: postgres:9.6.2
    ports:
      - 5000:5432
    volumes:
      - db_volume_app:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: database_dev
      PG_DATA: /var/lib/postgresql/data/pgdata
  db_auth:
    image: postgres:9.6.2
    ports:
      - 5100:5432
    volumes:
      - db_volume_auth:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: database_dev
      PG_DATA: /var/lib/postgresql/data/pgdata
```
Without going into details, the file defines the following things:
- Volumes needed for the database services to persist the data and make it available for other services for back-up purposes, for example
- All the started services are defined
- Dockerfile configs are set to build the services. Note: all the component repositories are expected to be located in parallel to the deployment-scripts repo.
- Open ports are defined for most of the services to allow direct access to them
- Local project directories are mapped into the containers’ src directories for the Front, Auth & App Services, to allow them to access local files and restart or reload on file changes automatically
- A few environment variables are passed to configure the database containers
Now everything should be in place to try out the basic development environment as a whole. The container setup can be built and run with the following command:
```shell
deployment-scripts$ docker-compose up
```
All these domains should now respond properly and the http -> https redirection should also be in place:
- 127.0.0.1.xip.io
- www.127.0.0.1.xip.io
- app.127.0.0.1.xip.io
- auth.127.0.0.1.xip.io
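As a quick, hypothetical smoke test, the routing and the redirect can be sanity-checked with curl; this assumes the stack from the previous step is running and xip.io resolution works:

```shell
# -I fetches only the headers, -k accepts the self-signed certificate
curl -sI http://127.0.0.1.xip.io | head -n 1         # should show a 30x redirect to https
curl -skI https://127.0.0.1.xip.io | head -n 1       # Front Service via the proxy
curl -skI https://app.127.0.0.1.xip.io | head -n 1   # App Service via the proxy
curl -skI https://auth.127.0.0.1.xip.io | head -n 1  # Auth Service via the proxy
```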
During development, docker-compose can be left running in one terminal window, but with the -d flag (docker-compose up -d) it’s also possible to run the containers in the background.
Summary
Yay, the basic project skeleton is finally ready! Quite a lot of details were skipped on purpose in this article to keep the focus on the high level development environment idea and to reduce the length of the articles. This style will also continue in the upcoming articles. If you have any questions about the content, just post a message and I’ll do my best to answer.
The next part of the series will be about setting up a Kontena-based testing environment on UpCloud, together with CI integration. The big picture of testing, development and deployment will also be defined.
Resources
Resources I found useful while writing the first parts of the article series:
Docker and Containers
- https://12factor.net/
- http://blog.cloud66.com/9-crtitical-decisions-needed-to-run-docker-in-production/
Project Structures
- https://marmelab.com/blog/2015/12/17/react-directory-structure.html
- https://blog.risingstack.com/node-js-project-structure-tutorial-node-js-at-scale/