Fixed Test Track Deployment Issue using Docker

Chris Board

27/08/2025 · 4 min read
Build In Public · test-track

For a long time, I’ve been wanting to explore Docker, but never had the time or a specific reason to look into it. However, as you may or may not know, we recently migrated Test Track from two projects, a Vite frontend and a Laravel backend, to a single NextJS project to simplify our development and deployment process.

This was successful, except for one fairly significant problem: building the project.

Before going into the changes, let’s take a look at how it was deployed.

Test Track is currently hosted on some fairly small servers: a couple of VPSes in Digital Ocean, each with a 2-core CPU and 4GB of RAM. Under normal operation, CPU usage sits around 1-5% and RAM usage around 30-50%.

The process was somewhat manual, in the sense that I would push the code changes to the main branch on GitHub, then SSH into both servers, do a git pull, run npm run build, and restart the service.
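For context, the old process on each server looked roughly like this (the paths and service name here are hypothetical):

```sh
# Old manual deploy, repeated on both production servers
ssh deploy@server-1
cd /var/www/test-track
git pull origin main
npm run build                      # the step that exhausted CPU and RAM
sudo systemctl restart test-track  # restart the app service
```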

It was the npm run build command that caused the problem: we would see CPU spike to around 90% and RAM climb upwards of 80%.

This resulted in memory allocation errors during the build, and I would need to restart other services on these servers to free up some RAM just to get the build to complete. Even then, the build was very slow.

I didn’t want to upgrade these two servers with more CPU and RAM just to be able to complete the build when their day-to-day utilisation is fine, so I turned to Docker to solve the problem.

I have a TeamCity server that was primarily used just for building and packaging releases of some C# backend apps that run on these servers. Since it has a bit more CPU and RAM to play with, the idea was that this server could create the build via Docker and push the image to a registry, and the production servers could then simply pull the image.
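To give a feel for what that build looks like, here’s a minimal multi-stage Dockerfile sketch for a NextJS app using its standalone output mode. This is illustrative, not Test Track’s actual Dockerfile, and it assumes output: 'standalone' is set in next.config.js:

```dockerfile
# Build stage: install dependencies and compile the app
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# Assumes `output: 'standalone'` is set in next.config.js
RUN npm run build

# Runtime stage: copy only what the server needs to run
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
EXPOSE 3000
CMD ["node", "server.js"]
```

The multi-stage split means the heavy build tooling never ends up in the final image, which keeps the image the production servers pull small.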

So I looked into Docker, learnt how it works, and then started converting the NextJS Test Track project to be containerised and run under Docker. I also wanted a GUI to manage the Docker containers, as terminal commands are easier to get wrong, so I started looking at options around that. Handily, I have another server in Digital Ocean, an internal server for internal apps such as viewing alarms raised by apps on these servers and managing customers/users on Test Track, creating new release notifications and so on.

On this admin server, I installed Portainer as the Docker manager, and installed the Portainer agent on both production servers so each one can be managed individually. I did look into Kubernetes and Docker Swarm, but they aren’t really needed at this stage of the project and would just add extra complication.
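For reference, installing the agent on each production server is just a single container; this is roughly the standard Portainer agent run command (check the current Portainer docs for the exact image tag):

```sh
# Run the Portainer agent so the Portainer instance on the
# admin server can manage this host's containers
docker run -d \
  --name portainer_agent \
  --restart=always \
  -p 9001:9001 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent:latest
```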

Now the process is: I push to main (or merge a pull request from a feature branch into main), TeamCity auto-detects the change, pulls the latest code from GitHub, runs the Docker build, and pushes the Docker image to the container registry hosted by Digital Ocean.
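Under the hood, the TeamCity build step boils down to something like this (the registry and image names are placeholders for illustration):

```sh
# Authenticate with the Digital Ocean container registry
doctl registry login

# Build the image from the Dockerfile and push it to the registry
docker build -t registry.digitalocean.com/my-registry/test-track:latest .
docker push registry.digitalocean.com/my-registry/test-track:latest
```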

Then, to deploy, I log in to Portainer, select the first server, tell it to re-pull the image and recreate the container, and then repeat the step on the other server.
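Portainer handles this through its UI, but what it does on each server is roughly the equivalent of this (container and image names again placeholders):

```sh
# Pull the freshly pushed image, then recreate the container from it
docker pull registry.digitalocean.com/my-registry/test-track:latest
docker stop test-track && docker rm test-track
docker run -d --name test-track --restart=always -p 3000:3000 \
  registry.digitalocean.com/my-registry/test-track:latest
```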

This massively simplifies the deployment process, saves time since the build no longer has to happen twice, and resolves the memory exhaustion during builds on the production servers.

What are your thoughts? Would you have gone a different way? Let me know in the comments.

Test Track

Are you a developer, or involved in Quality Assurance Testing or User Acceptance Testing? You might be interested in Test Track:

A simple and affordable test planning and management solution.