Docker enables us to ship software anywhere without worrying about compatibility issues
Compatibility is one of the classic deployment issues. It arises when a feature implementation requires a package that turns out to be available only locally, while in the production environment that package isn't available, so a problem like the one in the meme could happen. How about configuring it directly in the production environment? The problem is that the configuration process itself is quite cumbersome, especially when a lot of packages are required. Is there any tool that can ship that working local package configuration to the production environment, so we don't have to reconfigure everything on production? Well, why not use docker? In this article, I'll explain what I know about docker, since I didn't really know how docker actually works before. Later on, I'll explain why my group decided not to use docker in our software architecture.
Docker At A Glance
Docker is basically a software shipping tool that packages software together with its OS, packages, configurations, and environment in one bundle called an image. A tool like this is called a container platform. By containerizing software, we can ship and run it anywhere, no matter how different the target machine's configuration is. We also don't have to do additional configuration, since the image already carries the necessary configs and the container can run that image on top of any machine. So once a container is created from the software's image, we can treat it like a virtual machine preloaded with a custom operating system that runs the shipped software well.
Docker consists of several components:
- Docker image: This component contains information about the dependencies, configurations, operating system, etc. that are needed to make the software work. It serves as the blueprint to create a container.
- Docker container: A running instance of a docker image.
- Dockerfile: File that consists of several commands to build the docker image.
- Compose file: File that compiles several containers into one service unit.
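To give you a picture of that last component, here is a minimal compose file sketch that bundles two containers into one service unit. The service names, the redis image, and the port mapping are illustrative assumptions of mine, not taken from an actual project:

```yaml
version: "3"
services:
  file-compressor:
    build: .            # build the image from the Dockerfile in this directory
    ports:
      - "8086:8086"     # map host port 8086 to container port 8086
  cache:
    image: redis:6      # a second container, pulled as a ready-made image
```

Running `docker compose up` with a file like this starts both containers together instead of running each one manually.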
Let's Build A Docker Container
The first step is, of course, to install docker. For Windows, you need to enable WSL 2 first before you can use docker on your machine. You can check out this link to install the docker engine on your machine. Once you have installed it, the next step is to create a Dockerfile in your project folder. For this example, I use a modified version of the file-compressor service code from my second LAW assignment. I made a modification to main.go so it becomes like this.
Here is an example of the docker file for that service.
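In case the embed doesn't render, here is a sketch of such a Dockerfile, reconstructed to be consistent with the line-by-line explanation that follows; the exact base-image tag and the dependency-install command are my assumptions:

```dockerfile
FROM golang:1.16

COPY . /go/src/file-compressor

WORKDIR /go/src/file-compressor

RUN go get -d ./...
EXPOSE 8086

CMD ["go", "run", "main.go"]
```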
Line 1 defines which base image is going to be used. Line 3 copies all the files from the directory where the Dockerfile is located to a specified filesystem path in the container; here, I defined that path as /go/src/file-compressor. Once the file-copying process is finished, the next step (line 5) is to set that directory as the current working directory in the container. Line 7 installs all the dependencies needed to run the Go program in the container. Line 8 exposes port 8086 so the application can be reached inside the container. The final line starts the Go program every time the container is started.
After you have created your Dockerfile, the next thing is to build a docker image based on it. To do that, you can use this command
$ sudo docker build -t <container-image-name> .
$ sudo docker build -t file-compressor .
Notice that the -t option is used to give the image a name so you can recognize that image easily.
Once you have built the image using that command, the last step is to run that image by using the command
$ sudo docker run -p [<host-ip>:]<host-port>:<container-port> <container-image-name>
$ sudo docker run -p 8086:8086 file-compressor
The -p flag maps any request arriving at the given host IP and host port to the specified container port. In this example, any request directed to localhost:8086 will be forwarded to port 8086 in the container. (Note: if the host IP is not specified, docker binds the port on all of the host's network interfaces, so the app is reachable via localhost as well.)
Voila! Your go application finally runs on localhost:8086.
If you want to stop the container, you can use these commands
# This will list down all containers
$ sudo docker ps -a

# Stop the container
$ sudo docker stop <CONTAINER_ID>
As you can see from the demo, the specified container is stopped right after I execute the docker stop command.
Why We Don’t Use Docker In Our Software Engineering Project
We don't use docker in our software engineering project because, so far, we haven't encountered any compatibility issues, whether on the development team's machines or in the Heroku environment. Secondly, our backend and frontend applications are deployed separately on Heroku. Actually, my friend Akbar was the one who decided not to use docker for deployment, so he may know more about the reasons. So here's our software architecture for our paperless judicial process application.
Here we use Firebase as file storage because Heroku's storage isn't enough to store many PDF files. We decided to send files from the backend instead of the frontend for security purposes, and it also makes the file-upload process more controllable. We use a monolithic architecture because we don't really need to consider the scalability aspect, given the amount of user traffic our client expects. Utilizing microservices would mean adding complex supporting components such as monitoring, logging, etc. to keep our system reliable. There's also a time-overhead cost for inter-service communication. For these reasons, we decided not to use microservices in our software architecture.
That's all from me about docker and our software architecture. I hope this article helps you learn about docker.