A date with Docker (Swarm) before K8s

This post is about Docker containers, the notoriously disruptive technology that took the software and IT industry by storm last year and continues to shake the foundations of the old but still most widely practiced ways of developing and running software.

Though I was introduced to Docker two years ago, it was only recently that I started ‘playing’ with it, and I kind of love it. I believe developers can implement amazing use cases by stacking and weaving containers together, limited only by one’s imagination. It’s like creating a cocktail of software and tools that gives you an instant kick, because with Docker, building, shipping, and running software is incredibly fast and surprisingly light and easy.

Sometimes it’s just not necessary to know every detail under the hood to make your ‘business’ applications run, and Docker images handle this perfectly well. The best part is, it’s not just for development: Docker is increasingly popular for production deployments with automated CI/CD pipelines.

With PaaS solutions like Cloud Foundry and Stackato (Helion, an HPE product) using Docker as the default container format, and Microsoft offering both Windows- and Linux-based container services through its Azure cloud platform, Docker is set to play a pivotal role in every enterprise’s IT landscape. In fact, some compare it to what VMware was 10 years ago as a pioneer of virtualization.

Some facts and figures on Docker’s rapid adoption can be found here – 8 surprising facts about Docker’s real adoption

The playground 

In the world of sleek and slim ultrabooks, sometimes having a mobile workstation as your laptop is a very good thing, despite its huge size and friends and colleagues giggling at the almost one-kilogram power adapter, occasionally mistaking it for a home inverter. In hindsight, a bunch of Ubuntu VMs on any cloud platform would have been just fine for this use case.

The Use Case

So I decided to implement a simple use case to evaluate the three fundamental needs of every application – compute, storage, and networking – and to explore the truth behind Docker’s tagline: build -> ship -> run. My use case is so common that it could easily be called the “hello world” of DevOps.

The plan is simple – a ‘confident’ developer commits a change (say, a fix for a bug identified in production), which is propagated through various quality gates, deployed to test (or production), and made available to customers. In other words, an automated CI/CD pipeline using containers. We all know this is still a far cry from reality in many enterprises today, often for reasons that are not technical.


Ingredients for the Cocktail

GitLab (SCM), Jenkins (CI), SonarQube (quality), Nexus (distribution), Rundeck (CD), Tomcat (application), and a pinch of Swarm and Consul images (for garnish), all of which can be found in the Docker Hub repository.

From the Docker world, the following tools are used to help realize the use case (a minimal setup sketch follows the list).

  • Docker Machine
  • Docker Swarm
  • Docker Engine
  • Docker Compose
  • Docker images (pushed to Docker Hub)
  • Dockerfile (optional, to build an image from scratch)
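
Here is a minimal sketch of how such a playground could be stood up with Docker Machine: a Consul key-value store for discovery, plus a (legacy, standalone) Swarm master and one worker. The machine names and the VirtualBox driver are assumptions made for this post.

    # create a machine to host the Consul key-value store (used for Swarm discovery)
    docker-machine create -d virtualbox keystore
    eval $(docker-machine env keystore)
    docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap

    # create the Swarm master, pointing discovery and the engine's cluster store at Consul
    docker-machine create -d virtualbox \
      --swarm --swarm-master \
      --swarm-discovery "consul://$(docker-machine ip keystore):8500" \
      --engine-opt "cluster-store=consul://$(docker-machine ip keystore):8500" \
      --engine-opt "cluster-advertise=eth1:2376" \
      swarm-master

    # create a worker the same way, minus --swarm-master
    docker-machine create -d virtualbox \
      --swarm \
      --swarm-discovery "consul://$(docker-machine ip keystore):8500" \
      --engine-opt "cluster-store=consul://$(docker-machine ip keystore):8500" \
      --engine-opt "cluster-advertise=eth1:2376" \
      swarm-node-1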

The solution


The essentials: Compute


Containers are the compute nodes that run on machines. Containers are instances of images, just as objects are instances of classes. You “spin” containers when you need them, and you can spin many containers from one image. Containers are extremely lightweight, and you should think of one as little more than a compute engine with ephemeral storage (data vanishes when the container is removed) – just sheer processing power. This gives us the freedom to destroy and create containers at will; in cloud terminology, elasticity.

The Swarm cluster decides where (on which machine) to spin a container, but this decision can be influenced using constraints and affinities. A container’s memory (RAM) is ultimately bounded by the memory of the machine it runs on, and can be capped per container if needed.
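
As a hedged illustration (the container and node names are made up for this post), this is how the legacy Swarm scheduler accepts such hints as environment variables on docker run:

    # point the Docker client at the Swarm master
    eval $(docker-machine env --swarm swarm-master)

    # pin the Jenkins container to a specific machine with a constraint
    docker run -d --name jenkins \
      -e constraint:node==swarm-node-1 \
      -p 8080:8080 jenkins

    # schedule another container next to Jenkins with an affinity
    docker run -d --name sonar \
      -e affinity:container==jenkins \
      sonarqube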

The essentials: Network


There are various network types for containers, which are either local (to the machine, or host) or global to the entire swarm, which can be multi-host (spanning different machines), as used in this example. To implement any use case it’s important that compute nodes are able to “talk” to each other in a secure way.

There are two levels of networking in this example: machine-to-machine (M2M) and container-to-container (C2C). C2C connectivity happens across an “overlay” network driver, while M2M relies on how the hypervisor (VirtualBox) is configured; in this example there are two virtual network adapters, NAT and host-only.

  • VBox NAT is used for internet connectivity (172.x.x.x series)
  • VBox Host-Only is used to connect M2M (192.x.x.x series)
  • Swarm overlay driver connects C2C (10.x.x.x series)

Container ports are mapped to available ports on the machine so that containers can be reached from the machines, for example for SSH and SCP.
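
A minimal sketch of the C2C side, assuming the swarm from earlier; the network name, subnet, and the GitLab port mappings are illustrative:

    # create a multi-host overlay network visible to the whole swarm
    docker network create --driver overlay --subnet 10.0.9.0/24 cicd-net

    # attach a container to it and map its ports onto the machine
    docker run -d --name gitlab --net cicd-net \
      -p 10080:80 -p 10022:22 \
      gitlab/gitlab-ce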

The essentials: Storage


Containers have ephemeral storage, which lives only as long as the container does. So it’s crucial to save state to a more “persistent” store, and Docker volumes come to our rescue here. A Docker volume is typically a read/write mount on the machine’s file system that is “mapped” into a container’s file system [/machine/var:/container/var]. As part of this use case, a “data container” is used to map container storage to the machine, though it’s not mandatory. Data containers are an elegant way to store container data (even from multiple containers) on the machine.
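
As a sketch of the data-container pattern (the Jenkins image and path are just one plausible choice for this pipeline):

    # a container that exists only to own the volume
    docker create -v /var/jenkins_home --name jenkins-data jenkins /bin/true

    # the "real" container borrows that volume, so its state survives removal
    docker run -d --name jenkins --volumes-from jenkins-data -p 8080:8080 jenkins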

It’s also worth thinking about which data belongs in ephemeral storage and which belongs in the persistent store. There is no hard rule, but ideally all data generated by the primary process (PID 1), also called application data, is stored on volumes so that the application’s state survives a restart of the container or even of the machine. Another way to store data (and persist it across restarts) is the docker commit subcommand, which saves the current state of a container as a new, higher version of the image (always tagged).
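
For example (the container and repository names are illustrative):

    # snapshot the current state of the tomcat container as a new, tagged image version
    docker commit -m "app v1.1 deployed" tomcat myrepo/tomcat-app:1.1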

Docker images use a differential, layered approach to storage. This is an excellent way to store, and to transfer, only the delta changes between machines and even over the network (or the internet, with a push to Docker Hub). It makes docker pull and docker push extremely light and lightning fast during a development cycle.
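
Assuming the tag from the previous sketch, shipping that image only uploads the layers Docker Hub does not already have:

    docker push myrepo/tomcat-app:1.1   # only new/changed layers are transferred
    docker pull myrepo/tomcat-app:1.1   # likewise, pulls only the missing layers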

Docker Compose


Docker Compose, the orchestrator, helps weave containers together and build complex service dependencies, for example an app-to-db dependency. The services (containers) are defined in a simple YAML file called docker-compose.yml. It’s a really simple yet extremely powerful file: service definitions carry attributes such as the image to use, port mappings, volume mappings, machine affinities and constraints, environment variables to be used inside the containers, the order in which containers spin up, and networks. Docker Compose has an equivalent attribute for nearly every docker run option. This greatly reduces the need to run containers individually; one can create an entire complex environment (including this example) from a single compose file. Now that’s called infrastructure as code.
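
A trimmed-down sketch of what such a file could look like for two of the services in this pipeline (version 2 format; the service names, images, volume, and network names are assumptions):

    # docker-compose.yml (excerpt)
    version: "2"

    services:
      gitlab:
        image: gitlab/gitlab-ce
        ports:
          - "10080:80"
          - "10022:22"
        networks:
          - cicd-net

      jenkins:
        image: jenkins
        depends_on:
          - gitlab                          # spin-up order
        ports:
          - "8080:8080"
        volumes:
          - jenkins-data:/var/jenkins_home  # persistent application data
        environment:
          - "constraint:node==swarm-node-1" # legacy Swarm scheduling hint
        networks:
          - cicd-net

    volumes:
      jenkins-data:

    networks:
      cicd-net:
        driver: overlay

A single docker-compose up -d against the Swarm then brings the whole stack up.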

Dockerfile

Images can also be built using a Dockerfile, which contains the instructions to install the libraries, packages, and applications, starting FROM a base image (or from scratch). This is often considered the best way to create an image in the first place. The other option, of course, is to save the state of a running (or stopped) container as an image.
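
A minimal sketch of such a Dockerfile for the Tomcat application image in this example (the base image tag and WAR file name are assumptions):

    # Dockerfile
    FROM tomcat:8-jre8

    # deploy the artifact produced by the CI pipeline
    COPY target/hello-world.war /usr/local/tomcat/webapps/

    EXPOSE 8080
    CMD ["catalina.sh", "run"]

Built and tagged with something like docker build -t myrepo/tomcat-app:1.0 ., it can then be pushed to Docker Hub (or Nexus) and pulled by the deployment job.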

Why should you consider (Docker) Containers?

  • very fast and lightweight
  • scales wonderfully
  • promotes a microservices architecture
  • distributed computing made easy
  • massive collection of both official and community-reworked images on Docker Hub
  • load balancing (LB) and high availability (HA) built into the Swarm cluster
  • a perfect tool to bridge the Dev-Ops gap
  • “highly secure” according to Docker Inc., though customer concerns (such as image scanning) remain valid
  • fast adoption in the industry and a vibrant community of contributors
  • production-ready; experts predict that within the next 5 years, 40% of customers globally will move to container-based solutions in production
