Docker Swarm is the container orchestration solution built from the ground up and maintained by Docker. Through an additional layer on top of the Docker Engine, and taking advantage of a set of built-in commands, Swarm gives developers and infrastructure professionals a simplified way to accomplish something far from trivial: orchestrating containers across an actual cluster.

In my previous post on this blog, I discussed in depth how Swarm works and walked through the process of creating and configuring a Swarm cluster from scratch, preparing readers for today's post, where I intend to guide you through the process of orchestrating containers with Docker Swarm. If you want to follow along in a practical way and reproduce the examples presented from now on, I strongly recommend reading and performing the steps presented in the first post, which is available here.

The sample app

In order to distribute an application over containers with Swarm, we first need to have the application in place, right? So let's take care of that. Note that the focus here is not the application itself but the process of distributing it with Swarm, so we'll create a simple ASP.NET Core MVC application (I'm assuming you already have .NET Core installed and configured on your machine; if that's not the case, see how to get it done by following the steps outlined here). Then we will containerize it locally (I'm assuming you already have Docker installed and configured on your machine; if not, please follow the procedures outlined here to get Docker). Then we will push it to Azure Container Registry, and only then distribute it over the cluster.

Creating the .NET Core Sample App

Nowadays, the simplest and fastest way to create a new ASP.NET Core application is through the Command Prompt, PowerShell, or Bash. I created my app by executing the simple command line below.

dotnet new mvc -n DockerDemoApp

To make sure the application was functional, I navigated to the directory containing the project's *.csproj file and ran the dotnet run command. The result can be seen in Figure 1.

Figure 1. The sample app up and running

I'm going to make two quick changes to the sample application so that we have some level of customization:

1. First, I modified the app's title. For this, within the project's structure, I navigated to the "Views" > "Shared" folder and edited the "_Layout.cshtml" file, adding the line below to it.

<a asp-area="" asp-controller="Home" asp-action="Index" class="navbar-brand">Fabricio's App</a>

2. I changed the port on which Kestrel traditionally listens for requests. Instead of the default 5000, I picked 8080. This change is performed in the "Program.cs" file, within the BuildWebHost method. Your method should now look quite similar to the piece of code below.
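Here is a minimal sketch of what Program.cs could look like after that change, assuming an ASP.NET Core 2.x project generated by dotnet new mvc and using UseUrls as the way to bind Kestrel to port 8080 (there are other ways to configure the listening port):

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

namespace DockerDemoApp
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }

        // Binds Kestrel to port 8080 instead of the default 5000.
        public static IWebHost BuildWebHost(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseUrls("http://*:8080")
                .UseStartup<Startup>()
                .Build();
    }
}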

By executing the application once again, you should see something pretty similar to Figure 2.

Figure 2. Application running on a different port

Dockerizing and publishing the app

Our next step is pretty clear: we need to make sure that this application can run in a Docker container, right? Right. But first, we need to publish the sample app. This is because we're going to use a specific (and therefore much lighter) production-ready .NET Core image. As you might imagine, there are other .NET Core images available for different purposes (development, build, etc.); however, as our application is theoretically ready for production, we're going to use this specific production image as the basis for the container.

I used the dotnet publish command (dotnet publish -o ./publish) to get the app published locally. In my case, I asked the .NET Core publisher to place the files resulting from this process in the "publish" directory at the app's root level; however, you can tell the publisher to place these files in whatever directory you prefer. Figure 3 presents the file and directory structure of the "publish" folder.

Figure 3. App published

Now that we have "what to containerize", we can move on to getting it done. The first step in this regard is to "tell" the Docker Engine how to configure the container that will host our application, and as you may know, the primary element for that is the famous file called "Dockerfile". The piece of code below shows that configuration.
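A minimal sketch of such a Dockerfile, assuming the official .NET Core 2.0 runtime image for ASP.NET Core (microsoft/aspnetcore:2.0) as the base, the "publish" folder generated in the previous step, and DockerDemoApp.dll as the published entry-point assembly:

# Base image (an assumption: the ASP.NET Core 2.0 production runtime; use the tag matching your SDK)
FROM microsoft/aspnetcore:2.0

# Create /app in the container's root and make it the working directory
WORKDIR /app

# Copy the locally published output into the working directory
COPY ./publish .

# Run the application's DLL with the dotnet command
ENTRYPOINT ["dotnet", "DockerDemoApp.dll"]

# The application inside the container listens on port 8080
EXPOSE 8080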

Some explanation is necessary at this point. So here we go:

  • “Telling” the Docker Engine which base image to use.
  • “Notifying” the Docker Engine that it should create a folder named “app” in the container’s root directory and that this same directory will be the working directory for the app.
  • “Informing” the Docker Engine that during the build process it should copy the contents of the “publish” directory into the WORKDIR.
  • “Explaining” to the Docker Engine that the container should run the “dotnet” command against DockerDemoApp’s DLL.
  • “Explaining” to the Docker Engine that the port on which the container will listen for requests is 8080.

Simple and straightforward, isn’t it? Now, as a next step, we can generate the container image itself. For this, we will use the docker build command (docker build -t docker-demo-app-prod .). If all went well, you will see a screen similar to the one shown in Figure 4.

Figure 4. Building the Docker image for sample app

Now we’re ready to test the container running the sample app locally. To do this, run the docker run command (docker run -p 8080:8080 docker-demo-app-prod). The result should be the application running from the container, as you can see in Figure 5.

Figure 5. Running up the container just built

Azure Container Registry (ACR)

Done! The application is built, containerized, and running locally. Next step? Push this image to an online image registry, either public or private. We must do this. Why? So that the image can be pulled and distributed across the cluster at runtime when we use Swarm to orchestrate containers.

If you are being introduced to the concept of an image repository/registry just now and you don’t know exactly what we are talking about, think of a code repository (GitHub, for example) where you push code updates, create versions, and so on. It’s basically the same concept, except in the context of container images rather than code itself. Fortunately, there are some good options for hosting container images currently. The most popular is Docker Hub. Docker Hub is free and would fit our needs here perfectly; however, in Docker Hub’s model the image is public (anyone can pull it), and that’s not what we want (ok, I just created this premise 🙂).

So I chose to use a more robust, enterprise-grade solution as my image repository: Azure Container Registry (ACR). Azure Container Registry is a managed image service based on the open-source Docker Registry 2.0. In addition to creating and maintaining different versions of your container images, ACR provides you with robust tools to ensure the privacy of the images being generated/consumed. If you would like to know more about ACR (and I recommend you do), please follow this link.

Creating a new ACR

To create a new ACR, I’ll be using the Azure CLI, but this procedure could also be performed via PowerShell or the Azure Portal. The line of code that created my ACR is presented below. If all went well, you should see a success message detailing the newly created resource, as shown in Figure 6.

az acr create --resource-group swarm --name swarmfabricio --sku Basic
Figure 6. ACR successfully created

As a next step, to make the process secure, you can enable an administrative user for your registry. I did this through the az acr update command (az acr update -n swarmfabricio --admin-enabled true). Your login user will be your ACR name (in my case, “swarmfabricio”) and your password will be automatically generated. To get it, use the az acr credential command (az acr credential show --name swarmfabricio --query "passwords[0].value").

Ok, so now we can push our image to our private registry. To do this, I completed the three steps described below:

  • I authenticated against the ACR service using the command docker login --username swarmfabricio --password {my password} swarmfabricio.azurecr.io
  • I “tagged” the local image to meet ACR’s naming convention by executing the command docker tag docker-demo-app-prod:latest swarmfabricio.azurecr.io/docker-demo-app-prod:v1
  • Then, I finally pushed the image to ACR by running the command docker push swarmfabricio.azurecr.io/docker-demo-app-prod:v1
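If you want to double-check the push from the command line before opening the portal, the Azure CLI can list the repositories and tags held by the registry; a quick sketch, assuming the registry created above:

az acr repository list --name swarmfabricio --output table
az acr repository show-tags --name swarmfabricio --repository docker-demo-app-prod --output table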

Figure 7 presents the image already sitting on the ACR.

Figure 7. Image already sitting in ACR

Done. To make sure everything was working properly, I ran the docker run command once again, but this time against the image I had just pushed to Azure Container Registry. Guess what? Everything works perfectly, as you can see in Figure 8.

Figure 8. Running the app locally from the recently pulled image

We’re all set now. Let’s have fun doing some orchestration with Swarm.

Orchestrating containers with Swarm

Finally, we come to the point where we are going to orchestrate the containers that will run the instances of our “DockerDemoApp” app. But before we get our hands dirty, we need to understand some fundamental concepts of working with Docker Swarm. Let’s quickly go through them first and then practice!

For Docker Swarm to be able to orchestrate containers, there are four elements through which everything happens: tasks, services, nodes, and load balancing. I briefly mentioned these concepts in the first post of this two-article series; however, today let’s dig a little deeper into them to get the process going.

Nodes

In the context of a Swarm cluster, a node is a physical or virtual instance that runs the Docker Engine in Swarm mode. These instances can assume three roles: manager, worker, or both. Managers are responsible for controlling the entire container lifecycle, reconciling states, assigning workloads to the nodes in the cluster, and more. Workers are nodes that only receive workloads and process them. Nodes that assume both roles will always prioritize the manager role over the worker role. Figure 9 below gives us an overview of a Swarm cluster with 3 manager/worker nodes and 3 worker nodes. Only 1 manager is acting exclusively as a manager; the other two are ready to take over if the active manager fails for some reason. While that does not happen, they act as workers only.

Figure 9. General view of a Swarm cluster
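As a side note, node roles can be inspected and changed at any time from a manager; a quick sketch (the node names below are hypothetical and will differ in your cluster):

docker node ls                        # the MANAGER STATUS column shows Leader, Reachable (standby managers) or blank (workers)
docker node promote node-worker-1     # turns a worker into a manager
docker node demote node-manager-3     # turns a manager back into a worker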

Tasks and Services

Quite pragmatically, we can say that a task is a single-container execution environment that “responds” to the management of the Swarm managers. Unlike standalone Docker containers, tasks have no autonomy whatsoever: their configuration is defined by the managers, and they will always execute what they have been assigned. A task will always be created and assigned to a worker node by a manager. Another important aspect of tasks is that a given task can never start executing on one node and finish on another. Swarm uses tasks as atomic units of workload distribution, so in the context of Swarm we will not talk about containers anymore; we’re going to talk about tasks running within a given node under a service. Figure 10 provides an overview of tasks being distributed across nodes.

Figure 10. Tasks being distributed all over the cluster

While tasks are the elements through which containers are distributed across nodes, the service is the element responsible for all environment-related definitions. As we will see below, when we create a service (and this is actually the only way for us to interact with the Swarm cluster), we define, for instance, the number of replicas, the image that Docker Swarm must use when deploying containers, the commands that will be performed within the containers themselves, network layers, and many other configurations. The distribution of tasks therefore occurs later, once the service has already defined how everything should work. See the illustration in Figure 11 to make this concept clearer.

Figure 11. Tasks being distributed under service definition

Load Balancing

This is another fundamental concept when it comes to clusters. It is critical because, in the end, it is the primary mechanism for distributing load among the tasks running on each node.

Each node within the Swarm cluster uses the ingress load-balancing method to publicly expose the desired services to external access. What needs to be done, however, is to assign an available port value to the “PublishedPort” parameter. We can set this value at the moment we create the service itself. If we do not manually assign a value, Swarm will automatically assign an available port in the 30000-32767 range.

If you are building your infrastructure in a public cloud (like Azure) and want other services associated with your application to communicate with the tasks running on the Swarm cluster, go for it. As I mentioned, every cluster node participates in the external ingress model; just hit the correct “PublishedPort” on any node.
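To make the “PublishedPort” explicit, the long-form --publish syntax can be used when the service is created (it is equivalent to the -p shorthand we will use later in this post); a sketch with placeholder service and image names, and with mode=ingress spelled out even though it is already the default:

docker service create --name demo-service --publish published=8080,target=8080,mode=ingress --replicas 3 some-registry/some-image:tag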

Another important aspect of Swarm is that it has an internal DNS component that automatically assigns each running service an entry in the internal DNS table. Why is this important? Because internally the load balancer uses these DNS names to distribute the load among the different running services. Figure 12 clarifies this concept.

Figure 12. Load-balancers distributing traffic over tasks within the nodes

Now that we have all the concepts we need, we can move forward and work with Docker Swarm itself. Not coincidentally, Figure 12 closely reflects the Swarm cluster we will be working with, that is, three manager machines (which are also workers) and three worker machines. The process of cluster creation and configuration has already been covered in a previous post here on the site.

As mentioned earlier, all our interaction with the cluster will be through the manager node elected as the leader. In my case, that is the node “node-manager-1”. Thus, the first thing to do is to access this environment. To avoid headaches going forward, I have already run the sudo su command to get administrator privileges within the environment as a whole. Figure 13 shows my manager environment working properly.

Figure 13. Connecting to the manager and leader node

The first thing I’m going to do is check how many services I have running on my cluster. For this, I will use the command docker service ls. The output of this command can be seen in Figure 14.

Figure 14. Return from calling docker service ls

Ok. Our cluster is empty, that is, we have no services running, and therefore we know that there are no tasks assigned to the nodes. Let’s start our activities by giving the manager permission to pull our Docker image hosted on ACR, since our repository is private and protected by username and password. For that, I will again use the docker login command. The result of this process can be seen in Figure 15.
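The login performed on the manager is the same one performed earlier on the local machine; a sketch, assuming the same ACR admin credentials retrieved with az acr credential show:

docker login --username swarmfabricio --password {my password} swarmfabricio.azurecr.io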

Figure 15. Getting authenticated onto ACR

Next step? Create our first app distribution by defining a new service in Docker Swarm. To get there, I’m going to use the command line below. Figure 16 shows the screen output after the execution of this command is completed.

docker service create --with-registry-auth --name servicev1 -p 8080:8080 --replicas 5 swarmfabricio.azurecr.io/docker-demo-app-prod:v1
Figure 16. Servicev1 successfully created

Before we run some checks to confirm everything worked, let’s first understand exactly what we’re doing with the command line used to create our initial service.

  • docker service create: We’re obviously asking Swarm to create a new service definition.
  • --with-registry-auth: Since we are using a decentralized pulling model, meaning images will be pulled by nodes that we are not logged in to directly, we need to “tell” Swarm which registry credentials it should use to do so. By using this parameter, we pass along the same access credentials used by the leader manager node.
  • --name servicev1: We’re just assigning a name to the service definition being created.
  • -p 8080:8080: Remember when I said each service could have a port open for external communication? So, we are defining that, for this service, the port we are going to open is 8080, which will be mapped to port 8080 on each of the tasks running on the nodes. This way, the traffic flows as follows:
    • The request arrives on port 8080 of the service “servicev1”;
    • Following Swarm’s load-distribution strategy (see Figure 12), this request is then directed to the ingress load balancer of one of the nodes running tasks generated by “servicev1”;
    • The NAT process is then completed with the request arriving at container port 8080 inside the task selected to receive the load.
  • --replicas 5: We are asking Swarm to create 5 replicas (and therefore tasks) of this application. These five replicas will be distributed throughout the nodes; deciding “which task goes to which node” is up to the leader manager.
  • swarmfabricio.azurecr.io/docker-demo-app-prod:v1: We are telling Swarm to pull the application image from our Azure Container Registry.

Ready. Theoretically, our application is already being distributed. Let’s first test whether it is working properly. Let’s run the docker service ps servicev1 command to check which nodes are running our 5 replicas. Then we can call any of the machines running the application through our browser, and it should load.

The result of executing the previous command can be seen in Figure 17.

Figure 17. Containers being executed under Swarm cluster

In addition to showing us which nodes are actually running tasks (and therefore containers) with our application, this output also gives us important information about “Desired State” versus “Current State”. This shows us the resilient nature of Docker Swarm. When a task is created and allocated to one of the nodes, Swarm expects this task to execute correctly. If something goes wrong and prevents this state, Swarm marks the task as “Exited”, indicating that the “Current State” differs from the “Desired State”. This triggers Swarm’s behavior of generating a new task to replace the one that failed. Cool, huh?

To make sure the application is working correctly, simply open your browser and “call” the application either by its DNS name (if available) or by the node’s public IP followed by “:” and the port declared as open (in our case, 8080). Figure 18 shows the application working correctly.

Figure 18. App running normally under Swarm cluster

Scaling out the application

Let’s imagine that the application “DockerDemoApp” is a success and that the number of hits it receives has been increasing rapidly. Besides, it is worth remembering that we are in a cluster environment, which automatically leads us to think of high availability and scalability. Next, I want to show the process of scaling containers using Docker Swarm.

Let’s say the people in charge of the environment where our app is running have identified that the app’s responsiveness has to at least double. That means we would need to go from 5 tasks all the way up to 10, right? Considering this scenario, I went back to “node-manager-1” and executed the command line below. The result generated by this command can be viewed in Figure 19.

docker service scale servicev1=10
Figure 19. Scaling out the sample app

Simple, fast and straightforward. We have doubled both processing capacity and responsiveness of our application. The application continues to respond on any of the listed nodes as long as it is called on port 8080.

Another important aspect is that the docker service scale command is just a shortcut for docker service update --replicas. Regardless of the command used here, the effect will be the same, that is, you will scale the processing power of your applications.
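For reference, the scaling operation above could equally have been written in the long form:

docker service update --replicas 10 servicev1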

Updating the sample app

Of course, the more an application grows in a cluster environment, the more complex it is to maintain such a structure, right? In this context, updating application versions will certainly not be an easy task. The good news is that Docker Swarm has a simple and well-defined process for updating the tasks (running instances) of an application that is already in production. In this section, we will go through this process.

To demonstrate this ability, I made some changes to the sample app we are using (added a new main menu item and changed the footer information) and pushed this new version to our ACR as v2. You can see it in Figure 20. The process of generating the image and uploading it to an online registry was already covered in this post when we generated the first version of the sample application.

Figure 20. App’s new version in ACR

The next step then consists of updating our service, already up and running, with the new version of the application, right? For this, I used the command: docker service update --with-registry-auth --image swarmfabricio.azurecr.io/docker-demo-app-prod:v2 --update-parallelism 2 --update-delay 10s servicev1.

Let me explain precisely what this command does:

docker service update: self-explanatory. We are asking Docker Swarm to update an active service.

--with-registry-auth: Since we are using a decentralized pulling model, that is, images will be pulled by nodes that we are not logged in to directly, we need to tell Swarm which registry credentials it should use to do so. By using this parameter, we pass along the same access credentials used by the leader manager node.

--image swarmfabricio.azurecr.io/docker-demo-app-prod:v2: We are telling Swarm where to fetch the image of the new version of the application it will pull.

--update-parallelism 2: We are asking Swarm to perform the update process in pairs, i.e., to update two tasks simultaneously. When that pair is done, update two more, and so on.

--update-delay 10s: We are telling Swarm to add a 10-second window between one pair of task updates and the next, that is, when a given pair finishes updating, wait 10 seconds and then start updating the next two tasks.
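If you want to follow the rolling update while it happens, two commands are handy on the manager (a sketch, using the service name defined earlier):

docker service ps servicev1                 # shows old tasks shutting down and v2 tasks starting, node by node
docker service inspect --pretty servicev1   # shows the update status plus the configured parallelism and delay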

The result of the application update process can be viewed in Figure 21. In the browser, when calling any of the nodes running the new version of the application, you will be able to see the changes.

Figure 21. Application being updated

Done! Hopefully this article will help you out on your journey learning Docker Swarm.

Enjoy!

