The development of web applications keeps evolving at a very accelerated pace, and, as you may know, one of the areas where this evolution is most visible nowadays is the front-end. Because we have fast internet connections, lovely development frameworks, and extremely efficient and secure protocols carrying the communication all the way from the end user’s computer to remote services, it is totally feasible for developers to create beautiful, robust, wholly decoupled, and, at the same time, extremely responsive applications by building solid static front-ends.

Vue.js, React.js, Angular, and Aurelia are the leading frameworks with which developers have been building modern front-ends to support “the new web apps”. As you may know, the beauty of these frameworks resides in the fact that, by using only a combination of JavaScript, HTML, and CSS, any front-end developer can build amazing and efficient user experiences that communicate appropriately with back-end services, using the client’s computing power to get there (please see Figure 1).

Figure 1. Simple Single-page Application diagram

Architectural changes

Architecturally speaking, there’s no doubt: this front-end-centric model brings some impactful changes, both in the way you develop the application itself and in the way you host it, whether in the cloud or not.

In terms of hosting, with traditional web apps we are used to seeing both front-end and back-end hosted on the application server. This usually happens because the front-end side of these applications often relies on server-side technologies/frameworks, like PHP, Ruby, ASP.NET Web Forms, Razor Web Pages, and so forth.

Let me be clear here: the traditional hosting model just mentioned works fine and shouldn’t be a problem if you have good code being produced on top of those platforms and the right tools in place to keep the underlying infrastructure up and running correctly. However, we can’t ignore the benefits brought by this new approach. Environment-wise, some of them are listed below.

  • Performance. While it is certainly possible for a front-end backed by server-side components to perform well, there are several moving parts you have to keep in mind to make that happen. One of the significant advantages of the front-end-centric approach is that you’re delivering static content only, so you can take advantage of services like a CDN to accelerate the app’s content delivery. Also, because the processing has been pushed to the client’s machine, you save CPU time on the application server.
  • Scalability. It is not that simple to scale out front-ends tied to server-side components. There are several reasons for that, but the most impactful one, from my perspective, is that these front-ends usually do much more than they are supposed to, which necessarily brings more complexity and more dependencies. The front-end-centric approach can make your life easier in that regard by auto-scaling the environment based on the growing number of requests and pushing the content faster to the end user’s machine.
  • Isolation. One of the premises of this new model is that the front-end should be isolated from your back-end, which is excellent. Among the benefits, I would highlight: testability of the components, focus on specialized skills, and an orientation toward componentization and reusability.
  • Cost-effectiveness. Because you don’t need to keep a specialized web server (IIS, Apache, Nginx, and such) up and running for the front-end piece, the overall architecture cost should be positively impacted. In Azure, for example, you could easily replace a Web App with the Static Websites feature of Storage Accounts. The reduction in that case would be around US$ 50.00/month, depending on the hosting plan tier you selected for the Web App itself.
  • DevOps-ready. I know, I know, you can have your pipelines (build and release) working effectively for whatever kind of platform/code you want; however, it is pretty simple to build up a pipeline for TypeScript, for example, which, at the end, pushes the generated static code to your target destination.

How would I do it in Azure?

There are several ways to accommodate this kind of architecture on Azure; however, from now on I’m going to describe the one I believe to be the ideal scenario for it.

When it comes to delivering static content to users over the internet, there is a well-accepted pattern, implemented by serious website infrastructures throughout the world, which consists of taking advantage of highly scalable CDN services (like Akamai, for example) linked to application servers behind them, caching the dynamic content and serving it from there afterwards.

The approach I’m going to explore from now on is about the same, but it relies on Azure Storage Accounts as the hosting point for the static files (Static Websites) rather than spinning up a web server specifically for that. The architecture we’re going to build together is shown in Figure 2.

Figure 2. High-level view about the solution to be implemented

The application itself is going to be a static website built on top of a framework called Jekyll, a handy tool built on top of Ruby that allows us to create rich static content by using a proper markup language mixed with either HTML or Markdown. The static website is then generated and served by calling the simple command below. Jekyll’s website has very lovely documentation if you would like to go more in-depth.

bundle exec jekyll serve

Also, we’re going to use Azure DevOps to create both build and release pipelines to dynamically push code into the production environment, as you may have noted by observing Figure 2.

Creating a Static Website host

From now on, I’m assuming that you already have an Azure subscription up and running and that you have already created a storage account. If you don’t have an Azure subscription, you can start for free by clicking here. If you already have the subscription but don’t have a storage account in place, you can follow the steps described here to get there.
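
If you want to script this prerequisite as well, the storage account can be created with the Azure CLI. The sketch below is a minimal example; the account name, resource group, and location are hypothetical placeholders.

# Creates a general-purpose v2 storage account (v2 is required for the static website feature)
az storage account create --name mystaticsite --resource-group my-rg --location eastus --sku Standard_LRS --kind StorageV2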

Once inside my storage account (which needs to be v2) in the Azure portal, I just go to the “Static website” option available on the main menu on the left side. Then, on the main blade (right side), I enable the static website feature. By doing this, I’m given two dynamic URLs, which are the public endpoints (primary and secondary) for my website’s files. The portal also notifies me that my files must sit in the “$web” container. Also, I have defined that my index document will be “index.html” and that the default error page is going to be “404.html”. Please see Figure 3, which illustrates this process.

Figure 3. Enabling the static website feature
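
By the way, the same feature can be enabled from the command line if you prefer. Here is a minimal Azure CLI sketch; the account name is a hypothetical placeholder.

# Turns on static website hosting and sets the index and error documents
az storage blob service-properties update --account-name mystaticsite --static-website --index-document index.html --404-document 404.html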

Done. The “web server” for your static files is now configured. What’s next? Let’s configure a CDN service to cache the content and deliver it faster to the website’s customers.

Configuring Azure CDN profile and endpoint

Azure Content Delivery Network (CDN) is a global CDN solution for delivering high-bandwidth content. With Azure CDN, you can cache static objects loaded from Azure Blob storage (our case here), a web application, or any publicly accessible web server, by using the closest point of presence (POP) server.

To streamline the process of delivering our static front-end to our customers and also to enable both SSL and custom domain name capabilities for my website, I’m going to enable a CDN endpoint (based on Verizon, but it could be whichever provider you want).

To see how to create a new CDN profile through the Azure portal and, at the same time, how to add a new endpoint and connect it to a storage blob, please click this link.

To see how to bind a custom domain to a given endpoint, please follow this link.

To see how to allow only HTTPS (SSL) connections to a given CDN endpoint through the Azure portal, please click this link.
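
If you would rather script these steps, the CDN profile and endpoint can also be created with the Azure CLI. The sketch below makes a few assumptions: the resource names are hypothetical, and the static website host (the “z13” piece) varies per storage account, so copy yours from the portal. The custom domain and HTTPS settings are covered by the links above.

# Creates a Verizon-backed CDN profile
az cdn profile create --name my-cdn-profile --resource-group my-rg --sku Standard_Verizon
# Creates an endpoint whose origin is the storage account's static website host
az cdn endpoint create --name my-endpoint --profile-name my-cdn-profile --resource-group my-rg --origin mystaticsite.z13.web.core.windows.net --origin-host-header mystaticsite.z13.web.core.windows.net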

After following the steps described earlier, I was able to get my CDN endpoint correctly connected to the storage blob where my static front-end will sit, and also to have both the custom domain and certificates working properly, as you can see in Figure 4.

Figure 4. CDN profile and endpoint properly configured

Building the static website locally

So, what I’m going to do next is generate the local version of my website before publishing it. I’m doing this to make sure that everything works properly before sending it out to production.

As mentioned earlier, I’m using Jekyll to build my website, which means that I had to go through a simple setup to get everything in place. If you are interested in seeing what is needed to work with Jekyll locally, please click this link. To run my website locally, I just navigated to its root directory and, from there, executed the command below.

jekyll serve

This started the local server running my application, as you can see in Figure 5.

Figure 5. Running the local server for Jekyll

Now, by entering “http://127.0.0.1:4000” in the address bar of my preferred browser, I can see my website come up (Figure 6).

Figure 6. Running the static website locally

Great! So now we have both the target environment (where my static front-end will sit) and the website itself up and running locally. What I’m going to do next is manually deploy my website and make sure that everything works appropriately.

Uploading website files and testing

To get there, I’m going to use a free tool called Azure Storage Explorer, which allows me to manage (upload, delete, configure, and so forth) every element within the storage account, including the containers and blobs sitting on top of it.

After downloading the tool, adding my Azure subscription to it, and selecting the “$web” container, I was able to upload my website’s files, as you can see in Figure 7.

Figure 7. Website files uploaded into “$web” container on storage account
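
If you prefer scripting over a GUI, the same upload can be done with the Azure CLI. A minimal sketch, assuming the generated site lives in the local “_site” directory and using a hypothetical account name:

# Uploads every file under _site into the $web container
az storage blob upload-batch --account-name mystaticsite --destination '$web' --source ./_site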

OK. The files are already sitting there. Now, as the next step, I want to make sure that the website works as well in the remote environment as it does on my local machine. To do that, because I configured a CDN endpoint along with a custom domain and certificate in an earlier step, I’m going to call my application using that specific configuration, as you can see in Figure 8.

Figure 8. Website properly running in production

That’s awesome. Everything seems to be working correctly, but there is one aspect still to be validated. I want to make sure that the CDN is delivering the static content on behalf of my storage account. To capture the answer I’m seeking, I’m going to use the developer tools that come with my preferred browser. There, I’m going to use the “Network” tab to see the origin of the files being delivered.

A sign that everything is working fine is seeing a given file served from the custom domain we configured as its origin. I’m glad to see that we have everything in place here, as you can see in Figure 9.

Figure 9. Files being delivered by the CDN service
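
The same check can be done from the command line. A quick sketch with curl, assuming a hypothetical custom domain (the exact header names vary per CDN provider, so treat the filter below as a starting point):

# Fetches only the response headers and filters for CDN-related ones
curl -sI https://www.mysite.com/ | grep -iE 'server|x-cache|via'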

Automating delivery with Azure DevOps

Just by going through the steps above, we were able to build a scalable environment for our static front-end. However, that’s not enough if we want to automate the way we deliver that code into production. To help us with that task (automation), I’ll take advantage of the Azure DevOps Pipelines feature.

I’m not going through the process of creating a new tenant or organization and getting a project up and running; from here, I’m assuming you already have that in place. If you have no clue how to get there, please refer to the links below to get started.

First things first, right? The very first step toward getting the process fully automated by Azure DevOps is to have our code sitting in the Git repository tied to my project. Once again, I’m assuming that the Git basics (commit, push, pull, fetch, and so forth) are fresh in your mind, as I’m not going through this process either. If they are not, please click this link.

Figure 10 shows my website’s static content already residing in the project’s repository. As you can see, the code is under the “master” branch.

Figure 10. Website’s code sitting on project’s repository

Now, because I’m using Jekyll, I do need a build process, since Jekyll acts as a kind of compiler from Markdown to native HTML. The build pipeline sets up the hosted build agent in Azure DevOps to go through the compilation process required by Jekyll and the peculiarities of my website. Figure 11 presents the pipeline I just created for it.

Figure 11. Build pipeline to support Jekyll

A brief explanation of each step follows, and a shell-level sketch of the whole sequence appears after the list.

1. Use Ruby >= 2.4: Jekyll is written in Ruby and requires version 2.4 or above to work properly. So I’m adding a built-in task provided by Azure DevOps which automatically installs and configures a matching version of Ruby on the build agent. Please see Figure 12 for the details of this configuration.

Figure 12. Ruby’s task configuration

2. Configuring Gems: RubyGems is the package manager (like NPM) for Ruby. Because I’m going to use some Jekyll plugins later on, and they are distributed as gems, we need it installed and configured on the build agent. The configuration to have it in place can be seen in Figure 13.

Figure 13. Adding Ruby Gems into the agent build server

3. Install Jekyll and Bundler: Step 3 installs Jekyll and Bundler. These two elements are effectively responsible for translating the application’s code (Markdown) into native HTML. Figure 14 shows how to get them installed on the agent server.

Figure 14. Installing Jekyll and Bundler

4. Installing jekyll-paginate: Because my website uses a plugin called “jekyll-paginate” provided by Jekyll, I need to install it explicitly. Figure 15 presents the configuration to get it appropriately installed.

Figure 15. Using Ruby Gems to install the jekyll-paginate plugin

5. Installing jekyll-archives: My application uses a second plugin called “jekyll-archives”, so I need to install and configure it through Gems as well. Figure 16 shows this configuration.

Figure 16. Using Gems to install and configure “jekyll-archives”

6. Build: Only after all the dependencies are resolved can we build the application. We do this by executing the command “jekyll build”, as you can see in Figure 17.

Figure 17. Building the source code with Jekyll

7. Copy “_site” files to $(build.artifactstagingdirectory): The build process generates native HTML files, right? I mean, the product of this build is the HTML version of my website. By design, Jekyll puts the result of the build into a directory called “_site” at the root of the source code project. So, what I’m doing in step 7 is copying the content of that “_site” directory into the default place on the agent server from where I can retrieve it later, i.e., “$(build.artifactstagingdirectory)”. Please see the configuration in Figure 18.

Figure 18. Copying the generated website into build.artifactstagingdirectory

8. Publish Artifact: _site: Step 8 publishes the application’s production code. This way, the “drop” just generated is visible to other features within Azure DevOps, like release pipelines, for example. Please see Figure 19.

Figure 19. Publishing the drop generated by the building process
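
For reference, steps 3 through 7 boil down to the shell commands below. This is a sketch of what the tasks execute on the agent, not the exact task configuration shown in the figures; steps 1 and 8 are built-in Azure DevOps tasks with no direct shell equivalent.

# Assumes Ruby >= 2.4 is already on the agent's PATH (step 1)
gem install jekyll bundler                          # step 3: compiler and dependency manager
gem install jekyll-paginate                         # step 4: pagination plugin
gem install jekyll-archives                         # step 5: archives plugin
jekyll build                                        # step 6: compiles the site into ./_site
cp -R _site/. "$BUILD_ARTIFACTSTAGINGDIRECTORY"     # step 7: stage the generated files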

Now, to verify that the build process works correctly, I’m going to go ahead and start a new build. I do this simply by clicking the blue “Queue” button, available at the top right of the build pipeline blade. Azure DevOps then shows me a window asking for additional configuration. I’ll select “Hosted Ubuntu 1604” as the agent pool and then click “Queue”.

Figures 20, 21, and 22 present the build process being performed and then completing successfully.

Figure 20. Building process started
Figure 21. Watching the building process happen
Figure 22. Build completed successfully

Great! Our build is passing; the website is being (at this point) manually generated by the Azure DevOps build pipeline. So now we’re ready to deliver it into production (if you recall, we have a storage account in Azure hosting the website’s files). All we need to do is put some scripts in place in a release pipeline within Azure DevOps to copy our files from the agent server into Azure.

Figure 23 shows the release pipeline with those scripts already in place.

Figure 23. Release pipeline to update storage account in Azure

Pretty simple and straightforward. I have one script based on the Azure CLI that deletes the old files at the destination (a small window of downtime might occur), and then a second one that populates the website’s container with the new files. You can find the details about each of those steps below, followed by a command-line sketch.

1. Deleting old files: an Azure CLI script that removes the old files sitting in the $web container of the storage account. Please observe Figure 24 to see how I’m connecting to my Azure subscription and how I’m executing the script itself.

Figure 24. Deleting old files on the destination

2. Uploading new files: an Azure CLI script that uploads the files from the artifact generated by the build process into the $web container of my storage account. Please observe Figure 25 to see how I’m connecting to my Azure subscription and how I’m executing the script itself.

Figure 25. Uploading new files into the target environment
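
For reference, the two scripts boil down to the Azure CLI commands below. This is a sketch: the account name is a hypothetical placeholder, and the artifact path depends on where your release downloads the “_site” artifact, so adjust it accordingly.

# Step 1: remove the previous deployment from the $web container
az storage blob delete-batch --account-name mystaticsite --source '$web'
# Step 2: upload the freshly built site from the downloaded artifact (adjust the path to your artifact location)
az storage blob upload-batch --account-name mystaticsite --destination '$web' --source "$SYSTEM_DEFAULTWORKINGDIRECTORY/_site"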

Done. Now we can try it out and see whether the publishing process works. To get there, I’m going to create a new release manually. I did that by just clicking the blue “Create a new release” button available at the upper right of the Releases blade and selecting “Stage 1” as the trigger for my manual release.

The result can be seen in Figures 26, 27, and 28.

Figure 26. Release and stages view
Figure 27. Release process being performed
Figure 28. Release process completed successfully

That’s amazing, isn’t it? My target environment is supposed to be updated, and my website should be up and running with the latest version of my static front-end files. To make sure it happened, I just went to my browser, typed my website’s address, and was able to see the application running correctly, just as you can see in Figure 29.

Figure 29. Static website up and running after the build / release process

A next step toward full automation would be to configure Azure DevOps to automatically start a new build based upon the acceptance of a pull request into the master branch, for example. Then, if the build succeeds, a new release could be automatically triggered to update the environment itself, either with or without human intervention.

Hope it helps! See you.

