Tomasz Fidecki
Managing Director | Technology

Infrastructure pipelines: the core of Continuous Integration

Sep 27, 2023 · 5 min read

Modern software development, focused on building components faster and more predictably, requires seamless collaboration between the development and operations teams. When properly designed and executed, this collaboration leads to efficient, reliable software that is less exposed to vulnerabilities, while also opening the door to continuous deployments to various environments. DevOps, as a combination of engineering and best practices, builds competitive advantage and business value.

Keep things in tip-top shape

Nowadays, with strong community support, open source software brings high dynamics to the software development life cycle (SDLC). The code base evolves quickly, on a daily basis, which creates the burden of keeping things constantly tidied up. Hence, automating the separate steps of the software development process grows in importance. These steps, often referred to as jobs, form a logical sequence of events leading to the final result. This sequence is called a pipeline, and it serves as the transportation medium for all the jobs executed along the way. The jobs, their results, and the pipeline that acts as an umbrella over them define the Continuous Integration (CI) practice. The integration of code changes, along with validation and testing, may lead to deployment into a selected environment. This practice is called Continuous Deployment (CD) and in some cases may be directly interconnected with Continuous Integration, forming the CI/CD practice. It starts with the tiniest change in the code base and ends with tangible changes in the observable environment.

Pipelines may serve different purposes and vary in complexity. Organized into stages containing jobs, pipelines are a powerful concept for automating processes and deployments, even without human intervention. Organizations may benefit from well-designed automation in various ways, starting with accelerated development, better code quality, stability, and predictability, and ending with satisfied customers receiving high-quality products.
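
To make the concept concrete, the snippet below is a minimal, generic GitLab CI configuration sketching how jobs are grouped into stages that execute in sequence. The stage names, job names, and commands are placeholders rather than a real project's setup.

```yaml
# .gitlab-ci.yml - a minimal sketch of stages and jobs (placeholder commands)
stages:            # stages run in the order listed here
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compile the application..."

test-job:
  stage: test      # runs only after the build stage succeeds
  script:
    - echo "Run the test suite..."

deploy-job:
  stage: deploy    # runs last, after tests pass
  script:
    - echo "Deploy to the target environment..."
```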

Infrastructure as Code

With the rise of cloud computing, infrastructure became a first-class topic, and control over it became as important as over any other piece of the software development process. It turned out that infrastructure may be treated as code, with the same concepts applied to it. Once the management and provisioning of infrastructure are performed through code, manual handling is minimized or eliminated entirely. At the same time, this automatically brings the benefits of versioning and documentation while easing traceability. Infrastructure may be subjected to the same rules that apply to application source code: it can be statically analyzed, validated, and tested before being merged into the version control system. As in programming, where the OOP paradigm encourages reuse, infrastructure code allows templating, modularization, and, finally, automation. The latter may drastically improve software development efficiency by eliminating the need to manually provision and manage the loose components that make up the whole infrastructure, such as machine types, operating systems, attached storage, networking, or even functions as a service. Having infrastructure defined declaratively (describing the intended goal rather than separate steps), e.g. using Terraform, also gives a means to control the economic aspect of the venture: with every iteration of infrastructure modification, the costs may be estimated and compared with the previous state to gain a clear understanding of the upcoming changes.

Infrastructure pipelines

The pipeline, as a top-level component of the Continuous Integration concept, may be successfully utilized when infrastructure is being developed. As with regular application code, pipelines can be granular: they can build awareness whenever changes are about to be merged into a branch (so-called merge request pipelines), or apply changes to different deployment environments in a fully or semi-automated manner.

Infrastructure merge request pipeline

This type of pipeline is triggered whenever engineers plan to commit changes to any (or a selected) branch. In the case of infrastructure changes, it is good practice to implement stages that validate, format, and preview the changes planned to be introduced. When Terraform is used as the tool to create and manage infrastructure as code, the abovementioned pipeline stages use its command line interface and built-in commands. Once executed, the validate command validates all of the configuration files in a given directory. It does not access any remote services but checks the completeness, consistency, and overall correctness of the configuration. After validation, a good and recommended practice is to automatically rewrite infrastructure configuration files into a canonical form and style. This is easily achievable with the terraform fmt command, which transforms file contents to conform to the Terraform language style conventions. This way the writing style stays consistent and does not provide grounds for discussion during the code review phase.

The last stage of the pipeline previews the changes planned to be introduced to the infrastructure. Terraform's plan command reads the current state, compares it with the previous configuration, and plans the changes needed to achieve the desired state of the infrastructure. It is worth mentioning that GitLab presents information from the merge request in a convenient UI: a brief summary of the changes planned for the infrastructure along with a link to the full execution plan. A merge request pipeline for infrastructure may be expanded to various deployment environments, such as development, staging, and production, to stay consistent while still introducing changes specific to each environment.
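
A minimal sketch of such a merge request pipeline is shown below. The stage and job names are illustrative and the Terraform image tag is an assumption; note that fmt is run here with -check, so the job fails (rather than rewriting files) when the style deviates from the canonical form.

```yaml
# .gitlab-ci.yml - a sketch of an infrastructure merge request pipeline
# (job names and the image tag are illustrative assumptions)
stages:
  - validate
  - format
  - plan

default:
  image:
    name: hashicorp/terraform:1.5   # assumed version; pin your own
    entrypoint: [""]                # reset the image entrypoint for GitLab CI

workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"   # run for merge requests only

validate:
  stage: validate
  script:
    - terraform init -backend=false   # no state access is needed for validation
    - terraform validate

format:
  stage: format
  script:
    - terraform fmt -check -recursive   # fail when files deviate from the canonical style

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan   # preview of the changes planned for the infrastructure
```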

Infrastructure post-merge pipeline

This pipeline consists of several further stages that apply the intended changes to the deployment environments. Once the fundamental changes have been successfully checked in the merge request pipeline, the infrastructure may be updated in a fully or semi-automated manner. Full automation is recommended when introducing changes to non-production environments, because the risk of exposing unwanted changes to customers is usually low or non-existent. On the other hand, applying infrastructure changes to the production environment often requires full awareness, attention, and controlled release management. Thus, such pipelines are created with the possibility of manually releasing changes to the production environment.

Continuing with the Terraform tool, an exemplary pipeline may be composed of the following stages (a sketch follows the list):

  • Initialization: initializes a working directory containing the Terraform configuration files,
  • Plan: creates an execution plan with the changes that are to be introduced to the infrastructure. When building the pipeline, it is beneficial to pass the plan artifact on to the next stage (Apply). This avoids potential inconsistencies between subsequent calls of the same command, as well as the need to invoke the command again,
  • Apply: the actual introduction of changes to the infrastructure. It is good practice to invoke the 'Apply' stage manually to avoid introducing unwanted changes into the production environment. A common practice is to store the state remotely and lock it to prevent concurrent executions against the same state. This is supported by Terraform itself by defining a suitable backend. Remote state storage enables seamless collaboration between team members. In addition, when utilizing a cloud computing model with one of the well-known service providers, the state is often stored in object storage.
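
Below is a minimal sketch of such a post-merge pipeline for two environments: staging (applied automatically) and production (applied manually). Job names, the image tag, and the per-environment directory layout are illustrative assumptions. Initialization is repeated in each job because every job starts from a clean checkout, and the remote, lockable state backend is assumed to be defined in the Terraform code itself.

```yaml
# .gitlab-ci.yml - a sketch of a post-merge pipeline
# (job names, image tag, and directory layout are illustrative assumptions)
stages:
  - plan
  - apply

default:
  image:
    name: hashicorp/terraform:1.5   # assumed version; pin your own
    entrypoint: [""]

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH   # run only after changes are merged

plan-staging:
  stage: plan
  script:
    - cd environments/staging               # assumed per-environment directory
    - terraform init                        # backend in code -> remote, lockable state
    - terraform plan -out=staging.tfplan    # hand the exact plan over to Apply
  artifacts:
    paths:
      - environments/staging/staging.tfplan

apply-staging:
  stage: apply
  needs: [plan-staging]
  script:
    - cd environments/staging
    - terraform init
    - terraform apply -input=false staging.tfplan   # applies exactly the saved plan

plan-production:
  stage: plan
  script:
    - cd environments/production
    - terraform init
    - terraform plan -out=production.tfplan
  artifacts:
    paths:
      - environments/production/production.tfplan

apply-production:
  stage: apply
  needs: [plan-production]
  when: manual     # a human deliberately releases changes to production
  script:
    - cd environments/production
    - terraform init
    - terraform apply -input=false production.tfplan
```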

An exemplary pipeline automating two environments, staging and production, is shown in the figure (merge-request-pipeline-example.png). Here, the execution environment is a GitLab Runner, which controls and processes the CI/CD jobs and sends the results back.

Final thoughts

Proper design and organization of the software development process, built on the Continuous Integration concept and supported by automation, brings many benefits: not only increased efficiency and accelerated delivery but also improved collaboration. Overall, it leads to shorter software development cycles while maintaining quality and security.

Need support in automation? Let us do it in a DevOps way.
