Today I’ll share some tips on how to speed up CI/CD tasks in GitLab pipelines. 

Let’s face it: we all want to build, test, scan, and deploy applications as quickly as possible. No matter how strongly we believe in asynchronous workflows, the speed of automated tasks remains one of the key performance indicators of the development process. For many teams, being able to roll out an application quickly at any given moment is a critical requirement. And fast pipelines also mean happy developers.

The starting situation: one of my clients asked for help during their migration to the cloud version of GitLab. The input: a GitLab CI pipeline that uses Pulumi to check, verify, and create an infrastructure stack in the AWS cloud (as we will see below, the actual content of the CI tasks matters little for the optimizations proposed here). In this case the tasks run on GitLab.com’s shared runners, but this article also covers more general use cases for self-managed runners. The desired outcome: minimize the total duration of all tasks.

Let’s get started!

Pre-Installing Dependencies in a Docker Image

Avoid downloading and installing required tools and dependencies inside CI tasks. Use a pre-built image tailored to a specific task and containing the minimum set of required dependencies and libraries. Many developers are tempted to start from a standard slim image and install the necessary tools on the fly during pipeline execution. Most often this means the same components get downloaded and installed over and over again. Downloading and installing dependencies almost always takes longer than pulling and starting a prepared image.

If you can’t find a ready-made image (in particular, in my Pulumi example, you could use the official Docker image), create your own! There is nothing difficult about it. Moreover, you can build it right there in GitLab CI and store it in the GitLab Container Registry. This approach has the added benefit of letting you pre-screen the images developers use, since GitLab also includes container scanning tools. More control, more security, less risk. Do not forget, however, the recommendations from the next section.

If you remember just one rule from this article, make it this one!
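
As a minimal sketch of what this looks like in practice (the image path below is an assumption for illustration; point it at your own pre-built image), a task that uses a tailored image needs no installation steps at all:

# .gitlab-ci.yml
# The image path is hypothetical; substitute your own image,
# e.g. one built and stored in the GitLab Container Registry.
preview:
  stage: test
  image: registry.gitlab.com/acme/ci-images/pulumi-aws:latest
  script:
    # no before_script full of apt-get/pip/curl: the tools are already baked in
    - pulumi preview --stack dev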

Docker Image Optimization

Another frequently encountered extreme is the creation of so-called mega-images: every tool that might conceivably be required to complete tasks (whether in practice or in theory) is baked into a single image. This, in turn, inflates the image to a gigantic size.

Such a solution undoubtedly simplifies writing pipelines (the author does not need to select and prepare an image), but it inevitably makes them slower to run. Avoid this if possible! The smaller your image, the faster a particular CI task will be initialized. Try to create individual images of minimal size, each tailored to a specific task. Below are a few methods for optimizing the size of your Docker image, illustrated in the sketch after the list; I also recommend that you carefully study the best practices for writing a Dockerfile:

  • Use slim base images (e.g. debian-slim)
  • Avoid installing convenience tools (vim, curl, etc.)
  • Disable installation of man pages and other documentation
  • Minimize the number of RUN layers (combine commands into a single layer)
  • Use multi-stage builds
  • If you use apt, disable the installation of optional dependencies with the --no-install-recommends flag
  • Don’t forget to clear the package cache (e.g. rm -rf /var/lib/apt/lists/* on Debian)
  • Tools like dive or DockerSlim can help with further optimization
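
Here is a short Dockerfile sketch applying several of these points at once (the base image, installed packages, and requirements.txt are assumptions for illustration):

# Dockerfile
# Multi-stage build: build tooling stays in the first stage...
FROM python:3.11-slim AS build
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# ...and only the resulting artifacts land in the final slim image.
FROM python:3.11-slim
COPY --from=build /install /usr/local
# One RUN layer, no recommended packages, apt cache cleared in the same layer
RUN apt-get update \
    && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*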

Use Docker cache when building images 

Speaking of building images: if one of the tasks in your pipeline builds a new image (for example, to implement the previous recommendation), don’t forget that caching can speed this process up considerably.

The point is that when docker build runs, the layers are built from scratch. The --cache-from flag, which specifies images to be used as a cache source, can significantly speed up the build. Keep in mind that --cache-from can be passed multiple times, letting you use several images as cache sources.

# .gitlab-ci.yml
build:
  stage: build
  script:
    # pull the previously built image to serve as a cache source;
    # "|| true" keeps the very first build from failing when no image exists yet
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest

Local caching of Docker images

The point is that GitLab, among other things, includes a Dependency Proxy, which can proxy and cache images from Docker Hub. GitLab can thus be used as a pull-through cache to minimize network latency when working with Docker Hub. Depending on your network and where your GitLab runners are located, this kind of caching can greatly speed up the start of CI tasks. In addition, using the Dependency Proxy helps you stay within Docker Hub’s rate limits.

To use this feature, you will need to: 

  1. Enable the feature at the group level (Settings > Packages & Registries > Dependency Proxy > Enable Proxy)
  2. Add the ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX} prefix to the image name in the .gitlab-ci.yml definition:
# .gitlab-ci.yml 
image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/alpine:latest 

If you are already using another container registry, I recommend checking whether it offers similar functionality.

Optimizing the Docker image pull policy

This recommendation is unfortunately not available to users of the public (shared) runners on GitLab.com. However, if you run your own GitLab runners, it can bring significant speed improvements to CI/CD tasks.

When setting up your own runners, you can specify the pull policy with the pull_policy parameter (this is done in the config.toml configuration file). This parameter determines how the runner obtains the required images from the registry.

Possible values: 

  • always (default): images are pulled from the remote registry every time
  • never: images are never pulled from the remote registry; they must be cached manually on the Docker host
  • if-not-present: the runner first checks the local cache and downloads the image from the remote registry only if it is not present locally

As you can easily guess, the if-not-present value can reduce the time spent downloading and unpacking layers by relying on the local cache, and therefore speed up task start-up and execution. However, be careful when using this configuration with frequently changing images: a fast-growing cache and the need to clean it up regularly can wipe out all the time gains.

# config.toml 
[runners.docker] 
  pull_policy = "if-not-present" 

CI/CD caching

The cache in GitLab CI is a powerful and flexible tool for optimizing pipelines. Probably the most common and popular example of its use is dependency caching (.npm, node_modules, .cache/pip, .go/pkg/mod, etc.).

Let’s assume we use pip to download the required Python libraries. Without caching, the libraries will be downloaded from scratch for each new pipeline and for each individual task. Caching solves this problem:

# .gitlab-ci.yml
flake8-install:
  variables:
    # pip's default cache lives outside the project directory;
    # redirect it so GitLab can include it in the cache archive
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  before_script:
    - pip install virtualenv
    - virtualenv venv
    - source venv/bin/activate
  cache:
    paths:
      - .cache/pip
      - venv/
  script:
    - pip install flake8
    - python setup.py test

An important feature is that the CI/CD cache can be either local (the files remain on the host where the runner runs) or distributed (the cache is saved as an archive in S3-compatible storage). This lets you optimize pipelines even if you have no dedicated runners or they are created dynamically, making it an effective solution both for your own runners and for the public (shared) runners on GitLab.com.
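
For your own runners, the distributed cache is configured in config.toml. A sketch, assuming a hypothetical S3 bucket (credentials may also come from IAM roles or environment variables):

# config.toml
[runners.cache]
  Type = "s3"
  Shared = true
  [runners.cache.s3]
    ServerAddress = "s3.amazonaws.com"
    BucketName = "my-runner-cache"   # hypothetical bucket name
    BucketLocation = "eu-west-1"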

CI/CD caching policy

Most users know and successfully use the caching mechanism described above. However, many forget about the additional optimization available through the right cache policy. With the default configuration, the cache is downloaded at the beginning of a CI task and uploaded back at the end. With a large cache and a slow network, this can become a problem. The behavior is configured with the cache:policy parameter.

  • pull-push (default behavior): the cache is downloaded at the start of the task and uploaded back at the end
  • pull: the cache is downloaded at the start of the task but not uploaded at the end
  • push: the cache is not downloaded at the start but is uploaded at the end of the task

This way we can shorten task duration by using the pull policy for tasks that only consume the cache:

# .gitlab-ci.yml
flake8-test: 
  cache: 
    paths: 
      - .cache/pip 
      - venv/ 
    policy: pull 
  script: 
    - flake8 . 
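
Conversely, a task that only generates the cache and does not depend on its previous contents can skip the download step entirely. A sketch, reusing the installation job from the previous section (only the cache section is shown):

# .gitlab-ci.yml
flake8-install:
  cache:
    paths:
      - .cache/pip
      - venv/
    # upload only: this job rebuilds the environment from scratch anyway
    policy: push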

Changing the compression level

All artifacts and caches required by tasks are transferred in compressed form. This means the archives must be decompressed at the start of a task and compressed again at the end. GitLab allows you to select the compression level for this process (fastest, fast, default, slow, slowest).

When choosing a compression level, be guided by your individual configuration: network speed, the CPU resources available to the runner, and the size and content of the archives. Most likely you will have to experiment and pick the balance that suits you best. The configuration is done through environment variables and can be applied to the entire pipeline or to individual tasks. Note that the FF_USE_FASTZIP feature flag must also be enabled.

# .gitlab-ci.yml
variables: 
  # Enable feature flag 
  FF_USE_FASTZIP: "true" 
  # These can be specified per job or per pipeline 
  ARTIFACT_COMPRESSION_LEVEL: "fast" 
  CACHE_COMPRESSION_LEVEL: "fast" 

git clone strategy 

Another step performed at the beginning of any task is cloning the git repository. Unfortunately, this often happens even for tasks that do not need the repository at all. For example, I often see this with manual tasks used to approve the pipeline’s transition to the next stage (for example, deployment to PROD). This can be solved by configuring the git clone strategy with the GIT_STRATEGY environment variable:

# .gitlab-ci.yml
approve:
  variables:
    # this task needs no repository contents at all
    GIT_STRATEGY: none
  stage: approve
  script:
    - echo "Approved!"
  allow_failure: false
  when: manual

Available GIT_STRATEGY values: 

  • none: the repository is not cloned at all
  • fetch: reuses the local working copy and fetches only new changes (usually faster, especially on dedicated runners)
  • clone: the repository is cloned from scratch every time, without reusing a local working copy

Other key features of GitLab CI

Do not forget about the other key features of GitLab CI that let you control how, when, and under what conditions tasks should run. I’ll go through them rather briefly (writing effective pipelines is too broad a topic for this article), with a combined sketch after the list:

  • rules : I recommend studying their capabilities in depth, as they not only help you run tasks only when they are needed but also let you modify task behavior
  • needs : so-called directed acyclic graphs let you build pipelines in which tasks do not wait for the previous stage to complete but start as soon as all their dependencies are satisfied (optimizing the overall execution time)
  • interruptible : tasks marked as interruptible can be automatically canceled when a new pipeline starts on the same branch, eliminating redundant work
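
A short combined sketch (job names, commands, and the branch condition are illustrative):

# .gitlab-ci.yml
unit-tests:
  stage: test
  interruptible: true        # cancel automatically if a newer pipeline starts on this branch
  script:
    - make test

deploy:
  stage: deploy
  needs: [unit-tests]        # start as soon as unit-tests finishes, without waiting for the whole stage
  rules:
    - if: $CI_COMMIT_BRANCH == "main"   # run only on the main branch
  script:
    - make deploy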

Use your own runners

Returning to the topic of public runners on GitLab.com, although they are suitable for most tasks, sometimes their standard configuration (3.75 GB RAM, 1vCPU, 25GB Storage) is not enough. Public runners are a simple, effective and cheap (and often free) solution. However, for more complex tasks that require deep optimization, it makes sense to connect your own resources (for example, with more memory or access to a dedicated network). Let me remind you that custom runners can be used not only with your own GitLab installation, but also with the GitLab.com cloud service.