Migrating from Landscaper to Helmfile

Crunch Tech
Jul 9, 2021



At the beginning of 2020, we in the Synergy team — the team in charge of cloud infrastructure, DevOps, SRE, and chaos engineering — had the goal of implementing Istio by the end of the year. Before we started to focus on Istio, we took a look at all the tooling that we had and found a few things that needed updating as soon as possible. The first tool we realised we needed to replace was Landscaper, because it had been deprecated.

First, some context: we've been using Kubernetes for a few years now and have accumulated a lot of customisations in the cluster, so performing this update meant going through the configuration of every chart we have.

We were looking for a tool that would be compatible with both Helm 2 and Helm 3, because the next step before Istio was upgrading to Helm 3. We started checking all the alternatives, and the one that impressed us the most was Helmfile.


Helmfile defines itself as “a declarative spec for deploying helm charts. It lets you keep a directory of chart value files and maintain changes in version control; apply CI/CD to configuration changes and periodically sync to avoid skew in environments.”

Just from reading the definition, we loved it, so we started developing our Helmfile schema — and we found our first problem: how were we going to get all the charts into the same `helmfile.yaml` in an automated way?


First, we thought about the schema of our Helmfile directory, and we decided to have the following schema:

├── commons
│   ├── base
│   │   ├── environments.yaml
│   │   ├── mydefaults.yaml
│   │   ├── README.md
│   │   └── repositories.yaml
│   ├── secrets
│   │   ├── defaultvpc
│   │   │   └── secrets.yaml
│   │   ├── preproduction
│   │   │   └── secrets.yaml
│   │   ├── production
│   │   │   └── secrets.yaml
│   │   └── test
│   │       └── secrets.yaml
│   ├── templates
│   │   └── template.yaml
│   └── values
│       ├── defaultvpc
│       │   └── values.yaml.gotmpl
│       ├── preproduction
│       │   └── values.yaml.gotmpl
│       ├── production
│       │   └── values.yaml.gotmpl
│       └── test
│           └── values.yaml.gotmpl
├── helmfile.yaml
├── README.md
└── releases
    ├── thanos
    │   └── values.yaml.gotmpl
    └── zipkin
        └── values.yaml.gotmpl

As you can see in the schema, we have our own helmfile.yaml that imports the different bits and bobs from other folders, depending on the parameter that we’re using. Our main helmfile.yaml is the following one:

{{ readFile "commons/templates/template.yaml" }}
releases:
# Releases are templated to here from the releases/[release].yaml files
# using template_helmfile_releases in common.sh when calling helmfile-apply

In the main helmfile.yaml, we're using a Helmfile template that we keep alongside our bases; the bases and the template look as you can see below:

bases:
- commons/base/environments.yaml
- commons/base/mydefaults.yaml
- commons/base/repositories.yaml

mytemplate: &mytemplate
  missingFileHandler: Warn
  values:
  - commons/values/{{.Environment.Values.alias}}/values.yaml.gotmpl
  - releases/{{.Release.Name}}/values.yaml.gotmpl

By creating this file, we're setting a common structure for all releases: this is how we declare the environment values files and each release's own values. We could have per-environment release values too, but we tried to keep the other environments as close to production as possible; that way, developers get a similar output in our staging environment and preproduction.
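To make the template concrete, here's a rough sketch in plain shell of the two value-file paths that `mytemplate` makes Helmfile look up. The alias `production` and the release `thanos` are just example values taken from the tree above:

```shell
#!/bin/sh
# Sketch of the lookups mytemplate performs; env_alias and release are
# example values, not hard-coded anywhere in our setup.
env_alias="production"
release="thanos"
env_values="commons/values/${env_alias}/values.yaml.gotmpl"
rel_values="releases/${release}/values.yaml.gotmpl"
echo "$env_values"
echo "$rel_values"
```

If either file is missing, `missingFileHandler: Warn` means Helmfile only warns instead of failing, which is what lets every release share the same template.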

In the bases field, we have environments, defaults, and repositories. After we finished this, we realised that the base repositories yaml should be a field of its own. At the time of writing, it still works the old way, but we have the task in our backlog.

Environments.yaml is just a list of the environments and their values. As you'll see, we have secrets too; that's because we have sops, which is not in use at the moment. That's also in our backlog.

- alias: production
- alias: test
- alias: test

The next file in the bases is the defaults yaml. In this file, we don’t have that much — basically, we’re saying to Helmfile, “wait until the chart is deployed to deploy the next one”:

helmDefaults:
  wait: true

In the repositories yaml, we have all the repos that we're using. We should mention that all the repos point to our Nexus, which we use as a proxy for the charts:

repositories:
- name: crunch-nexus-release
  url: https://jamon.aragon.co.uk/repository/crunch-helm-releases/
- name: crunch-nexus-dev
  url: https://jamon.aragon.co.uk/repository/crunch-helm-development/
- name: appscode
  url: https://jamon.aragon.co.uk/repository/appscode-helm-charts/
- name: banzaicloud-stable
  url: https://jamon.aragon.co.uk/repository/banzaicloud-stable
- name: bitnami
  url: https://jamon.aragon.co.uk/repository/bitnami
- name: crunch-static
  url: https://jamon.aragon.co.uk/repository/crunch-static-charts
- name: grafana
  url: https://jamon.aragon.co.uk/repository/grafana
- name: elastic
  url: https://jamon.aragon.co.uk/repository/elastic/
- name: jaegertracing
  url: https://jamon.aragon.co.uk/repository/jaegertracing/
- name: kiali
  url: https://jamon.aragon.co.uk/repository/kiali/
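Because every repository is supposed to go through the Nexus proxy, a small sanity check can catch typos such as a stray double dot in a URL. This is only an illustrative sketch — the file content is inlined for the example and the second URL is deliberately broken — not part of our actual tooling:

```shell
#!/bin/sh
# Illustrative check: count repository URLs that do not point at the
# Nexus host (the elastic entry below has a deliberate typo).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
- name: bitnami
  url: https://jamon.aragon.co.uk/repository/bitnami
- name: elastic
  url: https://jamon.aragon..co.uk/repository/elastic/
EOF
bad=$(grep 'url:' "$tmp" | grep -cv 'jamon\.aragon\.co\.uk')
echo "$bad repositories bypass the proxy"
rm -f "$tmp"
```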

There is something we haven't shown so far: the release.yaml for each release, which looks as follows:

- <<: *mytemplate
  name: logstash
  namespace: kube-system
  chart: elastic/logstash
  version: 7.10.2
  labels:
    stage: stage1

We're merging in the template that we created before, so that each release picks up the environment values and its own release values.

As for our final piece (and the one that we don't like that much, since we'd prefer to set up everything with Helmfile itself), we decided to create a bash script to template all the charts into the same file:

#!/bin/bash
set -e -o pipefail

for release in $(find "${helmfile_manifest_folder}/releases/" -name "*.yaml"); do
  # The release name is taken from the directory that holds the yaml file
  release_name=$(basename "$(dirname "$release")")
  if grep -E "$release_name\$" "${helmfile_manifest_folder}/helmfile.yaml" > /dev/null; then
    echo "$release already present in helmfile.yaml"
  else
    echo "Templating $release to helmfile.yaml"
    cat "$release" | sed 's/\(.*\)/ \1/' >> "${helmfile_manifest_folder}/helmfile.yaml"
    echo >> "${helmfile_manifest_folder}/helmfile.yaml"
  fi
done
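To see what that append-and-indent step produces, here's a self-contained sketch that runs the same idea against a throwaway directory. The zipkin release content and the two-space indent are made up for the example:

```shell
#!/bin/sh
# Self-contained demo of the templating step; the paths and the zipkin
# release content are illustrative, not our real manifests.
tmp=$(mktemp -d)
mkdir -p "$tmp/releases/zipkin"
printf 'releases:\n' > "$tmp/helmfile.yaml"
printf -- '- name: zipkin\n  chart: crunch-nexus-release/zipkin\n' \
  > "$tmp/releases/zipkin/release.yaml"
# Indent every line of the release file so it nests under releases:
sed 's/\(.*\)/  \1/' "$tmp/releases/zipkin/release.yaml" >> "$tmp/helmfile.yaml"
result=$(cat "$tmp/helmfile.yaml")
printf '%s\n' "$result"
rm -rf "$tmp"
```

Running it prints the merged releases block, with each release indented so it sits under the `releases:` key of the main helmfile.yaml.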

This is not the ultimate, perfect config for Helmfile, but that's what we did, how we did it, and how we're going to improve it. For now, we're keeping it like this, and we'll come back to it once we have Istio and Argo CD in place. I hope you enjoyed it, and see you at the next one.

Written by Jorge Andreu Calatayud — Senior Site Reliability Engineer