The DTX Web team is in charge of multiple UIs and APIs. Giving our stakeholders (mainly Product Managers and QA) the ability to view a feature during development and provide constant feedback lets us iterate that feature to production more quickly. It also allows the developer to view their changes in an environment that closely resembles production. For that reason, testing environments are extremely valuable.
We’re going to discuss our legacy setup for testing environments and how, as part of migrating from one cloud provider to another, we finally achieved a much cleaner and more efficient solution using ArgoCD ApplicationSets.
Prior to migrating from Amazon AWS to Google Cloud, our CI/CD setup for testing environments took quite a long time to complete.
Once a PR was created, a slew of Jenkins jobs would eventually publish the code as a TAR file to Amazon S3. The developer would then be required to manually trigger a job in order for the artifact to be deployed on the testing environment of their choice. The testing environment itself was an AWS EC2 instance managed by Chef. Finally, the developer would share the URL of that testing environment with the relevant stakeholders.
This solution meant we always had a fixed number of AWS EC2 instances running (and consuming valuable resources), available for use by our developers.
A recurring issue with this setup was that a developer who wanted to use one of those testing environments had to verify with every team member (at one point, across company site locations) that the target environment wasn’t currently in use by someone else; otherwise they would be overriding that person’s work.
Luckily, during our migration to Google Cloud our team had a chance to deal with our tech debt and finally containerize all of our applications.
During this process we decided to also go ahead and refactor our CI/CD pipeline.
In our CI, we would run our unit tests, build the image, upload it to the artifact registry, update our Helm chart, and finally run helm install so the changes would take effect in the cluster.
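The stages above can be sketched as a GitLab CI pipeline. This is an illustrative sketch only; the stage names, test commands, and chart path are assumptions, not our actual pipeline definition:

```yaml
# Illustrative .gitlab-ci.yml sketch of the CI stages described above.
# Stage names, test commands, and the chart location are hypothetical.
stages:
  - test
  - build
  - deploy

unit-tests:
  stage: test
  script:
    - npm ci
    - npm test

build-image:
  stage: build
  script:
    # Tag the image with the commit SHA so each build is uniquely addressable.
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

deploy:
  stage: deploy
  script:
    # Point the chart at the freshly built image and apply it to the cluster.
    - helm upgrade --install console ./helm --set image.tag=$CI_COMMIT_SHA
```

The `$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHA` variables are standard GitLab CI predefined variables.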
In addition, as part of the cloud provider migration, our DevOps team introduced the engineering teams to a tool called ArgoCD.
ArgoCD is a GitOps tool for continuous delivery that allows developers to define and control the deployment of Kubernetes applications from within their Git workflow. ArgoCD is installed as part of the cluster, where its controller monitors a GitOps repo that holds the application YAML files. Once it detects a difference between the cluster state and the repo, it updates the cluster to match the GitOps repo’s YAML files.
It's important to emphasize that there are two separate components involved here. One is the ArgoCD Application which is managed by the GitOps repo and is simply an Application YAML file.
The other component is our source code for DT Console, which has its own Helm chart within its repository.
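To make the distinction concrete, a minimal ArgoCD Application manifest, as it might live in the GitOps repo, looks roughly like this (the branch, destination, and namespace values are illustrative assumptions):

```yaml
# Illustrative sketch of an ArgoCD Application manifest in the GitOps repo.
# It points at the application repo, which contains the Helm chart.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: console
  namespace: argocd
spec:
  project: default
  source:
    repoURL: [email protected]:digitalturbine/console.git  # app repo with its own Helm chart
    path: helm/
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: console
  syncPolicy:
    automated: {}
```

The Application YAML lives in the GitOps repo; the Helm chart it references lives alongside the DT Console source code.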
Integrating ArgoCD completed our pipeline, resulting in the following setup.
The only difference now is that the CD entails updating our Application YAML in the GitOps repository, while the ArgoCD controller in the cluster detects this change and syncs the cluster to match it.
Now that we’ve aligned on our current setup, we can get back to discussing testing environments.
The naive solution now would be to have a dedicated ArgoCD Application for each testing environment we required. Obviously, we’d still have the same problems we faced before the GCP migration: having to check with team members about which environment is free for use, as well as wasting resources.
In order to avoid the issues described above, we’d want a solution that would implement the following:
- Testing environments created on demand, one per merge request
- Automatic cleanup once the merge request is merged or closed
- No need to coordinate with team members over a shared pool of environments
As ArgoCD was already part of our CD pipeline, we could take advantage of one of the solutions it offers: the ApplicationSet. ArgoCD’s ApplicationSet controller is also installed as part of our cluster and automates the generation of ArgoCD Applications using pre-defined “generators”.
One of the generators available is the Pull Request generator. It uses the API of an SCMaaS provider (GitHub, GitLab, or Bitbucket Server) to automatically discover open pull requests within a repository. This fits perfectly with our goal of building a testing environment once a pull request is created.
Let's take a look at our ApplicationSet YAML in the GitOps repo:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: console
  namespace: argocd
spec:
  generators:
    - pullRequest:
        gitlab:
          project: <GITLAB_PROJECT_ID>
          api: https://gitlab.com/
          tokenRef:
            secretName: <SECRET>
            key: token
          pullRequestState: opened
        requeueAfterSeconds: 60
  template:
    metadata:
      name: 'deployment-{{"number"}}'
    spec:
      source:
        repoURL: [email protected]:digitalturbine/console.git
        path: helm/
        targetRevision: '{{"head_sha"}}'
        helm:
          parameters:
            - name: "image.tag"
              value: '{{"head_sha"}}'
            - name: "deployment.mode"
              value: "value"
            - name: "deployment.tag"
              value: '{{"number"}}'
      project: default
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
      destination:
        name: {{ .Values.spec.destination.name }}
        namespace: 'namespace-{{"number"}}'
```
In the above YAML definition, we link the ApplicationSet to our repo via the GitLab project ID and set the condition pullRequestState: opened. This tells ArgoCD that each time a new MR is opened in the linked GitLab repo, it should generate a new ArgoCD Application in our cluster for that MR.
For each open MR in our Console repo we have a corresponding ArgoCD Application.
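To illustrate, for a hypothetical MR number 42 whose head commit is abc123, the template above would render roughly the following Application (the names and values here are illustrative, not taken from a real MR):

```yaml
# Illustrative sketch: roughly what the ApplicationSet controller generates
# for a hypothetical MR !42 at head commit abc123.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: deployment-42            # from 'deployment-{{number}}'
  namespace: argocd
spec:
  project: default
  source:
    repoURL: [email protected]:digitalturbine/console.git
    path: helm/
    targetRevision: abc123       # the MR's head_sha
    helm:
      parameters:
        - name: image.tag
          value: abc123
        - name: deployment.tag
          value: "42"
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
  destination:
    name: in-cluster
    namespace: namespace-42      # one isolated namespace per MR
```

Because each MR gets its own Application and namespace, environments no longer collide.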
Finally we get a URL for our testing environment that matches our MR number:
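One way to derive such a URL is to template the MR number into the hostname in the chart’s ingress. Our actual ingress setup may differ; this is a hedged sketch, and the domain is a placeholder:

```yaml
# Hypothetical ingress template from the Helm chart: the deployment.tag
# value (the MR number) becomes part of the hostname, producing e.g.
# deployment-42.console.example.com for MR !42.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: console
spec:
  rules:
    - host: deployment-{{ .Values.deployment.tag }}.console.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: console
                port:
                  number: 80
```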
Let's take this one step further in complexity and create testing environments for our micro frontends.
Our main dashboard, the Console, consists of a parent application that hosts micro frontends by rendering them in iframes.
In order to deliver a testing environment for a specific new feature in the Console’s Reporting Dashboard, we’d need the parent app (the Console itself) deployed to a testing environment, rendering the specific Reporting Dashboard MR that contains the new feature. An ApplicationSet YAML can easily facilitate this, as seen below:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: reporting-mr-deployments
  namespace: argocd
spec:
  generators:
    - pullRequest:
        gitlab:
          project: <GITLAB_PROJECT_ID>
          api: https://gitlab.com/
          tokenRef:
            secretName: <SECRET>
            key: token
          pullRequestState: opened
        requeueAfterSeconds: 60
  template:
    metadata:
      name: 'reporting-deployment-{{"number"}}'
    spec:
      source:
        repoURL: [email protected]:digitalturbine/reporting-microfed.git
        path: helm/
        targetRevision: '{{"head_sha"}}'
        helm:
          parameters:
            - name: "reporting.image.tag"
              value: '{{"head_sha"}}'
            - name: "deployment.mode"
              value: "value"
            - name: "deployment.tag"
              value: '{{"number"}}'
      project: default
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
      destination:
        name: {{ .Values.spec.destination.name }}
        namespace: 'namespace-{{"number"}}'
```
Thanks to ArgoCD’s ApplicationSet, we were able to set up an automatic, on-demand, per-merge-request solution to replace our legacy testing environments.
Notably, we also have our bases covered in terms of cleanup: when an MR is no longer in the open state, the corresponding ArgoCD Application gets deleted, ensuring that we aren’t wasting resources. As for refresh behavior, new commits pushed to an open MR are automatically synced, since ArgoCD tracks the MR’s head commit SHA. Finally, a possible improvement would be adding a label-based condition so that a testing environment is created only when a developer actually requires it.
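The Pull Request generator supports filtering discovered MRs by label, which would implement that last improvement. A hedged sketch of the generator section, using a hypothetical label name:

```yaml
# Hedged sketch: only MRs carrying the (hypothetical) "deploy-preview"
# label would get a testing environment generated for them.
generators:
  - pullRequest:
      gitlab:
        project: <GITLAB_PROJECT_ID>
        api: https://gitlab.com/
        tokenRef:
          secretName: <SECRET>
          key: token
        labels:
          - deploy-preview
        pullRequestState: opened
      requeueAfterSeconds: 60
```

With this in place, removing the label from an MR would also tear its environment down, since the generated Application disappears once the MR no longer matches the filter.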