Syncing Kubernetes and HashiCorp Consul
If you use HashiCorp's Consul for service discovery/DNS and also use (or plan to use) Kubernetes, then the recently announced integration between Consul and Kubernetes will come as welcome news!
HashiCorp released a Consul Helm chart for installing, configuring, and upgrading Consul on Kubernetes.
There are decisions to be made regarding the nature of the syncing, but the first step is always to clone the Consul-Helm project.
Before installing the Helm chart, let's review some of the essential settings in the chart's standard values file, "values.yaml".
By default, the chart installs everything: a Consul server cluster, client agents on all nodes, and the feature components.
If you already maintain a Consul cluster and are interested in joining the Kubernetes services to your existing cluster, then the "enabled" property in the "server" section should be set to "false":
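In the chart's values.yaml, this is a one-line change in the "server" section:

```yaml
# values.yaml (sketch): disable the chart's bundled Consul server cluster
server:
  enabled: false
```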
You will also need to enable the Consul client and tell it what the Consul-server address is, so it can join the cluster:
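A minimal sketch of the "client" section; the IP addresses are hypothetical placeholders for your existing Consul servers:

```yaml
# values.yaml (sketch): run client agents and join the existing cluster
client:
  enabled: true
  # Addresses of the existing Consul servers (placeholder IPs)
  join:
    - "10.0.0.10"
    - "10.0.0.11"
```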
Further, you’ll also need to specify the datacenter:
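The datacenter name lives in the "global" section; "dc1" below is just an example and must match your existing cluster's datacenter:

```yaml
# values.yaml (sketch): must match the datacenter of the existing cluster
global:
  datacenter: dc1
```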
Now you must choose whether to sync to Kubernetes or to Consul (or both!).
Sync to Consul
Sync to Consul means that Kubernetes services appear in the Consul catalog and become available via HTTP or Consul DNS. Later on, we'll describe how to configure that.
If you already maintain a Consul cluster, you probably want to sync to Consul.
Sync to Kubernetes
Sync to Kubernetes means that services in Consul are made available as first-class Kubernetes services, accessible through Kubernetes CoreDNS or any other native Kubernetes mechanism. If you are not planning on using this, you might want to set sync to Kubernetes to "false", since all the Consul services suddenly appearing as Kubernetes components can be confusing. If you enable it for every namespace, it can really get messy.
For example, we can configure syncing to Consul but not to Kubernetes.
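In the chart's values.yaml, this lives under the "syncCatalog" section; a sketch of that configuration:

```yaml
# values.yaml (sketch): one-way sync from Kubernetes into Consul
syncCatalog:
  enabled: true
  toConsul: true   # Kubernetes services appear in the Consul catalog
  toK8S: false     # Consul services are NOT created as Kubernetes services
```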
I recommend going over the entire “values.yaml” file and setting the relevant values.
Now for the installation itself:
Clone the repository from here and perform “helm install”.
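A sketch of those two steps, using Helm 2 syntax; the release name "consul" is an arbitrary choice:

```shell
git clone https://github.com/hashicorp/consul-helm.git
helm install ./consul-helm -f values.yaml --name consul
```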
Accessing the Consul HTTP API
Access to the Consul HTTP API goes through the Consul agent Pods we've created.
Every Node has a Consul agent, and there are a couple of ways to expose access to them.
One way is to create a NodePort service. A NodePort service opens a specific static port on every node, so any external traffic to that port is routed to the service.
The service, in turn, forwards traffic to Pods labeled "consul", i.e., the Consul agents.
Below is an example of such a Consul service; `kubectl apply` it to spin up the service.
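A minimal sketch of such a NodePort service; the service name, the `app: consul` selector label, and the static port 30850 are assumptions you should adapt to your deployment (the label must match the one the Helm chart puts on the client agent Pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: consul-nodeport
spec:
  type: NodePort
  selector:
    app: consul          # assumed label on the Consul agent Pods
  ports:
    - name: http
      port: 8500         # Consul HTTP API port
      targetPort: 8500
      nodePort: 30850    # hypothetical static port opened on every node
```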
Configure the Consul domain with the CoreDNS
CoreDNS is a DNS server that commonly serves as the Kubernetes cluster DNS. It can be configured via its "Corefile", which is defined in the "coredns" ConfigMap. If we want to use Consul DNS to call our external Consul services from a Kubernetes component (instead of, or alongside, the CoreDNS addresses for Kubernetes services), we'll need to configure Consul in the "Corefile" section.
In the CoreDNS .yaml file below, the “consul:53” section is what we are interested in.
This forces all matching DNS lookups to go through our consul-sync Kubernetes service, so we can use the Consul names. The key "consul:53" means that names ending with the string "consul" are handled by this block. Since Consul service addresses follow the template `<service-name>.service.<datacenter>.consul`, this is exactly what we want.
Note that we configured "proxy": any queries outside the Kubernetes cluster domain are forwarded to predefined resolvers, in our case the Consul service.
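The relevant Corefile stanza might look like this sketch; the proxy address is a placeholder for the cluster IP of your consul-sync NodePort service and port 8600 is Consul's standard DNS port:

```
consul:53 {
    errors
    cache 30
    proxy . <consul-service-cluster-ip>:8600
}
```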
The address of the Consul service can be invoked with this script:
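For example, assuming the NodePort service is named `consul-nodeport`, its cluster IP can be read with a jsonpath query (the service name is a placeholder):

```shell
kubectl get svc consul-nodeport -o jsonpath='{.spec.clusterIP}'
```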
To reconfigure CoreDNS after adding these changes to the ConfigMap, run:
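A sketch of applying the updated ConfigMap and restarting CoreDNS so it picks up the new Corefile; the file name is a placeholder, and the `k8s-app=kube-dns` label is the one standard Kubernetes deployments put on the CoreDNS Pods:

```shell
kubectl apply -f coredns-configmap.yaml
# Delete the CoreDNS Pods; the Deployment recreates them with the new config
kubectl delete pod -n kube-system -l k8s-app=kube-dns
```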
Configurations of the Consul-K8s sync components are implemented as Jenkins-Pipeline methods on my GitHub.