Automate reverse proxy configuration with consul-template
Load balancers play an important role in distributing traffic evenly across backend services. Keeping a load balancer's configuration up to date is often a manual process, which can slow down the delivery time of a solution. In a modern environment, where multiple replicas of a service exist and services scale or change over time, keeping a load balancer configuration accurate is a risk-prone and time-consuming activity.
Consul, with its built-in load balancing features, helps applications automatically adapt to changes in services. In its service discovery configuration, Consul automatically provides round-robin traffic distribution across multiple instances of the same service. When used in its service mesh configuration, Consul can apply different load-balancing profiles to fine-tune how individual instances are exposed to traffic.
In scenarios where a load balancer is already present, Consul provides external tools, consul-template and Consul-Terraform-Sync, that integrate with the load balancer and automate its configuration. This eliminates manual processes, greatly reduces the risk of errors and misconfiguration, and shortens the reaction time to changes in your datacenter's service landscape.
In this tutorial you will use consul-template to automate the configuration for an NGINX server used as a reverse proxy. You will first deploy a basic configuration that will update the NGINX configuration file to automatically react to changes in the Consul catalog, and then change the configuration to use Consul KV store to dynamically change load balancing across multiple instances of a service.
Tutorial scenario
This tutorial uses HashiCups, a demo coffee shop application made up of several microservices running on VMs.
At the beginning of the tutorial, you have a fully deployed Consul datacenter with an instance of the HashiCups application, composed of four services (NGINX, Frontend, API, and Database) deployed and registered in the Consul catalog. Two instances of the Frontend service are registered in the Consul datacenter.
Prerequisites
This tutorial assumes you are already familiar with Consul service discovery and its core functionalities. If you are new to Consul, refer to the Consul Getting Started tutorials collection.
If you want to follow along with this tutorial and you do not already have the required infrastructure in place, the following steps guide you through the process to deploy a demo application and a configured Consul datacenter on AWS automatically using Terraform.
To create a Consul deployment on AWS using Terraform, you need the following:
Clone GitHub repository
Clone the GitHub repository containing the configuration files and resources.
$ git clone https://github.com/hashicorp-education/learn-consul-template-load-balancing-vms
Enter the directory that contains the configuration files for this tutorial.
$ cd learn-consul-template-load-balancing-vms/self-managed/infrastructure/aws
Create infrastructure
With these Terraform configuration files, you are ready to deploy your infrastructure.
Issue the `terraform init` command from your working directory to download the necessary providers and initialize the backend.
$ terraform init
Initializing the backend...
Initializing provider plugins...
...
Terraform has been successfully initialized!
...
Then, deploy the resources. Confirm the run by entering `yes`.
$ terraform apply -var-file=../../ops/conf/automate_configuration_with_consul_template.tfvars
## ...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
## ...
Apply complete! Resources: 49 added, 0 changed, 0 destroyed.
Tip
The Terraform deployment could take up to 15 minutes to complete. Feel free to explore the next sections of this tutorial while waiting for the environment to complete initialization.
After the deployment is complete, Terraform returns a list of outputs you can use to interact with the newly created environment.
Outputs:
connection_string = "ssh -i certs/id_rsa.pem admin@`terraform output -raw ip_bastion`"
ip_bastion = "<redacted-output>"
remote_ops = "export BASTION_HOST=<redacted-output>"
retry_join = "provider=aws tag_key=ConsulJoinTag tag_value=auto-join-hcoc"
ui_consul = "https://<redacted-output>:8443"
ui_grafana = "http://<redacted-output>:3000/d/hashicups/hashicups"
ui_hashicups = "http://<redacted-output>"
The Terraform outputs provide useful information, including the bastion host IP address. The following is a brief description of the Terraform outputs:
- The `ip_bastion` output provides the IP address of the bastion host you use to run the rest of the commands in this tutorial.
- The `remote_ops` output lists the bastion host IP, which you can use to access the bastion host.
- The `retry_join` output lists Consul's `retry_join` configuration parameter. The next tutorial uses this information to generate Consul server and client configuration.
- The `ui_consul` output lists the Consul UI address. The Consul UI is not currently running. You will use the Consul UI in a later tutorial to verify that Consul started correctly.
- The `ui_grafana` output lists the Grafana UI address. You will use this address in a future tutorial.
- The `ui_hashicups` output lists the HashiCups UI address. You can open this address in a web browser to verify the HashiCups demo application is running properly.
List AWS instances
The scenario deploys seven virtual machines.
$ terraform state list
## ...
aws_instance.api[0]
aws_instance.bastion
aws_instance.consul_server[0]
aws_instance.database[0]
aws_instance.frontend[0]
aws_instance.frontend[1]
aws_instance.nginx[0]
## ...
After deployment, six virtual machines, `consul_server[0]`, `database[0]`, `frontend[0]`, `frontend[1]`, `api[0]`, and `nginx[0]`, are configured in a Consul datacenter with service discovery.
The remaining node, `bastion`, is used to perform the tutorial steps.
Login to the bastion host VM
Login to the bastion host using `ssh`.
$ ssh -i certs/id_rsa.pem admin@`terraform output -raw ip_bastion`
##...
admin@bastion:~$
Verify consul-template is installed
The tutorial scenario automatically installs consul-template on all the nodes of the datacenter. If you want to install consul-template in your own environment, refer to the consul-template installation instructions.
$ consul-template --version
consul-template v0.36.0 (8fdab02)
Configure CLI to interact with Consul
Configure your bastion host to communicate with your Consul environment using the two dynamically generated environment variable files.
$ source "/home/admin/assets/scenario/env-scenario.env" && \
source "/home/admin/assets/scenario/env-consul.env"
That will produce no output.
After loading the needed variables, verify you can connect to your Consul datacenter.
$ consul members
Node Address Status Type Build Protocol DC Partition Segment
consul-server-0 172.27.0.3:8301 alive server 1.17.0 2 dc1 default <all>
hashicups-api-0 172.27.0.2:8301 alive client 1.17.0 2 dc1 default <default>
hashicups-db-0 172.27.0.5:8301 alive client 1.17.0 2 dc1 default <default>
hashicups-frontend-0 172.27.0.4:8301 alive client 1.17.0 2 dc1 default <default>
hashicups-frontend-1 172.27.0.6:8301 alive client 1.17.0 2 dc1 default <default>
hashicups-nginx-0 172.27.0.7:8301 alive client 1.17.0 2 dc1 default <default>
Create ACL token for consul-template
Consul-template requires the ability to query the Consul catalog to retrieve data about the hashicups-frontend and hashicups-api services, as well as data about the nodes running those services. For this reason, you need a token providing `read` permissions on both hashicups-api and hashicups-frontend nodes and services.
First, create the proper configuration file for the policy.
$ tee /home/admin/assets/scenario/conf/acl-policy-consul-template.hcl > /dev/null << EOF
# -------------------------------+
# acl-policy-consul-template.hcl |
# -------------------------------+
service "hashicups-frontend" {
policy = "read"
}
service "hashicups-api" {
policy = "read"
}
node_prefix "hashicups-frontend" {
policy = "read"
}
node_prefix "hashicups-api" {
policy = "read"
}
EOF
Then, create the policy using the generated file.
$ consul acl policy create \
-name "consul-template-policy" \
-description "Policy for consul-template to generate configuration for hashicups-nginx" \
-rules @/home/admin/assets/scenario/conf/acl-policy-consul-template.hcl
That will produce an output similar to the following.
ID: 0c4b74d6-2a24-585c-f484-8486775e3523
Name: consul-template-policy
Description: Policy for consul-template to generate configuration for hashicups-nginx
Datacenters:
Rules:
# -------------------------------+
# acl-policy-consul-template.hcl |
# -------------------------------+
service "hashicups-frontend" {
policy = "read"
}
service "hashicups-api" {
policy = "read"
}
node_prefix "hashicups-frontend" {
policy = "read"
}
node_prefix "hashicups-api" {
policy = "read"
}
Finally, generate the token from the policy.
$ consul acl token create \
-description="Consul-template token" \
--format json \
-policy-name="consul-template-policy" | tee /home/admin/assets/scenario/conf/secrets/acl-token-consul-template.json
That will produce an output similar to the following.
{
"CreateIndex": 71,
"ModifyIndex": 71,
"AccessorID": "e3aae194-2eaa-ae45-d3cc-b1901071dbd1",
"SecretID": "51d4877b-a1f7-cbf0-b9af-60815ee622fc",
"Description": "Consul-template token",
"Policies": [
{
"ID": "0c4b74d6-2a24-585c-f484-8486775e3523",
"Name": "consul-template-policy"
}
],
"Local": false,
"CreateTime": "2024-02-07T15:45:26.88141534Z",
"Hash": "D5ZO1f7Csw/9ww07yT/e93BH5umXKfocXlTvhNnov3U="
}
Set your newly generated token as the `CONSUL_TEMPLATE_TOKEN` environment variable. You will use this variable later in the tutorial to generate the consul-template configuration file.
$ export CONSUL_TEMPLATE_TOKEN=`cat /home/admin/assets/scenario/conf/secrets/acl-token-consul-template.json | jq -r ".SecretID"`
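As a hypothetical sanity check (not part of the original scenario), you can confirm that the extracted SecretID looks like a UUID before wiring it into the consul-template configuration. The sample JSON below is illustrative; in the tutorial the value comes from the token file generated above.

```shell
# Illustrative only: extract SecretID from a sample token JSON and verify
# it matches the UUID format Consul uses for token secrets.
sample='{"AccessorID":"e3aae194-2eaa-ae45-d3cc-b1901071dbd1","SecretID":"51d4877b-a1f7-cbf0-b9af-60815ee622fc"}'

# The tutorial uses jq; sed works as well if jq is unavailable.
token=$(echo "$sample" | sed -n 's/.*"SecretID":"\([^"]*\)".*/\1/p')

if echo "$token" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'; then
  echo "token format OK"
else
  echo "unexpected token format: $token" >&2
fi
```

If the check fails, inspect the token file to confirm the ACL token was created successfully.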
Configure consul-template
Consul-template requires:
- a configuration file, to configure the consul-template process
- a template file, used as the model to generate the application configuration
Configuration file
First, generate the consul-template configuration file. To do so, you need:
- a `consul` section, containing the address and permissions to connect to your Consul datacenter
- a `template` section, containing instructions for the template: a source file path, a destination file to save the generated configuration, and a command to execute every time the configuration file is generated.
The example configuration also contains some extra parameters to define logging and signal handling.
$ tee /home/admin/assets/scenario/conf/hashicups-nginx-0/consul_template.hcl > /dev/null << EOF
# This denotes the start of the configuration section for Consul. All values
# contained in this section pertain to Consul.
consul {
# This is the address of the Consul agent to use for the connection.
# The protocol (http(s)) portion of the address is required.
address = "http://localhost:8500"
# This value can also be specified via the environment variable CONSUL_HTTP_TOKEN.
token = "${CONSUL_TEMPLATE_TOKEN}"
}
# This is the log level. This is also available as a command line flag.
# Valid options include (in order of verbosity): trace, debug, info, warn, err
log_level = "info"
# This block defines the configuration for logging to file
log_file {
# If a path is specified, the feature is enabled
path = "/tmp/consul-template.log"
}
# This is the path to store a PID file which will contain the process ID of the
# Consul Template process. This is useful if you plan to send custom signals
# to the process.
pid_file = "/tmp/consul-template.pid"
# This is the signal to listen for to trigger a reload event. The default value
# is shown below. Setting this value to the empty string will cause
# *consul-template* to not listen for any reload signals.
reload_signal = "SIGHUP"
# This is the signal to listen for to trigger a graceful stop. The default value
# is shown below. Setting this value to the empty string will cause
# *consul-template* to not listen for any graceful stop signals.
kill_signal = "SIGINT"
# This block defines the configuration for a template. Unlike other blocks,
# this block may be specified multiple times to configure multiple templates.
template {
# This is the source file on disk to use as the input template. This is often
# called the "consul-template template".
source = "nginx-upstreams.tpl"
# This is the destination path on disk where the source template will render.
# If the parent directories do not exist, *consul-template* will attempt to
# create them, unless create_dest_dirs is false.
destination = "/home/admin/def_upstreams.conf"
# This is the optional command to run when the template is rendered.
# The command will only run if the resulting template changes.
command = "/home/admin/start_service.sh reload"
}
EOF
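Note that the here-doc delimiter `EOF` above is unquoted, so the shell expands `${CONSUL_TEMPLATE_TOKEN}` before writing the file; the rendered file therefore contains the literal token value. A minimal illustration of the difference (`DEMO_TOKEN` is a made-up variable used only for this demonstration):

```shell
# Unquoted delimiter: the shell expands variables inside the here-doc.
export DEMO_TOKEN="abc123"
cat << EOF
token = "${DEMO_TOKEN}"
EOF

# Quoted delimiter: the here-doc body is written verbatim, with the
# placeholder left unexpanded.
cat << 'EOF'
token = "${DEMO_TOKEN}"
EOF
```

The first block prints the expanded value; the second prints the literal `${DEMO_TOKEN}` placeholder. This is why the copied file you inspect later shows the actual token string.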
Copy the configuration file to the `hashicups-nginx-0` node.
$ scp -r -i /home/admin/certs/id_rsa /home/admin/assets/scenario/conf/hashicups-nginx-0/consul_template.hcl admin@hashicups-nginx-0:/home/admin/consul_template.hcl
consul_template.hcl
The remaining part of the configuration will be performed directly on the hashicups-nginx-0 node.
Login to `hashicups-nginx-0` from the bastion host.
$ ssh -i certs/id_rsa hashicups-nginx-0
##...
admin@hashicups-nginx-0:~
Verify that the consul-template configuration file was correctly copied to the node.
$ cat /home/admin/consul_template.hcl
# This denotes the start of the configuration section for Consul. All values
# contained in this section pertain to Consul.
consul {
# This is the address of the Consul agent to use for the connection.
# The protocol (http(s)) portion of the address is required.
address = "http://localhost:8500"
# This value can also be specified via the environment variable CONSUL_HTTP_TOKEN.
token = "51d4877b-a1f7-cbf0-b9af-60815ee622fc"
}
# This is the log level. This is also available as a command line flag.
# Valid options include (in order of verbosity): trace, debug, info, warn, err
log_level = "info"
# This block defines the configuration for logging to file
log_file {
# If a path is specified, the feature is enabled
path = "/tmp/consul-template.log"
}
# This is the path to store a PID file which will contain the process ID of the
# Consul Template process. This is useful if you plan to send custom signals
# to the process.
pid_file = "/tmp/consul-template.pid"
# This is the signal to listen for to trigger a reload event. The default value
# is shown below. Setting this value to the empty string will cause
# *consul-template* to not listen for any reload signals.
reload_signal = "SIGHUP"
# This is the signal to listen for to trigger a graceful stop. The default value
# is shown below. Setting this value to the empty string will cause
# *consul-template* to not listen for any graceful stop signals.
kill_signal = "SIGINT"
# This block defines the configuration for a template. Unlike other blocks,
# this block may be specified multiple times to configure multiple templates.
template {
# This is the source file on disk to use as the input template. This is often
# called the "consul-template template".
source = "nginx-upstreams.tpl"
# This is the destination path on disk where the source template will render.
# If the parent directories do not exist, *consul-template* will attempt to
# create them, unless create_dest_dirs is false.
destination = "/etc/nginx/conf.d/def_upstreams.conf"
# This is the optional command to run when the template is rendered.
# The command will only run if the resulting template changes.
command = "/home/admin/start_service.sh reload"
}
Template file
The template file is used as a model to render the configuration file for your service. For this reason, compose it using the desired output file as a reference.
In this scenario, consul-template will generate the upstream definition for the NGINX process. Check the original configuration file.
$ cat /home/admin/def_upstreams.conf
upstream frontend_upstream {
server hashicups-frontend-0.node.dc1.consul:3000;
}
upstream api_upstream {
server hashicups-api-0.node.dc1.consul:8081;
}
NGINX is configured to redirect requests to hashicups-frontend on port 3000 and hashicups-api on port 8081. The port values are hardcoded inside the configuration file. This can be an issue in scenarios where the port numbers are not known or fixed.
Note
If you use Consul service mesh, this issue is not present. In Consul service mesh, you define the ports for the upstreams in the service definition file and Consul makes them accessible on the port you defined on the loopback interface, no matter the actual port the services are using.

NGINX is configured to use the Consul FQDN for the nodes where the services hashicups-frontend and hashicups-api are running. This binds the configuration to a single instance of the service, no matter how many service instances are registered in the Consul catalog. Also, using the Consul FQDN requires Consul to act as DNS for the node where NGINX is running; when this is not an option, IP addresses are usually required.
Note
In Consul service discovery environments, you can use the service FQDN for service resolution. Using the service FQDN, `_service-name_.service._datacenter_._domain_`, Consul will automatically load balance traffic across the healthy available instances of the service. The only load balancing policy in this case is round robin.

List the different tags for services registered in Consul and verify that `hashicups-frontend` has two tags, one for each instance.
$ consul catalog services -tags
consul
hashicups-api inst_0
hashicups-db inst_0
hashicups-frontend inst_0,inst_1
hashicups-nginx inst_0
The hashicups-frontend application has two tags, each representing one application instance. Currently, NGINX is configured to send traffic to the address of only one of these instances.
In order for NGINX to send traffic to all available service instances, create a template file that dynamically generates the related NGINX configuration.
$ tee /home/admin/nginx-upstreams.tpl > /dev/null << EOF
upstream frontend_upstream {
{{ range service "hashicups-frontend" -}}
server {{ .Address }}:{{ .Port }};
{{ end }}
}
upstream api_upstream {
{{ range service "hashicups-api" -}}
server {{ .Address }}:{{ .Port }};
{{ end }}
}
EOF
The template iterates over the instances of hashicups-frontend and hashicups-api using the `range` function, and then generates the configuration using the `Address` and `Port` values returned from the Consul catalog. Hardcoding the port number or the instance addresses is no longer necessary.
For the full list of consul-template functions and parameters, refer to the Templating Language documentation.
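If you ever need to restrict an upstream to a subset of instances, the templating language also supports filtering services by tag with the `tag.service-name` syntax. A hedged sketch, using the `inst_1` tag from the catalog output above (`frontend_canary` is a hypothetical upstream name, not part of the scenario):

```
upstream frontend_canary {
  {{ range service "inst_1.hashicups-frontend" -}}
  server {{ .Address }}:{{ .Port }};
  {{ end }}
}
```

A pattern like this is one way to route a dedicated upstream at a canary instance while the default upstream continues to cover all instances.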
Test consul-template configuration
Templates for consul-template can be tricky to write correctly on the first attempt. The `-dry` option provided by consul-template prints the rendered template on `stdout` without modifying the destination files. This makes testing your templates a safe operation. Also, use the `-once` execution mode to stop consul-template after the first iteration.
$ consul-template -config=consul_template.hcl -once -dry 2>&1
That will produce an output similar to the following.
[DEBUG] (logging) enabling log_file logging to /tmp/consul-template.log with rotation every 24h0m0s
[INFO] *consul-template* v0.36.0 (8fdab02)
[INFO] (runner) creating new runner (dry: true, once: true)
[INFO] (runner) creating watcher
[INFO] (runner) starting
[INFO] creating pid file at "/tmp/consul-template.pid"
> /etc/nginx/conf.d/def_upstreams.conf
upstream frontend_upstream {
server 172.27.0.4:3000;
server 172.27.0.6:3000;
}
upstream api_upstream {
server 172.27.0.2:8081;
}
[INFO] (runner) rendered "nginx-upstreams.tpl" => "/etc/nginx/conf.d/def_upstreams.conf"
[INFO] (runner) once mode and all templates rendered
[INFO] (runner) stopping
The output shows that the generated configuration file now contains both `hashicups-frontend` instance addresses.
Start consul-template
Once you have tested the configuration, start consul-template as a long-lived process.
$ consul-template -config=consul_template.hcl > /tmp/consul-template.log 2>&1 &
The process starts in the background. You can check its logs using the log file specified in the configuration.
$ cat /tmp/consul-template*.log
[INFO] *consul-template* v0.36.0 (8fdab02)
[INFO] (runner) creating new runner (dry: true, once: true)
[INFO] (runner) creating watcher
[INFO] (runner) starting
[INFO] creating pid file at "/tmp/consul-template.pid"
[INFO] (runner) rendered "nginx-upstreams.tpl" => "/etc/nginx/conf.d/def_upstreams.conf"
[INFO] (runner) once mode and all templates rendered
[INFO] (runner) stopping
[INFO] *consul-template* v0.36.0 (8fdab02)
[INFO] (runner) creating new runner (dry: false, once: false)
[INFO] (runner) creating watcher
[INFO] (runner) starting
[INFO] creating pid file at "/tmp/consul-template.pid"
[INFO] (runner) rendered "nginx-upstreams.tpl" => "/etc/nginx/conf.d/def_upstreams.conf"
[INFO] (runner) executing command "[\"/home/admin/start_service.sh reload\"]" from "nginx-upstreams.tpl" => "/etc/nginx/conf.d/def_upstreams.conf"
[INFO] (child) spawning: sh -c /home/admin/start_service.sh reload
2024/02/07 15:45:27 [DEBUG] (logging) enabling log_file logging to /tmp/consul-template.log with rotation every 24h0m0s
2024-02-07T15:45:27.163Z [INFO] *consul-template* v0.36.0 (8fdab02)
2024-02-07T15:45:27.163Z [INFO] (runner) creating new runner (dry: false, once: false)
2024-02-07T15:45:27.163Z [INFO] (runner) creating watcher
2024-02-07T15:45:27.163Z [INFO] (runner) starting
2024-02-07T15:45:27.164Z [INFO] creating pid file at "/tmp/consul-template.pid"
2024-02-07T15:45:27.168Z [INFO] (runner) rendered "nginx-upstreams.tpl" => "/etc/nginx/conf.d/def_upstreams.conf"
2024-02-07T15:45:27.168Z [INFO] (runner) executing command "[\"/home/admin/start_service.sh reload\"]" from "nginx-upstreams.tpl" => "/etc/nginx/conf.d/def_upstreams.conf"
2024-02-07T15:45:27.168Z [INFO] (child) spawning: sh -c /home/admin/start_service.sh reload
Stop pre-existing instances.
RELOAD - Start services on all interfaces.
RELOAD - Reload the service without changing the configuration files
Start service instance.
Service started to listen on all available interfaces.
Starting NGINX...attempt 1
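Because the configuration writes a PID file, you can also confirm the background process is alive with a quick check like the following (a sketch, not part of the scenario scripts; `kill -0` sends no signal and only tests whether the process exists):

```shell
# Check whether the process recorded in the consul-template PID file is alive.
PID_FILE=/tmp/consul-template.pid

if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
  echo "consul-template is running (pid $(cat "$PID_FILE"))"
else
  echo "consul-template is not running"
fi
```

The same PID file is what the tutorial later uses to send a reload signal to the process.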
Inspect the contents of the NGINX configuration file.
$ cat /home/admin/def_upstreams.conf
If the file was generated properly, there should be two `server` addresses in the `frontend_upstream` block, and one `server` address in the `api_upstream` block.
upstream frontend_upstream {
server 172.27.0.4:3000;
server 172.27.0.6:3000;
}
upstream api_upstream {
server 172.27.0.2:8081;
}
To continue with the tutorial, exit the ssh session to return to the bastion host.
$ exit
logout
Connection to hashicups-nginx-0 closed.
admin@bastion:~$
Verify configuration is generated dynamically
From this moment on, the configuration file is managed directly by consul-template and is automatically updated when the Consul catalog changes for the instances of hashicups-frontend and hashicups-api. Test the dynamic configuration change by removing one of the two instances of `hashicups-frontend`.
Login to `hashicups-frontend-1` from the bastion host.
$ ssh -i certs/id_rsa hashicups-frontend-1
##...
admin@hashicups-frontend-1:~
$ ~/start_service.sh stop
That will produce an output similar to the following.
Stop pre-existing instances.
hashicups-frontend
Service instance stopped.
To check the configuration file, return to the `hashicups-nginx-0` node. First, exit the ssh session to return to the bastion host.
$ exit
logout
Connection to hashicups-frontend-1 closed.
admin@bastion:~$
Then, login to `hashicups-nginx-0` from the bastion host.
$ ssh -i certs/id_rsa hashicups-nginx-0
##...
admin@hashicups-nginx-0:~
Verify that the configuration file was updated to reflect the change in the hashicups-frontend services.
$ cat /home/admin/def_upstreams.conf
upstream frontend_upstream {
server 10.0.4.146:3000;
}
upstream api_upstream {
server 10.0.4.108:8081;
}
The file should now show only one instance in the `frontend_upstream` block.
As a last test, restart the second instance of hashicups-frontend.
First, exit the ssh session to return to the bastion host.
$ exit
logout
Connection to hashicups-nginx-0 closed.
admin@bastion:~$
Then, login to `hashicups-frontend-1` from the bastion host.
$ ssh -i certs/id_rsa hashicups-frontend-1
##...
admin@hashicups-frontend-1:~
Start the frontend service on the `hashicups-frontend-1` instance.
$ ~/start_service.sh start --consul-node
That will produce an output similar to the following.
Stop pre-existing instances.
Error response from daemon: No such container: hashicups-frontend
START - Start services on all interfaces.
START CONSUL - Starts the service using Consul node name for upstream services.
NOT APPLICABLE FOR THIS SERVICE - No Upstreams to define.
Start service instance.
a60d8e011576e32af45e1adef5e45c13459f487fc753a24cacf4f7abd6f63fa7
To verify that the configuration file was updated, return to the `hashicups-nginx-0` node. First, exit the ssh session to return to the bastion host.
$ exit
logout
Connection to hashicups-frontend-1 closed.
admin@bastion:~$
Then, login to `hashicups-nginx-0` from the bastion host.
$ ssh -i certs/id_rsa hashicups-nginx-0
##...
admin@hashicups-nginx-0:~
Verify that the configuration file was updated to reflect the change in the hashicups-frontend services.
$ cat /home/admin/def_upstreams.conf
upstream frontend_upstream {
server 10.0.4.146:3000;
server 10.0.4.252:3000;
}
upstream api_upstream {
server 10.0.4.108:8081;
}
To continue with the tutorial, exit the ssh session to return to the bastion host.
$ exit
logout
Connection to hashicups-nginx-0 closed.
admin@bastion:~$
Tune configuration with Consul KV
The configuration you obtained implements a basic round-robin load balancing approach. Round robin is useful when nodes are equivalent in terms of application version or capabilities, but it is not effective when testing a new version of a service, such as during blue-green or canary deployments. In these cases, you want fine-grained control over load balancing by defining a different traffic balance across the different instances of the same service.
NGINX uses the `weight` parameter to distribute traffic across the different available upstreams. In a static configuration file you can manually define the weight settings for each service instance, but when the content of the file is generated automatically by consul-template, you need a different way to pass configuration parameters to NGINX. In this scenario you will use Consul KV to define the weights for the different instances and change the template to take them into consideration.
Add configuration in Consul KV
The convention adopted for this tutorial is that the KV store contains a folder named `weights/`. Inside that folder, each key is named after the node you want to configure and defines the weight value to apply to that node in the NGINX configuration. The higher the value you set for the `weight` parameter, the higher the share of requests sent to that node.
For example, to define `weight=3` for the second instance of the Frontend service, add a key at `weights/hashicups-frontend-1` with value `3`.
$ consul kv put weights/hashicups-frontend-1 3
Success! Data written to: weights/hashicups-frontend-1
Update ACL policy
Having the configuration written in Consul KV means that consul-template needs permission to read keys from the KV store, at least on the paths where the configuration is located. For this example you only need `read` access to the `weights/` path.
First, create the proper configuration file for the policy.
$ tee /home/admin/assets/scenario/conf/acl-policy-consul-template-2.hcl > /dev/null << EOF
# ---------------------------------+
# acl-policy-consul-template-2.hcl |
# ---------------------------------+
service "hashicups-frontend" {
policy = "read"
}
service "hashicups-api" {
policy = "read"
}
node_prefix "hashicups-frontend" {
policy = "read"
}
node_prefix "hashicups-api" {
policy = "read"
}
key_prefix "weights" {
policy = "read"
}
EOF
Then, use the configuration file to update the `consul-template-policy` policy that you created earlier.
$ consul acl policy update \
-name "consul-template-policy" \
-rules @/home/admin/assets/scenario/conf/acl-policy-consul-template-2.hcl
That will produce an output similar to the following.
ID: 0c4b74d6-2a24-585c-f484-8486775e3523
Name: consul-template-policy
Description: Policy for consul-template to generate configuration for hashicups-nginx
Datacenters:
Rules:
# ---------------------------------+
# acl-policy-consul-template-2.hcl |
# ---------------------------------+
service "hashicups-frontend" {
policy = "read"
}
service "hashicups-api" {
policy = "read"
}
node_prefix "hashicups-frontend" {
policy = "read"
}
node_prefix "hashicups-api" {
policy = "read"
}
key_prefix "weights" {
policy = "read"
}
Updating the policy automatically extends permissions to the tokens associated with the policy. You will now verify that the token attached to this policy has the correct permissions to read from the KV store.
First, set the token as the `CONSUL_TEMPLATE_TOKEN` environment variable.
$ export CONSUL_TEMPLATE_TOKEN=`cat /home/admin/assets/scenario/conf/secrets/acl-token-consul-template.json | jq -r ".SecretID"`
Then, query the Consul KV store for the `weights/hashicups-frontend-1` key.
$ consul kv get -token=${CONSUL_TEMPLATE_TOKEN} weights/hashicups-frontend-1
3
With the updated permissions, you can now reuse the same token to continue with the configuration.
Generate new configuration file for consul-template
The previous configuration used a single `template` section to generate a configuration file that reacts to service changes in the Consul catalog. In the new scenario, you add another `template` section that reacts to KV changes and triggers a configuration reload whenever something changes in the `weights/` path. You also need to replace the previous template file used to generate the NGINX configuration file; the new template includes the `weight` parameters.
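The new template file, nginx-upstreams-weights.tpl, ships with the scenario repository and is not reproduced here. As a rough sketch of what such a template might look like, you could combine the service iteration with the `keyOrDefault` function, which returns a default value when a key is missing. The sketch below is an assumption, not the scenario's actual template: it reads each instance's node name from the `.Node` field and defaults the weight to 1 when no key exists.

```
upstream frontend_upstream {
  {{ range service "hashicups-frontend" -}}
  server {{ .Address }}:{{ .Port }} weight={{ keyOrDefault (printf "weights/%s" .Node) "1" }};
  {{ end }}
}
upstream api_upstream {
  {{ range service "hashicups-api" -}}
  server {{ .Address }}:{{ .Port }} weight={{ keyOrDefault (printf "weights/%s" .Node) "1" }};
  {{ end }}
}
```

Defaulting to 1 keeps the rendered file valid even for nodes that have no entry under `weights/`.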
$ tee /home/admin/assets/scenario/conf/hashicups-nginx-0/consul_template_weights.hcl > /dev/null << EOF
# This denotes the start of the configuration section for Consul. All values
# contained in this section pertain to Consul.
consul {
  # This is the address of the Consul agent to use for the connection.
  # The protocol (http(s)) portion of the address is required.
  address = "http://localhost:8500"

  # This value can also be specified via the environment variable CONSUL_HTTP_TOKEN.
  token = "${CONSUL_TEMPLATE_TOKEN}"
}

# This is the log level. This is also available as a command line flag.
# Valid options include (in order of verbosity): trace, debug, info, warn, err
log_level = "info"

# This block defines the configuration for logging to file
log_file {
  # If a path is specified, the feature is enabled
  path = "/tmp/consul-template.log"
}

# This is the path to store a PID file which will contain the process ID of the
# Consul Template process. This is useful if you plan to send custom signals
# to the process.
pid_file = "/tmp/consul-template.pid"

# This is the signal to listen for to trigger a reload event. The default value
# is shown below. Setting this value to the empty string will cause
# consul-template to not listen for any reload signals.
reload_signal = "SIGHUP"

# This is the signal to listen for to trigger a graceful stop. The default value
# is shown below. Setting this value to the empty string will cause
# consul-template to not listen for any graceful stop signals.
kill_signal = "SIGINT"

# This block defines the configuration for a template. Unlike other blocks,
# this block may be specified multiple times to configure multiple templates.
template {
  # This is the source file on disk to use as the input template. This is often
  # called the "consul-template template".
  source = "nginx-upstreams-weights.tpl"

  # This is the destination path on disk where the source template will render.
  # If the parent directories do not exist, consul-template will attempt to
  # create them, unless create_dest_dirs is false.
  destination = "/home/admin/def_upstreams.conf"

  # This is the optional command to run when the template is rendered.
  # The command will only run if the resulting template changes.
  command = "/home/admin/start_service.sh reload"
}

# This block defines the configuration for a template. Unlike other blocks,
# this block may be specified multiple times to configure multiple templates.
template {
  # This is the source file on disk to use as the input template. This is often
  # called the "consul-template template".
  contents = "{{ range \$key, \$pairs := tree \"weights/\" }} {{ end }}"

  # This is the destination path on disk where the source template will render.
  # If the parent directories do not exist, consul-template will attempt to
  # create them, unless create_dest_dirs is false.
  destination = "/tmp/mock_template.txt"

  # This is the optional command to run when the template is rendered.
  # The command will only run if the resulting template changes.
  command = "kill -1 \`cat /tmp/consul-template.pid\`"
}
EOF
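The second template block works because its command sends SIGHUP to the PID stored in pid_file, and SIGHUP is consul-template's reload signal, so any change under weights/ forces a full re-render. Below is a minimal local sketch of that pid-file-plus-signal pattern, with no Consul involved and illustrative file paths: a background worker traps the signal and records each "reload" it receives.

```shell
# Minimal sketch of the pid_file + SIGHUP reload pattern (no Consul needed).
PID_FILE=/tmp/demo.pid
MARK=/tmp/demo_reloads
: > "$MARK"

# Background worker: trap SIGHUP and record each "reload" it receives.
bash -c 'trap "echo reload >> '"$MARK"'" HUP; while sleep 0.2; do :; done' &
echo $! > "$PID_FILE"          # record its PID, as pid_file does for consul-template

sleep 0.5
kill -1 "$(cat "$PID_FILE")"   # same command the KV-watch template block runs
sleep 0.5
kill "$(cat "$PID_FILE")"      # clean up the background worker
cat "$MARK"
```

Running the sketch prints reload once, showing the signal reached the worker through the recorded PID, which is exactly how the mock template pokes consul-template.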
Copy the configuration file to the hashicups-nginx-0 node.
$ scp -r -i /home/admin/certs/id_rsa /home/admin/assets/scenario/conf/hashicups-nginx-0/consul_template_weights.hcl admin@hashicups-nginx-0:/home/admin/consul_template_weights.hcl
The remaining part of the configuration will be performed directly on the hashicups-nginx-0 node.
Log in to hashicups-nginx-0 from the bastion host.
$ ssh -i certs/id_rsa hashicups-nginx-0
##...
admin@hashicups-nginx-0:~
Verify that the consul-template configuration file was copied correctly to the node.
$ cat /home/admin/consul_template_weights.hcl
# This denotes the start of the configuration section for Consul. All values
# contained in this section pertain to Consul.
consul {
  # This is the address of the Consul agent to use for the connection.
  # The protocol (http(s)) portion of the address is required.
  address = "http://localhost:8500"

  # This value can also be specified via the environment variable CONSUL_HTTP_TOKEN.
  token = "51d4877b-a1f7-cbf0-b9af-60815ee622fc"
}

# This is the log level. This is also available as a command line flag.
# Valid options include (in order of verbosity): trace, debug, info, warn, err
log_level = "info"

# This block defines the configuration for logging to file
log_file {
  # If a path is specified, the feature is enabled
  path = "/tmp/consul-template.log"
}

# This is the path to store a PID file which will contain the process ID of the
# Consul Template process. This is useful if you plan to send custom signals
# to the process.
pid_file = "/tmp/consul-template.pid"

# This is the signal to listen for to trigger a reload event. The default value
# is shown below. Setting this value to the empty string will cause
# consul-template to not listen for any reload signals.
reload_signal = "SIGHUP"

# This is the signal to listen for to trigger a graceful stop. The default value
# is shown below. Setting this value to the empty string will cause
# consul-template to not listen for any graceful stop signals.
kill_signal = "SIGINT"

# This block defines the configuration for a template. Unlike other blocks,
# this block may be specified multiple times to configure multiple templates.
template {
  # This is the source file on disk to use as the input template. This is often
  # called the "consul-template template".
  source = "nginx-upstreams-weights.tpl"

  # This is the destination path on disk where the source template will render.
  # If the parent directories do not exist, consul-template will attempt to
  # create them, unless create_dest_dirs is false.
  destination = "/home/admin/def_upstreams.conf"

  # This is the optional command to run when the template is rendered.
  # The command will only run if the resulting template changes.
  command = "/home/admin/start_service.sh reload"
}

# This block defines the configuration for a template. Unlike other blocks,
# this block may be specified multiple times to configure multiple templates.
template {
  # This is the source file on disk to use as the input template. This is often
  # called the "consul-template template".
  contents = "{{ range $key, $pairs := tree \"weights/\" }} {{ end }}"

  # This is the destination path on disk where the source template will render.
  # If the parent directories do not exist, consul-template will attempt to
  # create them, unless create_dest_dirs is false.
  destination = "/tmp/mock_template.txt"

  # This is the optional command to run when the template is rendered.
  # The command will only run if the resulting template changes.
  command = "kill -1 `cat /tmp/consul-template.pid`"
}
Generate new template file for consul-template
Create a template file that dynamically generates the NGINX configuration and includes the weight values.
$ tee /home/admin/nginx-upstreams-weights.tpl > /dev/null << EOF
upstream frontend_upstream {
{{- range service "hashicups-frontend"}}
server {{.Address}}:{{.Port}} {{ \$node := .Node -}} weight={{ keyOrDefault (print "weights/" \$node) "1" }};
{{- end}}
}
upstream api_upstream {
{{ range service "hashicups-api" -}}
server {{ .Address }}:{{ .Port }};
{{ end }}
}
EOF
Restart consul-template to use the new configuration
Stop the running consul-template process.
$ kill -9 `cat /tmp/consul-template.pid`
Then, start consul-template with the new configuration file.
$ consul-template -config=consul_template_weights.hcl > /tmp/consul-template.log 2>&1 &
The process runs in the background. You can follow its logs with tail -f /tmp/consul-template.log, the log file specified in the configuration.
Verify that the configuration file was generated correctly.
$ cat /home/admin/def_upstreams.conf
upstream frontend_upstream {
server 10.0.4.146:3000 weight=1;
server 10.0.4.252:3000 weight=3;
}
upstream api_upstream {
server 10.0.4.108:8081;
}
You now have a way to dynamically generate your NGINX configuration, which updates automatically when the instances of hashicups-frontend and hashicups-api change over time. You can also tune balancing across the hashicups-frontend instances using the Consul KV store.
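Conceptually, the rendering step consul-template performs for the frontend upstream is just a loop over service instances with a per-node weight lookup that falls back to 1. The following pure-shell approximation of that logic uses this scenario's addresses and a hypothetical local weights table in place of Consul:

```shell
# Pure-shell approximation of the render step: one "server" line per
# frontend instance, defaulting the weight to 1 when no entry exists
# for the node -- mirroring keyOrDefault in the template.
render_upstreams() {
  echo "upstream frontend_upstream {"
  while read -r node addr weight; do
    echo "    server ${addr} weight=${weight:-1};"
  done <<'DATA'
hashicups-frontend-0 10.0.4.146:3000
hashicups-frontend-1 10.0.4.252:3000 3
DATA
  echo "}"
}
render_upstreams
```

With the sample table above, the function emits weight=1 for the node that has no entry and weight=3 for the one that does, matching the fall-back behavior of the real template.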
Verify configuration is generated dynamically
Verify that the configuration changes dynamically with the KV content.
Exit the ssh session to return to the bastion host.
$ exit
logout
Connection to hashicups-nginx-0 closed.
admin@bastion:~$
Change the value of the weights/hashicups-frontend-1 key.
$ consul kv put weights/hashicups-frontend-1 2
Success! Data written to: weights/hashicups-frontend-1
Return to the hashicups-nginx-0 node.
$ ssh -i certs/id_rsa hashicups-nginx-0
##...
admin@hashicups-nginx-0:~
Verify that the configuration file was generated correctly.
$ cat /home/admin/def_upstreams.conf
upstream frontend_upstream {
server 10.0.4.146:3000 weight=1;
server 10.0.4.252:3000 weight=2;
}
upstream api_upstream {
server 10.0.4.108:8081;
}
Next, verify that when no key is present, the weight defaults to 1. This fall-back value is set in the nginx-upstreams-weights.tpl template file as the default for the keyOrDefault function.
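The keyOrDefault fall-back can be mimicked in a few lines of shell: return the stored value when the key exists, otherwise the supplied default. The key names below are this scenario's; the hard-coded lookup table is a stand-in for the Consul KV store:

```shell
# Stand-in for keyOrDefault: echo the stored value when the key exists,
# otherwise echo the supplied default.
key_or_default() {
  key="$1"; default="$2"
  case "$key" in
    weights/hashicups-frontend-1) echo 2 ;;  # pretend only this key exists in KV
    *) echo "$default" ;;                    # key absent: fall back to the default
  esac
}

key_or_default weights/hashicups-frontend-0 1   # no key stored: prints 1
key_or_default weights/hashicups-frontend-1 1   # key stored: prints 2
```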
Exit the ssh session to return to the bastion host.
$ exit
logout
Connection to hashicups-nginx-0 closed.
admin@bastion:~$
Remove the weights/hashicups-frontend-1 key.
$ consul kv delete weights/hashicups-frontend-1
Success! Deleted key: weights/hashicups-frontend-1
Return to the hashicups-nginx-0 node.
$ ssh -i certs/id_rsa hashicups-nginx-0
##...
admin@hashicups-nginx-0:~
Verify that the configuration file was generated correctly.
$ cat /home/admin/def_upstreams.conf
upstream frontend_upstream {
server 10.0.4.146:3000 weight=1;
server 10.0.4.252:3000 weight=1;
}
upstream api_upstream {
server 10.0.4.108:8081;
}
Destroy the infrastructure
Now that the tutorial is complete, clean up the infrastructure you created.
From the ./self-managed/infrastructure/aws folder of the repository, use terraform to destroy the infrastructure.
$ terraform destroy --auto-approve
Next steps
In this tutorial you learned how to integrate an existing NGINX load balancer with the Consul catalog to balance traffic across multiple instances of the same service. You used consul-template to generate the NGINX configuration automatically, and Consul KV to tune the weight applied to each instance of the service.
For more information about the topics covered in this tutorial, refer to the following resources:
- Service configuration with Consul Template
- consul-template repository
- Go-template documentation
To learn more about other load balancing capabilities provided by Consul: