How to use Consul-based Service Discovery in Prometheus 🤔

Abdulsamet İLERİ
5 min read · May 21, 2023



Hello all! In this article, I will explain, with an example, why we need to integrate a service discovery tool (like Consul) with Prometheus.

In this example, we’ll walk step by step through observing an API with Prometheus.

Prerequisites

  • You can install Prometheus via its GitHub repository.
  • You can install Consul here.
  • You can look at our API. It is a simple service that exposes a /metrics endpoint, which Prometheus will scrape once configured. You can find the source code here!
  • If you don’t want to install Prometheus and Consul locally, you can use the docker-compose file. Be aware that some configurations differ because of Docker networking.
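To make the setup concrete, here is a minimal stand-in for such an API, sketched in Python with only the standard library. This is an illustrative assumption, not the article's actual API: it serves a single toy counter in the Prometheus text exposition format on /metrics.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class MetricsHandler(BaseHTTPRequestHandler):
    """Serves a toy counter in the Prometheus text exposition format."""
    request_count = 0  # hypothetical metric, just for illustration

    def do_GET(self):
        if self.path == "/metrics":
            MetricsHandler.request_count += 1
            # Exposition format: # HELP and # TYPE lines, then samples.
            body = (
                "# HELP api_requests_total Total requests handled.\n"
                "# TYPE api_requests_total counter\n"
                f"api_requests_total {MetricsHandler.request_count}\n"
            )
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging in this demo

def start_metrics_server(port=8080):
    """Start the /metrics server in a background thread; returns the server."""
    server = HTTPServer(("localhost", port), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Run `start_metrics_server(8080)` and Prometheus can scrape http://localhost:8080/metrics, just like the real API below.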

First, before starting Prometheus, we must modify its configuration file, aka prometheus.yml. By default, this file is configured so that Prometheus scrapes itself for its own health. I change it to my API's address. Our API runs on port 8080 by default, so I add localhost:8080 as shown below:

scrape_configs:
  - job_name: "api"
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets: ["localhost:8080"]

If we start Prometheus and go to localhost:9090/targets, we can see that our API instance is up and that Prometheus has successfully started scraping it.

Figure: Prometheus targets endpoint

In summary, it works as shown below.

Figure: Overall design

Okay fine! 👍 Clear and concise! But what if we want to scale out our API instances and have Prometheus scrape all of them? 🤔

No problem; we just add the other APIs' addresses to prometheus.yml and restart Prometheus. 😄

scrape_configs:
  - job_name: "api"
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets: [
          "localhost:8080",
          "localhost:8081",
          "localhost:8082"
        ]

Figure: All API instances can be seen from the Prometheus targets UI

Well! However, there is a problem here. If we launch new API instances or stop old ones, we have to edit the configuration file again and again. In today's world, this manual work is tedious, and at scale practically impossible. 😞

As services have proliferated, growing numbers of users, requests, and demands have made this job very difficult. Moreover, our services change constantly due to situations such as autoscaling, failures, and upgrades, and with every change they get new IPs. 😞

This is where service discovery enters the scene. And Prometheus has built-in support for it 😌😍. For example:

  • Cloud and VM providers (AWS EC2, Google GCE, Microsoft Azure)
  • Cluster schedulers (Kubernetes)
  • Generic mechanisms (DNS, Consul, Zookeeper)
  • File-based custom service discovery.

In the following sections, we will see how to configure Consul-based discovery! 👊🚀

Setting Up Consul-Based Discovery

Consul is a tool for discovering and configuring services. It provides an API that allows clients to register and discover services, and it can also perform health checks to determine service availability. A deeper dive into Consul is beyond the scope of this article.

For simplicity, we create a service configuration file (consul.json) and pass it to Consul at startup. The configuration is shown below.

{
  "services": [
    {
      "id": "api1",
      "name": "api",
      "address": "127.0.0.1",
      "port": 8080
    },
    {
      "id": "api2",
      "name": "api",
      "address": "127.0.0.1",
      "port": 8081
    },
    {
      "id": "api3",
      "name": "api",
      "address": "127.0.0.1",
      "port": 8082
    }
  ]
}
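The consul.json above is repetitive: one entry per instance, differing only in id and port. As a sketch, it could be generated for any number of instances. The function name and its parameters are my own invention, but the output matches the file shown above.

```python
import json

def consul_services(name, address, ports):
    """Build a Consul service-definition dict: one instance of `name`
    per port, with ids name1, name2, ... (mirrors consul.json above)."""
    return {
        "services": [
            {"id": f"{name}{i}", "name": name, "address": address, "port": p}
            for i, p in enumerate(ports, start=1)
        ]
    }

config = consul_services("api", "127.0.0.1", [8080, 8081, 8082])
# json.dumps(config, indent=2) reproduces the consul.json shown above.
```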

First, start Consul in development mode:

./consul agent -dev -config-file=consul.json

Second, configure prometheus.yml as follows:

scrape_configs:
  - job_name: "consul-example"
    consul_sd_configs:
      - server: 'localhost:8500'
    relabel_configs:
      - action: keep
        source_labels: [__meta_consul_service]
        regex: api

That’s it. Let’s look at this configuration in more detail 🤔

First of all, we are using Prometheus' relabelling feature here! Prometheus uses it to filter and modify targets based on a set of rules. It is also useful for preparing Grafana dashboards.

In our case, we want Prometheus to scrape only the targets that come from our Consul service, so we keep only the services (targets) whose name is api. Otherwise, Prometheus would also try to scrape the Consul agent, which doesn't have a metrics endpoint.

ℹ️ Service discovery mechanisms can provide a set of labels starting with __meta_ that carry target metadata specific to the discovery method. In our case, we are using __meta_consul_service. If we used the Kubernetes discovery engine, we would see __meta_kubernetes_pod_name, __meta_kubernetes_pod_ready, etc.
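To see what the keep action does, here is a small sketch of its semantics (a simplification of Prometheus's actual relabelling, which also supports concatenating several source labels): each discovered target is a map of labels, and only targets whose source label fully matches the anchored regex survive. Note that the Consul agent registers itself as the service consul, which is exactly what the filter drops.

```python
import re

def keep_targets(targets, source_label, regex):
    """Mimic Prometheus's `keep` relabel action for a single source label:
    retain only targets whose label value fully matches the regex."""
    pattern = re.compile(f"^(?:{regex})$")  # Prometheus anchors its regexes
    return [t for t in targets if pattern.match(t.get(source_label, ""))]

# Labels as Consul-based discovery might present them (ports illustrative).
discovered = [
    {"__address__": "127.0.0.1:8080", "__meta_consul_service": "api"},
    {"__address__": "127.0.0.1:8081", "__meta_consul_service": "api"},
    {"__address__": "127.0.0.1:8500", "__meta_consul_service": "consul"},
]
kept = keep_targets(discovered, "__meta_consul_service", "api")
# Only the two api instances remain; the consul agent is filtered out.
```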

And boom 🔥

Figure: Consul UI
Figure: Prometheus Target UI

We only specify the Consul agent address; there is no need to edit prometheus.yml again and again. Once our services register with Consul, Prometheus picks them up automatically. This way we also get hot reloading: no need to keep track of new or retired services, as Consul does it on our behalf.

As you can see, integrating a service discovery mechanism into Prometheus is easy 💃
