In Part II (Part I is here) of our “Hitchhiker’s Guide to Prometheus,” we continue our overview of this powerful monitoring solution for cloud-native applications. In particular, we’ll walk you through configuring Prometheus to scrape exporter metrics and custom application metrics, show how to use the Prometheus remote write API, and discuss some best practices for operating Prometheus in production. Let’s get started!

Scraping Exporter Metrics

As you learned in Part I of this series, Prometheus collects metrics via a pull model over HTTP. The targets from which you want to pull metrics are specified in the main Prometheus configuration file under the scrape_configs section (see the code below).

Prometheus ships with built-in integrations for many popular monitoring targets, including Kubernetes, Consul’s Catalog API, the DigitalOcean API, the Docker Swarm engine, and many more. These integrations already expose metrics in the Prometheus-native format, so you don’t need to convert the metrics manually.

Below is a simple example of the scrape_configs section for Kubernetes monitoring targets. Prometheus ships with the kubernetes_sd_configs service discovery mechanism, which lets you retrieve scrape targets from the Kubernetes REST API. Targets may include Kubernetes nodes, services and endpoints, pods, and ingresses. For each entity, you can also use Kubernetes labels to filter for specific Kubernetes resources:

global:
  scrape_interval: 20s  # By default, scrape targets every 20 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'lab-monitor'

# Scraping Prometheus itself
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 10s
    static_configs:
      - targets: ['localhost:8090']

  - job_name: 'kubernetes-service-endpoints'
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

The relabel_configs section in the example above allows you to rewrite the default Kubernetes metadata labels into labels you want to refer to in your time-series database and PromQL queries.

Once you have configured a built-in integration like Kubernetes, Prometheus automatically discovers and scrapes its targets. You can then visualize and process the metrics per your organization’s needs.

Scraping Custom Application Metrics

Having access to popular metric exporters is great for quickly bootstrapping your monitoring pipeline, but what if you need to scrape custom application metrics? This is a more involved task because you have to expose your application metrics in the Prometheus exposition format.

Fortunately, Prometheus offers official and community-maintained client libraries for many programming languages (Go, Java, Python, Ruby, and more) that handle this for you.
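For example, here is a minimal sketch using the Python client library (prometheus_client); the metric name and port are illustrative, not prescribed by Prometheus:

from prometheus_client import Counter, start_http_server
import random
import time

# A counter that only ever increases; the name and help text are hypothetical.
REQUESTS = Counter('myapp_requests_total', 'Total number of requests handled.')

if __name__ == '__main__':
    # Expose the /metrics endpoint on port 8000 so Prometheus can scrape it.
    start_http_server(8000)
    while True:
        REQUESTS.inc()  # Record one (simulated) handled request.
        time.sleep(random.random())

Running this exposes a /metrics endpoint on port 8000 that Prometheus can then scrape like any other HTTP target.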

Once your application server exposes a /metrics endpoint, you also need to let Prometheus know about it. There are two options for this. The first approach is providing a static target for your application server, as seen in the example below. This static configuration approach, where metrics are only collected from the specified endpoints, is not optimal when running multiple instances of your application on Kubernetes:

global:
  scrape_interval: 15s  # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# Scraping Prometheus itself
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
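  # A hypothetical job for your own application server: replace the job name
  # and target with the host:port where your app exposes its /metrics endpoint.
  - job_name: 'your-app'
    static_configs:
      - targets: ['your-app-host:8082']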

The second option is to use Prometheus’ built-in service discovery. You can take advantage of it by running your app on Kubernetes or a similar container orchestration platform.

With this approach, you configure service endpoints, as in the first example above, and Prometheus automatically discovers your application on those endpoints. You should ensure that the application exposes its metrics at the /metrics HTTP endpoint behind the service. For this, use Kubernetes’ Service API resource to create and deploy a Service for your application and refer to it in Prometheus:

apiVersion: v1
kind: Service
metadata:
  name: your-app-service
  labels:
    app: your-app
spec:
  ports:
    - name: web
      port: 8082
      targetPort: 8082
      protocol: TCP
  selector:
    app: your-app
  type: NodePort

Thanks to Prometheus’ Kubernetes service discovery, Prometheus can easily discover this Service and pull the metrics shipped by the pods behind it, as in the scrape configuration sketched below.
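Here is a minimal sketch of a matching scrape job. It assumes you follow the common prometheus.io/scrape annotation convention to opt Services in to scraping; the annotation is a convention enforced by the relabel rule below, not built-in Prometheus behavior:

scrape_configs:
  - job_name: 'kubernetes-service-endpoints'
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # Keep only endpoints whose Service carries the prometheus.io/scrape: "true" annotation.
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true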

Pushing Metrics via the Pushgateway

Prometheus is great at pulling metrics at regular scrape intervals. However, what if the process shipping the metrics is short-lived and Prometheus doesn’t know when it will start or finish? If the process starts and exits between scrapes, its metrics will never be collected. In such a scenario, a push model is more suitable, and luckily, Prometheus ships with the Pushgateway component that implements it.

The Prometheus Pushgateway is primarily designed for short-lived batch jobs that run irregularly and cannot be scraped directly, and it should be used exclusively for these types of processes. There are a couple of reasons for this. First, unlike in the standard pull model, where metrics do not persist in the pipeline and are immediately saved to storage, the Pushgateway works as a metrics cache: old metrics from previous job runs are stored indefinitely and need to be removed manually. As a result, when too many processes write to the Pushgateway, it can become a performance bottleneck and a single point of failure. Second, the push model does not benefit from the auto-discovery and health monitoring built into the HTTP pull model. To learn more about setting up the Pushgateway to monitor batch jobs, consult the official Prometheus docs.
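As a minimal sketch of what such a batch job might look like with the Python client library (the Pushgateway address, job name, and metric are illustrative):

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

# Use a dedicated registry so only this job's metrics are pushed.
registry = CollectorRegistry()
last_success = Gauge(
    'my_batch_job_last_success_unixtime',
    'Unix timestamp of the last successful batch job run.',
    registry=registry,
)

# ... run the actual batch work here ...

last_success.set_to_current_time()
# Push the result to a (hypothetical) Pushgateway instance under the job name "my_batch_job".
push_to_gateway('pushgateway.example.org:9091', job='my_batch_job', registry=registry)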

Remote-Write API

Prometheus ships with a powerful remote write API, which allows you to send metrics to remote endpoints; long-term storage in an external time-series backend is its primary use case.

Prometheus includes numerous remote storage integrations, including support for Amazon Timestream, Azure Data Explorer, Elasticsearch, Graphite, InfluxDB, Instana, Kafka, Thanos, and many more. See this article for an exhaustive list of available integrations. 

Remote endpoints can be configured in the <remote_write> section of the Prometheus configuration file. There, you can specify the remote URL, remote request timeout, relabel settings, TLS configuration, etc. 

Below is an example of a remote write configuration for Epsagon, a tool and SDK that enables full observability of containers, VMs, and serverless in distributed compute environments:

remoteWrite:
  - url: 'https://collector.epsagon.com/ingestion?<EPSAGON_TOKEN>'
    basicAuth:
      username:
        name: epsagon-secret
        key: username
      password:
        name: epsagon-secret
        key: password
    writeRelabelConfigs:
      - targetLabel: cluster_name
        replacement: <CLUSTER_NAME>
    # Timeout for requests to the remote write endpoint.
    [ remote_timeout: <duration> | default = 30s ]
    headers:
      [ <string>: <string> ... ]
    # Configures the queue used to write to remote storage.
    queue_config:
      # Number of samples to buffer per shard before we block reading of more
      # samples from the WAL. It is recommended to have enough capacity in each
      # shard to buffer several requests to keep throughput up while processing
      # occasional slow remote requests.
      [ capacity: <int> | default = 2500 ]
      # Maximum number of shards, i.e. amount of concurrency.
      [ max_shards: <int> | default = 200 ]
      # Minimum number of shards, i.e. amount of concurrency.
      [ min_shards: <int> | default = 1 ]
      # Maximum number of samples per send.
      [ max_samples_per_send: <int> | default = 500 ]
      # Maximum time a sample will wait in buffer.
      [ batch_send_deadline: <duration> | default = 5s ]
      # Initial retry delay. Gets doubled for every retry.
      [ min_backoff: <duration> | default = 30ms ]
      # Maximum retry delay.
      [ max_backoff: <duration> | default = 100ms ]

    # Configures the sending of series metadata to remote storage.
    # Metadata configuration is subject to change at any point
    # or be removed in future releases.
    metadata_config:
      # Whether metric metadata is sent to remote storage or not.
      [ send: <boolean> | default = true ]
      # How frequently metric metadata is sent to remote storage.
      [ send_interval: <duration> | default = 1m ]

In this example, we specify the Epsagon URL for sending remote write API requests and the basic auth credentials required to connect to it. We also set a cluster_name label to identify the cluster the metrics come from.

Additionally, you can control the concurrency and throughput of metrics shipping by tweaking the number of shards and the number of samples per send. When setting non-default values, keep in mind that, on average, enabling remote write increases Prometheus’ memory footprint by about 25%.

Tips and Best Practices for Operating Prometheus

Prometheus is a complex monitoring pipeline that works well out of the box but requires more advanced configuration and management when running in production. You need to ensure that your setup does not compromise the performance or security of the monitoring pipeline. You should also avoid some common anti-patterns when using Prometheus metric types and labels. Below are some key tips and best practices for using Prometheus.

Metrics and Dashboards

Prometheus supports multiple metric types, such as gauges, counters, summaries, and histograms. Choosing the right metric type is important since each one has its specific use case. For example, gauges should be used if the metric value can go down, whereas counters are appropriate only for monotonically increasing values.

It’s also better to initialize your metrics to 0, even if the event that updates them has not occurred yet, so that the corresponding time series exist and queries don’t run into missing metrics.
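For instance, a small sketch with the Python client library (the metric and label names are hypothetical):

from prometheus_client import Counter, Gauge

# Counter: the value only ever goes up (e.g., processed jobs).
JOBS_PROCESSED = Counter('myapp_jobs_processed_total', 'Jobs processed.', ['status'])

# Gauge: the value can go up and down (e.g., jobs currently waiting in the queue).
QUEUE_SIZE = Gauge('myapp_queue_size', 'Jobs currently waiting in the queue.')

# Touch each expected label combination once at startup so the series
# exist at 0 before the first real event occurs.
for status in ('success', 'failure'):
    JOBS_PROCESSED.labels(status=status).inc(0)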

Lastly, in a multi-service deployment, don’t put all of your service metrics into a single dashboard. Make sure to organize your dashboards properly by creating a separate dashboard for each relevant service.

It’s also a good idea to use a visualization and analytics solution for Prometheus metrics. For example, you can use Epsagon’s Metric Explorer to create metric dashboards and visualize them according to metric type. You can even perform queries against Prometheus metrics, retrieving statistics and applying various aggregations and filters to your data. 

Security 

The default Prometheus installation has relaxed security settings to simplify its usage in the development environment. When moving to production, you should ensure that security is hardened. Some of the most important security tips to achieve this are the following:

  • Prevent monitoring targets from impersonating other targets. By manipulating labels, an attacker-controlled application running in the cluster can impersonate a valid service you expect metrics from. You can prevent this by leaving the honor_labels setting disabled (its default) for untrusted targets in production.
  • Avoid unauthorized API access. Since Prometheus 2.0, the administrative HTTP APIs are disabled by default and are only turned on explicitly via the --web.enable-admin-api flag on the Prometheus binary; leave them disabled in production unless you really need them.
  • Protect remote reads. By default, the remote read feature allows sending HTTP requests to remote read endpoints. This lets anyone with HTTP access execute arbitrary queries against remote databases or read critical information from your clusters. You need to ensure that only Prometheus has access to remote data stores using the required credentials.
  • Protect the Alertmanager HTTP endpoints. It’s important to prevent notifications containing secrets or other credentials from being sent to an attacker’s email. For example, don’t take the destination email address from an alert label, as this would allow anyone who can send alerts to the Alertmanager to route notifications with critical information directly to their own mailbox.
  • Enable TLS authentication. The Prometheus configuration exposes HTTPS and TLS settings that can be configured per monitoring job, as shown in the sketch after this list.
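Here is a minimal sketch of a scrape job that talks to its targets over HTTPS with client certificates; the job name, target, and certificate paths are illustrative:

scrape_configs:
  - job_name: 'secure-app'
    scheme: https
    tls_config:
      # CA certificate used to validate the target's server certificate.
      ca_file: /etc/prometheus/certs/ca.crt
      # Client certificate and key presented to the target (mutual TLS).
      cert_file: /etc/prometheus/certs/client.crt
      key_file: /etc/prometheus/certs/client.key
    static_configs:
      - targets: ['secure-app.example.com:8443']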

Performance

Overusing the Pushgateway can degrade metrics shipping and processing because it caches all pushed series in memory and can become a bottleneck and a single point of failure. Prevent overuse by not using the Pushgateway for anything other than batch jobs.

Keep in mind that every unique combination of label key-value pairs creates a new time series, so high-cardinality data can dramatically increase storage usage. Limit label cardinality by not putting unbounded values such as email addresses or usernames into labels.
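As an illustration (the metric and label names are hypothetical), prefer labels whose values come from a small, bounded set:

from prometheus_client import Counter

# Good: 'method' and 'status' take only a handful of values.
HTTP_REQUESTS = Counter(
    'myapp_http_requests_total',
    'HTTP requests handled.',
    ['method', 'status'],
)
HTTP_REQUESTS.labels(method='GET', status='200').inc()

# Bad (don't do this): one time series per user explodes cardinality.
# LOGINS = Counter('myapp_logins_total', 'User logins.', ['user_email'])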

Updating multiple metrics inside inner loops can significantly affect your application’s performance. Keep track of how many metrics you update in the performance-critical parts of your code and, if possible, reduce that number.

Conclusion

As you’ve learned in this blog series, Prometheus is an advanced monitoring pipeline suitable for cloud-native applications and a great solution both for machine-centric monitoring of a single server and for service-oriented monitoring of applications running on platforms like Kubernetes.

Notwithstanding its complex architecture with multiple connected parts, Prometheus is easy to set up for the most popular metric sources (Kubernetes, InfluxDB, Docker, etc.). Prometheus’ metrics pull model also makes it easy to auto-discover services based on labels and service endpoints. For custom application metrics, Prometheus provides many client libraries that let users convert metrics into a native Prometheus format, plus you get other benefits, such as Pushgateway for batch jobs, Alertmanager for processing cluster events, and various tools for cloud integrations. 

Leveraging the power of Prometheus does require sticking to some best administration and configuration practices. Hopefully, the operational tips and best practices discussed in this blog post will help you successfully prepare your Prometheus deployment for production.


Read More:

How to Scale Prometheus for Kubernetes

Prometheus and Grafana: The Perfect Combo