The DevOps 2.2 Toolkit

Deploying exporters

Exporters expose data in a format that Prometheus can scrape and store in its database.
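
Each exporter serves its metrics over HTTP in Prometheus' plain-text exposition format. As a rough illustration, a response from the node exporter might contain lines like the following (the metric names come from node-exporter, while the values are made up):

# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.42
# HELP node_memory_MemFree Memory information field MemFree.
# TYPE node_memory_MemFree gauge
node_memory_MemFree 1.0342912e+09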

The stack we'll deploy is as follows:

version: "3"

services:

ha-proxy:
image: quay.io/prometheus/haproxy-exporter:${HA_PROXY_TAG:-\
latest}
networks:
- proxy
- monitor
deploy:
labels:
- com.df.notify=true

- com.df.scrapePort=9101

command: -haproxy.scrape-
uri="http://admin:admin@proxy/admin?stats;csv"

cadvisor:
image: google/cadvisor:${CADVISOR_TAG:-latest}
networks:
- monitor
volumes:
- /:/rootfs
- /var/run:/var/run
- /sys:/sys
- /var/lib/docker:/var/lib/docker
deploy:
mode: global
labels:
- com.df.notify=true
- com.df.scrapePort=8080

node-exporter:
image: basi/node-exporter:${NODE_EXPORTER_TAG:-v1.13.0}
networks:
- monitor
environment:
- HOST_HOSTNAME=/etc/host_hostname
volumes:
- /proc:/host/proc
- /sys:/host/sys
- /:/rootfs
- /etc/hostname:/etc/host_hostname
deploy:
mode: global
labels:
- com.df.notify=true
- com.df.scrapePort=9100
command: '-collector.procfs /host/proc -collector.sysfs\
/host/sys -collector.filesystem.ignored-mount-points\
"^/(sys|proc|dev|host|etc)($$|/)" -collector.\
textfile.directory /etc/node-exporter/ -\
collectors.enabled="conntrack,diskstats,entropy,filefd,\
filesystem,loadavg,mdadm,meminfo,netdev,netstat,stat,\
textfile,time,vmstat,ipvs"'

networks:
monitor:
external: true
proxy:
external: true
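
Note that the image tags are parameterized with default values (for example, ${CADVISOR_TAG:-latest}), so specific versions can be pinned by exporting the corresponding variables before deploying the stack. A minimal sketch follows; the tag values are only placeholders, not recommended versions:

export HA_PROXY_TAG=v0.7.1
export CADVISOR_TAG=v0.26.1
export NODE_EXPORTER_TAG=v1.13.0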

As you can see, the stack definition contains the node-exporter and ha-proxy exporter services as well as the cadvisor service. haproxy-exporter provides proxy metrics, node-exporter collects server data, while cadvisor outputs information about the containers inside our cluster. You'll notice that cadvisor and node-exporter run in global mode. A replica will run on each server so that we can obtain an accurate picture of all the nodes that form the cluster.

The important parts of the stack definition are the com.df.notify and com.df.scrapePort labels. The first tells swarm-listener that it should notify the monitor when those services are created (or destroyed). The com.df.scrapePort labels define the ports of the exporters from which Prometheus will scrape metrics.
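
Based on those labels, Docker Flow Monitor adds a scrape job for each service. As a rough sketch (the exact output may differ between versions), the generated entry for the node-exporter service could look similar to the following, with tasks.<service-name> being the Swarm DNS name that resolves to all the replicas of the service:

scrape_configs:
  - job_name: "exporter_node-exporter"
    dns_sd_configs:
      - names:
          - tasks.exporter_node-exporter
        type: A
        port: 9100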

Please visit the Scrape Parameters section of the documentation for more information on how to define scrape parameters.

Let's deploy the stack and see it in action.

docker stack deploy \
    -c stacks/exporters.yml \
    exporter

Please wait until all the services in the stack are running. You can monitor their status with the docker stack ps exporter command.
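
If the output is noisy, the listing can be narrowed down to the tasks that should be running. A small sketch using the standard filter flag:

docker stack ps exporter --filter desired-state=running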

Once you've confirmed that the exporter stack is up and running, we can verify whether all the services were added to the monitor configuration.

open "http://$(docker-machine ip swarm-1)/monitor/config"
Figure 4-1: Configuration with exporters

We can also confirm that all the targets are indeed working by accessing the targets page.

open "http://$(docker-machine ip swarm-1)/monitor/targets"

There should be three targets. If they are still not registered, please wait a few moments and refresh your screen.

Two of the targets (exporter_cadvisor and exporter_node-exporter) are running as global services. As a result, each has three endpoints, one on each node. The last target is exporter_ha-proxy. Since we neither deployed it globally nor specified multiple replicas, it has only one endpoint.
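
The same information can also be retrieved programmatically. A hedged example, assuming the monitor exposes the standard Prometheus HTTP API under the /monitor path:

curl "http://$(docker-machine ip swarm-1)/monitor/api/v1/targets"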

Figure 4-2: Targets and endpoints

If we used the "official" Prometheus image, setting up those targets would require an update of the config file and a reload of the service. On top of that, we'd need to persist the configuration. Instead, we let Swarm Listener notify Docker Flow Monitor that there are new services which should, in this case, generate new scraping targets. Rather than splitting that information across multiple locations, we specified the scraping info as service labels and let the system take care of distributing the data.
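
For comparison, the manual approach with a vanilla Prometheus setup would look roughly like the sketch below: a hand-maintained entry in prometheus.yml followed by a reload of the process. The host names and container name are illustrative, and Prometheus also accepts a SIGHUP (or, when enabled, a POST to /-/reload) as the reload trigger.

# prometheus.yml, maintained and persisted by hand
scrape_configs:
  - job_name: "node-exporter"
    static_configs:
      - targets: ["node-1:9100", "node-2:9100", "node-3:9100"]

# after every change, signal the (hypothetically named) prometheus container to reload
docker kill --signal=HUP prometheus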

Figure 4-3: Prometheus scrapes metrics from exporters

Let's take a closer look at the exporters running in our cluster.