
Enviroplus-exporter

Prometheus exporter for enviroplus module by Pimoroni
Explore the docs »

View Demo · Report Bug · Request Feature


About The Project

This project exports sensor data from Pimoroni's Enviro+ environmental monitoring board for the Raspberry Pi. The main goal is to expose the readings in a format that Prometheus can scrape, so they can be visualized in a Grafana dashboard (see the installation instructions below). Along the way, contributions have also added support for exporting the data to InfluxDB and to Sensor.Community (formerly known as Luftdaten).
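To illustrate what "a format that Prometheus can scrape" means, the sketch below renders one gauge sample in the Prometheus text exposition format, which is what the exporter serves on its /metrics endpoint. The metric name and label here are illustrative, not the exporter's actual names:

```python
def format_gauge(name, value, labels=None):
    """Render one gauge sample in the Prometheus text exposition format."""
    label_str = ""
    if labels:
        # Labels are rendered as {key="value",...}, sorted for stable output
        pairs = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        label_str = "{" + pairs + "}"
    return f"# TYPE {name} gauge\n{name}{label_str} {value}\n"

# Illustrative metric name and label, not the exporter's real ones
print(format_gauge("temperature", 22.5, {"location": "Amsterdam"}), end="")
```

Prometheus parses each non-comment line as one sample: metric name, optional label set, and the current value.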

You can run the enviroplus-exporter as a script on your Raspberry Pi, but I also maintain Docker images and a Helm chart. See the instructions below for your preferred installation method.


Getting Started

To get the Prometheus enviroplus-exporter up and running, I assume you already have Prometheus and Grafana running somewhere. Note: I wouldn't recommend running Prometheus itself on a Raspberry Pi with a local SD card, as Prometheus writes samples to disk frequently, which can drastically shorten the SD card's lifetime.

Prerequisites

Installation (run as a script)

When running the enviroplus-exporter as a script you need the enviroplus-python library installed.

One-line (installs the enviroplus-python library from GitHub)

curl -sSL https://get.pimoroni.com/enviroplus | bash

Note: Raspbian Lite users may first need to install git: sudo apt install git

We're going to run the enviroplus-exporter as the user pi in the directory /usr/src/. Adjust this as you wish.

1. Clone the enviroplus-exporter repository

cd
git clone https://github.com/tijmenvandenbrink/enviroplus_exporter.git
sudo cp -r enviroplus_exporter /usr/src/
sudo chown -R pi:pi /usr/src/enviroplus_exporter

2. Install the dependencies for the enviroplus-exporter

cd /usr/src/enviroplus_exporter
pip3 install -r requirements.txt

3. Install as a systemd service

cd /usr/src/enviroplus_exporter
sudo cp contrib/enviroplus-exporter.service /etc/systemd/system/enviroplus-exporter.service
sudo chmod 644 /etc/systemd/system/enviroplus-exporter.service
sudo systemctl daemon-reload

4. Start the enviroplus-exporter service

sudo systemctl start enviroplus-exporter

5. Check the status of the service

sudo systemctl status enviroplus-exporter

If the service is running correctly, the output should resemble the following:

pi@raspberrypi:/usr/src/enviroplus_exporter $ sudo systemctl status enviroplus-exporter
● enviroplus-exporter.service - Enviroplus-exporter service
   Loaded: loaded (/etc/systemd/system/enviroplus-exporter.service; disabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-01-17 14:13:41 CET; 5s ago
 Main PID: 30373 (python)
    Tasks: 2 (limit: 4915)
   Memory: 6.0M
   CGroup: /system.slice/enviroplus-exporter.service
           └─30373 /usr/bin/python /usr/src/enviroplus_exporter/enviroplus_exporter.py --bind=0.0.0.0 --port=8000

Jan 17 14:13:41 wall-e systemd[1]: Started Enviroplus-exporter service.
Jan 17 14:13:41 wall-e python[30373]: 2020-01-17 14:13:41.565 INFO     enviroplus_exporter.py - Expose readings from the Enviro+ sensor by Pimoroni in Prometheus format
Jan 17 14:13:41 wall-e python[30373]: Press Ctrl+C to exit!
Jan 17 14:13:41 wall-e python[30373]: 2020-01-17 14:13:41.581 INFO     Listening on http://0.0.0.0:8000

6. Enable the service at boot time

sudo systemctl enable enviroplus-exporter

Enviro users

If you are using an Enviro (not an Enviro+), add --enviro=true to the command line in /etc/systemd/system/enviroplus-exporter.service so the exporter doesn't try to read the missing sensors.
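Based on the command line shown in the systemctl status output above, the edited ExecStart line in the unit file might look like this (a sketch; check your own unit file for the exact interpreter and paths):

```
[Service]
ExecStart=/usr/bin/python /usr/src/enviroplus_exporter/enviroplus_exporter.py --bind=0.0.0.0 --port=8000 --enviro=true
```

After editing the unit file, apply the change with sudo systemctl daemon-reload followed by sudo systemctl restart enviroplus-exporter.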

Run using Docker

  1. Use the published image
docker pull ghcr.io/tijmenvandenbrink/enviroplus_exporter:latest

For the list of published image tags, see the Packages overview on GitHub.

  2. Or build it yourself
docker build -t enviroplus-exporter .

Alternatively, using BuildKit you can build Raspberry Pi-compatible images on an amd64 host.

docker buildx build --platform linux/arm/v7,linux/arm64/v8 .

  3. Run the container

docker run -d -p 8000:8000 --device=/dev/i2c-1 --device=/dev/gpiomem --device=/dev/ttyAMA0 enviroplus-exporter
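The same run configuration can also be expressed as a docker-compose service. This is a sketch assuming the published image and the device paths from the command above:

```yaml
services:
  enviroplus-exporter:
    image: ghcr.io/tijmenvandenbrink/enviroplus_exporter:latest
    ports:
      - "8000:8000"
    # Pass through the I2C, GPIO and serial devices the sensors need
    devices:
      - /dev/i2c-1
      - /dev/gpiomem
      - /dev/ttyAMA0
    restart: unless-stopped
```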

Run using the Helm Chart

To install the enviroplus-exporter in a Kubernetes cluster using the Helm chart, you'll need Helm 3 and a Kubernetes cluster. I personally use K3s bootstrapped with k3s-up.

Initialize a Helm Chart Repository

Once you have Helm ready, you can add the chart repository.

helm repo add enviroplus-exporter https://tijmenvandenbrink.github.io/enviroplus_exporter/

Once the repository is added, you can list the charts available to install:

helm search repo enviroplus-exporter

To install the chart, you can run the helm install command.

helm install enviroplus-exporter enviroplus-exporter/enviroplus-exporter

If you want to override any defaults specified in values.yaml you can provide your own values with the -f argument:

helm install -f enviroplus-values.yaml enviroplus-exporter enviroplus-exporter/enviroplus-exporter
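A small override file might look like the sketch below. The key names here are hypothetical, chosen only to illustrate the -f mechanism; check charts/enviroplus-exporter/values.yaml for the chart's actual keys:

```yaml
# enviroplus-values.yaml (illustrative keys; see the chart's values.yaml)
image:
  tag: latest
service:
  port: 8000
```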

Have a look at charts/enviroplus-exporter/values.yaml for more information.

Usage

Now that the Prometheus enviroplus-exporter is set up, we can scrape its endpoint from our Prometheus server and build a nice dashboard with Grafana.

Configure Prometheus

If you haven't set up Prometheus yet, have a look at the installation guide here.

Below is a simple scraping config:

# Sample config for Prometheus.

global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # By default, evaluate rules every 15 seconds.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'external'

# Load and evaluate rules in this file every 'evaluation_interval' seconds.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Scrape targets from this job every 15 seconds.
    scrape_interval: 15s
    scrape_timeout: 15s

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

  - job_name: node
    # If prometheus-node-exporter is installed, grab stats about the local
    # machine by default.
    static_configs:
    - targets: ['localhost:9100']

  - job_name: environment
    # If the enviroplus-exporter is installed, scrape the environment
    # sensor stats from the local machine by default.
    static_configs:
    - targets: ['localhost:8000']
      labels:
        group: 'environment'
        location: 'Amsterdam'

    - targets: ['newyork.example.com:8001']
      labels:
        group: 'environment'
        location: 'New York'

I added two labels to each target in the environment job: group: 'environment' and location (e.g. 'Amsterdam'). The Grafana dashboard uses these labels to distinguish the various locations.
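For example, a Grafana panel can then aggregate readings per location with a PromQL query along these lines (temperature is an illustrative metric name; substitute one the exporter actually exposes):

```
avg by (location) (temperature{group="environment"})
```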

Configure Grafana

I published the dashboard on grafana.com. You can import it using the ID 11605. Instructions for importing the dashboard can be found here.


Roadmap

See the open issues for a list of proposed features (and known issues).

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.

Acknowledgements