Closed: jult closed this issue 4 months ago
@jult - Appreciate the comments here. For me, it was easier because I could target specific components that I knew I could version and make sure were available. The project has changed significantly from where it started as a simple UDP collector with a single Grafana dashboard. It's also possible to make it work without Docker, since it's just a collection of Bash scripts making callouts to jq, Promtail, websocat, logcli, and curl. Assembling all of those pieces in a way that still supports the rest of the functionality built on WeatherFlow's features and APIs could certainly be streamlined, but I also wanted an easy way to share it and have the community try it out. I'll take a look at my priorities list and see about building a non-Docker version. A few other members have asked for something similar, such as an LXC container.
The reality is that the Docker start command (start.sh) can be run locally as long as all of the environment variables are provided and the external binaries listed above are available.
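In case it helps, here's a rough sketch of what that looks like; the environment variable names below are placeholders I'm using for illustration, not the collector's documented configuration keys (those come from the compose file and README):

```bash
#!/usr/bin/env bash
# Rough sketch only: the variable names below are hypothetical placeholders.

# Check that the external binaries the scripts call out to are on PATH.
for bin in jq curl websocat promtail logcli; do
  command -v "$bin" >/dev/null 2>&1 || { echo "missing dependency: $bin"; exit 1; }
done

# Provide the same environment variables the container would normally receive.
export WEATHERFLOW_COLLECTOR_TOKEN="your-tempest-token"   # hypothetical name
export WEATHERFLOW_COLLECTOR_COLLECTOR_TYPE="local-udp"   # hypothetical name

# Run the container's entrypoint script directly.
./start.sh
```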
I'd also be happy to hear how you're running similar data collectors for WeatherFlow devices; I know the community has several active contributors with their own projects.
Well, I really love what you've accomplished. The Grafana graphing is something I would have loved to do for essentially everything output by the Meteobridge Pro I own, which now includes data from a Tempest and several other sensors, like the ones from luftdaten/sensor.community and a Netatmo set that my girlfriend uses. I'm not well versed enough in this area, Grafana included, so I was hoping someone else had already done it, which turns out to be the case, except that in my hardware setup it's nearly impossible to run more Docker engines than I already do. Energy bills are exploding where I live, so I was trying to slim down overhead. I'll take a look at your scripts and see how far I get with achieving the same on just one server.
Totally understand. I haven't seen much overhead from the Docker components (in general). But I'm also not measuring each of the containers.
Below is my utilization on a Raspberry Pi 4, though it includes running Grafana and InfluxDB from my AIO project. There are certainly more ways to optimize, and I understand the need to slim down. I added a to-do list item to keep track of your request and I'll see what I can do to help out.
```
CONTAINER ID   NAME                                  CPU %    MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O         PIDS
aadaf231f354   wxfdashboardsaio-collector-a22afea7   19.81%   23.88MiB / 7.714GiB   0.30%   24.2MB / 18.6MB   26.9MB / 32.8kB   21
3de5b8c86155   wxfdashboardsaio_influxdb             1.04%    154.5MiB / 7.714GiB   1.96%   17.5MB / 8.04MB   241MB / 80.9MB    14
cc859ef6f416   wxfdashboardsaio_grafana              0.13%    34.49MiB / 7.714GiB   0.44%   818kB / 490kB     57.5MB / 176kB    13
```
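For reference, that snapshot is the kind of output you get from a one-shot `docker stats` call:

```bash
# One-shot, non-streaming snapshot of per-container CPU/memory/network/IO usage.
docker stats --no-stream
```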
A new version of my WeatherFlow Collector was just released. While it is still typically deployed as a Docker container, you can run it directly in a Python environment. This version has much less overhead than what I started out with years ago! :)
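For anyone wanting to try the Docker-free route, a very rough sketch might look like this; the entry-point and variable names are hypothetical placeholders, so check the project's README for the real ones:

```bash
# Hedged sketch only: file and variable names are hypothetical placeholders.
# Assumes you are already inside a checkout of the collector repository.
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt            # assumes a requirements.txt is shipped
export WEATHERFLOW_COLLECTOR_TOKEN="..."   # hypothetical configuration variable
python3 weatherflow_collector.py           # hypothetical entry-point name
```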
Is all this Docker overhead really making it easier? I mean, stacking entire operating systems just to run a few apps... I can run all of this on about a third of the power footprint using a single Debian system, with everything on localhost or Unix sockets. You could even serve Grafana through a static SSH tunnel and then use a remote NGINX proxy_pass for access to the locally hosted Grafana, etc. It also makes the whole system more secure, because you can firewall the entire thing (using CSF/LFD, for example) without risking one of those stacked Docker containers being exploited; not every detail gets the same scrutiny when an image focuses only on the apps it needs to serve. Not to mention that the Docker engine itself is yet another point of failure (and risk) you need to have running on your network.
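Something along these lines, with host names, ports, and user names purely illustrative:

```bash
# Illustrative only: host names, ports, and user names are placeholders.
# Grafana stays bound to 127.0.0.1 on the single Debian box; a persistent
# reverse SSH tunnel publishes it to the proxy host's loopback interface.
autossh -M 0 -f -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -R 127.0.0.1:3000:127.0.0.1:3000 tunnel-user@proxy.example.org

# On the proxy host, NGINX only proxies to the tunnel endpoint, e.g.:
#   location / { proxy_pass http://127.0.0.1:3000; }
# so the firewall (CSF/LFD or similar) only needs 80/443 open on that one host.
```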