raboof / nethogs

Linux 'net top' tool
GNU General Public License v2.0
3.2k stars, 289 forks

Support Telegraf #128

Open chronossc opened 7 years ago

chronossc commented 7 years ago

Hello! I love nethogs, and I plan to use it on my mining rigs, but via Telegraf (sending to InfluxDB and visualizing in Grafana).

Telegraf supports many plugins that parse logs or handle http requests (to internal APIs for example).

Would any of this be interesting to implement? There is no such tool in the Telegraf/Influx community :|.

Thx!

raboof commented 7 years ago

Thanks for the question!

I'm not familiar with Telegraf, but if you don't need the process information and port/protocol information is sufficient, the nethogs approach is probably overkill.

chronossc commented 7 years ago

I do need it :), I want to store network data per process.

jantman commented 7 years ago

I'm actually working on something very similar to this, albeit using Graphite instead of InfluxDB. My plan is to use Python (via the included contrib/python-wrapper.py around libnethogs) to send data to statsd, which will then push it to Graphite.
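A minimal sketch of that pipeline, assuming the libnethogs wrapper delivers per-process sent/received byte counts to a callback (the record fields and callback shape here are illustrative, not the wrapper's actual API; the real struct lives in contrib/python-wrapper.py). The statsd side needs nothing more than a UDP socket:

```python
import socket

STATSD_ADDR = ("127.0.0.1", 8125)  # conventional statsd UDP port

def format_metric(name, value, mtype="g"):
    """Render one metric in the plain statsd wire format: "name:value|type"."""
    return f"{name}:{value}|{mtype}"

_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_metric(name, value, mtype="g"):
    """Fire-and-forget one metric to statsd over UDP."""
    _sock.sendto(format_metric(name, value, mtype).encode(), STATSD_ADDR)

def on_nethogs_update(record):
    """Hypothetical callback: 'record' here is a dict with name, pid,
    sent_bytes and recv_bytes keys; the real wrapper passes a
    NethogsMonitorRecord struct, so adapt the field access accordingly."""
    # statsd uses '.' as a namespace separator, so strip path and dots
    proc = record["name"].rsplit("/", 1)[-1].replace(".", "_")
    prefix = f"nethogs.{proc}.{record['pid']}"
    send_metric(prefix + ".sent_bytes", record["sent_bytes"])
    send_metric(prefix + ".recv_bytes", record["recv_bytes"])
```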

As to your telegraf use case, I don't think nethogs will work for that without something in between. The issue here is that telegraf is plugin-based, but nethogs needs to be running constantly (i.e. a daemon, as discussed in #127 ) to collect data; it works by actually capturing every packet going over the network interfaces and associating them with a process.

So you'd need a daemon running and collecting data via nethogs, plus some way of getting that data into telegraf - which is entirely possible. How you do that depends on what technologies you're using: if you run statsd (or can), there's an input plugin for it. You could also do anything from (at the simple end) writing the nethogs data to a log file and parsing it periodically, to (less simple) shipping it over RabbitMQ or another AMQP broker.
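If you go the statsd route, the receiving side could be a Telegraf config fragment along these lines (a sketch only; Telegraf does ship built-in statsd input and InfluxDB output plugins, but check your Telegraf version's documentation for the exact option names):

```toml
# Receive metrics from the nethogs collector daemon
[[inputs.statsd]]
  protocol = "udp"
  service_address = ":8125"

# Forward everything to a local InfluxDB
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "nethogs"
```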

You're welcome to look at my code when I've got it working, but right now I'm running into some strange issues with the counters: I can't reliably tell how much of the data is new and how much isn't...
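One way to handle that (a sketch of the general technique, not jantman's actual code): treat the reported numbers as cumulative counters keyed by (pid, process name) and emit only the delta since the previous update, treating a decrease as a counter reset (e.g. process restart or PID reuse):

```python
_last_seen = {}  # (pid, name) -> last cumulative byte count observed

def delta_bytes(pid, name, total):
    """Return the number of bytes that are new since the previous update.

    If the counter went backwards (process restart, PID reuse), treat the
    new total as a fresh counter instead of emitting a negative delta.
    """
    key = (pid, name)
    prev = _last_seen.get(key)
    _last_seen[key] = total
    if prev is None or total < prev:
        return total
    return total - prev
```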

Dubrzr commented 6 years ago

@jantman Hi Jantman, did you succeed in sending logs to statsd? If so, can you share your work on this? Thank you very much ;)

jantman commented 6 years ago

@Dubrzr I did succeed in sending to statsd, though I'm not using the code anymore.

My code is here for the actual Python daemon and here for a systemd unit.

Note that the above code includes some custom logic for specific use cases of mine, such as manipulating git, ssh, and terraform command lines for cleaner stats.