louislam / uptime-kuma

A fancy self-hosted monitoring tool
https://uptime.kuma.pet
MIT License

Exec monitor #1117

Open otbutz opened 2 years ago

otbutz commented 2 years ago

⚠️ Please verify that this feature request has NOT been suggested before.

🏷️ Feature Request Type

New Monitor

🔖 Feature description

Please add a monitor which executes user provided programs and checks the exit code.

✔️ Solution

The monitor executes programs with optional arguments as provided by the user and checks the exit code. Users of the docker image would need to mount a directory with static binaries and shell scripts in order to use them.

e.g. calling gRPCurl to properly check whether a gRPC service works. This is currently not possible and would mimic Kubernetes' exec probe or Monit's program status test.
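A minimal sketch of the proposed contract, assuming the monitor simply maps exit code 0 to up and anything else to down (the run_check wrapper and the grpcurl invocation are illustrative, not actual Uptime Kuma behavior):

```shell
#!/bin/bash
# Hypothetical exec-monitor contract: run the user-supplied command
# and map its exit code to a monitor status.
run_check() {
  if "$@" > /dev/null 2>&1; then
    echo "UP"    # exit code 0
  else
    echo "DOWN"  # any non-zero exit code
  fi
}

# Example: a gRPC health check via gRPCurl, assuming the target
# implements the standard grpc.health.v1 health service:
#   run_check grpcurl -plaintext localhost:50051 grpc.health.v1.Health/Check
```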

❓ Alternatives

No response

📝 Additional Context

No response

sysr-q commented 2 years ago

I concur. I'd like to use something like this to monitor my various Syncthing/Restic backups, and show the results on my status page. That way at a glance I could see when the latest backup was without having to SSH into the storage box and check it by hand.

Straight exec or even a shell script (either or) would be great.

weirlive commented 2 years ago

This would be great, I use Duplicati and would love to see that the backup completed.

poblabs commented 1 year ago

+1 to this feature request! It would help me migrate away from Nagios for my custom checks.

sysr-q commented 1 year ago

Just circling back to say the way I solved this for myself in the interim is having a scheduled job run in my Nomad cluster that hits a Push type monitor with a high heartbeat interval. I chose 90,000 seconds since it's a bit more than 24 hours (daily backups).

[screenshot: Push monitor configured with a heartbeat interval of 90000 seconds]

Then in Nomad I have a periodic batch job that just executes my desired task every morning - here it's PostgreSQL dumps to a folder that Restic picks up later (this could be a cronjob or scheduled Kubernetes task or whatever). After the task succeeds (or fails) I hit the Uptime Kuma push endpoint via wget with "up" (has to be status=up exactly) or "failed" (can be anything else) accordingly.

#!/bin/bash
# Note: the [[ ... ]] blocks below are Nomad template syntax,
# rendered before the script runs.
umask 0077
timestamp=$(date "+%Y%m%d")

[[ range $db := .app.postgres.backup.databases ]]
output="/dump/$timestamp-[[ $db ]].sql"
echo -n "Backing up [[ $db ]] to: $output"
if pg_dump [[ $db ]] > "$output"; then
  echo " - success"
  wget -q -O /dev/null 'https://uptime.example.com/api/push/xxSZxxh5xx?status=up&msg=IT%27S%2012%20O%27CLOCK%20AND%20ALL%27S%20WELL%21'
else
  echo " - failed!"
  wget -q -O /dev/null 'https://uptime.example.com/api/push/xxSZxxh5xx?status=failed&msg=Postgres%20backups%20failed'
fi
[[ end ]]
exit 0

Not an exact solution, since you only find out an operation failed on a 24 hour lag when the check in doesn't happen. So still :+1: for this being added natively. :)
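In the meantime, the workaround above generalizes to a small helper any cron or batch job can use; the base URL and token are the same placeholders as in the script above:

```shell
#!/bin/bash
# Build the Uptime Kuma push URL for a given exit code.
# Note: status must be exactly "up" for success; anything else counts as down.
KUMA_BASE="https://uptime.example.com/api/push/xxSZxxh5xx"  # placeholder token

push_url() {
  local exit_code=$1
  if [ "$exit_code" -eq 0 ]; then
    echo "${KUMA_BASE}?status=up&msg=OK"
  else
    echo "${KUMA_BASE}?status=down&msg=exit+code+${exit_code}"
  fi
}

# Usage: run the real check, then report its result:
#   some_backup_command; wget -q -O /dev/null "$(push_url $?)"
```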

jerkstorecaller commented 1 year ago

Such a feature should be considered high-priority because it would immediately expand the supported monitor types without requiring @louislam to write explicit support for them. Before adding a single new monitor, add this, because it implicitly adds every protocol under the sun.

For example, let's say I wanted to use SNMP monitoring for an old router (this is just an example; it can be any protocol that has command-line packages). Instead of asking you "please add SNMP support", "oh Louis, I need SNMPv3, you only added v2", I'd just install net-snmp on Linux and call snmpget; Kuma checks the result code, and the problem is solved:

#!/bin/bash
snmpget -v2c -c public 192.168.1.1 .1.3.6.1.2.1.1.1.0

I could even do all the advanced stuff I want in a bash script.

louislam commented 1 year ago

It is not that easy. Please read my comment from https://github.com/louislam/uptime-kuma/pull/3178#issuecomment-1605493299.

jerkstorecaller commented 1 year ago

It is not that easy. Please read my comment from #3178 (comment).

Frankly, you're a developer of a tool, you can't stop a user from using the tool to destroy his system. You already require the admin to authenticate to add checks, what more can you do? GNOME Terminal doesn't stop the user from doing "rm -rf *" :)

If you really want to hand-hold the user you could restrict the execution to a list of scripts defined in a specific directory. So for example:

docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data -v uptime-kuma-custom-monitors:/app/custom-monitors louislam/uptime-kuma

Let's say the uptime-kuma-custom-monitors volume (mounted at /app/custom-monitors) contains:

snmp-check.sh
email-check.sh
tftp-check.sh

When the user is adding a new monitor, if they select Custom Monitor as the type, you ls /app/custom-monitors, show every file as an option in a dropdown selection. So in my case I would select snmp-check.sh. And then you run this pre-defined task. No concerns here, right?
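A hedged sketch of how that whitelist could be enforced server-side (the directory matches the proposal above; the listing and validation logic are purely illustrative):

```shell
#!/bin/bash
# Only scripts that actually live in the whitelist directory may run.
MONITOR_DIR="${MONITOR_DIR:-/app/custom-monitors}"

list_monitors() {
  # Plain executable files only: no recursion, no symlink following.
  find "$MONITOR_DIR" -maxdepth 1 -type f -perm -u+x -printf '%f\n' 2>/dev/null
}

validate_monitor() {
  local name=$1
  # Reject anything that is not a bare filename (blocks ../ traversal
  # and hidden files).
  case "$name" in
    */*|.*|"") return 1 ;;
  esac
  [ -f "$MONITOR_DIR/$name" ] && [ -x "$MONITOR_DIR/$name" ]
}
```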

stacksjb commented 1 year ago

I really like the approach of being able to execute commands. I get the security risk - it's definitely concerning.

One way a vendor would address this is through only allowing the execution of trusted/predefined scripts within a folder.

(quoting jerkstorecaller's suggestion above: restrict execution to predefined scripts in a mounted /app/custom-monitors directory and present them as a dropdown)

Yes, this is the way that most vendors would address this type of concern. You could even require the scripts to be signed, hashed, or pulled from a trusted source.

Then, within the UI, you would simply specify the script and any parameters or variables.

jerkstorecaller commented 1 year ago

Definitely feeling the shortcomings of Uptime Kuma without this feature.

I made a list of the services I want to monitor and I have more protocols unsupported by Kuma than supported ones. 😆 As it is, Kuma seems designed primarily for web developers: all the supported monitors are web-adjacent.

Now my choices are:

  1. Find a worthy alternative to Kuma which allows running arbitrary scripts as your uptime check. Any recommendations?
  2. If there's no decent alternative, write an HTTP API which, when called, executes an arbitrary Linux command. Run it and have Kuma call it, e.g. http://localhost/workaround/check-smtp/ which then calls check-smtp.sh and, if the exit code is non-zero, returns 404 or whatever to signify failure. I'm sure it's nothing hard; it's more about adding an extra piece of custom complexity.

Btw some features on @louislam's todo list, like domain expiration warning, are trivially implementable with the feature we're requesting:

#!/bin/bash

expiration_date=$(echo | openssl s_client -servername site.com -connect site.com:443 2>/dev/null | openssl x509 -noout -dates | grep notAfter | cut -d'=' -f2)
expiration_date_seconds=$(date -d "$expiration_date" +%s)
current_date_seconds=$(date +%s)
days_left=$(( (expiration_date_seconds - current_date_seconds) / 86400 ))

if [ "$days_left" -lt 10 ]; then
  echo "Less than 10 days left for certificate expiration. We must warn the user!"
  exit 1
else
  echo "Still plenty of time!"
  exit 0
fi

That said, I realize Kuma is trying to be multiplatform (but who is doing their IT on Windows?), and Louis would probably prefer a cross-platform solution. Although bash is multiplatform, if the Windows user installs Cygwin.

chakflying commented 1 year ago

If the services you are monitoring are not web based, and you are comfortable writing custom scripts, the Push type monitor should work well enough for you.

bernarddt commented 1 year ago

This would be great, I use Duplicati and would love to see that the backup completed.

Hi @procheeseburger, I also use Duplicati. For monitoring completed backups I use www.duplicati-monitoring.com, a free service that alerts you when backups complete or fail (it actually reads the report from Duplicati) and sends a daily email with the number of backups completed. Not sure how you would use gRPC for this, but you can already use the Push Monitor type and a heartbeat alert from Duplicati to monitor this.

bernarddt commented 1 year ago

Definitely feeling the shortcomings of Uptime Kuma without this feature.

Btw some features on @louislam todo list like domain expiration warning are trivially implementable with the feature we're requesting:

I'm sorry, but if you are so good with bash scripts, can't you simply implement your monitoring requirements with a simple bash script on a cron job, and do a wget/curl call to the Push notification URL with an up or down status depending on the exit code?

This is what I do for my Windows PS scripts on a Windows task (yes, we use Windows-based hosting and monitoring). The important part here is that the service I'm monitoring is behind a NAT and firewall, and my Uptime Kuma instance is running at another independent location. This way I can monitor anything anywhere (from different data centres), and my Uptime Kuma notifications are not dependent on the monitored locations' or services' internet access.

My concern with gRPC would be that if your Kuma instance is compromised (it is an internet-facing service) and the attacker figures out they can execute gRPC commands or scripts right from your monitor side, your infrastructure may get infiltrated this way.

ghomem commented 8 months ago

Just found this project and I am impressed by the usability. I would like to upvote the request for the execution of commands, possibly from a whitelist of user-given directories. With this implemented, the entire universe of monitoring-plugins from here:

https://github.com/monitoring-plugins/monitoring-plugins

would become available. And this is an enormous tried-and-tested collection. It would allow Uptime Kuma to use ssh checks (see #2609) to monitor exquisite things (snmp, ldap, smb, uptime, sensors, file age,...).

maple3142 commented 7 months ago

Regarding the security risk of executing arbitrary commands, I think it is only a problem if the uptime-kuma account is shared with other users. (The server owner is already capable of executing arbitrary code anyway.)

One way to solve this would be to disable the feature by default unless an environment variable is set (UPTIME_KUMA_EXEC_MONITOR_ENABLED=true).
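Such a gate might look like this at the top of an exec-monitor entry point (the variable name is taken from the comment above; the wiring into Uptime Kuma itself is hypothetical):

```shell
#!/bin/bash
# Refuse to run exec monitors unless explicitly opted in.
exec_monitors_enabled() {
  [ "$UPTIME_KUMA_EXEC_MONITOR_ENABLED" = "true" ]
}

if ! exec_monitors_enabled; then
  echo "exec monitors are disabled (set UPTIME_KUMA_EXEC_MONITOR_ENABLED=true to enable)" >&2
fi
```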

redge76 commented 7 months ago

As a workaround, there are some projects that expose commands as a REST API. See: https://github.com/msoap/shell2http https://github.com/adnanh/webhook https://github.com/fdefelici/shellst

That said, if this were directly integrated in uptime-kuma it would be way better. jerkstorecaller's solution is quite elegant and safe: https://github.com/louislam/uptime-kuma/issues/1117#issuecomment-1644136066

ghomem commented 5 months ago

(quoting redge76's suggestion above about projects that expose commands as REST APIs)

I think with one of the web-to-shell bridges we would be able to retrieve an OK/NOK status via the standard HTTPS monitor, but we would not be able to fetch the value of the corresponding metric, right? (e.g. CPU use, load, memory, disk space)

CommanderStorm commented 5 months ago

fetch the value for the corresponding metric right? (ex: CPU use, load, memory, disk space)

Please have a look at https://github.com/louislam/uptime-kuma/issues/819#issuecomment-1363120710 and further discussions in https://github.com/louislam/uptime-kuma/issues/819

ghomem commented 5 months ago

(quoting CommanderStorm's pointer above to #819)

Yup, I see. But I think it would be more interesting to have remote execs as first-class monitors which would grab a metric and plot it - just like happens, for example, with the HTTPS monitor.

I used this intensively with Adagios + SSH. It would be very interesting to bring this to UK, because it has a mind-blowing UI. It would enable the use of the full monitoring-plugins package, which is available on Linux machines and gives you parsing of the OS metrics for free (no need to write scripts by hand like mentioned in #819). These plugins have been distilled over many years, which is an advantage over ad-hoc scripts.

https://github.com/monitoring-plugins/monitoring-plugins

EmaX093 commented 5 months ago

You should really look into this. As another user points out, there are many scenarios where the Push Monitor is not suitable.

I don't buy the security excuse someone posted here; you can always allow executing scripts only from a specific path (as a whitelist) and the problem is gone.

This would open a whole world of monitoring opportunities: Docker logs, ssh, usb ports, etc... an infinite list. Kuma would be the definitive MONITOR.

ghomem commented 5 months ago

Kuma would be the definitive MONITOR.

Very likely as the more complete OSS monitoring tools are far behind UK in terms of UX.

CommanderStorm commented 5 months ago

I don't buy the security excuse

We just don't want people to get angry with us again. See https://github.com/louislam/uptime-kuma/security/advisories/GHSA-7grx-f945-mj96 for another issue that went in the same direction: a feature that, if used maliciously, has security implications. I would argue that if that is the level of security we are operating under, encouraging AUTHENTICATED ARBITRARY CODE EXECUTION allowing for privilege escalation is not something that we can allow.

=> If security folks tell us that this is not ok, then we try to listen. I am not working in security

This is especially a security boundary where crossing might be risky, as this would essentially disable the multi-user (or auth in general!) features of Uptime Kuma.

I would argue that such a feature would essentially only be viable without auth, as circumventing auth is trivial with console access to the machine.

If you can come up with a design that fits into our security model, we can discuss this but currently I don't see such a design.

there are many scenarios where the Push Monitor is not suitable.

I might have overread something, but after re-reading the thread I cannot see a comment that is not addressed. Could you please cite your sources? 😓

If you are asking about monitoring-plugins/monitoring-plugins, this can be another monitor or a set of monitors.

EmaX093 commented 5 months ago

(quoting jerkstorecaller's directory-whitelist suggestion above in full)

@CommanderStorm here you have an example. We are not talking about remote execution of arbitrary code, just about letting users load their own scripts and be happy.

I might have overread something, but after re-reading the thread I cannot see a comment that is not addressed. Could you please cite your sources? 😓

Consider having to watch 18 servers, each running multiple Docker containers, where you only have SSH access and you can't reconfigure their systems to set up push monitors because that IT doesn't belong to you; you don't want to change anything more than necessary. You have to monitor not only whether the containers are running, but whether they are doing what they should, so you manually inspect the logs from each one, parsing them with a lot of logic... This is a custom scenario. I wouldn't expect someone else to code an official plugin for this, but at least let me do it myself.

With Push Monitors you have to open ports, change iptables, use tunnels/VPNs, etc. - a lot of complications for something so trivial to do if you have custom monitors.

CommanderStorm commented 5 months ago

I think you are overcomplicating your life. You can either:

I think from a security standpoint the first one is preferable as there is more compartmentalisation.

CommanderStorm commented 5 months ago

We are not talking about remote execution of arbitrary code, just about letting users load their own scripts and be happy.

The same argument as with https://github.com/louislam/uptime-kuma/security/advisories/GHSA-7grx-f945-mj96 applies though. Whether we call the arbitrary executable a plugin, a shell script or sandboxed JS, I don't see a real difference. (Please correct me if my security understanding is bad.)

@n-thumann (the researcher who discovered https://github.com/louislam/uptime-kuma/security/advisories/GHSA-7grx-f945-mj96) has better ideas on how to prevent such an attack.

We really don't want to ship insecure software, and if the security community thinks something is not secure we should likely listen. My reluctance to allow authenticated remote code execution is especially driven by the liability that upcoming laws like the Product Liability Directive introduce (and the Cyber Resilience Act, but that likely does not matter here): https://fosdem.org/2024/schedule/event/fosdem-2024-3683-the-regulators-are-coming-one-year-on/

ghomem commented 5 months ago

Great discussion. I'd like to add that security is not a topic of concern here and that arbitrary code execution by a user is neither necessary nor desirable.

How I see this:

So, whatever code is executed is code that has been placed by the admin. The admin could just as well delete data, turn off the system, etc. There is no escalation here.

In case the custom scripts need to connect via SSH to remote systems, the code that is executed runs with the privileges of the remote user, which has been provisioned with this in mind - usually a restricted user created for this single purpose. In this use case the SSH port is usually whitelisted by IP, the SSH users are usually whitelisted by name and have their keys auto-managed by a central configuration system.

I am pretty obsessed about security but I do not see a problem here.

ghomem commented 5 months ago

Admin uploads a malicious/exploitable executable to said directory, let's call it sh.

If an admin of any system uploads a malicious/exploitable executable, then the system is already lost and there is nothing that can be done about it. The admin of a mail server can impersonate hosted domain users and send malware on their behalf. The admin of a webserver can host malware discreetly in a subdir of a hosted website. The admin of a DNS server can hijack specific DNS records, and so on.

In regards to https://github.com/louislam/uptime-kuma/security/advisories/GHSA-7grx-f945-mj96:

The problem is that any user is able to install plugins via an API. You need to consider whether you really want any user to do so, and whether an API endpoint is the right way to do it. But this is not the point of the present issue.

mathieu-rossignol commented 5 months ago

It is not that easy. Please read my comment from #3178 (comment).

@louislam,

First hello, and thank you for this beautiful tool. That said, I agree with some others here, this one is a must have:

My 2 cents. So you got it, it's a big 1up for me on this one :+1: :+1: :+1:

Regards (and thx again).

thielj commented 5 months ago
  • Admin uploads a malicious/exploitable executable to said directory, let's call it sh.

A malicious admin can already inject code into U-K to be executed both client and server side. The possibilities are endless. Think displaying Google, Github or bank login pages, phishing credentials, OTPs, mining crypto, DDOS, ... you name it. They have the combined potential of the user's browser session and the server backend at their disposal.

An admin without malicious intent running executables as monitors is not the problem.

Letting unauthenticated or low-privileged users run or even install arbitrary or exploitable code is.

There's no need to replace /bin/sh either. When someone with malicious intent gains shell access, they have hit the jackpot already.

stacksjb commented 5 months ago
(quoting thielj's comment above in full)

As someone who works in cyber, this is 1,000% correct.

The proper way to restrict this is to restrict execution to specific scripts or paths. Yes, someone could replace that specific file, but doing so would require access to the file system, which is game over already.

If you really want to get protective, you could require approval when the file is modified, based on hash or modification date. But I think that's probably overkill.

Specific path restriction is probably adequate - you wouldn't want to allow execution of anything anywhere on the system, because a website compromise will typically let an attacker upload files to web-writable directories.

And all of this illustrates why the web service shouldn't be running as an admin/root anyway...
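The approval-on-modification idea mentioned above could be sketched like this (the .approved sidecar file and function names are made up for illustration):

```shell
#!/bin/bash
# Approve a script by recording its SHA-256; refuse to run it if it changed.
approve_script() {
  sha256sum "$1" | awk '{print $1}' > "$1.approved"
}

script_unchanged() {
  [ -f "$1.approved" ] || return 1
  current=$(sha256sum "$1" | awk '{print $1}')
  [ "$current" = "$(cat "$1.approved")" ]
}

# A runner would then do:
#   if script_unchanged "$script"; then "$script"; else echo "refusing: modified since approval"; fi
```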


thielj commented 5 months ago

One possibility would be to declare the available "exec monitors" in a configuration file, maybe with the option to pass environment variables from the monitor's setup page. Anything beyond that is probably unnecessary: you can't hash all the binaries and libraries a process is going to open.

Also, "plain" Uptime Kuma isn't perfect either. Compared to e.g. adding a shell script calling ldapsearch, exposing the GUI or API to the public is by far the bigger risk - no offense intended! Combine that with full access to the Docker socket and it's an incident waiting to happen.

thielj commented 5 months ago

Just thinking, maybe it's worth differentiating between two distinct use cases: running an instance on an internal machine in a private network vs running an instance exposed to a potentially hostile public.

I can easily reserve the more detailed checks and use of third-party binaries for the internal instance, and use the external instance for push API results, public reachability checks and status pages, without the need to store authorization headers, connection strings and other confidential information.

The "glue" between these two could be groups "pushing forward" their status to the public instance.

CommanderStorm commented 5 months ago

@thielj Let's keep the discussion on track. What you are talking about (having a remote executor that pushes to a primary instance) is tracked here instead:

CommanderStorm commented 5 months ago

@thielj I know this is off-topic:

Combine that with full access to the docker socket

Can be mitigated via proxying => only allowing read access to the services you allow us to read. We only use one API endpoint in a read-only fashion. => I don't see a huge risk here, unless I am missing something. (If there is something, we could discuss it either in ANOTHER issue or in an advisory, depending on content.)

From a security-risk perspective, I think that letting users connect to databases is scarier, as making a user read-only and allowing only a few tables is more advanced stuff (or at least likely to get de-prioritised in the scrum meeting..) => I'd expect a lot of users to connect to their database with way too deep privileges. (FYI: We don't have good documentation around this. If somebody wants to research how to make one of the DBs more secure, we would love to include this in the frontend as a help text or a link to the wiki..)

Also, we would not currently be ISO 27001 or SOC 2 compliant because of:

=> I'd expect the cases where we are used to reflect this (expectation: smaller deployments, hobby/small enterprise). We have no clue what "the fleet" looks like or what monitors are getting used => see https://github.com/louislam/uptime-kuma/issues/4456

thielj commented 5 months ago

Lets keep the discussion on track.

I would think I am. There's no point being paranoid about exec monitors when the elephant 🐘 in the room is a public-facing GUI and, quite often, an exposed docker socket (aka hello root shell).

In a private network, U-K running exec monitors is basically a glorified cron without much risk. Would I generally recommend doing this on a public-facing instance? Not really, and I suppose that's where you're coming from.

What we're discussing here can best be solved by compartmentalizing: running some stuff inside the private network (eliminating the public exposure risk) and everything else on a minimalist throw-away VPS or fly.io (for damage control if the system gets compromised).

Until this is easily doable/supported, the best advice is: don't do it. Or even: we won't implement exec monitors until the other bits are done. People would probably understand this.

CommanderStorm commented 5 months ago

One possibility would be to provide available "exec monitors" in a configuration file

Having configuration files just to have double accounting does not seem worth it (or at least I don't see a benefit). I'd expect a simple "list all executable files in the directory without following links" to be the same level of security.

maybe with the option to pass environment variables from the monitor's setup page

We try to restrict what needs to be configured via environment variables to the least amount of options, as having to dig through a gigantic parameter dump is terrible for everyone involved.

CommanderStorm commented 5 months ago

@thielj compartmentalizing can be discussed in https://github.com/louislam/uptime-kuma/issues/84. In this issue it is off-topic.

thielj commented 5 months ago

@thielj I know this is off-topic:

Combine that with full access to the docker socket

I know, and have socket proxies everywhere. But this is also "advanced stuff". Have you actually documented what access U-K needs? Until then, people will probably use the same proxy for Portainer, U-K and others (i.e. almost fully exposed).

And it's not just database credentials. There are authorization headers, some clear text passwords maybe, API tokens, and so on. As you say, reducing these to the bare minimum is advanced stuff for most users. And even then, you probably don't want to leak the endpoints, error messages, etc. Some apps encrypt them "at rest", but as long as the key is readily available this doesn't change much.

Anyway, let's end this here. I accept that this is only partially related.

pprazzi99 commented 3 months ago

All of the security concerns can be solved; there are a few ways to implement that functionality. And yes, I do agree that some of these things might be security concerns, but every system that allows user input is vulnerable to some extent.

To enable custom script monitoring it might be required to:

Also don't forget that the Docker container would need to be modified by the end user to actually utilize such custom monitoring options, as the image doesn't have all the packages pre-installed.

That would greatly improve monitoring possibilities instead of requiring the developer to write every specific check.