kadomino opened this issue 3 years ago
Hi @kadomino - sorry for the late reply.
mailu (that is Mailu/Mailu) is developed for a setup via docker-compose on a single node.
I think we should make sure this is clear in the Issue/PR templates and documentation.
Everything k8s-related should go to the Mailu/helm-charts repository (maintainers needed).
There are many things to be thought of when using mailu via k8s or docker-swarm on multiple nodes (encrypted inter-container communication for example).
That said: freshclam runs as a daemon (not via cron). I can implement a flag to disable freshclam via the environment. The helm chart would then need to set this flag and also create a matching k8s CronJob object. I'm sure we can work something out if there's another "flavor" of the antivirus container needed for this to work.
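As a rough sketch of what that could look like on the chart side (the flag name FRESHCLAM_ENABLED and the image tag are placeholders, not existing Mailu settings), the antivirus Deployment would simply pass the flag through the environment:

```yaml
# Hypothetical sketch: the antivirus Deployment disables the in-container
# freshclam daemon via an environment flag checked by the entrypoint.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clamav
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clamav
  template:
    metadata:
      labels:
        app: clamav
    spec:
      containers:
        - name: clamav
          image: mailu/clamav:1.8        # image/tag chosen for illustration only
          env:
            - name: FRESHCLAM_ENABLED    # placeholder flag name, not an existing variable
              value: "false"
          ports:
            - containerPort: 3310        # clamd TCP port
```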
Hi @ghostwheel42 - thanks for looking into this :-) No problem for the delay - there is no urgency.
Indeed, I do understand that Mailu's main platform is docker-compose. Nevertheless, in my experience it works fine on K8S. I have been using it in production for 6 months now and the only issue I have encountered is this freshclam DB corruption. This statement should be taken with a grain of salt, though, because my use case is quite basic. Also, I only noticed the problem because on my particular K8S cluster the nodes were rebooting frequently (for unrelated reasons), so there was a high probability of a pod being killed in the middle of a freshclam download.
Sorry for missing the point that freshclam runs as a daemon and not as a cron in the clamav container. In any case this still violates the "one service = one container" principle, which is more strictly needed under K8S than under docker-compose.
Indeed, it would be great if you could somehow allow running freshclam and clamav in two different containers (they would need to operate on a common ReadWriteMany storage, of course). This would eliminate the corruption issue on K8S, I think.
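For reference, the shared storage part is straightforward, assuming the cluster offers a ReadWriteMany-capable storage class (NFS, CephFS, etc.); the claim name, size and mount path below are only examples:

```yaml
# Shared signature-database volume, mounted by both the clamd pod and the
# freshclam workload (e.g. at /var/lib/clamav, the usual database path).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clamav-db
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
```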
Concerning the Helm Chart, I noticed it recently and made a fork with additions for my needs, which I would be happy to contribute back. I can also write the K8S CronJob part if/when the separate freshclam image becomes available.
Hi @kadomino - I know the "one service = one container" principle, but I think we'll have to live with what we have for now. The team is very small and I only recently joined to implement a configuration import/export I needed for my setup.
I think the Dockerfile/entrypoint of the antivirus container could be changed to run in 3 modes (via env): the current combined mode (clamd plus the freshclam daemon, as used by docker-compose), a clamd-only mode, and a freshclam-only mode.
The helm-chart could then be updated to make use of modes 2 and 3. What do you think?
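To illustrate (the variable name ANTIVIRUS_MODE and the values used here are assumptions, not an agreed interface), the chart values could end up looking roughly like this:

```yaml
# Hypothetical helm values: the long-running scanner uses mode 2, the
# scheduled update job uses mode 3.
clamav:
  deployment:
    env:
      ANTIVIRUS_MODE: "clamd-only"       # mode 2: scanner without freshclam
  freshclamCronJob:
    schedule: "0 */4 * * *"              # example refresh interval
    env:
      ANTIVIRUS_MODE: "freshclam-only"   # mode 3: one-shot signature update
```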
Hi @ghostwheel42 - thanks again, this seems like a good idea to me :-) Indeed, it's not that important what's in the image, as long as it can be used for running different types of containers.
In terms of help from me, I'm probably not the right person to touch the images, but I can help with Helm & K8S. Maybe you could get the maintainer of the helm chart involved in this discussion at the right time.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Environment & Versions
Environment
Kubernetes
Versions
Using v1.8.0, but this applies to any version.
Description
It seems that freshclam runs within the clamav container. While this often works fine, when an orchestrator is in play (K8S) it may (and regularly does for me) corrupt the downloaded DB and cause Mailu to stop receiving emails.
Replication Steps
Run Mailu on K8S and delete the clamav pod while freshclam is downloading its DB.
Expected behaviour
One of the principles of using an orchestrator is that no container should ever run a cron, because the orchestrator is the only one in charge of all the workloads. In the case of Mailu, this means that freshclam (or any other "container cron") should be run in a separate pod via a K8S CronJob object.
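A minimal sketch of such a CronJob follows; the image, schedule, mount path and claim name are placeholders chosen for illustration, not part of Mailu or its helm chart:

```yaml
# freshclam as a separate, orchestrator-managed workload: one update run per
# schedule tick, writing into a shared ReadWriteMany volume used by clamd.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: freshclam-update
spec:
  schedule: "0 */4 * * *"            # example: refresh signatures every 4 hours
  concurrencyPolicy: Forbid          # never run two downloads in parallel
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: freshclam
              image: clamav/clamav:stable          # placeholder image
              command: ["freshclam"]               # runs one update and exits
              volumeMounts:
                - name: clamav-db
                  mountPath: /var/lib/clamav       # assumed database path
          volumes:
            - name: clamav-db
              persistentVolumeClaim:
                claimName: clamav-db               # the shared RWX claim
```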
Logs
When the problem occurs, the Postfix logs show that the Clamav pod refused the connection and the Clamav logs show that the DB is corrupted.