reevelau opened 3 years ago
Any updates on this?
@cksharma11 I'm having the same issue with a different type of task. I'm working around it at the moment by using the `timeout` command to kill the job if it takes much longer than it should; supercronic can then start it again on the next iteration. E.g. something like:

```
# Kill after 30 minutes.
* * * * * timeout 30m /usr/local/bin/php -d memory_limit=-1 /var/www/html/bin/magento cron:run 2>&1
```
https://www.tecmint.com/run-linux-command-with-time-limit-and-timeout/
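To illustrate the `timeout` behavior outside of Magento (a minimal sketch; `sleep 5` stands in for the long-running cron job), GNU coreutils `timeout` kills the command when the limit expires and exits with status 124:

```shell
# timeout sends SIGTERM after 1 second and reports exit status 124,
# so the hung job no longer blocks the next cron iteration.
timeout 1s sleep 5
echo "exit status: $?"   # prints: exit status: 124
```

If the job ignores SIGTERM, `timeout -k 10s 30m …` additionally sends SIGKILL 10 seconds after the initial signal.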
Any updates?
I have the same issue with a PHP script. Do we have any update regarding this issue?
@mehrdad-op `shareProcessNamespace` solved it for me.
> shareProcessNamespace
>
> When process namespace sharing is enabled, processes in a container are visible to all other containers in the same pod.
I do not see how this can help me, because the zombie processes are created in my single container.
@mehrdad-op I have a single container too, but `shareProcessNamespace` solved my problem. Read https://opendev.org/airship/promenade/commit/0ffde4162ecb7539fe505840c3e4fdd4e750deb0
https://cloud.google.com/architecture/best-practices-for-building-containers#solution_2_enable_process_namespace_sharing_in_kubernetes
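For reference, enabling it is a one-line change in the pod spec (a sketch; the pod name, container name, and image below are placeholders, not from this thread). With process namespace sharing enabled, the pod's `pause` process becomes PID 1 and reaps orphaned zombies on the containers' behalf:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cron-runner            # placeholder name
spec:
  shareProcessNamespace: true  # pause runs as PID 1 and reaps zombies
  containers:
  - name: app
    image: my-magento-image    # placeholder image
```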
I was trying to run Magento's scheduled tasks with supercronic, but sometimes the scheduled job isn't triggered because the previous job hasn't ended. Running `ps ufx` shows that supercronic is waiting for a zombie process to exit. Here is the content of /etc/cron.d/magento2-cron