Closed: afawaz2 closed this issue 1 year ago
Are you using Cloudron?
No, I am using docker-compose from https://github.com/tiredofit/docker-freescout/blob/main/examples/docker-compose.yml
Are you using modern Microsoft OAuth IMAP authentication or regular IMAP authentication?
You'll need to debug this: https://github.com/freescout-helpdesk/freescout/blob/dist/app/Console/Kernel.php#L105
I have a mix: some mailboxes use modern Microsoft OAuth IMAP and one uses regular IMAP via Amazon. We are still suffering from the empty-emails problem, so we are migrating to another provider.
I'll add some print statements for $fetch_command_pids. How do you recommend I log the variable?
Try to figure out why this line does not kill existing fetch-emails processes: https://github.com/freescout-helpdesk/freescout/blob/dist/app/Console/Kernel.php#L118
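For context, a minimal sketch of the kind of kill loop that line implements, assuming $fetch_command_pids holds PIDs collected from ps; this is illustrative, not FreeScout's actual implementation:

    // Illustrative kill loop over stale fetch-emails PIDs.
    // Requires ext-posix; signal 9 is SIGKILL and cannot be trapped.
    foreach ($fetch_command_pids as $pid) {
        // posix_kill() with signal 0 only probes that the process exists
        // and that we are allowed to signal it.
        if (posix_kill((int) $pid, 0)) {
            posix_kill((int) $pid, 9); // SIGKILL
        }
    }

If the kill appears to succeed but the process lingers in ps, a common Docker pitfall is that the dead child is never reaped because PID 1 in the container is not an init process, leaving zombies behind.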
Will do. Does this give you a hint? After killing the process I got the following Laravel errors:
The process has been signaled with signal "9". {"exception":"[object] (Symfony\Component\Process\Exception\RuntimeException(code: 0): The process has been signaled with signal \"9\". at /www/html/overrides/symfony/process/Process.php:434) [stacktrace]"}
Add the following logging at line 112 (https://github.com/freescout-helpdesk/freescout/blob/dist/app/Console/Kernel.php#L112):
\Log::error('fetch_command_pids: '.count($fetch_command_pids));
\Log::error('mutex: '.\Cache::get($mutex_name));
Wait until you have fetch_command_pids and mutex values in Manage > Logs > App Logs and post them here.
I am not getting logs. I am not experienced with Laravel; do I need to reload PHP (and if so, how?) for the updated code to take effect?
There is no need to do anything else. Just wait until there are multiple fetch-emails processes running, then check the logs.
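While waiting, a quick way to count them from PHP (a sketch; Kernel.php builds $fetch_command_pids from a similar ps listing, though the exact command there may differ):

    $output = [];
    // The [f] trick keeps grep from matching its own process.
    exec("ps ax -o pid,command | grep '[f]reescout:fetch-emails'", $output);
    $pids = array_map(function ($line) {
        return (int) preg_split('/\s+/', trim($line))[0];
    }, $output);
    echo count($pids).' fetch-emails process(es): '.implode(', ', $pids).PHP_EOL;

A count that keeps growing across scheduler runs while the 'mutex' log line stays non-empty would be consistent with the stuck processes this thread is chasing.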
This is what I got; I will wait for the issue to recur and update the ticket.
Killing processes that get stuck in Docker has been fixed in FreeScout v1.8.69.
We are seeing a new issue where FreeScout does not fetch emails until we clear the cache or force a fetch. We don't see any stuck jobs in the queue, but we have many fetch processes that have been running for a long time.
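"No fetching until the cache is cleared" is consistent with a stale scheduler mutex: Laravel's withoutOverlapping() stores a lock in the cache, and if a fetch run dies without releasing it, later runs are skipped until the key expires or is removed. A hedged workaround sketch, where $mutex_name stands for whatever key Kernel.php computes (the value logged as 'mutex: ...' above):

    // Hypothetical targeted fix: drop only the stale mutex instead of
    // running a full `php artisan cache:clear`.
    if (\Cache::get($mutex_name)) {
        \Cache::forget($mutex_name);
        \Log::info('Cleared stale fetch-emails mutex: '.$mutex_name);
    }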