alykhachov opened this issue 9 months ago
Hi,
So just to confirm, the files are being served from the PHP application and not the static share?
If so, I guess this is why
2024/01/29 19:45:08 [alert] 18#18 app process 30114 exited on signal 7
Though why it's getting a SIGBUS... anything interesting in dmesg?
Correct, files are being served from the PHP application. The app is running in a container based on alpine:3.18.
My initial thought is it's being OOM (Out of Memory) killed. How much memory is assigned to the container?
Anything like
$ dmesg | grep "Out of memory"
[1324992.026646] Out of memory: Killed process 414564 (Isolated Web Co) total-vm:4338224kB, anon-rss:1095696kB, file-rss:4020kB, shmem-rss:1660kB, UID:1000 pgtables:7820kB oom_score_adj:167
in dmesg, systemd journal or other system logs?
The container is running without a memory limit; the instance has 2GB RAM. Nothing interesting in the logs.
What's the host OS?
Amazon Linux 2
So a SIGBUS should be generating a coredump. Though it's possible you don't have coredumps enabled (ulimit -c 0) or something is intercepting them and whisking them off, e.g. systemd-coredump(8).
So my question is, do you see any signs of coredumps being generated?
You can check
$ cat /proc/sys/kernel/core_pattern
core
$ cat /proc/sys/kernel/core_pattern
|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
The first is when nothing special is set up; the second is using the systemd-coredump(8) facility and can be checked with coredumpctl(1).
You can check ulimit -c
$ ulimit -c
unlimited
That's good; if you see 0, that's bad...
Whether you check these in the host or container may depend on the container system you're using...
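For example, if it's Docker, something along these lines (container name is just a placeholder) shows what the app processes actually see:

$ docker exec <container> sh -c 'ulimit -c; cat /proc/sys/kernel/core_pattern'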
If you're using systemd-coredump(8) then you could just try doing a
$ coredumpctl gdb
Which will let you get a backtrace (bt).
Or if you have a core file
$ gdb /path/to/unitd /path/to/coredump
(gdb) bt full
One thought occurs to me: how reproducible is this? Does it happen every time with a file of a certain size or more? Or does the amount of file that gets successfully transferred vary by some amount?
Just wondering if perhaps the script is buffering the file data in memory and you're hitting the PHP memory_limit or somesuch...
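e.g. something along these lines (just a sketch with a made-up path, not your actual code)

<?php
// Reads the whole file into memory before sending it -- with a
// ~600MB file this is liable to collide with a typical memory_limit.
$data = file_get_contents('/path/to/big.iso');
header('Content-Type: application/octet-stream');
header('Content-Length: ' . strlen($data));
echo $data;

rather than streaming it with readfile()/fpassthru().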
Good morning @alykhachov. Are the files shared via a PHP script, or are they shared / accessed via the static share?
From your curl output above it looks to me like there is another proxy server fronting Unit? Is that correct?
I will try to download an ISO using the latest version of our Docker Container and let you know the results. @ac000 thanks for investigating!
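For reference, the test setup will be roughly along these lines (image tag, paths and port are placeholders; this leans on the official image's /docker-entrypoint.d config loading and assumes the listener in unit.json is on port 8000):

$ docker run -d --name unit-iso-test -p 8080:8000 \
      -v "$PWD/app:/www" \
      -v "$PWD/unit.json:/docker-entrypoint.d/unit.json" \
      unit:php8.2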
Hi @tippexs, the files are shared via a PHP script; they exist locally, the app validates headers and whatnot and then returns the file. Regarding the proxy - yes, I'm behind a proxy in this test; I don't have time to set up a new instance with public access. However, I don't think it's contributing to the problem, as it works fine with nginx+php-fpm.
I assume your memory_limit in php.ini is set a good deal larger than the files that are failing to download?
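A quick way to check (bearing in mind the CLI can load a different php.ini than Unit's PHP module):

$ php -r 'echo ini_get("memory_limit"), PHP_EOL;'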
We really need to find the cause of the SIGBUS's, if you don't have any coredumps, then do you have a simple reproducer?
I could probably knock up a simple php script to download a file but it's unlikely to work anything like yours.
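Probably something like this (untested sketch, path made up):

<?php
// download.php -- minimal reproducer: stream a large local file to the client.
$path = '/srv/files/big.iso';
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($path) . '"');
header('Content-Length: ' . filesize($path));
readfile($path);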
@alykhachov are we talking about something like this? https://laravel.com/docs/10.x/responses#file-responses
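i.e. a route/controller doing roughly this (sketch only, path is made up):

use Illuminate\Support\Facades\Route;

Route::get('/download-test', function () {
    // response()->download() streams the file from disk and sets
    // the Content-Disposition header for you.
    return response()->download('/srv/files/big.iso');
});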
Happy to draft a small demo project and share that. @javorszky, if you have time to do that, that would be great.
OK, so PHP seems to handle exceeding its memory_limit gracefully...
@alykhachov can you share a phpinfo(); output with us? And the contents of the php.ini file that you're passing to the Laravel app?
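If it's easier, a temporary route like this (remove it again afterwards) would show what the PHP module inside Unit actually sees, which can differ from php -i on the CLI:

use Illuminate\Support\Facades\Route;

Route::get('/phpinfo', function () {
    // Dumps the runtime PHP configuration of the Unit-embedded module.
    phpinfo();
});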
Hi folks, appreciate your help. I'm no PHP dev myself, just an admin with the task of deploying some legacy thing. I thought that Unit would be better than supervisord+nginx+php-fpm for packaging in a container.
From what I can make out, the app is calling this method
here is the php.ini.txt
Thanks @alykhachov. It is actually better than supervisord + nginx + php-fpm :) I have been using it for my PHP stack for 4 years without any issues. I will kick off a container later and see if I can reproduce it.
Hi folks, I have a PHP Laravel app that serves big binary files, up to 600MB.
unit version: 1.31.1
config:
retrieving 24MB file - ok
retrieving 93MB file - failure
curl output
unit log
Will appreciate any advice on which option to tune or how to properly debug this.
Thanks in advance