php / php-src

The PHP Interpreter
https://www.php.net

`max_execution_time` reached too early #12814

Closed Muffinman closed 4 months ago

Muffinman commented 10 months ago

Description

Hi, I'm looking for some guidance in tracking down a very strange error with the max_execution_time setting not being honoured correctly.

We have an incoming JSON POST request to a Laravel backend, which does some processing and stores some data.

The request is a few KB of tree data, and the controller calls a rebuildTree() method as below. Internally this makes a lot of calls to the MySQL DB and Redis DB.

Category::scoped(['site_key' => config('cms_scope')])
    ->rebuildTree($request->data);

The problem I'm facing is that this fails every time after 1 second with this error:

Maximum execution time of 30 seconds exceeded

I have added some debugging to the exception handler to try to work out why it is doing this:

var_dump(time() - $_SERVER['REQUEST_TIME']);
var_dump(getrusage());
var_dump(ini_get('max_execution_time'));
var_dump(error_get_last());
int(1) // Request had only been processing for ~1s before exception happened

array(17) {
  ["ru_oublock"]=>  int(0)
  ["ru_inblock"]=>  int(0)
  ["ru_msgsnd"]=>  int(43432)
  ["ru_msgrcv"]=>  int(51349)
  ["ru_maxrss"]=>  int(71073792)
  ["ru_ixrss"]=>  int(0)
  ["ru_idrss"]=>  int(0)
  ["ru_minflt"]=>  int(17193)
  ["ru_majflt"]=>  int(17)
  ["ru_nsignals"]=>  int(15)
  ["ru_nvcsw"]=>  int(7368)
  ["ru_nivcsw"]=>  int(104279)
  ["ru_nswap"]=>  int(0)
  ["ru_utime.tv_usec"]=>  int(212295)
  ["ru_utime.tv_sec"]=>  int(8)
  ["ru_stime.tv_usec"]=>  int(564183)
  ["ru_stime.tv_sec"]=>  int(1)
}

string(2) "30"

array(4) {
  ["type"]=>  int(1)
  ["message"]=>  string(45) "Maximum execution time of 30 seconds exceeded"
  ["file"]=>  string(112) "/path/to/site/vendor/laravel/framework/src/Illuminate/Redis/Connections/PhpRedisConnection.php"
  ["line"]=>  int(405)
}
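
For reference, those rusage fields can be reduced to CPU seconds and compared against the wall-clock time and the limit. A minimal sketch (note that under php-fpm, getrusage() covers the whole worker process since it started, not just the current request):

<?php
// Reduce the getrusage() output to CPU seconds so it can be compared with
// the wall-clock time and max_execution_time.
$ru   = getrusage();
$user = $ru['ru_utime.tv_sec'] + $ru['ru_utime.tv_usec'] / 1e6;
$sys  = $ru['ru_stime.tv_sec'] + $ru['ru_stime.tv_usec'] / 1e6;
$wall = microtime(true) - $_SERVER['REQUEST_TIME_FLOAT'];

printf(
    "wall: %.2fs, cpu (user+sys): %.2fs, limit: %ss\n",
    $wall,
    $user + $sys,
    ini_get('max_execution_time')
);

In the dump above that works out to roughly 9.8s of CPU time for the worker process, against ~1s of wall time and a 30s limit.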

Now for the next strange part: if I set max_execution_time to some other, far larger value, it works!

// set_time_limit(30); // Didn't work, same error
// set_time_limit(60); // Didn't work, same error
// set_time_limit(120); // Didn't work, same error
set_time_limit(300); // Works. Request completed in around 3.5s!
Category::scoped(['site_key' => config('cms_scope')])
    ->rebuildTree($request->data);

PHP Version

PHP 8.2.13

Operating System

MacOS 14.1.1

iluuu1994 commented 10 months ago

Hi @Muffinman. Can you share a bit more about your setup? What web server + SAPI are you using? Are you using Octane or something similar by any chance?

Muffinman commented 10 months ago

Hi @iluuu1994 ,

I'm running the Homebrew-compiled php-fpm under nginx.

No Octane in use; I only have the php-redis and imagick extensions installed via PECL, everything else should be out of the box.

iluuu1994 commented 10 months ago

That's odd. macOS should use (AFAIK) setitimer(ITIMER_REAL, ...) which doesn't even include IO time. So if anything, your request should be allowed to run for longer than the configured max_execution_time. Is this issue restricted to macOS or does it also occur on other operating systems?

Muffinman commented 10 months ago

I've only seen it on MacOS.

One other thing I've noticed is that if I replace the call to Category::rebuildTree() with a simple sleep(5), then the timeout problem does not happen, even though the request takes longer.

There must be something deep in that code which is causing PHP either to throw the wrong exception or to somehow miscount its execution time.

iluuu1994 commented 10 months ago

Unfortunately I don't have a (working) macOS machine to test this. Some minimal reproducer would be great, although I understand this might be tricky. It's also hard to rule out an error in php-redis itself.

Muffinman commented 10 months ago

Well, I think I've managed to rule out php-redis: if I switch all the Laravel queue/cache drivers away from Redis, the max_execution_time exception still happens, but in some Guzzle code instead.

Will keep experimenting to see if I can narrow it down further.

JackWH commented 9 months ago

@Muffinman I've got exactly the same issue, relieved to see it's not just me! Have you been able to find any clues?

In my case this affects both PHP 8.2.13 and a fresh clean install of PHP 8.3.0, both running on macOS 14.1.2 using Valet (php-fpm + nginx). However, I was tipped off to the issue after a random page on my production server (running Ubuntu 18, PHP 8.2 and nginx) started throwing HTTP 500s and timing out. So this possibly affects non-Mac environments as well.

Unfortunately it's not easy to debug further on production at the moment, as I've since had to make a short-term workaround to get the page running again. If any more clues come up I can try some specific tests though.

Other things I've noticed:

iluuu1994 commented 9 months ago

Can any of you provide a reproducible codebase? If so, you can send it to me privately over email. If possible without dependencies (databases and whatnot).

Muffinman commented 9 months ago

No I didn't manage to track it down unfortunately. In my case I was able to mitigate it by pausing the Model events in Laravel.

I think it should be possible to create a minimal reproduction repo.

JackWH commented 9 months ago

@iluuu1994 I'm trying to isolate a reproducible example at the moment; I'll let you know if I manage to get one figured out, but it's not straightforward.

In my case the error is limited to one specific page, and only occurs for users with lots of associated data. Just seeding a bunch of fake records doesn't seem to cause the problem. However our app does have quite complex nested relationships in user data, so this could be hiding something else.

Some more observations from further debugging...

JackWH commented 9 months ago

@iluuu1994 I've got a reproducible example working 👀 It's very simple to set up:


  1. Run these three commands:

composer create-project laravel/laravel timeout
cd timeout
php artisan serve

  2. Open http://127.0.0.1:8000 in your browser; you should see the Laravel welcome page.

  3. Open ./app/Providers/AppServiceProvider.php in your editor.

  4. Add an infinite loop to the boot() method:

    /**
     * Bootstrap any application services.
     */
    public function boot(): void
    {
        while (true) {
            // ... do nothing
        }
    }

  5. Start a timer, and reload the webpage.

Each time I tried this, the page would load for somewhere between 20 and 40 seconds before throwing the error Maximum execution time of 60 seconds exceeded. I assume 60 seconds is the internal server's default, whilst it's 30 seconds for Valet. The php artisan serve command shows durations for each request as they're returned, so you can confirm the timings in the console.

Given that the mistimings still occur, despite the requests taking longer than the 1-2 seconds we were noticing with established codebases, perhaps the root cause compounds throughout the request lifecycle?

This was using a clean install of PHP 8.3 via Homebrew on macOS Sonoma 14.1.2, although I noticed a similar issue in production on Ubuntu 18.04.4.

Let me know if I can provide any more info. Thanks 🙏

iluuu1994 commented 9 months ago

@JackWH Thank you for the reproducer! Just to clarify: You are not using artisan serve or php -S in production, correct? But this is how you are working / running into this issue locally?

I noticed something peculiar. On PHP 8.3(-dev), I get the expected 30 seconds of timeout. 30s is the default value in PHP and has been since basically forever. On PHP 8.2(.13), I get 60 seconds of timeout. However, var_dump(ini_get('max_execution_time')); reports 30. Maybe the internal server messes up the default timeout in some way. I cannot see any hard-coded values in Laravel or the internal server.

I can reproduce this with a minimal script:

<?php
// var_dump(ini_get('max_execution_time'));
// Reports 30
while (true) {}
// Fails after 60

Changing max_execution_time manually fixes the issue for me for both versions. I will have a closer look at this.

Since PHP 8.3 actually behaves correctly in my case I'm not sure if this is related. I get consistent 30/60s timeouts, there's no variability. I also tried with --enable-zend-max-execution-timers but that doesn't reproduce it either. I also checked https://github.com/Homebrew/homebrew-core/blob/master/Formula/p/php.rb, but it doesn't seem to do anything special. It's possible that this is a macOS only issue, in which case it would be nice if somebody else could try reproducing this.

iluuu1994 commented 9 months ago

This was indeed a false lead. The php-cli-server has a bug where max_execution_time is never set when a router script is used, because php_execute_script isn't called. This means that the max_input_time timer is never canceled, which was configured to 60s by default on my machine. Additionally, there's a 60s timeout on the network socket, which cannot be configured.
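
To observe this locally, something along these lines should work (the file name router.php is arbitrary, and the exact timings depend on the max_input_time and socket defaults):

<?php
// router.php: print the relevant limits, then busy-loop until a timer
// aborts the request. Compare how long the request runs in both modes:
//   php -S 127.0.0.1:8000 router.php   (as a router script)
//   php -S 127.0.0.1:8000              (then request /router.php directly)
var_dump(ini_get('max_execution_time'), ini_get('max_input_time'));
while (true) {
    // spin until max_execution_time, max_input_time or the socket timeout fires
}

With the router script, the abort should come from max_input_time or the socket timeout (60s here) rather than from max_execution_time; when the file is requested directly, the regular 30s limit should apply.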

JackWH commented 9 months ago

@iluuu1994 In production we're running nginx with php-fpm, but this could possibly be a red herring. I recall the production timeouts happening "faster than usual", but admittedly I was in a hurry to get a workaround in place, and not paying close attention to how long it took.

We've not made any recent code changes; this really did seem to just spring up out of the blue. The fact that it's the same page causing timeouts on production and my Mac makes me assume they're linked, but I'm not ruling out something else.

Having said that, my test case just used php artisan serve for convenience. I normally run my local sites through nginx+fpm on the Mac too (via Laravel Valet); it seems both are affected.

iluuu1994 commented 9 months ago

@devnexen You have access to a macOS machine, right? Could you see if you can reproduce the issue at hand with the instructions from https://github.com/php/php-src/issues/12814#issuecomment-1842727083? I had no luck on Linux. Additionally, it would be great if @JackWH could try this reproducer on Linux, e.g. using Laravel Sail.

devnexen commented 9 months ago

I can't reproduce the issue; I get the Maximum execution time of 60 seconds exceeded message, but only after more than a minute. Tried a couple of times.

iluuu1994 commented 9 months ago

I fixed the issue I found yesterday (https://github.com/php/php-src/pull/12886), but I doubt it will help in this case. @JackWH Could you try reproducing this on Linux/Docker? That would be very helpful in narrowing down this issue.

Muffinman commented 9 months ago

I have also had this on another site, this time a Drupal 10 site - so still quite Symfony heavy but not specific to Laravel.

Fatal error: Maximum execution time of 30 seconds exceeded in /path/to/site/web/core/lib/Drupal/Core/Database/StatementWrapperIterator.php on line 110

EDIT: For this site I am able to switch to PHP 8.3.0 to test under that, and I can confirm that I also have the same issue there.

Will test some other Drupal sites; it should be easy to reproduce if it happens on a fresh Drupal installation.

Muffinman commented 9 months ago

OK so I went back to basics:

<?php

while (true) {
  //
}

Fatal error: Maximum execution time of 30 seconds exceeded in /Users/matt/Sites/bug-report/public/index.php on line 3

The timeout definitely seems to be shorter the more work the script does. I will try putting some PDO queries in the loop and see if that affects the response time.

Muffinman commented 9 months ago

OK so it's definitely socket related.

<?php

while (true) {
    DB::connection()->select('SHOW TABLES');
}

// Timeout in 4.7s

<?php

while (true) {
    DB::connection()->select('SHOW TABLES');
    usleep(100);
}

// Timeout in 7.2s

<?php

while (true) {
    DB::connection()->select('SHOW TABLES');
    usleep(500);
}

// Timeout in 13.6s

<?php

while (true) {
    openssl_random_pseudo_bytes(32);
}

// Timeout in 19.2s

while (true) {
    $fh = fsockopen('localhost', 80);
    fclose($fh);
}

// Timeout in 0.3s!!
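
A small harness along these lines can capture the elapsed time automatically (the shutdown function still runs after the "Maximum execution time" error; localhost:80 is assumed to be listening, as in the last test):

<?php
// Record the start time and report how long the request ran once the
// timeout aborts it.
$start = microtime(true);

register_shutdown_function(static function () use ($start) {
    $error = error_get_last();
    printf(
        "aborted after %.2fs (limit %ss): %s\n",
        microtime(true) - $start,
        ini_get('max_execution_time'),
        $error['message'] ?? 'no error'
    );
});

while (true) {
    $fh = fsockopen('localhost', 80);
    if ($fh) {
        fclose($fh);
    }
}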

iluuu1994 commented 9 months ago

@Muffinman Thank you! I cannot reproduce any of those on Linux. With what method (built-in server, artisan serve, etc) are you running these tests? Can you also reproduce this if you just use php -S ... test_script.php? Can you try this in Docker? I can't find anything interesting on setitimer on Darwin in the documentation.

Muffinman commented 9 months ago

These were all run under php-fpm, and I was able to reproduce all the way back to PHP 7.4; I didn't try anything earlier.

I will re-do these tests with PHP's built-in server and an FPM Docker container and report back.

Muffinman commented 9 months ago

I can confirm I get the same timeout behaviour with PHP's built-in webserver on the native OS.

Running under Docker (php:8.3-apache) seems to work as expected, with correct timeouts honoured.

gharlan commented 8 months ago

I have the same problem in Symfony projects on my Mac. Especially after deleting the Symfony cache, on the next page load I often get the error after 1 or 2 seconds ("Maximum execution time of 30 seconds exceeded"). PHP 8.2 and 8.3 via Homebrew. Apache with mod_php. I didn't see the problem on production (Linux).

googletorp commented 8 months ago

I've seen the same issue. It started when I got PHP 8.3 with Homebrew; then PHP 8.2 started having these issues too. What @Muffinman writes is a good summary of how the problem manifests. I work with Drupal, where it's possible to use Redis as a cache backend. With PHP 8.0 and 8.1, the same code base and data work as expected. With PHP 8.2 I get timeout errors even after increasing the limit to infinity. The error always happens in code that connects to the Redis cache backend (fetching or setting cache items) and happens really fast. I haven't measured it, but I would estimate ~1-2 seconds.

I was so glad to see this post, as it was starting to drive me crazy, so at least it's not a me problem. My setup is PHP via Homebrew + Apache via Homebrew.

silviokennecke commented 8 months ago

Hi everyone, I have the same issue with all installed PHP versions (in my case starting from 7.4).

I just tested it on an old Mac (with an Intel processor). From what I was able to find out, it seems to be an issue related to Apple Silicon. I am using an M1 Pro. My test with an Intel-based Mac was run on a MacBook Pro with an Intel Core i7.

Hopefully this helps in further localizing the issue.

bukka commented 8 months ago

I tested on my M1 Max (Apple Silicon) and was not able to recreate it. But I'm still on Ventura (13.5). I will update to 14 in the next couple of days and will try again.

apphancer commented 8 months ago

I am getting the message despite the script terminating 15 seconds before the time limit is exceeded. max_execution_time is set to 30 seconds; however, profiling the script when it runs without problems, I can see it completes in 14-15 seconds. This is also on Apple Silicon (M1), with PHP 8.1. It happens when using https://github.com/thephpleague/csv, particularly the fputcsv() method of their Stream class, which is where the native fputcsv() function is called. The problem is that the issue is intermittent, so it's hard to debug.

andrewnicols commented 7 months ago

I'm hitting this on macOS Sonoma 14.2.1 with PHP 8.1.24, as of about 10 days ago. The basic test case allows me to reproduce this easily:

while (true) {
    $fh = fsockopen('localhost', 80);
    fclose($fh);
}

I'm getting this via nginx with php-fpm, httpd with php_module, and via CLI if I remember to set max_execution_time.

This persists across php-fpm restarts and with different versions. It also persists with httpd using php_module rather than php-fpm. It stops after a reboot, at least for a while (13 days for me the last time I did so).

The first time I noticed this happen (13 days ago), it coincided with changing networks. Could be a red herring.

Haven't been able to reproduce on Linux (Ubuntu) yet. I have also tried using Docker on macOS (the same machine that is failing), but that also continues to work.

I haven't restarted my machine yet so if there is any specific debugging you need me to do, I can probably hold off on restarting for the next few hours (but I will need to soon).

andrewnicols commented 7 months ago

Excluded extensions and other configuration:

php -n -dmax_execution_time=30 12814.php

andrewnicols commented 7 months ago

A bit of debugging with lldb:

➜  12814 lldb `which php` -- -dmax_execution_time=30 12814.php
(lldb) target create "/opt/homebrew/bin/php"
Current executable set to '/opt/homebrew/bin/php' (arm64).
(lldb) settings set -- target.run-args  "-dmax_execution_time=30" "12814.php"
(lldb) breakpoint set --name zend_timeout_handler
Breakpoint 1: where = php`zend_timeout_handler, address = 0x0000000100312438
(lldb) run
Process 80472 launched: '/opt/homebrew/bin/php' (arm64)
Process 80472 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
    frame #0: 0x0000000100312438 php`zend_timeout_handler
php`zend_timeout_handler:
->  0x100312438 <+0>:  stp    x28, x27, [sp, #-0x30]!
    0x10031243c <+4>:  stp    x20, x19, [sp, #0x10]
    0x100312440 <+8>:  stp    x29, x30, [sp, #0x20]
    0x100312444 <+12>: add    x29, sp, #0x20
Target 0: (php) stopped.
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
  * frame #0: 0x0000000100312438 php`zend_timeout_handler
    frame #1: 0x00000001003a8f60 php`zend_signal_handler + 196
    frame #2: 0x00000001003a8e54 php`zend_signal_handler_defer + 252
    frame #3: 0x00000001896c5a24 libsystem_platform.dylib`_sigtramp + 56
    frame #4: 0x0000000195f17630 libsystem_dnssd.dylib`deliver_request + 692
    frame #5: 0x0000000195f19f8c libsystem_dnssd.dylib`DNSServiceQueryRecordInternal + 716
    frame #6: 0x00000001896d3f5c libsystem_info.dylib`_mdns_query_start + 624
    frame #7: 0x00000001896d3338 libsystem_info.dylib`_mdns_search_ex + 732
    frame #8: 0x00000001896d5c74 libsystem_info.dylib`mdns_addrinfo + 360
    frame #9: 0x00000001896d5abc libsystem_info.dylib`search_addrinfo + 176
    frame #10: 0x00000001896ceb78 libsystem_info.dylib`si_addrinfo + 1312
    frame #11: 0x00000001896ce5b0 libsystem_info.dylib`getaddrinfo + 168
    frame #12: 0x00000001002d0a24 php`php_network_getaddresses + 140
    frame #13: 0x00000001002d1680 php`php_network_connect_socket_to_host + 76
    frame #14: 0x00000001002e0e78 php`php_tcp_sockop_set_option + 1292
    frame #15: 0x0000000100046350 php`php_openssl_sockop_set_option + 152
    frame #16: 0x00000001002d5e74 php`_php_stream_set_option + 56
    frame #17: 0x00000001002dfc3c php`php_stream_xport_connect + 104
    frame #18: 0x00000001002df92c php`_php_stream_xport_create + 772
    frame #19: 0x0000000100249cf8 php`php_fsockopen_stream + 404
    frame #20: 0x0000000100f0892c xdebug.so`xdebug_execute_internal + 628
    frame #21: 0x000000010035eef0 php`ZEND_DO_FCALL_SPEC_RETVAL_USED_HANDLER + 316
    frame #22: 0x000000010033c4d8 php`execute_ex + 48
    frame #23: 0x0000000100f085b4 xdebug.so`xdebug_execute_ex + 744
    frame #24: 0x000000010033c6f8 php`zend_execute + 332
    frame #25: 0x000000010031f2fc php`zend_execute_scripts + 156
    frame #26: 0x00000001002c45d4 php`php_execute_script + 460
    frame #27: 0x0000000100402078 php`do_cli + 5824
    frame #28: 0x0000000100400888 php`main + 1404
    frame #29: 0x00000001893150e0 dyld`start + 2360

andrewnicols commented 7 months ago

Also worth noting that if I register a shutdown handler, I can see that the $fh was successfully opened.
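
Roughly along these lines (a sketch of that check, using the same loop):

<?php
$fh = null;

register_shutdown_function(static function () use (&$fh) {
    var_dump($fh);                                  // a (closed) resource, not false
    var_dump(error_get_last()['message'] ?? null);  // the max_execution_time error
});

while (true) {
    $fh = fsockopen('localhost', 80);
    if ($fh) {
        fclose($fh);
    }
}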

pspanja commented 7 months ago

@JackWH

If I add ini_set('max_execution_time', -1) the page loads after 2 or 3 seconds.

Thanks for this, it fixed the problem for me.

FWIW, I started seeing this after upgrading to Sonoma, with PHP-FPM and nginx installed via MacPorts.

windaishi commented 7 months ago

It looks like every developer in our company is affected in some way by this bug. Therefore I took the time to try to find out what the issue might be.

Thanks to the work done by @Muffinman I could use a good code sample for reproduction:

while (true) {
    $fh = fsockopen('localhost', 80);
    fclose($fh);
}

My system is on macOS 14.1.1 and my MacBook has an Apple Silicon M1 chip. I checked out the PHP source at 8.2.9 and compiled a debug build.

What I found is that it might have something to do with broken or changed getitimer/setitimer behavior in macOS 14.

When max_execution_time is set (either via config or via code) the following code is executed on a system with the getitimer/setitimer API available:

https://github.com/php/php-src/blob/1978a7b393ebbf5018e07b42ba65325282eee336/Zend/zend_execute_API.c#L1515-L1553

The important part is the following:

https://github.com/php/php-src/blob/1978a7b393ebbf5018e07b42ba65325282eee336/Zend/zend_execute_API.c#L1528-L1530

This tells the OS to send a SIGPROF signal after the max execution time has been reached. PHP registers a listener for this signal and triggers the known error if the OS actually sends it to the process.

For some reason the OS triggers this signal way too early.

Apple's documentation describes what setting an ITIMER_PROF timer does:

The ITIMER_PROF timer decrements both in process virtual time and when the system is running on behalf of the process. It is designed to be used by interpreters in statistically profiling the execution of interpreted programs. Each time the ITIMER_PROF timer expires, the SIGPROF signal is delivered. Because this signal may interrupt in-progress system calls, programs using this timer must be prepared to restart interrupted system calls.

I have no idea what virtual process time is but for me it sounds edgy. So for testing reasons I changed the call in the PHP code to use an ITIMER_REAL timer and listen for a SIGALRM signal. This actually fixed the issue for me.

# else
            setitimer(ITIMER_REAL, &t_r, NULL);
        }
        signo = SIGALRM;
# endif

Unfortunately I don't know anything about possible side effects. But directly above the mentioned lines, this is already done under certain other circumstances. So this might actually be a fix.

Hopefully this information helps in getting a clue about the issue and in fixing it.

windaishi commented 7 months ago

That's odd. macOS should use (AFAIK) setitimer(ITIMER_REAL, ...) which doesn't even include IO time.

Hey @iluuu1994, seems like it doesn't, at least on my system. See my post above; I could reproduce and fix it on my system.

iluuu1994 commented 7 months ago

@windaishi Sorry, that was a typo on my part. I meant to say that it should use ITIMER_PROF, which does not include IO time. ITIMER_REAL does include IO time. If ITIMER_PROF fires prematurely, that sounds like a bug in macOS to me.

I have no idea what virtual process time is but for me it sounds edgy.

This just means the CPU is executing the current process in user mode. That's the "normal" mode of execution.

Because this signal may interrupt in-progress system calls, programs using this timer must be prepared to restart interrupted system calls.

Just for clarification, this part sounds suspicious, but it's talking about the opposite: timers with ITIMER_PROF may fire while other system calls are blocking, aborting them and making them return EINTR. This must be handled for those other system calls.

Unfortunately I don't know anything about possible side effects.

This behavior would be just fine, it's just not the same as it is right now. Currently, when a function blocks on IO (e.g. a network request), this time does not count towards the max execution time. The timer only starts decrementing again once the call stops blocking the process.
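
As a small CLI illustration of the current semantics (on non-Windows builds using ITIMER_PROF; on Windows, or with --enable-zend-max-execution-timers, the limit is wall-clock based instead):

<?php
// With a CPU-time based timer, blocking does not count towards the limit,
// so this finishes normally despite the 2 second limit:
set_time_limit(2);
sleep(5);
echo "finished despite the 2 second limit\n";

// A CPU-bound loop, by contrast, is killed after roughly 2 CPU-seconds:
// set_time_limit(2);
// while (true) {}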

I don't know why ITIMER_PROF was picked in the first place. There was a PR recently that suggested adding a new timeout that is based on ITIMER_REAL (https://github.com/php/php-src/pull/6504) but it never went anywhere.

iluuu1994 commented 7 months ago

Side note: As mentioned in https://github.com/php/php-src/pull/6504#issuecomment-1752954829, if you compile PHP with the --enable-zend-max-execution-timers flag you can switch to a newer timeout mechanism that works in multi-threaded environments, and uses real time. However, this requires recompiling PHP.

windaishi commented 7 months ago

@iluuu1994 What's your recommendation for our next steps? It seems to be a MacOS bug, but I don't expect it to be fixed by Apple.

Having each developer in our company compile their own PHP isn't practical. Moreover, it looks like we're not the only ones affected.

Should I submit a PR that defaults the timer to ITIMER_REAL? However, this would entail a change in behavior. I believe it's unnecessary to overhaul the whole timer part as suggested in #6504.

Maybe we can set the behavior to ITIMER_REAL for MacOS 14 only?

iluuu1994 commented 7 months ago

@windaishi I'm not sure how to best proceed. Enabling --enable-zend-max-execution-timers for Apple Silicon Macs by default might be an option. It would be nice if somebody could try and check whether this actually solves the issue. @dunglas Any thoughts? I think you have a Mac? Have you ever encountered this issue?

devnexen commented 7 months ago

Note: I can reproduce it on an old Mac M1, however not on the M3. On the M1, with Homebrew's package and with master, I get the issue. Using iluuu1994's suggestion, the issue sometimes persists only on the first launch but then it works, or it works on the first launch.

iluuu1994 commented 7 months ago

Using iluuu1994's suggestion, the issue sometimes persists only on the first launch but then it works, or it works on the first launch.

Are you referring to --enable-zend-max-execution-timers here? What about switching to ITIMER_REAL?

devnexen commented 7 months ago

Yes. Switching to ITIMER_REAL fixes this particular issue too.

dunglas commented 7 months ago

--enable-zend-max-execution-timers only supports Linux for now. I plan to add support for FreeBSD soon because it now supports timer_create(), however, macOS doesn't support this function and it looks like Apple isn't planning to implement it (while it's part of POSIX).

An option could be to use their own Grand Central Dispatch library to emulate timer_create(): https://gist.github.com/lundman/731d0d7d09eca072cd1224adb00d9b9e

iluuu1994 commented 7 months ago

@dunglas Oh, I was unaware of this. Thank you for the clarification. Polyfilling timer_create() could be worth a shot. Otherwise, it might be enough to switch to ITIMER_REAL. I don't think ZTS is very common on macOS. It's not used by Homebrew by default.

devnexen commented 7 months ago

--enable-zend-max-execution-timers only supports Linux for now. I plan to add support for FreeBSD soon because it now supports timer_create(), however, macOS doesn't support this function and it looks like Apple isn't planning to implement it (while it's part of POSIX).

Oh right ... so it was only circumstantial/a bit of luck with local tests.

dunglas commented 7 months ago

@iluuu1994 this would be nice to fix because ZTS is popular on Mac for FrankenPHP (which is now in Laravel Octane), for using the parallel extension with AMPHP, etc. Unfortunately, it may not be very easy.

windaishi commented 7 months ago

Let me summarize what we have discussed so far.

Affected systems:

Possible solution ideas:

  1. Switching PHP to generally use ITIMER_REAL. This change might break backward compatibility too much: systems could rely on, for example, sleep() not counting towards max_execution_time. Apart from that, this solution would not be a problem.
  2. Enabling ZTS by default on MacOS: This is not possible since ZTS currently does not work on MacOS.
  3. Using ITIMER_REAL only for Apple Silicon. Significantly fewer systems would be affected, and the timer is already broken on systems with Apple Silicon and MacOS >= 14 anyway.

Option 3 might be acceptable. What do you think?

dunglas commented 7 months ago

For the record, I'm working on a patch to use GCD for timers (regardless of whether it's a ZTS build or not). It's still too early to say whether it will work (I'm doing this in my free time, so it could take some days).

iluuu1994 commented 7 months ago

Enabling ZTS by default on MacOS: This is not possible since ZTS currently does not work on MacOS.

ZTS does work on macOS, but --enable-zend-max-execution-timers doesn't, which is otherwise enabled by default when enabling ZTS.

I think option 3 might be acceptable in release branches. I highly doubt many people are using Macs as production servers anyway, which is where the timeout actually matters.

neilpomerleau commented 7 months ago

For what it's worth, we are using Apple Silicon Macs as production servers and started experiencing this bug on both PHP 8.2 and 8.3 after upgrading to macOS 14. We will need to downgrade to macOS 13 until this issue is resolved.