ItsReddi opened 2 years ago
The code triggering the consistency check is here:
The code incrementing hits is here:
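To make the interaction of those two pieces of code easier to follow, here is a minimal standalone model of the behaviour being described. It is not php-src code; the names (hits, consistency_checks, checksum) simply mirror the identifiers discussed in this issue, and the failing checksum is hard-coded to stand in for php/php-src#8065:

```c
/* Toy model of the flow discussed in this issue -- NOT php-src code. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned int hits;      /* incremented only when the cached copy is actually used */
    unsigned int checksum;  /* checksum stored when the script was cached */
} dynamic_members_t;

static unsigned int consistency_checks = 1000;  /* opcache.consistency_checks */
static bool checksum_matches = false;           /* the check fails every time, as in php/php-src#8065 */

/* Returns true when the cached script is served, false when it is discarded. */
static bool serve_request(dynamic_members_t *dm)
{
    if (consistency_checks && dm->hits % consistency_checks == 0) {
        printf("consistency check runs (hits=%u)\n", dm->hits);
        if (!checksum_matches) {
            /* "Checksum failed": cached script discarded, hits never incremented */
            return false;
        }
    }
    dm->hits++;  /* only reached when the cached script is actually used */
    return true;
}

int main(void)
{
    dynamic_members_t dm = {0, 0};
    for (int request = 1; request <= 5; request++) {
        printf("request %d: %s\n", request,
               serve_request(&dm) ? "cache hit" : "recompiled");
    }
    return 0;
}
```

Run as-is, every single request executes the consistency check, because hits never leaves zero.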
The code that increments hits is only triggered if the compilation of the script succeeds. If the consistency check fails, persistent_script is set to NULL and thus hits is never incremented. I'm not sure whether this is intentional. A simple fix would be to also track misses in zend_persistent_script and then add the two together:
(persistent_script->dynamic_members.hits + persistent_script->dynamic_members.misses) % ZCG(accel_directives).consistency_checks == 0
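As a rough sketch of that proposal, building on the toy model above (the misses field is hypothetical; it does not exist in zend_persistent_script today), the relevant parts would change to:

```c
/* Hypothetical modification of the toy model above: also count failed
 * checks, so the combined counter keeps advancing. */
typedef struct {
    unsigned int hits;
    unsigned int misses;    /* hypothetical: lookups where the check failed */
    unsigned int checksum;
} dynamic_members_t;

static bool serve_request(dynamic_members_t *dm)
{
    if (consistency_checks
        && (dm->hits + dm->misses) % consistency_checks == 0) {
        if (!checksum_matches) {
            dm->misses++;   /* counter still advances on a failed check */
            return false;   /* script discarded and recompiled */
        }
    }
    dm->hits++;
    return true;
}
```

With this change the combined counter advances even when the check fails, so the check really runs only every Nth lookup of the script.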
Edit: the naming would be somewhat misleading, as the script is found but intentionally skipped due to the incorrect checksum. Maybe there's a better naming scheme.
I think the terminology is alright; after all, this is about cache hits, and if the script isn't fetched from the cache (for whatever reasons), there was no hit.
Actually, this doesn't look like a bug to me, but rather the underlying php/php-src#8065 is the problem.
I am unable to follow your explanation of why this should not be a bug.
The setting's description says "Check the cache checksum each N requests.", and the second request already performs a checksum check ("Message Checksum failed").
Doesn't that mean the checksum is checked on every request? In that case I do not understand the setting.
The first time a script is persisted, its hit count is zero. The first time the script is loaded from the cache, its hit count is still zero, so the consistency_checks will be executed; since these fail (for whatever reason), the hit counter stays zero (since the cached file isn't actually used, but rather compiled and cached again). So the next time the script is loaded from the cache, the consistency_checks will be executed again, and so on.
The actual problem is that the consistency_checks fail; that is not supposed to happen if the script hasn't been modified. If this check didn't fail, everything would work as expected.
> Since the setting implies that "Check the cache checksum each N requests."
Ah, that part of the documentation is not correct, or at least misleading. As I explained above, it's not about a certain number of requests, but rather a certain number of cache hits.
I believe that the fact that the first hit is checksum-checked (if consistency checks are enabled) is actually intentional. After all, it's the first time we're utilizing the cached version of the file, and if it's DOA, we should know about it - as opposed to waiting N requests to find out.
I agree with @cmb69 that the behavior is correct, so the correct solution consists of doing both:
Seems like the consistency checks code is broken, and parts of the persistent_script that aren't immutable are getting factored into the checksum, which they shouldn't be. Depending on what exactly the root cause is here, it might imply a bigger issue too: it could be that not only the checksum code is broken, but that even with consistency checks disabled there might be some sort of race condition that could result in a segfault (not very likely, but impossible to rule out until we figure out the root cause).
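To illustrate why that would matter (purely a hypothetical illustration of the speculation above, not the actual opcache checksum code): a checksum taken over a region that also contains mutable bookkeeping goes stale the moment that bookkeeping changes, even though the script content is untouched.

```c
/* Hypothetical illustration only -- not opcache's checksum code. */
#include <stdio.h>
#include <string.h>

struct cached_script {
    char         opcodes[32];   /* immutable once cached */
    unsigned int hits;          /* mutable bookkeeping */
};

/* Toy checksum (sum of bytes); stands in for the real checksum routine. */
static unsigned int checksum(const void *buf, size_t len)
{
    const unsigned char *p = buf;
    unsigned int sum = 0;
    while (len--) sum += *p++;
    return sum;
}

int main(void)
{
    struct cached_script s;
    memset(&s, 0, sizeof s);
    strcpy(s.opcodes, "compiled script");

    /* Wrong: checksum covers the whole struct, including mutable fields. */
    unsigned int stored = checksum(&s, sizeof s);

    s.hits++;  /* normal bookkeeping; the script content is untouched */

    printf("stored=%u now=%u -> %s\n", stored, checksum(&s, sizeof s),
           stored == checksum(&s, sizeof s) ? "ok" : "Checksum failed");
    return 0;
}
```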
Given the feedback, let's transfer this to the docs repository. https://github.com/php/php-src/issues/8065 was analyzed in the meantime.
Description
The following code:
First request:
Second request: a check was already applied
Expected: no consistency check applied after 1 request. (Also, the files should not be inconsistent, since they have not changed, but that is another issue: https://github.com/php/php-src/issues/8065)
With opcache.consistency_checks=1000, no check is applied on the 2nd request.
PHP Version
PHP 8.1.9
Operating System
php:8.1.9-fpm-alpine