pk-fr / yakpro-po

YAK Pro - Php Obfuscator
http://www.php-obfuscator.com

Segmentation fault due to stack overflow in PHP's Garbage Collector (GC) #75

Closed sedimentation-fault closed 4 years ago

sedimentation-fault commented 4 years ago

NOTE: The following is not a bug (although one can use it as an invitation to think about ways to reduce resource consumption in yakpro-po), so feel free to close it whenever you like. You might, however, want to include some of its information in your documentation, or README, to prepare users who obfuscate large projects.

Problem

Trying to obfuscate ~5000 PHP files of ~1000 lines each, yakpro-po stopped after processing ~1600 files with a simple (and frustrating)

Segmentation fault

No other messages were printed, except two lines in syslog:

kernel: php[12345]: segfault at 7ffdd4c7dff8 ip 000055f81ab3fff4 sp 00007ffdd4c7e000 error 6 in php[55f81a6fe000+4d5000]
kernel: Code: e9 29 fe ff ff 66 81 e3 ff 3f 66 89 98 36 01 00 00 e9 18 fe ff ff e8 eb fb bb ff 66 66 2e 0f 1f 84 00 00 00 00 00 55 48 89 fd <53> 48 83 ec 38 64 48 8b 04 25 28 00 00 00 48 89 44 24 28 31 c0 0f

However, rerunning yakpro-po would continue from the file where it had previously stopped, as if nothing had happened, for another 1500-1600 files, then stop at the next segmentation fault. A third run would continue from there to the end. The files produced this way were unusable, though: the information that yakpro-po normally saves in its own directories (the translation tables and the like) was lost in each segfault, so each run effectively restarted the obfuscation from scratch, only with a different "start file" each time. This indicated that the problem was "insufficient memory" rather than anything else.

But the value of memory_limit in the php.ini file of the PHP CLI (which is different from the one for PHP on the web server!) was high enough:

memory_limit = 4096M

and there was no complaint about it from PHP, as there had been previously with much lower settings:

PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 421888 bytes) in /usr/local/bin/yakpro-po/include/functions.php on line 391

Debugging

I was thus confronted (for the first time) with the question:

How is one supposed to debug segmentation faults on the PHP CLI?

I found the article at Debugging Segfaults in PHP helpful: for the PHP CLI, start php from gdb with

gdb php

and, inside the gdb shell, run your script with your options, e.g.

run /usr/local/bin/yakpro-po original-dir -o destination-dir

When the segfault happens, gdb gives you the opportunity to type commands. Type:

bt

for 'backtrace'.

Although I had not compiled PHP with debug support, this was enough to point me in the right direction about the reason for the segfault.

Reason

I had already put various 'echo's in place, in yakpro-po.php and (mainly)

include/functions.php

From these, it was clear that the problem occurred inside the call to $traverser->traverse() in the latter:

$stmts = $traverser->traverse($stmts);

The backtrace command in gdb showed more than 100,000 frames like these:

#61947 0x0000555555afb110 in gc_mark_grey ()
#61948 0x0000555555afb110 in gc_mark_grey ()
#61949 0x0000555555afb110 in gc_mark_grey ()
...

and, at the end:

#104532 0x0000555555afb110 in gc_mark_grey ()
#104533 0x0000555555afb110 in gc_mark_grey ()
#104534 0x0000555555afb110 in gc_mark_grey ()
#104535 0x0000555555afb110 in gc_mark_grey ()
#104536 0x0000555555afbe0a in zend_gc_collect_cycles ()
#104537 0x00007ffff7f40f57 in xdebug_gc_collect_cycles () from /usr/lib64/php7.2/lib/extensions/no-debug-zts-20170718/xdebug.so
#104538 0x0000555555afb93f in gc_possible_root ()
#104539 0x0000555555b17a74 in ZEND_DO_FCALL_SPEC_RETVAL_UNUSED_HANDLER ()
#104540 0x0000555555b7c5fe in execute_ex ()
#104541 0x00007ffff7f1c1ed in xdebug_execute_ex () from /usr/lib64/php7.2/lib/extensions/no-debug-zts-20170718/xdebug.so
...

gc stands for 'garbage collector', so there was obviously a memory problem there. Looking at Segfault in garbage collector brought the breakthrough, namely the solution. :-)

Solution

This is a stack overflow in the garbage collector. The solution is to increase the stack size limit. To see your current limit, type

ulimit -s

I had 8192, obviously far too small for a task of this size. Change this to something more appropriate, say

ulimit -s 102400

and retry: the segmentation fault is gone! :-)
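Note that ulimit only affects the current shell and its children, so the raised limit must be set in the same session that launches yakpro-po (the paths below are the illustrative ones from above):

```shell
# Raise the soft stack limit for this shell and its children.
ulimit -s 102400    # new soft limit, in KiB
ulimit -s           # verify: prints 102400

# Then start the obfuscation from the same shell
php /usr/local/bin/yakpro-po original-dir -o destination-dir
```

To make the change persistent across sessions, the limit can also be set in /etc/security/limits.conf on most Linux systems.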

pk-fr commented 4 years ago

@sedimentation-fault :

thanks for your report... it could help people... can you make a little test?

just insert gc_collect_cycles(); at line 307 of include/functions.php, just before the `continue;` statement, and tell me if the problem is gone or not (with the default ulimit value)

sedimentation-fault commented 4 years ago

@pk-fr : I did as you suggested:

            touch($target_path,$source_stat['mtime']);
            chmod($target_path,$source_stat['mode']);
            chgrp($target_path,$source_stat['gid']);
            chown($target_path,$source_stat['uid']);
            gc_collect_cycles();
            continue;

but it did not work: with the max stack size at 8192 (the default value), I got a segmentation fault at the exact same place as before. It's as if the gc_collect_cycles() call had no effect at all...

pk-fr commented 4 years ago

Thanks for your testing... I have created a "Known Issues" section in Readme.md