dmitry-ivanov closed this issue 2 years ago.
Seeing the same issue locally on 6 cores and 12 GB of RAM in a virtual machine/Docker environment. Randomly a worker just fails for no obvious reason, but 90% of the time everything works OK.
Same problem here.
2 runs, 2 failures. Within Gitlab CI (latest version)
php artisan config:clear ; COMPOSER_PROCESS_TIMEOUT=600 APP_ENV=testing ./vendor/bin/paratest -p 4 -c phpunit.xml --colors --runner=WrapperRunner
We are also seeing random worker crashes with the error Exit Code: 135(Bus error: "access to undefined portion of memory object") (latest version, v6.1.1).
Tests are running in Docker with 32 Cores. Example error:
............................................................. 3660 / 4464 ( 81%)
..S.......................................................... 3721 / 4464 ( 83%)
............................................................. 3782 / 4464 ( 84%)
..........
In WorkerCrashedException.php line 36:
The command "'/var/www/current/vendor/phpunit/phpunit/phpunit' '--configuration' '/var/www/current/tests/phpunit_para.xml' '--no-logging' '--no-coverage' '--printer' 'ParaTest\Runners\PHPUnit\Worker\NullPhpunitPrinter' '--log-junit' '/tmp/PT_oDkBhm' '/var/www/current/Modules/AttributionModeling/tests/Integration/Controller/AttributionDashboardControllerTest.php'" failed.
Exit Code: 135(Bus error: "access to undefined portion of memory object")
Working directory: /var/www/current
Output:
================
Error Output:
================
paratest [--bootstrap BOOTSTRAP] [--colors] [-c|--configuration CONFIGURATION] [--coverage-clover COVERAGE-CLOVER] [--coverage-cobertura COVERAGE-COBERTURA] [--coverage-crap4j COVERAGE-CRAP4J] [--coverage-html COVERAGE-HTML] [--coverage-php COVERAGE-PHP] [--coverage-test-limit COVERAGE-TEST-LIMIT] [--coverage-text] [--coverage-xml COVERAGE-XML] [--exclude-group EXCLUDE-GROUP] [--filter FILTER] [-f|--functional] [-g|--group GROUP] [-h|--help] [--log-junit LOG-JUNIT] [--log-teamcity LOG-TEAMCITY] [-m|--max-batch-size MAX-BATCH-SIZE] [--no-test-tokens] [--order-by [ORDER-BY]] [--parallel-suite] [--passthru PASSTHRU] [--passthru-php PASSTHRU-PHP] [--path PATH] [-p|--processes PROCESSES] [--random-order-seed [RANDOM-ORDER-SEED]] [--runner RUNNER] [--stop-on-failure] [--testsuite TESTSUITE] [--tmp-dir TMP-DIR] [-v|vv|--verbose] [--whitelist WHITELIST] [--] [<path>]
Re-running the tests usually "solves" the issue, though the crashes keep happening randomly.
I've not been able to find a consistent way to reproduce them, BUT it is almost always the test /var/www/current/Modules/AttributionModeling/tests/Integration/Controller/AttributionDashboardControllerTest.php that is run last.
Running the test individually does not crash. I even recorded the tests that each worker runs and executed the exact tests that the crashing worker executed, but could not reproduce the error.
That being said, it would be great if there were a way to "force" a certain order of tests to be able to completely reproduce the crashing run. As of now, the order appears to be more or less "random" due to this piece of code in \ParaTest\Runners\PHPUnit\SuiteLoader::initSuites:
// The $class->getParentsCount() + array_merge(...$loadedSuites) stuff
// are needed to run test with child tests early, because PHPUnit autoloading
// of such classes in WrapperRunner environments fails (Runner is fine)
$loadedSuites = [];
foreach ($this->files as $path) {
    try {
        $class = (new Parser($path))->getClass();
        $suite = $this->createSuite($path, $class);
        if (count($suite->getFunctions()) > 0) {
            $loadedSuites[$class->getParentsCount()][$path] = $suite;
        }
    } catch (NoClassInFileException $e) {
        continue;
    }
}

foreach ($loadedSuites as $key => $loadedSuite) {
    ksort($loadedSuites[$key]);
}

ksort($loadedSuites);

foreach ($loadedSuites as $loadedSuite) {
    $this->loadedSuites = array_merge($this->loadedSuites, $loadedSuite);
}
Cheers Pascal
That being said, it would be great if there were a way to "force" a certain order of tests to be able to completely reproduce the crashing run. As of now, the order appears to be more or less "random" due to this piece of code in \ParaTest\Runners\PHPUnit\SuiteLoader::initSuites:
The SuiteLoader has nothing to do with the result randomness. The tests are executed in a precise order, but assigned "randomly" to each available process, and by random here I mean "the first process free gets the next test".
I agree that an option to re-run the exact previous run would be useful: any PR is welcome.
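To illustrate the scheduling described above, here is a toy model (emphatically not ParaTest's actual code): the queue order is fixed, but which worker picks up which test depends on when workers become free, so the per-worker test sequence varies between runs.

```php
<?php
// Toy model of first-free-worker dispatch: tests are dequeued in a fixed,
// deterministic order, but the worker that runs each test is whichever one
// happens to be idle at that moment. In real runs, worker availability
// depends on runtime timing, which is why reruns differ.
$queue       = ['TestA', 'TestB', 'TestC', 'TestD', 'TestE'];
$workers     = [1 => null, 2 => null]; // worker id => currently running test
$assignments = [];

while ($queue) {
    foreach ($workers as $id => $current) {
        if ($current === null && $queue) {
            $test = array_shift($queue); // dequeue order is deterministic...
            $assignments[$test] = $id;   // ...worker choice is not (in real runs)
            $workers[$id] = $test;
        }
    }
    // In reality workers finish at unpredictable times; the model simply
    // frees every worker after each round.
    foreach ($workers as $id => $current) {
        $workers[$id] = null;
    }
}
```

The takeaway: replaying a crash requires recording the per-worker sequence, since the global queue order alone does not determine it.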
@Slamdunk we got the same problem! Could you give a few pointers to where the changes would have to be done?
Concurrency crashes by environment failures are tough: I'd need a reproducible case in order to help you.
Is any of you able to create the smallest possible repo that shows the random error, with a chance higher than 1 out of 10 times?
@Slamdunk I was thinking about your comment about a log to re-execute all the tests in one worker. I want to be able to create a file that I can run locally to recreate the error. I think one or more tests are overwriting something in my container without telling me.
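One way to collect such a replay log is a sketch like the following (the helper name is hypothetical, not part of ParaTest): call it from your base TestCase::setUp() so every started test is appended to a file keyed by ParaTest's TEST_TOKEN environment variable, which ParaTest sets per worker.

```php
<?php
// Hypothetical helper (not ParaTest's API): append each started test to a
// per-worker log file. ParaTest exports TEST_TOKEN to every worker process,
// so each worker gets its own file, and a crashed worker's exact test
// sequence can be replayed afterwards, e.g. via phpunit --filter.
function logTestStart(string $testName): string
{
    $token = getenv('TEST_TOKEN') ?: 'single';
    $log = sys_get_temp_dir() . "/paratest_worker_{$token}.log";
    file_put_contents($log, $testName . PHP_EOL, FILE_APPEND);

    return $log;
}
```

After a crash, the log of the failed worker lists the exact tests, in order, that it executed before dying.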
same issue
[Container] 2021/04/01 02:05:51 Running command docker-compose exec -T -e DB_CONNECTION=testing php php artisan test --parallel --processes=4
Warning: TTY mode requires /dev/tty to be read/writable.
............................................................... 63 / 439 ( 14%)
............................................................... 126 / 439 ( 28%)
.............................
In WorkerCrashedException.php line 36:

The command "'/var/www/vendor/phpunit/phpunit/phpunit' '--configuration' '/var/www/phpunit.xml' '--no-logging' '--no-coverage' '--printer' 'ParaTest\Runners\PHPUnit\Worker\NullPhpunitPrinter' '--log-junit' '/tmp/PT_K4Z3XQ' '/var/www/tests/Feature/Admin/Partner/PartnerTest.php'" failed.

Exit Code: 139(Segmentation violation)

Working directory: /var/www

Output:
================

Error Output:
================

paratest [--bootstrap BOOTSTRAP] [--colors] [-c|--configuration CONFIGURATION] [--coverage-clover COVERAGE-CLOVER] [--coverage-cobertura COVERAGE-COBERTURA] [--coverage-crap4j COVERAGE-CRAP4J] [--coverage-html COVERAGE-HTML] [--coverage-php COVERAGE-PHP] [--coverage-test-limit COVERAGE-TEST-LIMIT] [--coverage-text] [--coverage-xml COVERAGE-XML] [--exclude-group EXCLUDE-GROUP] [--filter FILTER] [-f|--functional] [-g|--group GROUP] [-h|--help] [--log-junit LOG-JUNIT] [--log-teamcity LOG-TEAMCITY] [-m|--max-batch-size MAX-BATCH-SIZE] [--no-coverage] [--no-test-tokens] [--order-by [ORDER-BY]] [--parallel-suite] [--passthru PASSTHRU] [--passthru-php PASSTHRU-PHP] [--path PATH] [-p|--processes PROCESSES] [--random-order-seed [RANDOM-ORDER-SEED]] [--runner RUNNER] [--stop-on-failure] [--testsuite TESTSUITE] [--tmp-dir TMP-DIR] [-v|vv|--verbose] [--whitelist WHITELIST] [--] [<path>]

[Container] 2021/04/01 02:06:08 Command did not exit successfully docker-compose exec -T -e DB_CONNECTION=testing php php artisan test --parallel --processes=4 exit status 1
I am having the same issue on GitLab; it works perfectly fine locally. Any idea what this could be?
I ended up creating a runner with just one worker and bisecting through the testset until I found the culprit. It was a real slog but combining just one worker with the debug options is a good start.
Think about which environment differences (max memory, total memory, nr of workers, installed extensions etc) exists between the two instances - that might also give you some pointers.
For me it only happens on GitLab. I figured out that every time we create a bunch of new tests, it fails again and I have to increase this:
$ ulimit -S -s 2000000
I just don't understand why it keeps so many files open.
In Runner.php line 120:
[ParaTest\Runners\PHPUnit\WorkerCrashedException]
The command "'/usr/local/bin/php' '-d' 'memory_limit=2G' '/builds/xxx/txxx-api-v2/xxx-api/vendor/phpunit/phpunit/phpunit' '--configuration' '/builds/xxx/xx-api-v2/xxx-api/phpunit.xml' '--exclude-group' 'long-running-scripts,xx-mysql,xx-group,project-group' '--stop-on-failure' '--log-junit' '/tmp/PT_jEOIOG' '/builds/xx/xx.php'" failed.
Exit Code: 139(Segmentation violation)
Working directory: /builds/xxxx/
Output:
================
PHPUnit 9.4.0 by Sebastian Bergmann and contributors.
Warning: Your XML configuration validates against a deprecated schema.
Suggestion: Migrate your XML configuration using "--migrate-configuration"!
...
Error Output:
================
Exception trace:
at /builds/xxx/xx-api-v2/xxx/vendor/brianium/paratest/src/Runners/PHPUnit/Runner.php:120
ParaTest\Runners\PHPUnit\Runner->testIsStillRunning() at /builds/xx/xx-api-v2/xx-api/vendor/brianium/paratest/src/Runners/PHPUnit/Runner.php:45
ParaTest\Runners\PHPUnit\Runner->doRun() at /builds/xx/xx-api-v2/xx-api/vendor/brianium/paratest/src/Runners/PHPUnit/BaseRunner.php:77
ParaTest\Runners\PHPUnit\BaseRunner->run() at /builds/xx/xx-api-v2/xx-api/vendor/brianium/paratest/src/Console/Commands/ParaTestCommand.php:86
ParaTest\Console\Commands\ParaTestCommand->execute() at /builds/xx/xx-api-v2/xx-api/vendor/symfony/console/Command/Command.php:255
Symfony\Component\Console\Command\Command->run() at /builds/xx/xx-api-v2/x-api/vendor/symfony/console/Application.php:1009
Symfony\Component\Console\Application->doRunCommand() at /builds/whxxispli/xx-api-v2/xx-api/vendor/symfony/console/Application.php:273
Symfony\Component\Console\Application->doRun() at /builds/xx/xx-api-v2/xx-api/vendor/symfony/console/Application.php:149
Symfony\Component\Console\Application->run() at /builds/xx/xx-api-v2/xx-api/vendor/brianium/paratest/bin/paratest:37
paratest [--bootstrap BOOTSTRAP] [--colors] [-c|--configuration CONFIGURATION] [--coverage-clover COVERAGE-CLOVER] [--coverage-crap4j COVERAGE-CRAP4J] [--coverage-html COVERAGE-HTML] [--coverage-php COVERAGE-PHP] [--coverage-test-limit COVERAGE-TEST-LIMIT] [--coverage-text] [--coverage-xml COVERAGE-XML] [--exclude-group EXCLUDE-GROUP] [--filter FILTER] [-f|--functional] [-g|--group GROUP] [-h|--help] [--log-junit LOG-JUNIT] [-m|--max-batch-size MAX-BATCH-SIZE] [--no-test-tokens] [--parallel-suite] [--passthru PASSTHRU] [--passthru-php PASSTHRU-PHP] [--path PATH] [--phpunit PHPUNIT] [-p|--processes PROCESSES] [--runner RUNNER] [--stop-on-failure] [--testsuite TESTSUITE] [--tmp-dir TMP-DIR] [-v|--verbose VERBOSE] [--whitelist WHITELIST] [--] [<path>]
Cleaning up file based variables
00:00
ERROR: Job failed: exit code 1
Hi, have you tried lsof locally at first and then on gitlab (just run it from a test method)?
I seem to remember that I had some similar issues and had to fix some leaks in my tests as well.
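A quick way to run that kind of check (Linux-only sketch; on macOS use lsof -p <pid> instead) is to count the open file descriptors of the current process via /proc, either shelled out from a test or run directly in the CI job:

```shell
# Count the file descriptors currently open in this process (Linux /proc).
# Compare the count against `ulimit -n` to see how much headroom remains.
fd_count=$(ls /proc/self/fd | wc -l)
echo "open fds: $fd_count"
ulimit -n
```

Running this in a test's setUp and tearDown shows whether descriptors leak across tests.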
What is the approach? Run lsof in the setUp of the tests and check whether some files are always kept open throughout all tests?
@fernandocoronatomf what is your local ulimit? I would guess it is possible to recreate the issue locally as well. If you're not running linux locally, install a VM and test in there.
ulimit is unlimited on my Linux machine and on the GitLab pipeline.
Open files peak at a maximum of 400 after every test is executed.
$ ulimit -S -s 2000000
$ vendor/bin/paratest --passthru-php=" '-d' 'memory_limit=2G'" --exclude-group long-running-scripts,tenant-mysql,xxx-group,project-group -p16 --runner Runner tests/Feature
Running phpunit in 16 processes vendor/phpunit/phpunit/phpunit
Configuration read from /builds/ddd/zzzzzzv2/zzzzphpunit.xml
............................................................. 61 / 1256 ( 4%)
............................................................. 122 / 1256 ( 9%)
............................................................. 183 / 1256 ( 14%)
............................................................. 244 / 1256 ( 19%)
............................................................. 305 / 1256 ( 24%)
............................................................. 366 / 1256 ( 29%)
.S........................................................... 427 / 1256 ( 33%)
............................................................. 488 / 1256 ( 38%)
............................................................. 549 / 1256 ( 43%)
...................SS...SSSSSSS.............................. 610 / 1256 ( 48%)
............................................................. 671 / 1256 ( 53%)
............................................................. 732 / 1256 ( 58%)
............................................................. 793 / 1256 ( 63%)
............................................................. 854 / 1256 ( 67%)
............................................................. 915 / 1256 ( 72%)
............................................................. 976 / 1256 ( 77%)
............................................................. 1037 / 1256 ( 82%)
............................................................. 1098 / 1256 ( 87%)
............................................................. 1159 / 1256 ( 92%)
............................................................. 1220 / 1256 ( 97%)
.................................... 1256 / 1256 (100%)
Time: 07:46.021, Memory: 36.00 MB
OK, but incomplete, skipped, or risky tests!
Tests: 1256, Assertions: 5857, Skipped: 10.
$ cat storage/logs/pipeline-2021-06-03.log
[2021-06-03 04:39:46] testing.CRITICAL: 375
[2021-06-03 04:39:46] testing.CRITICAL: 343
[2021-06-03 04:39:46] testing.CRITICAL: 286
[2021-06-03 04:39:46] testing.CRITICAL: 321
[2021-06-03 04:39:46] testing.CRITICAL: 283
[2021-06-03 04:39:46] testing.CRITICAL: 309
It keeps failing randomly...
I'm wondering if the issue is Docker for everyone here. We've had this issue for years, fairly regularly, and increasingly so the longer Docker has been running. I suspect the issue lies somewhere in the volume mount middleware, osxfs/gRPC FUSE, or even with the :cached/:delegated layers. Further to that, I've had success restarting Docker, which would remount the volumes. But, this is very time consuming and doesn't always resolve the issue. The order in which tests are executed seems to often coincide with the error as well. It's as if multiple phpunit processes are trying to use the same file, but unable to do so.
I've been struggling with this issue a bit more after the latest Docker update and some other changes. Taking into account what I've already experienced and tweaking things further, I decided to remove the --max-batch-size CLI option, and tests are consistently not failing now. Using that option causes tests to fail every time. I had the batch size set at 8. I'm guessing this causes a test class to be split between two processes which attempt to use the test class at the same time. However, the test class that was continually failing just now only had 7 tests. I'm not certain, after reading the docs, what max-batch-size actually includes - it seems to include other methods, not just tests?
I'll be leaving this option disabled for now and will continue to assess. Replicating this into a testcase, @Slamdunk, is going to be very difficult, I'm sure. But, if you have any suggestions, should the issue arise again, I'd be happy to do some debugging locally.
As a suggestion: when the WorkerCrashedException is being thrown, could paratest try to access the file again, possibly with a timeout? I know that's not really ideal, but it could effectively resolve this issue, especially if the real issue is upstream.
Update: After testing with and without the max-batch-size option passed, maybe a half dozen times and getting consistent results, I'm now getting a failure again after resolving a single failing test, but again on the same file as previously. After force-saving that file again on the host machine, it's now working. The Docker volume mount in question is :cached.
Hi @oojacoboo, the best I can do to help you is share what has flawlessly worked for me for over 3 years now with Docker:
Heavy bootstrap actions should go into a TestListener which tracks its state only in memory, so it is automatically fail-safe whether the tests run in a single process or in a multi-process environment:
use PHPUnit\Framework\Test;
use PHPUnit\Framework\TestListener;
use PHPUnit\Framework\TestListenerDefaultImplementation;

final class DatabaseTestListener implements TestListener
{
    use TestListenerDefaultImplementation;

    public function startTest(Test $test): void
    {
        static $databaseHandler = null;
        if (null === $databaseHandler) {
            $databaseHandler = new DatabaseHandler(getenv('TEST_TOKEN'));
            $databaseHandler->createSchema();
        }

        $databaseHandler->truncateTables();
    }
}
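For context, a listener like this is registered in phpunit.xml (PHPUnit <= 9 syntax; the class name here is just the example above and is assumed to be autoloadable):

```xml
<phpunit>
    <listeners>
        <listener class="DatabaseTestListener"/>
    </listeners>
</phpunit>
```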
- Use tmpfs for databases and for folders where tests write cache, stubs, etc.
- docker-compose up --detach when you start your daily work
- docker-compose down --volumes when you stop it
- Use vendor/bin/paratest --runner WrapperRunner only: I've never used --runner Runner, --functional nor --max-batch-size since I maintain ParaTest
- I never knew about :cached/:delegated until now; I don't use them.
I know it's not much, sorry, but better than nothing, I hope.
Just ran into this again (Exit Code: 135(Bus error: "access to undefined portion of memory object")) and started looking into exit code 135 a little more. This issue came up: Codeception/robo-paracept#28 - it feels kind of related because it's also dealing with parallel processes in PHP.
Plus, as I mentioned before, for us the same test keeps coming up when the process crashes, and that particular test is testing our UI. It's a Laravel application, i.e. the views are compiled dynamically (from Blade files), and I have a hunch that this might be the culprit (as another process might also access the same view).
Interestingly, it mentions PHP bug 52752, which was closed at 2019-07-16 13:48 UTC (according to the comments) with commit 5161cebe28cca36fa7f7989b5a799290a3f1eb6a. This commit is present in PHP >= 7.4.
We are currently running PHP 7.3.
==> Can anybody confirm that you are hitting this error with PHP >= 7.4?
I have the foolish hope that it will resolve itself magically once we upgrade our PHP version ;)
Seems like I finally found a setup to reproduce the issue:
I created 5 classes with the following content:
class CrashTest extends \PHPUnit\Framework\TestCase
{
    /**
     * @dataProvider repeat_dataProvider
     */
    public function test_repeat(): void
    {
        $dir = "/tmp/foo";
        if (!file_exists($dir)) {
            mkdir($dir);
        }

        // all processes write to the _same_ file
        $file = $dir . "/bar.php";
        file_put_contents($file, "<?php return []; ?>");

        // all processes include that file - increasing the likelihood of "reading while being written"
        include $file;

        $this->assertTrue(true);
    }

    public function repeat_dataProvider(): \Generator
    {
        for ($i = 0; $i < 100; $i++) {
            yield "Test $i" => [];
        }
    }
}
Crash/
βββ CrashTest.php
βββ CrashATest.php
βββ CrashBTest.php
βββ CrashCTest.php
βββ CrashDTest.php
And then ran paratest on the Crash directory with 5 parallel processes:
www-data:/var/www/atmo# vendor/bin/paratest -p 5 --runner=WrapperRunner Crash/
Running phpunit in 5 processes with /var/www/atmo/vendor/phpunit/phpunit/phpunit
Configuration read from /var/www/atmo/phpunit.xml
In WorkerCrashedException.php line 36:
The command "'/var/www/atmo/vendor/phpunit/phpunit/phpunit' '--configuration' '/var/www/atmo/phpunit.xml' '--no-logging' '--no-coverage' '--printer' 'ParaTest\Runners\PHPUnit\Worker\NullPhpunitPrinter' '--log-junit' '/tmp/PT_djAoHa' '/var/www/atmo/Crash/CrashCTest.php'" failed.
Exit Code: 135(Bus error: "access to undefined portion of memory object")
Working directory: /var/www/atmo
Output:
================
Error Output:
================
paratest [--bootstrap BOOTSTRAP] [--colors] [-c|--configuration CONFIGURATION] [--coverage-clover COVERAGE-CLOVER] [--coverage-cobertura COVERAGE-COBERTURA] [--coverage-crap4j COVERAGE-CRAP4J] [--coverage-html COVERAGE-HTML] [--coverage-php COVERAGE-PHP] [--coverage-test-limit COVERAGE-TEST-LIMIT] [--coverage-text [COVERAGE-TEXT]] [--coverage-xml COVERAGE-XML] [--exclude-group EXCLUDE-GROUP] [--filter FILTER] [-f|--functional] [-g|--group GROUP] [-h|--help] [--log-junit LOG-JUNIT] [--log-teamcity LOG-TEAMCITY] [-m|--max-batch-size MAX-BATCH-SIZE] [--no-coverage] [--no-test-tokens] [--order-by [ORDER-BY]] [--parallel-suite] [--passthru PASSTHRU] [--passthru-php PASSTHRU-PHP] [--path PATH] [-p|--processes PROCESSES] [--random-order-seed [RANDOM-ORDER-SEED]] [--repeat [REPEAT]] [--runner RUNNER] [--stop-on-failure] [--testsuite TESTSUITE] [--tmp-dir TMP-DIR] [-v|vv|--verbose] [--whitelist WHITELIST] [--] [<path>]
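As an aside on the repro above: the racy pattern (one process including bar.php while another rewrites it in place) can be avoided by writing atomically. A hedged sketch, not something ParaTest or Laravel does out of the box:

```php
<?php
// Mitigation sketch (hypothetical helper, not a library API): write to a
// unique temp file first, then rename() it over the target. On POSIX
// filesystems rename() within the same directory is atomic, so a concurrent
// reader sees either the old file or the new one, never a half-written file.
function atomicPut(string $path, string $contents): void
{
    $tmp = $path . '.' . uniqid(getmypid() . '_', true) . '.tmp';
    file_put_contents($tmp, $contents);
    rename($tmp, $path); // atomic replace on the same filesystem
}
```

With this in place of the plain file_put_contents() in the repro, the "reading while being written" window disappears.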
In my specific case it's indeed the views that are problematic. I solved the issue by appending the paratest token to the compiled directory, i.e.
// config/view.php
<?php

$token = getenv("TEST_TOKEN") ?: null;
$dir = storage_path('framework/views');
if ($token) {
    $dir .= "_$token";
}

return [

    /*
    |--------------------------------------------------------------------------
    | View CloudStorage Paths
    |--------------------------------------------------------------------------
    |
    | Most templating systems load templates from disk. Here you may specify
    | an array of paths that should be checked for your views. Of course
    | the usual Laravel view path has already been registered for you.
    |
    */

    'paths' => [
        realpath(base_path('resources/views')),
    ],

    /*
    |--------------------------------------------------------------------------
    | Compiled View Path
    |--------------------------------------------------------------------------
    |
    | This option determines where all the compiled Blade templates will be
    | stored for your application. Typically, this is within the storage
    | directory. However, as usual, you are free to change this value.
    |
    */

    'compiled' => $dir,

];
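The same TEST_TOKEN trick generalizes to any writable path, not just Laravel's compiled views. A hypothetical helper (name and placement are my own):

```php
<?php
// Hypothetical helper: suffix a base path with ParaTest's per-worker
// TEST_TOKEN so each worker writes to its own directory. When TEST_TOKEN is
// not set (e.g. plain phpunit), the base path is returned unchanged.
function perWorkerPath(string $base): string
{
    $token = getenv('TEST_TOKEN');

    return ($token === false || $token === '') ? $base : $base . '_' . $token;
}
```

Any cache, log, or stub directory that tests write to can be routed through such a helper to keep parallel workers from clobbering each other's files.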
Can anybody confirm that you are hitting this error with a PHP >= 7.4?
PHP 8.0.9 and 8.0.17 + Ubuntu 20 (in Vagrant + Virtualbox) + Laravel = 139(Segmentation violation)
This issue has gone two months without activity. In another two weeks, I will close it. But! If you comment or otherwise update it, I will reset the clock, and if you label it Backlog or In Progress, I will leave it alone ... forever!
We are currently running PHP 7.3
==> Can anybody confirm that you are hitting this error with a PHP >= 7.4?
We're on 8.1 but not using Docker in dev, and this is happening to me as well. It just started happening recently on the last test in the group, which uses spatie/browsershot to generate a large PDF from HTML. This random crash is new. The PDF generation in a test takes much longer than in a live scenario (30 seconds vs. ~5 min). I end up rebooting and running the test suite again, and all is well.
I'm getting this "Segmentation violation" issue on Bitbucket Pipelines.
+ php artisan test --parallel --processes=2
ParaTest v7.2.8 upon PHPUnit 10.3.5 by Sebastian Bergmann and contributors.
Processes: 2
Runtime: PHP 8.2.5
Configuration: /opt/atlassian/pipelines/agent/build/phpunit.xml
In WorkerCrashedException.php line 41:
The test "PARATEST='1' TEST_TOKEN='2' UNIQUE_TEST_TOKEN='2_65d8c1d2568d4' t
ests/Unit/App/Models/ProcessingTest.php" failed.
Exit Code: 139(Segmentation violation)
Working directory: /opt/atlassian/pipelines/agent/build
Output:
================
Error Output:
================
paratest [--functional] [-m|--max-batch-size MAX-BATCH-SIZE] [--no-test-tokens] [--passthru-php PASSTHRU-PHP] [-p|--processes PROCESSES] [--runner RUNNER] [--tmp-dir TMP-DIR] [-v|--verbose] [--bootstrap BOOTSTRAP] [-c|--configuration CONFIGURATION] [--no-configuration] [--cache-directory CACHE-DIRECTORY] [--testsuite TESTSUITE] [--exclude-testsuite EXCLUDE-TESTSUITE] [--group GROUP] [--exclude-group EXCLUDE-GROUP] [--filter FILTER] [--process-isolation] [--strict-coverage] [--strict-global-state] [--disallow-test-output] [--dont-report-useless-tests] [--stop-on-defect] [--stop-on-error] [--stop-on-failure] [--stop-on-warning] [--stop-on-risky] [--stop-on-skipped] [--stop-on-incomplete] [--fail-on-incomplete] [--fail-on-risky] [--fail-on-skipped] [--fail-on-warning] [--order-by ORDER-BY] [--random-order-seed RANDOM-ORDER-SEED] [--colors [COLORS]] [--no-progress] [--display-incomplete] [--display-skipped] [--display-deprecations] [--display-errors] [--display-notices] [--display-warnings] [--teamcity] [--testdox] [--log-junit LOG-JUNIT] [--log-teamcity LOG-TEAMCITY] [--coverage-clover COVERAGE-CLOVER] [--coverage-cobertura COVERAGE-COBERTURA] [--coverage-crap4j COVERAGE-CRAP4J] [--coverage-html COVERAGE-HTML] [--coverage-php COVERAGE-PHP] [--coverage-text [COVERAGE-TEXT]] [--coverage-xml COVERAGE-XML] [--coverage-filter COVERAGE-FILTER] [--no-coverage] [--] [<path>]
Hey, guys!
I run my tests on the Codeship CI using ParaTest like this:
Everything works fine 49 times out of 50.
But, from time to time, occasionally I have this error (for different, random tests):
If I just restart the build (after the error) - it works fine, so these are just some kind of random failures.
So, my questions are:
1) Does this relate to the usage of --runner WrapperRunner? If I remove it from the command, might it help?
2) On Codeship, I have 32-64 available CPU cores, so there are a lot of parallel processes. Could it be related to that large number of processes?
3) Do you have any other suggestions on how I can avoid this error for sure?
Thank you!