qrush / m

A Test::Unit runner that can run tests by line number.
MIT License

Multiple test output from `m` execution of tests with fork #80

Closed schneems closed 4 years ago

schneems commented 4 years ago

Originally reported in https://github.com/seattlerb/minitest/issues/844; it looks like the problem is with the m gem and not minitest. Here's a reproduction:

cd /tmp
git clone https://github.com/zombocom/rate_throttle_client
cd rate_throttle_client/
git checkout b7aa2f1c1cb4a4cb2238bc4afeb6d9ccfc453641
bundle
m test/demo_test.rb:14

This produces the output:

$ m test/demo_test.rb:14
Run options: -n "/^(test_time_scale)$/" --seed 10235

# Running:

Run options: --seed 26700

# Running:

............

Finished in 15.511151s, 0.7736 runs/s, 1.1605 assertions/s.

12 runs, 18 assertions, 0 failures, 0 errors, 0 skips
Run options: --seed 52921

# Running:

............

Finished in 15.897505s, 0.7548 runs/s, 1.1323 assertions/s.

12 runs, 18 assertions, 0 failures, 0 errors, 0 skips
.

Finished in 36.906700s, 0.0271 runs/s, 0.0271 assertions/s.

1 runs, 1 assertions, 0 failures, 0 errors, 0 skips

I believe this has something to do with fork and minitest's "autorun", as this behavior does not reproduce when running a test that does not call fork:

$ m test/demo_test.rb:10
Run options: -n "/^(test_print_results)$/" --seed 14173

# Running:

.

Finished in 0.003400s, 294.1176 runs/s, 588.2353 assertions/s.

1 runs, 2 assertions, 0 failures, 0 errors, 0 skips

Simple repro

Here's a local reproduction that might show the problem better:

cd /tmp
bundle gem foofoo --test=minitest
cd foofoo
rm Gemfile
echo "source 'https://rubygems.org'" >> Gemfile
echo "gem 'm'" >> Gemfile
bundle

cat << EOF > test/foofoo_test.rb
require "test_helper"

class FoofooTest < Minitest::Test
  def test_uses_fork
    pid = fork do
      assert true
    end
    Process.wait(pid)
  end

  def test_does_not_use_fork
    assert true
  end
end
EOF

When I run the test that uses fork, it runs the whole suite and then runs the focused test:

$ m test/foofoo_test.rb:6
Run options: -n "/^(test_uses_fork)$/" --seed 20085

# Running:

Run options: --seed 21564

# Running:

..

Finished in 0.012779s, 156.5068 runs/s, 156.5068 assertions/s.

2 runs, 2 assertions, 0 failures, 0 errors, 0 skips
.

Finished in 0.029824s, 33.5300 runs/s, 33.5300 assertions/s.

1 runs, 1 assertions, 0 failures, 0 errors, 0 skips

When I run the test without fork, it only runs the test I'm focusing on:

$ m test/foofoo_test.rb:12
Run options: -n "/^(test_does_not_use_fork)$/" --seed 65308

# Running:

.

Finished in 0.000792s, 1262.6263 runs/s, 1262.6263 assertions/s.

1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
schneems commented 4 years ago

A related issue, https://github.com/qrush/m/issues/25, provides a workaround but does not fix the underlying problem:

if defined?(M)
  # https://github.com/qrush/m/issues/80
  require "minitest"
else
  require "minitest/autorun"
end
zamith commented 4 years ago

This only happens when you run the tests with m, not directly with minitest, correct?

schneems commented 4 years ago

Correct. I could not reproduce the behavior when running the test directly via `ruby`; it only occurs when running via `m`.

zamith commented 4 years ago

Sorry for the delay. I investigated this a bit further and unfortunately there isn't a lot m can do here (at least not without making things worse). Basically the "issue" is that minitest/autorun sets an at_exit trigger to run the tests. With one process we were able to work around it by running m and then calling exit!, which does not trigger at_exit.

With multiple processes, all of the ones that are not running m will then trigger the at_exit, resulting in the behavior described here. Unfortunately, I'm not aware of a way to either remove the at_exit or prevent it from being set in the first place that doesn't invite a lot of potential bugs in the future.
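The fork inheritance described above can be seen without minitest at all. Here is a minimal sketch (assumes a POSIX platform with fork; the pipe and messages are purely illustrative):

```ruby
# at_exit handlers registered before fork are inherited by each child:
# a child that exits normally runs them again, while exit! skips them.
reader, writer = IO.pipe
writer.sync = true

at_exit { writer.puts "handler fired in #{Process.pid}" }

Process.wait(fork { })       # normal exit: the inherited handler fires
Process.wait(fork { exit! }) # exit! bypasses the at_exit stack entirely

# Both children have exited; only the first should have written a line.
lines = reader.read_nonblock(4096).lines
puts lines.length # => 1
```

This mirrors m's situation: m can protect its own process with exit!, but every forked child still carries the inherited handler and fires it on a normal exit.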

I think that if you are doing work with forks and want to use m, you'll have to require minitest/spec, which does not set those triggers, and depend solely on m to run your tests.

schneems commented 4 years ago

Thanks for taking a look. I'm not very familiar with the internals of m. Do you think some combination of minitest and the m gem could provide a workaround/fix? Perhaps minitest could record itself as having already set the at_exit handler in the parent process and not re-set it later?

I was also only able to reproduce this while using the m gem in combination with minitest/autorun. Do you know what the m gem is doing to trigger the behavior, or rather how I could reproduce the behavior without relying on m (if I wanted to submit a patch to minitest)?
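The "record the parent process" idea could look roughly like this. This is a hypothetical sketch, not actual minitest code; it runs the guard in a subprocess via `ruby -e` so the exit-time output can be captured and inspected:

```ruby
# Hypothetical guard: remember which pid installed the handler, and make
# the inherited copies in forked children no-ops.
script = <<~'RUBY'
  installer_pid = Process.pid
  at_exit do
    next unless Process.pid == installer_pid # skip inherited copies
    puts "suite run by installer"
  end
  Process.wait(fork { }) # the child inherits the handler but skips it
RUBY

output = IO.popen(["ruby", "-e", script], &:read)
puts output
```

With the guard in place, "suite run by installer" is printed exactly once, even though the forked child inherited the handler.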

zamith commented 4 years ago

I'm not 100% sure, but I think what happens is that minitest/autorun sets the at_exit, but since it only runs the tests from within it, only one process actually gets to see it (the main test process). The other processes get the inner at_exit, which runs the after_run callbacks and never triggers a re-run of the tests. That's actually a pretty smart use of the at_exit stack by minitest.

However, since m runs the tests from outside the outer at_exit, all the processes created in the test (the main one plus all the forks) will see that at_exit, thus triggering a run per process.
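The nested-handler trick can be sketched in isolation (illustrative code, not minitest's actual source; it runs in a subprocess via `ruby -e` so the exit-time output is observable):

```ruby
# The "outer" handler runs the suite and registers an "inner" handler.
# CRuby pops a handler off the at_exit stack before running it, so a fork
# made while the outer handler executes inherits only the inner one:
# the child reports after_run but never re-runs the suite.
script = <<~'RUBY'
  main_pid = Process.pid
  at_exit do # outer handler, as installed by minitest/autorun
    at_exit { puts "after_run in #{Process.pid == main_pid ? 'main' : 'child'}" }
    Process.wait(fork { }) # a test that forks, mid-suite
    puts "suite ran in #{Process.pid == main_pid ? 'main' : 'child'}"
  end
RUBY

output = IO.popen(["ruby", "-e", script], &:read)
puts output
```

The suite runs exactly once (in the main process), while the forked child only reaches the inner after_run handler. When a tool like m runs the suite directly instead, the outer handler is still pending in every fork, which is exactly the double-run seen in the output at the top of this issue.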

In terms of ways around it, maybe minitest could allow a block to be passed to autorun to configure what happens within that at_exit, but I don't see that happening. The current approach works nicely for minitest but makes the running pretty opaque to the outside; then again, since it's called autorun, I guess that's kind of the point.

Sorry I can't be of more help here.