craigahobbs / unittest-parallel

Parallel unit test runner for Python with coverage support
MIT License

Add support for setUpClass/tearDownClass #14

Closed: erezsh closed this 1 year ago

erezsh commented 2 years ago

These are supported by unittest, and they make shared setup much more efficient (for example, setting up a single database connection that is reused across multiple tests).
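For context, a minimal sketch of the kind of shared fixture meant here (the class, table, and test names are hypothetical; an in-memory SQLite connection stands in for a real database):

```python
import sqlite3
import unittest


class DatabaseTests(unittest.TestCase):
    # setUpClass/tearDownClass run once per class rather than once per
    # test, so every test method shares the same connection
    @classmethod
    def setUpClass(cls):
        cls.conn = sqlite3.connect(":memory:")
        cls.conn.execute("CREATE TABLE items (name TEXT)")

    @classmethod
    def tearDownClass(cls):
        cls.conn.close()

    def test_insert(self):
        self.conn.execute("INSERT INTO items VALUES ('a')")
        count = self.conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
        self.assertEqual(count, 1)

    def test_select_missing(self):
        rows = self.conn.execute(
            "SELECT * FROM items WHERE name = ''").fetchall()
        self.assertEqual(rows, [])
```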

craigahobbs commented 2 years ago

If you have a setUpClass method you'll need to use the "--class-fixtures" option. The downside of this is that all parallelization will occur at the class level. Does that address your issue?

In the past, I've considered auto-detecting --class-fixtures (and --module-fixtures) but in a few hours of investigation I was unable to figure out how to accomplish it. Let me know if you figure it out. :)
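For reference, a typical invocation might look like this (the `-t`/`-s` discovery flags are assumptions based on unittest discovery conventions; only `--class-fixtures` is taken from the comment above):

```shell
# Run test case classes in parallel; setUpClass/tearDownClass are honored,
# but the tests within each class run sequentially
unittest-parallel -t . -s tests --class-fixtures
```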

erezsh commented 2 years ago

> The downside of this is that all parallelization will occur at the class level

Can you explain what you mean?

What I would expect to happen is to call setUpClass once for each worker thread, and then split the tests between those threads. Is that different from what happens right now?

craigahobbs commented 2 years ago

unittest-parallel can parallelize at three "levels" - test (default), class, and module. When you run at the class level (--class-fixtures), a test case class is passed to a Python unittest runner that calls setUpClass/tearDownClass and runs each test sequentially. When running at the test level (default), each test case method is passed (class methods are not called since the test runner is unaware of the test class), and all tests are run in parallel.
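This difference is visible in plain unittest, independent of unittest-parallel: class fixtures are invoked by the suite, not by the test case itself, so dispatching a bare test method skips them. A minimal sketch (the class name is hypothetical, and the direct `TestCase.run` call only approximates test-level dispatch):

```python
import unittest


class Example(unittest.TestCase):
    ran_class_setup = False

    @classmethod
    def setUpClass(cls):
        cls.ran_class_setup = True

    def test_something(self):
        pass


# Running through a TestSuite (as with --class-fixtures) invokes setUpClass,
# because the suite handles class fixtures
unittest.TestSuite([Example("test_something")]).run(unittest.TestResult())
print(Example.ran_class_setup)  # True

# Running the TestCase directly (roughly what test-level dispatch amounts to)
# never calls setUpClass
Example.ran_class_setup = False
Example("test_something").run(unittest.TestResult())
print(Example.ran_class_setup)  # False
```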

This design takes advantage of Python unittest's own test runner implementations. Getting the "automatic" behavior you describe above (I agree it's ideal) would involve duplicating the logic in Python unittest's runners.

Does "--class-fixtures" work for your project? If not, how is it failing? If it does work, how badly is parallelization impaired by running at the class level?

erezsh commented 2 years ago

My project is this: https://github.com/datafold/data-diff

We used parameterized to "explode" one test method into a few hundred, to test combinations of data types. (See: https://github.com/datafold/data-diff/blob/master/tests/test_database_types.py#L267)

The ideal scenario for us would be if it worked as I described, with worker threads that set up a class per worker (with a db connection), and then run several tests for each one.

But right now we're using the test-level parallelization, and that's good enough.

Except we have other tests that use setUpClass, and with this package they throw an exception. Right now I'm using a workaround where setUp() calls setUpClass(). (See: https://github.com/datafold/data-diff/blob/master/tests/test_diff_tables.py#L35)
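The workaround amounts to something like this (names are hypothetical and the setup is simplified; a plain object stands in for a real database connection):

```python
import unittest


class DiffTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Expensive shared setup, normally run once per class
        cls.connection = object()

    def setUp(self):
        # Workaround: test-level parallelization never routes through a
        # TestSuite, so setUpClass is never called automatically;
        # invoke it from the per-test setUp instead. This re-runs the
        # setup once per test, trading efficiency for correctness.
        self.setUpClass()

    def test_has_connection(self):
        self.assertIsNotNone(self.connection)
```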

But maybe the simplest solution would be if we could choose between class-level and test-level parallelization at the module level. That way we can get the parallelization where needed and possible, and revert to "safe" behavior for the other tests.

sy-be commented 1 year ago

--class-fixtures works in my case! Thank you!

craigahobbs commented 1 year ago

I added the --level option in unittest-parallel 1.6. By default, unittest-parallel now runs unit test modules in parallel, which works for all projects (even those with class and/or module fixtures). This will likely be faster in general due to the lower cost of parallelization. You can still run test classes in parallel using the --level=class option, or individual tests in parallel using the --level=test option.
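For reference, the three modes might be selected like this (the `-t`/`-s` discovery flags are assumptions based on unittest discovery conventions; `--level` and its values come from the release note above):

```shell
# Default since 1.6: parallelize whole test modules (safe with all fixtures)
unittest-parallel -t . -s tests

# Parallelize test case classes (setUpClass/tearDownClass honored)
unittest-parallel -t . -s tests --level=class

# Parallelize individual tests (skips class/module fixtures)
unittest-parallel -t . -s tests --level=test
```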