Project-MONAI / MONAI

AI Toolkit for Healthcare Imaging
https://monai.io/
Apache License 2.0

Test Refactor #8185

Open ericspod opened 2 weeks ago

ericspod commented 2 weeks ago

The tests directory has gotten large over time as we've added new components with their test cases. It's absolutely a good thing to have thorough testing, but as an almost-flat directory it's hard to find things and it's cumbersome to view so many files in IDEs. Further, there are likely many areas where we can refactor to reduce duplicate code and introduce more helper routines for common tasks. I would suggest a few things to improve our tests:

garciadias commented 19 hours ago

Hi there. I am interested in contributing, and this issue is something I would be comfortable helping with. Following the contribution guide, I am communicating. 🙃 Should I open a pull request before starting, or as soon as I have done a little work?

ericspod commented 19 hours ago

Hi @garciadias, thanks for offering to help. Did you want to do one aspect of what I was proposing, or more? I would suggest choosing which one you want to do first, considering how much work it would be, and then starting to implement some things on your own. I would suggest waiting until you have something to review before raising the PR, unless you want help at some point, in which case you can open a draft PR so we can look at your code. All these changes are roughly equally important in my mind, so I'll leave it to you to pick something to start with; let me know what you decide and we can discuss from there.

garciadias commented 17 hours ago

Hi @ericspod,

Thank you for your answer.

I was thinking about starting with this one:

  • A number of tests use large data items or perform quite a few iterations of training. These are slow running so it would be good to go through the slower tests to see if any speedup can be done without losing testing value.

I see the -q option skips the tests flagged as skip_if_quick, so my approach would be to look into the tests with this flag to understand which are actual integration tests and which could mock heavy functions. My main goal would be to increase coverage under the -q option so that the number of slow tests that need to be run is minimised, making the development process smoother.

Hopefully, this would give me enough knowledge of the test base so I could proceed with the first point:

  • Break the contents of tests into subdirectories mirroring those in the monai directory. Tests for transforms would go under transforms, those for networks under networks, etc. It may be necessary to have more directory structure under these as well but this doesn't need to be overcomplicated.

What do you think?

ericspod commented 17 hours ago


That sounds like a good plan, so go for it. My feeling with the slow tests is that they might be using data that's much larger than needed for the same testing value, as well as running for too long. Testing for functionality and testing for accuracy are two different tasks, and I wonder if there's too much focus in places on getting the right result. Definitely have a look around and see what you come up with; you can also open a new issue detailing your findings if more discussion would help.
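One hypothetical pattern for the data-size point: parameterize test shapes so functionality checks run on tiny volumes while accuracy checks stay behind the slow-test flag. The helper name and the `QUICKTEST` variable below are assumptions for illustration, not MONAI's actual conventions:

```python
# Sketch: pick a tiny data shape in quick mode, the full shape otherwise.
import os


def pick_volume_shape(full=(64, 64, 64), quick=(8, 8, 8)):
    """Return a tiny shape in quick mode and the full-size shape otherwise.

    A functionality test (does the transform pad/crop/stack correctly?)
    usually only needs a shape big enough to exercise the code path; a
    realistic scan size is only needed when accuracy is being measured.
    """
    is_quick = os.environ.get("QUICKTEST", "").lower() in ("true", "1")
    return quick if is_quick else full
```

A test would build its input array from `pick_volume_shape()` instead of a hard-coded large size, so the same case serves both the quick and the full run.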