tBuLi / symfit

Symbolic Fitting; fitting as it should be.
http://symfit.readthedocs.org
MIT License

Convert unittests to pytests #268

Closed antonykamp closed 4 years ago

antonykamp commented 5 years ago

Hey guys,

sorry it took so long to convert the tests, but you can now find the new pytests in this repo. The converted tests are marked with _py at the end. I saved the logs of each test just to give a better overview. Before you merge, I would rename the tests and delete the logs.

While executing the tests, I found many failures in the "old" unittests, which is why not every test passes 100% correctly. Maybe I'll track down one reason or another in the coming weeks; then I would open a new pull request.
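For readers following along, the conversion performed in this PR looks roughly like this; a minimal sketch with invented names, not code from the PR:

```python
import pytest

def compute():
    # hypothetical stand-in for a symfit call under test
    return 1.001

# unittest style, as in the original suite:
#
#   class TestExample(unittest.TestCase):
#       def test_result(self):
#           self.assertAlmostEqual(compute(), 1.0, places=2)

# pytest style, as in the converted tests:
def test_result():
    assert compute() == pytest.approx(1.0, abs=1e-2)
```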

Regards

antonykamp commented 5 years ago

A little hint about the next commit: pytest has a small bug. They thought the problem was solved, but apparently it isn't.

I solved the problem temporarily, and I'm in contact with them.

antonykamp commented 5 years ago

The last commit fixes #269

antonykamp commented 4 years ago

Everything looks fine! I'm happy about such constructive feedback :D! One never finishes learning, especially in this field ;)

In one or two weeks I'll have time to look into the code and search for the lines. If I have questions, I know who to contact.

pckroon commented 4 years ago

Nice work so far :) However, you've reverted your changes to test_fit_result.py, which is probably undesirable. You /do/ want to reset your changes to fit_result.py.

antonykamp commented 4 years ago

> Nice work so far :) However, you've reverted your changes to test_fit_result.py, which is probably undesirable. You /do/ want to reset your changes to fit_result.py.

I thought I should skip the whole test?

pckroon commented 4 years ago

> > Nice work so far :) However, you've reverted your changes to test_fit_result.py, which is probably undesirable. You /do/ want to reset your changes to fit_result.py.
>
> I thought I should skip the whole test?

No, changes to tests/test_fit_results.py are desirable. Your PR also made changes to symfit/core/fit_results.py, which are unrelated to this PR.

antonykamp commented 4 years ago

> > > Nice work so far :) However, you've reverted your changes to test_fit_result.py, which is probably undesirable. You /do/ want to reset your changes to fit_result.py.
> >
> > I thought I should skip the whole test?
>
> No, changes to tests/test_fit_results.py are desirable. Your PR also made changes to symfit/core/fit_results.py, which are unrelated to this PR.

Oh 😄 I mixed up the folders. I'll change it.

antonykamp commented 4 years ago

Thank you for your OK :) But before you merge, I would like to implement fixtures and parametrization to use pytest the right way.
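For context, a fixture plus parametrization in pytest looks roughly like this; a minimal sketch with invented data, not taken from the symfit test suite:

```python
import pytest

@pytest.fixture
def data():
    # shared setup, replacing unittest's setUp()
    return [1.0, 2.0, 3.0]

@pytest.mark.parametrize("scale, expected", [(1, 6.0), (2, 12.0)])
def test_scaled_sum(data, scale, expected):
    # one test body covering several cases
    assert sum(x * scale for x in data) == pytest.approx(expected)
```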

pckroon commented 4 years ago

No. Do that in a next PR to keep them small/contained/manageable/reviewable. And in the next PRs, let's do one file at a time for the same reasons.

antonykamp commented 4 years ago

Alright. I could open a PR for every file™, okay?

pckroon commented 4 years ago

That sounds good to me :) You should be able to create a branch off of this one so you don't have to wait for us to merge this (but I'll poke @tBuLi a bit). It could be that git will get confused once we rebase this PR to clean up the commit history a bit, though.

antonykamp commented 4 years ago

I would create some branches from master and add the tests piece by piece. Then we could simply close this PR... It's the easiest way, I think :)

pckroon commented 4 years ago

That would mean you won't have the changes you made in this PR, which at the very least causes a duplication of effort, and in the worst case a very, very difficult merge.

antonykamp commented 4 years ago

A PR for every single test? Really? :)

pckroon commented 4 years ago

That would make the PRs really very small. It would be preferable to large PRs though... Maybe a few tests per PR, and at most a file?

antonykamp commented 4 years ago

One big PR with every test as a commit would make more sense in my opinion :)

pckroon commented 4 years ago

I'd still prefer to limit it to one PR per file. But I can review it commit by commit.

antonykamp commented 4 years ago

If I understand you correctly: one branch, I commit test by test, and after every commit I open a PR? That's not right, is it? :D Could you give some instructions, please? :))

tBuLi commented 4 years ago

Thanks for your good work! I think @pckroon means that, in order to keep the chunks manageable for review, it is better to divide the work up into several PRs. I agree that one PR per file is probably a good idea.

I'll try to go over this in the coming days, but I know that @pckroon has already reviewed this extensively and is more experienced with pytest than I am, so if he's happy I probably won't have a lot of critique to add :P.

antonykamp commented 4 years ago

I could open a PR for 4 tests each, and @tBuLi and @pckroon could review (and possibly merge or critique) them piece by piece.

pckroon commented 4 years ago

Something like that would work for me (except that I'd also be happy to review). I'm OK with PRs going beyond 4 tests if you think it makes sense, so long as they stay under ~1 file. Also, as far as I'm concerned you're free to make fairly radical changes. I can foresee you having to merge or split tests, as well as deciding that a specific test should be replaced by another so that it tests the same behaviour, but now fits in the same pytest.mark.parametrize shape as another. Just make sure the coverage doesn't drop.
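As an illustration of the kind of merge described above, two near-duplicate tests can collapse into one parametrized test; a hypothetical sketch in which fit_quality is an invented helper:

```python
import pytest

def fit_quality(model_name):
    # invented stand-in for running a fit and returning a residual
    return 0.05

# Before: two tests exercising the same behaviour for different models
#   def test_linear():    assert fit_quality("linear") < 0.1
#   def test_quadratic(): assert fit_quality("quadratic") < 0.1

# After: one test in a shared parametrize shape
@pytest.mark.parametrize("model_name", ["linear", "quadratic"])
def test_fit_quality(model_name):
    assert fit_quality(model_name) < 0.1
```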

Happy coding :)

antonykamp commented 4 years ago

I would recommend merging these purely converted tests in multiple PRs (2-4 tests per PR). Later on I'll improve some of them with fixtures and parametrization and create new PRs. In the sense of convert first, improve second... two steps.

antonykamp commented 4 years ago

Or I create PRs for all tests except test_argument, test_fit_result, test_global_opt, test_auto_fit and test_constrained, and continue to work on those tests, adding fixtures and parametrization.

tBuLi commented 4 years ago

Just to let you know, I plan to go through this PR on Friday in more detail, and hopefully merge it if there are no big things left.

pckroon commented 4 years ago

I know I'm starting to be annoying, but I think I'd be happier if we just let the warnings be and let them fill the output. They're there for a reason, after all. Later, once we've been over the tests in a few PRs, we can always choose which, if not all, we want to squelch.
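To make the trade-off concrete, here is a sketch with an invented function, not from the PR: squelching would mean a filterwarnings mark, while pytest.warns keeps the warning visible and asserted.

```python
import warnings
import pytest

def noisy():
    # invented stand-in for a call that emits a warning
    warnings.warn("convergence is slow", RuntimeWarning)

# Squelching, which this comment argues against for now:
#   @pytest.mark.filterwarnings("ignore::RuntimeWarning")

# Keeping the warning visible by asserting it explicitly:
def test_noisy_warns():
    with pytest.warns(RuntimeWarning):
        noisy()
```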

tBuLi commented 4 years ago

I agree with @pckroon on that; I would also not ignore warnings for now.

And something nitpicky I'm noticing now, going through it in detail: there are many lines which exceed 80 characters by just a few characters, but that are broken in two since they violate PEP 8. But I disagree with that: as long as a line is just a couple of characters over, it is better to leave it as one line, since one line represents one coherent block of information. Splitting it in two actually makes it harder to read, which goes against the spirit of Python for me.

For example:

```python
assert fit_result.value(
        x0) / np.mean(data[:, 0]) == pytest.approx(1.0, 1e-2)
```

is much harder to read than

```python
assert fit_result.value(x0) / np.mean(data[:, 0]) == pytest.approx(1.0, 1e-2)
```

So I would be very happy if those kinds of lines are restored to a single line. As I'm now going over it, I'll make a PR against your branch so you don't have to redo the work.

pckroon commented 4 years ago

> And something nitpicky I'm noticing now, going through it in detail: there are many lines which exceed 80 characters by just a few characters, but that are broken in two since they violate PEP 8. But I disagree with that: as long as a line is just a couple of characters over, it is better to leave it as one line, since one line represents one coherent block of information. Splitting it in two actually makes it harder to read, which goes against the spirit of Python for me.

Are you sure you're reviewing the correct code? Most if not all of those were fixed in later commits.

antonykamp commented 4 years ago

Alright. So I'll bring back the warnings today.

tBuLi commented 4 years ago

> > And something nitpicky I'm noticing now, going through it in detail: there are many lines which exceed 80 characters by just a few characters, but that are broken in two since they violate PEP 8. But I disagree with that: as long as a line is just a couple of characters over, it is better to leave it as one line, since one line represents one coherent block of information. Splitting it in two actually makes it harder to read, which goes against the spirit of Python for me.
>
> Are you sure you're reviewing the correct code? Most if not all of those were fixed in later commits.

I just checked out this branch this morning; I didn't have it locally before.

> Alright. So I'll bring back the warnings and look over the line breaks in the next few days.

Don't worry about the line breaks, I'll make a PR against your branch with anything I catch right now.

tBuLi commented 4 years ago

Thanks for all your work! On the whole I'm quite satisfied, and I think we're very close to being able to merge this. I made some comments on your code where I think some discussion might be needed, and I made a PR against yours with some simple changes that I spotted.

But more importantly, the following files have not been translated in the same way that the others have been:

They still contain an object with all the tests on it, rather than having the tests as separate functions. Those really need fixtures, like @pckroon suggested. After that has been done, I'm happy!
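The requested change, roughly; a minimal sketch in which make_model is an invented factory:

```python
import pytest

def make_model():
    # invented factory standing in for building a symfit model
    return object()

# Before: one object carrying all the tests
#   class TestModel:
#       def setup_method(self):
#           self.model = make_model()
#       def test_model_exists(self):
#           assert self.model is not None

# After: a fixture plus free test functions
@pytest.fixture
def model():
    return make_model()

def test_model_exists(model):
    assert model is not None
```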

antonykamp commented 4 years ago

I would like to improve the tests with fixtures and parametrization, but I doubt that I'll finish before the weekend.

tBuLi commented 4 years ago

That's fine of course, do it at your earliest convenience :).

antonykamp commented 4 years ago

The coverage dropped because we no longer check for NoneType or `fit_result.stdev(b) == pytest.approx(flat_result.stdev(b))`.
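One way to restore those checks; a hypothetical sketch in which the fixtures and StubResult are invented, not the actual objects in test_fit_result.py:

```python
import pytest

class StubResult:
    # invented stand-in for the two fit results discussed here
    def stdev(self, param):
        return 0.1

@pytest.fixture
def fit_result():
    return StubResult()

@pytest.fixture
def flat_result():
    return StubResult()

def test_stdev_consistent(fit_result, flat_result):
    stdev = fit_result.stdev("b")
    assert stdev is not None  # the dropped NoneType check
    assert stdev == pytest.approx(flat_result.stdev("b"))  # the dropped comparison
```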

antonykamp commented 4 years ago

I'm working on test_fit_result.py at the moment. I noticed that in test_contest_objective_included, chained_result is not listed (I added LeastSquares). Such small gaps keep turning up :D

I was able to fill the gap at test_contest_objective_included, but I would need your help with the rest :) By filling the gaps, the coverage would increase and the use of parametrize and fixtures would be cleaner.

antonykamp commented 4 years ago

Since #274 was merged, I would suggest closing this one :)