Closed: LennartPurucker closed this 1 year ago
Patch coverage has no change; project coverage change: -0.12%. Comparison is base (bb3793d) 85.24% compared to head (09deee0) 85.12%. View full report in Codecov by Sentry.
Hey, is this still an issue? I don't think we should add workaround code for uncommon bugs on the server (i.e. they happened only once so far) and rather fix things on the server instead.
Mhm, I would say the current workflow (i.e., crashing) is an issue, and the alternative workflow (i.e., warning instead of crashing) would be better. But I see your point that we are technically only following the server specification.
I would leave this to your judgment. Feel free to close the PR.
I vote +1 on merging this.
Philosophically, I agree that openml-python should not accommodate all kinds of buggy server states. However, specifically for this case, I think the pragmatic approach of issuing a warning instead of an error leads to a much better user experience. I prefer this because the error is more likely than not triggered by tasks the user had no interest in to begin with (by contrast, I would not add workarounds for actually downloading and instantiating such a corrupted task, for example).
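To illustrate the pattern being discussed, a minimal sketch of warning about and skipping a corrupted task record during listing instead of raising. This is hypothetical code, not openml-python's actual implementation; the function name and the `task_id`/`name` fields are illustrative only:

```python
import warnings


def parse_task_listing(raw_tasks):
    """Parse raw task records from a server listing.

    Corrupted records (missing or malformed fields) are skipped with a
    RuntimeWarning instead of raising, so one bad task does not crash
    the whole listing call. Sketch only; field names are illustrative.
    """
    tasks = {}
    for raw in raw_tasks:
        try:
            task_id = int(raw["task_id"])
            tasks[task_id] = {"task_id": task_id, "name": raw["name"]}
        except (KeyError, TypeError, ValueError) as err:
            # Warn and continue instead of propagating the error.
            warnings.warn(
                f"Skipping corrupted task record {raw!r}: {err}",
                RuntimeWarning,
            )
    return tasks
```

The user still sees that something was wrong on the server, but the listing of the remaining (valid) tasks succeeds.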
Merging this now, as two people would like to have this merged.
Closes #1234