Closed · PGijsbers closed this 2 weeks ago
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 83.82%. Comparing base (923b49d) to head (1a0e654). Report is 1 commit behind head on develop.
The pre-commit fail is a false negative, as the pre-commit.ci job does not fail (and neither does my local run). Speaking of, we can probably remove the `pre-commit/run-all-files` workflow; I do not see what it accomplishes now that we have pre-commit.ci.
+1, let's remove `pre-commit/run-all-files`.
This allows us to better understand the traffic we see to our server API. The information is not identifiable to a person.

Motivated after seeing large spikes in traffic recently, some of them coming from `requests/1.x.x` agents, which leaves us wondering whether this is from Python API users (potentially indirectly, e.g., through the automlbenchmark) or from scripts/bots that don't identify themselves as such.

On top of this PR, I think we might want to make our user agent configurable so that services building on top of `openml-python` can identify themselves (e.g., the automlbenchmark, experiment bots). But I don't want to get into that right now.

I updated the unit tests that would otherwise crash. I explicitly did not add a check for the agent in these tests; to me that feels outside their scope. In general, it's a thin line the way it's set up: should our tests even fail if the code used a keyword argument instead of a positional one? That too felt out of scope, so I changed them to a (in my opinion) slightly better version of the same tests.
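For reference, a minimal sketch of what a configurable agent could look like (the `_user_agent` helper and the exact string format are hypothetical, not what this PR implements):

```python
from typing import Optional

import requests

import openml  # used only for its version string


def _user_agent(extra: Optional[str] = None) -> str:
    """Build a User-Agent string, optionally extended by a downstream service."""
    # Hypothetical format; the actual agent string set by this PR may differ.
    agent = f"openml-python/{openml.__version__}"
    if extra:
        # e.g. "automlbenchmark/2.1" appended by a service built on openml-python
        agent = f"{agent} {extra}"
    return agent


# API requests would then pass the agent explicitly:
response = requests.get(
    "https://www.openml.org/api/v1/json/data/61",
    headers={"User-Agent": _user_agent()},
)
```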
I do not feel this warrants its own unit test. If I should make one, could you recommend an approach? Just mocking the `requests` library and then calling `_api_call` for all types of requests? I just thought that wasn't particularly useful.
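For reference, a sketch of that mock-based approach, assuming the call ultimately goes through `requests.get`; the `_api_call` stand-in below is hypothetical rather than the real internal helper:

```python
from unittest import mock

import requests


def _api_call(path: str) -> requests.Response:
    """Hypothetical stand-in for openml-python's internal API helper."""
    return requests.get(
        f"https://www.openml.org/api/v1/json/{path}",
        headers={"User-Agent": "openml-python/x.y.z"},
    )


def test_api_call_sends_user_agent() -> None:
    # Patch requests.get so no network traffic occurs.
    with mock.patch("requests.get") as mocked_get:
        _api_call("data/61")
    _, kwargs = mocked_get.call_args
    # Assert the agent header is present and identifies the library.
    assert kwargs["headers"]["User-Agent"].startswith("openml-python/")
```

Patching at the `requests` boundary keeps the test hermetic, but it also couples the test to how the request is made, which is exactly the thin line mentioned above.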