Closed: votti closed this 4 years ago
Merging this PR leaves code quality unchanged.
Quality metrics | Before | After | Change |
---|---|---|---|
Complexity | 16.12 | 16.38 | 0.26 🔴 |
Method Length | 134.63 | 137.24 | 2.61 🔴 |
Quality | 8.06 | 8.06 | 0.00 |
Other metrics | Before | After | Change |
---|---|---|---|
Lines | 357 | 370 | 13 |
Changed files | Quality Before | Quality After | Quality Change |
---|---|---|---|
zenodo_get/zget.py | 8.06 | 8.06 | 0.00 |
Here are some functions in these files that still need a tune-up:
File | Function | Complexity | Length | Overall | Recommendation |
---|---|---|---|---|---|
zenodo_get/zget.py | zenodo_get | 123 | 904.08 | 0.07 | Reduce complexity. Split out functionality |
Please see our documentation here for details on how these metrics are calculated.
We are actively working on this report - lots more documentation and extra metrics to come!
Let us know what you think of it by mentioning @sourcery-ai in a comment.
Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
---|---|---|---|
zenodo_get/zget.py | 3 | 4 | 75.0% |
Total: | 3 | 4 | 75.0% |
Totals | |
---|---|
Change from base Build 52: | -0.1% |
Covered Lines: | 170 |
Relevant Lines: | 222 |
Hi,
Thanks for the contribution! Yes, I agree, the sandbox option seems useful, so I will merge it quickly.
I have to admit I haven't had time to clean up the code, so it is still the same ugly draft it was 2 years ago; I plan a refactoring/cleanup sometime. For now I will just update the tests (it seems I forgot to add support for Python 3.8+).
At the moment I have no better idea than to just assume the sandbox works, so I will skip testing it. (I would need to create a sandbox record, which is a bit of a pain.) But let me know if there is an easy way to test the sandbox automatically.
Cool! The 'issue' with sandbox records is that they are regularly cleaned. I will ask @Zenodo whether they happen to have a record that is guaranteed to stay there for testing purposes.
Thanks for the contribution. If I haven't made a mistake, it should work, and it is now published on PyPI (1.3.2).
Thanks for merging! FYI: I asked, and there doesn't seem to be a way to have a stable 'sandbox' record for testing:

> In general, Sandbox is indeed an unstable environment, since it's being used not only for internal but also external testing purposes. It's been a long time since we've cleared the database, though when done we repopulate the instance with random samples from the live system. Unfortunately, I cannot pinpoint a specific record that would always be restored, since that would mean that we'll have to respect that convention indefinitely and make it part of our internal processes, which, given our small team of developers, we try to keep very light and agile.
>
> May I ask what the purpose of a permanently available record is? If this is for testing purposes as part of a CI/CD pipeline, keep in mind that testing against external services is usually problematic, since you never know if Zenodo (Sandbox) will be up and running at all times (e.g. we might be having scheduled maintenance or unexpected downtime).
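Given that caveat, one lightweight compromise for CI would be to run the sandbox test opportunistically and skip it whenever the service is unreachable. This is only a sketch, not part of this PR; the endpoint checked and the skip logic are my assumptions about how one might wire this up:

```python
import pytest
import requests

SANDBOX = 'https://sandbox.zenodo.org'

def sandbox_available():
    # Treat any connection problem or non-2xx response as "sandbox is down".
    try:
        return requests.get(f'{SANDBOX}/api/records', timeout=10).ok
    except requests.RequestException:
        return False

@pytest.mark.skipif(not sandbox_available(),
                    reason='Zenodo sandbox unreachable; skipping external test')
def test_sandbox_records_endpoint():
    # No single sandbox record is guaranteed to persist, so this only
    # checks that the records listing endpoint answers at all.
    resp = requests.get(f'{SANDBOX}/api/records', timeout=30)
    assert resp.status_code == 200
```

This keeps the pipeline green during sandbox maintenance while still exercising the sandbox code path whenever the service happens to be up.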
Hi!
Thanks for writing this useful utility! I am preparing a submission and am experimenting with Zenodo using their 'sandbox' environment (https://sandbox.zenodo.org/). To enable this together with `zenodo_get`, I added a command line flag (`-s` or `--sandbox`) that switches to the sandbox URL. Maybe this could be generally useful?

Cheers!
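For illustration, here is a minimal sketch of how such a flag can be wired up with argparse; the constant names and the exact CLI layout are assumptions made for the example, not the actual zenodo_get source:

```python
import argparse

# Illustrative sketch only: zenodo_get's real CLI wiring may differ.
ZENODO_URL = 'https://zenodo.org'
SANDBOX_URL = 'https://sandbox.zenodo.org'  # assumed constant name

def parse_args(argv=None):
    parser = argparse.ArgumentParser(prog='zenodo_get')
    parser.add_argument('record', help='Zenodo record ID or DOI')
    parser.add_argument('-s', '--sandbox', action='store_true',
                        help='use sandbox.zenodo.org instead of the '
                             'production server')
    return parser.parse_args(argv)

def api_base(args):
    # Resolve the base URL once so all later requests hit the right host.
    return SANDBOX_URL if args.sandbox else ZENODO_URL

if __name__ == '__main__':
    args = parse_args()
    print(f'{api_base(args)}/api/records/{args.record}')
```

Resolving the base URL in one place means every later API and download request automatically targets the right host.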