Thanks, the suggestion to use a mix of mock and real RPC calls makes sense. Also, I'll make sure to skip the pytest tests instead of commenting them out.
I need to implement some final tests for the tzstats reward API, and then the PR can be reviewed and merged.
@nicolasochem @jdsika can you both test the tzstats reward API? I would like to merge this PR as soon as possible because it's getting too big. Thanks!
I still have not removed the release_override, and there are currently issues where the RPC results do not match tzkt, but that will be for another PR.
@vkresch I think you should have a look at your tests first, and then we can test it, right? I will be more active now and push a release.
@vkresch please make sure this gets ready this week!
@jdsika @nicolasochem ready for review/merge.
I have spent time looking through the code and the tests, and I have confirmed that my reports are identical across the different reward network providers, both on ghostnet and in dry-run mode on mainnet. From what I can see, this PR is ready to go. Awesome work @vkresch
I will add a disclaimer that I am still new to TRD and may have missed important questions.
thank you @vkresch @rvermootenct
Changelog:
Resolves https://github.com/tezos-reward-distributor-organization/tezos-reward-distributor/issues/600
To fix tzstats, I am currently assuming that since the tzkt API is working, it is the standard against which we should compare values for now.
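For illustration, a minimal sketch of such a cross-check, assuming a hypothetical common `get_rewards(address, cycle)` interface on both providers (not TRD's actual provider API) supplied via pytest fixtures:

```python
# Hypothetical cross-check: tzkt is treated as the reference, and
# tzstats must report the same value for an immutable past cycle.
def test_tzstats_matches_tzkt(tzkt_provider, tzstats_provider):
    cycle = 400  # a finished cycle, so both values are final
    baker = "tz1..."  # placeholder baker address
    assert (tzstats_provider.get_rewards(baker, cycle)
            == tzkt_provider.get_rewards(baker, cycle))
```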
Notes: I re-added some tests with actual API calls. I think the best way to test TRD right now is a mix of real and mock API calls. With real API calls we can detect deprecated API endpoints faster, IMO, and thus adjust/fix things more quickly. The real API calls should only request non-changing data, i.e. data which is static and can never change, e.g. a balance in a past cycle. I know this slows down the merge process, and that there is a discussion about mocking all the API calls, but this way we can make sure TRD actually works.
Mock API calls should be used when we want to test recent data that will change in the future, e.g. the current balance of a delegator, or when the data is too big to query each time (a performance issue). A sketch of both styles is shown below.
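As a minimal sketch of the two styles, assuming a hypothetical `myproviders` module standing in for TRD's actual reward provider code:

```python
from unittest.mock import patch

import myproviders  # hypothetical module, for illustration only

def test_past_cycle_balance_real_api():
    # Real API call: cycle 400 is finished, so this balance is
    # immutable and safe to assert against a hard-coded value.
    balance = myproviders.fetch_cycle_balance("tz1...", cycle=400)
    assert balance == 123456789  # placeholder expected value

@patch("myproviders.fetch_current_balance")
def test_current_balance_mocked(mock_fetch):
    # Mocked call: the live balance changes over time, so we pin
    # a fixed return value instead of hitting the network.
    mock_fetch.return_value = 1000
    assert myproviders.fetch_current_balance("tz1...") == 1000
    mock_fetch.assert_called_once_with("tz1...")
```

Note the mocked test patches the function on the module it lives in and calls it through the module, so the patch is actually in effect.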
IMPORTANT: In future PRs, please use:
@pytest.mark.skip(reason="no way of currently testing this")
if you really need to disable a test, instead of removing or commenting it out. Also provide a reason, for future developers who do not know why the test was disabled. A minimal example follows.
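A complete sketch of a skipped test (the test name is hypothetical; the decorator and reason string are as above):

```python
import pytest

@pytest.mark.skip(reason="no way of currently testing this")
def test_tzstats_reward_split():
    # Hypothetical test body; pytest will report this test as
    # skipped, together with the reason given above.
    ...
```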
Issues detected during test activation (a tick means this PR solved the issue):
Work effort: 32h