Closed distributedstatemachine closed 5 months ago
TODO:
@distributedstatemachine I think you mentioned you had started some of this somewhere? Any code that can be re-used or naw?
@sam0x17 @orriin https://github.com/opentensor/subtensor/pull/332 . Although, this description is probably outdated since we won't be using https://github.com/opentensor/bittensor/blob/stao/tests/integration_tests/test_subtensor_integration.py and would have to write an integration test that doesn't use mocks, i.e. we will have to spin up the local node in the CI.
yeah, worth mentioning: as long as you specify you are using SubtensorCI like the other GitHub workflows do, it will be a very beefy node with 128 GB of RAM
I've been exploring the bittensor codebase, and have some questions about how to proceed with this issue.
After speaking with @distributedstatemachine, it has become apparent that `test_subtensor_integration.py` is only designed to work with a mocked version of substrate, making it unsuitable for the integration tests described in this issue. Therefore, an entirely new test suite will need to be written for these e2e tests.
For basic tests, I could work through each file one-by-one in `bittensor/commands` and write e2e tests for all the combinations of logic in each subcommand. Is this something we want to do? Are there any commands with higher priority than others? It feels like it will take quite a long time to write tests for every subcommand; maybe we only want to write them for some?
For multi-step tests, there was already a scenario described here: https://discord.com/channels/799672011265015819/1176889736636407808/1236057424134144152 . Are there any other complex e2e cases we should test?
I suggest directly calling `run` on the commands exported from `bittensor/commands`. That way, it's easier to mock the command args compared to directly calling the CLI binary.
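A minimal sketch of that approach, using stand-in names (the command class and config helper below are hypothetical placeholders, not the real bittensor APIs):

```python
from types import SimpleNamespace

class SubnetListCommand:
    """Stand-in for a command exported from bittensor/commands."""
    @staticmethod
    def run(config):
        # A real command would query the chain here; this stub just
        # echoes the network it was pointed at.
        return f"listing subnets on {config.subtensor.network}"

def make_config(**overrides):
    # Build the minimal config object the command expects, instead of
    # parsing argv through the CLI binary. Mocking args is just attribute
    # assignment on this object.
    defaults = {"subtensor": SimpleNamespace(network="local")}
    defaults.update(overrides)
    return SimpleNamespace(**defaults)

def test_subnet_list_runs_against_local_network():
    assert "local" in SubnetListCommand.run(make_config())
```

The same pattern would apply to each real subcommand: build a config object pointed at the local node, call `run`, then assert on chain state or output.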
Since from inside the bittensor script we have no way to restart the chain (necessary between e2e tests to prevent polluting state and potential weird race conditions), we will need a test harness which runs a new `./localnet.sh` instance for every test. I'm thinking of creating an orchestrator file, which will spin up a `localnet.sh` instance, run a test, close the instance, and repeat for each e2e test.
There may also be some voodoo possible with before-each and after-each pytest hooks, but I'm not sure if it would be worth the extra effort to get those working.
I'm thinking of creating a new dir `tests/e2e_tests` for these, as they are more e2e than integration, and there's already a dir `tests/integration_tests` which is used for mocked substrate testing.
My proposed structure of the new testing dir is:
.
└── tests/
└── e2e_tests/
├── subcommands/
│ ├── subnets/
│ │ ├── list.py
│ │ └── ... # other subnets commands here
│ └── ... # other subcommands here
├── multistep/
│ ├── tx_rate_limit_exceeded.py
│ └── ... # other multi-step e2e tests here
├── common.py # common utils
    └── run.py # test orchestrator which will spin up a new `localnet.sh`, run an e2e test, repeat for each e2e test defined
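A rough sketch of what `run.py` could look like, assuming the node is started by a shell script and each test file is run with pytest (the script path, startup wait, and directory layout are placeholders, not a final implementation):

```python
import pathlib
import subprocess
import time

E2E_DIR = pathlib.Path("tests/e2e_tests")

def run_one(test_cmd, node_cmd=("./localnet.sh",), startup_wait=5.0):
    # Start a fresh chain so no state leaks between tests.
    node = subprocess.Popen(list(node_cmd))
    try:
        time.sleep(startup_wait)  # crude; polling the RPC port would be sturdier
        return subprocess.call(list(test_cmd))
    finally:
        # Tear the node down so the next test starts from a clean chain.
        node.terminate()
        node.wait()

def main():
    # One fresh node per test file, as proposed above.
    for sub in ("subcommands", "multistep"):
        for test_file in sorted((E2E_DIR / sub).rglob("*.py")):
            rc = run_one(["pytest", str(test_file)])
            if rc != 0:
                raise SystemExit(rc)
```

Finding a free port per node (instead of a hardcoded one) would be the main change needed to later run these in parallel.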
> How do I call into the CLI? I suggest directly calling `run` on the commands exported from `bittensor/commands`. That way, it's easier to mock the command args compared to directly calling the CLI binary.
I think we can call the CLI the same way it's currently done:
https://github.com/opentensor/bittensor/blob/master/tests/integration_tests/test_cli.py#L445-L5
> How to clear state between each e2e test? Since from inside the bittensor script we have no way to restart the chain (necessary between e2e tests to prevent polluting state and potential weird race conditions), we will need a test harness which runs a new `./localnet.sh` instance for every test. I'm thinking of creating an orchestrator file, which will spin up a `localnet.sh` instance, run a test, close the instance, and repeat for each e2e test. There may also be some voodoo possible with before-each and after-each pytest hooks, but I'm not sure if it would be worth the extra effort to get those working.
I like this. I think we can call `purge-chain` on the binary. Alternatively, we can write a long test that covers all the happy paths and not have to purge the chain at all.
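For the purge route, Substrate-based node binaries expose a standard `purge-chain` subcommand that deletes the chain database. A hedged sketch of wrapping it from the test harness (the binary path and base path are assumptions for illustration):

```python
import subprocess

def purge_chain(binary="./target/release/node-subtensor",
                base_path="/tmp/subtensor-e2e"):
    # `purge-chain -y` wipes the chain database without an interactive
    # prompt, giving the next test a clean state without a rebuild.
    return subprocess.check_call(
        [binary, "purge-chain", "--base-path", base_path, "--chain", "local", "-y"]
    )
```

This is faster than restarting from scratch, but unlike a full restart it does not reset in-memory state, so the node process would still need to be stopped first.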
@sam0x17 DM'd me about 1., saying that it's more important for this PR to get a nice process / structure / examples in place for writing e2e tests than to get full test coverage.
So I'll start with 2 examples. Also, some key requirements I would like to meet with this if possible:
I have a PoC using pytest fixtures (https://docs.pytest.org/en/6.2.x/fixture.html) to spin up and spin down localnet nodes between tests.
The initial adaptation will need to run in serial, but with some additional logic to find free ports it should be possible to upgrade in the future with the ability to run in parallel.
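A sketch of that fixture approach, assuming `localnet.sh` can be told which port to bind (the port argument is a placeholder for whatever the script actually accepts):

```python
import socket
import subprocess

import pytest

def free_port() -> int:
    # Ask the OS for an unused TCP port; this is the hook that would
    # later allow tests to run in parallel.
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

@pytest.fixture
def local_chain():
    # Fresh node per test: spin up before the test, tear down after.
    port = free_port()
    node = subprocess.Popen(["./localnet.sh", str(port)])  # hypothetical arg
    try:
        yield f"ws://127.0.0.1:{port}"
    finally:
        node.terminate()
        node.wait()
```

A test then just takes `local_chain` as a parameter and receives the node's endpoint, with setup and teardown handled by pytest.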
I have a PoC using pytest fixures (https://docs.pytest.org/en/6.2.x/fixture.html) to spin up and spin down localnet nodes between tests.
The initial adaptation will need to run in serial, but with some additional logic to find free ports it should be possible to upgrade in the future with the ability to run in parallel.
awesome, this is a great first stab at this 💯
can you link the PR to this with a `fixes #331`?
Description
To ensure continuous reliability between our Subtensor and the Bittensor package, we need to implement a comprehensive GitHub Actions workflow. This workflow will automate the entire testing process, from building the blockchain node using the `localnet.sh` script, to installing the Bittensor package from a configurable branch, and finally running the `test_subtensor_integration.py` integration test.

The primary objective of this setup is to verify that any changes introduced to the subtensor codebase do not break or introduce regressions in the Bittensor Python code. By parameterizing the Bittensor repository branch, we can test against various development stages and release candidates, ensuring compatibility and robustness across different versions.
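Sketched as a workflow file, the description above might look roughly like this (the runner label comes from the comment above; the branch input, script path, and step commands are assumptions, not the final implementation):

```yaml
name: bittensor-e2e
on:
  workflow_dispatch:
    inputs:
      bittensor-branch:
        description: "Bittensor branch to test against"
        default: "master"

jobs:
  e2e:
    runs-on: SubtensorCI   # beefy self-hosted runner mentioned above
    steps:
      - uses: actions/checkout@v4
      - name: Build and start local blockchain nodes
        run: ./scripts/localnet.sh &   # assumed script location
      - name: Install Bittensor from the configured branch
        run: pip install "git+https://github.com/opentensor/bittensor.git@${{ github.event.inputs.bittensor-branch }}"
      - name: Run integration test
        run: pytest tests/integration_tests/test_subtensor_integration.py
```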
Acceptance Criteria
- Blockchain nodes should be built using the `localnet.sh` script.
- The `test_subtensor_integration.py` integration test should be executed after successful installation of the Bittensor package.

Tasks
- Add a new workflow file in the `.github/workflows` directory.
- Integrate the `localnet.sh` script into the workflow for building and starting the blockchain nodes.
- Run the `test_subtensor_integration.py` integration test.

Additional Considerations
Related Links