muharem opened this issue 2 years ago
After writing down the thoughts, it feels to me that, if scoping all possible success and fail cases is a doable task, then automating this step of the release process via integration tests is a good idea.
All possible cases could be written down in the xcm_config file docs, together with a flag that tells whether every case is covered by an integration test.
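As a sketch of that "flag" idea, something like the following could live next to the case list. All names and cases here are made up for illustration; this is not an existing cumulus API.

```rust
/// Hypothetical registry of the known success/fail cases, mirroring the list
/// kept in the README / xcm_config docs. Every entry carries the "flag" that
/// says whether an integration test exercises it.
#[derive(Debug)]
struct XcmCase {
    /// Short description of the case.
    description: &'static str,
    /// Whether the case is expected to succeed or to be rejected by the config.
    should_succeed: bool,
    /// The flag: is there an integration test exercising this case?
    covered_by_integration_test: bool,
}

// Illustrative entries only; the real list has to be compiled per runtime.
const XCM_CASES: &[XcmCase] = &[
    XcmCase {
        description: "teleport DOT from Polkadot to Statemint",
        should_succeed: true,
        covered_by_integration_test: true,
    },
    XcmCase {
        description: "reserve-transfer an unknown asset into Statemint",
        should_succeed: false,
        covered_by_integration_test: false,
    },
];

/// A release-gating check: fail loudly if any listed case has no test yet.
/// With the sample data above this test fails, flagging the missing coverage.
#[test]
fn all_listed_cases_are_covered() {
    let uncovered: Vec<&str> = XCM_CASES
        .iter()
        .filter(|c| !c.covered_by_integration_test)
        .map(|c| c.description)
        .collect();
    assert!(uncovered.is_empty(), "uncovered XCM cases: {uncovered:?}");
}
```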
I would say a README is a better place than the xcm_config.rs file.
it feels to me that, if scoping all possible success and fail cases is a doable task
It is doable, and it has to be: if we are not able to identify all cases, then we have a problem. On the other hand, it is true that it is difficult to identify (or easy to overlook) what SHOULD NOT happen. That list can be "endless", so we will have to identify the most important cases.
If we are gonna trust the integration tests, everybody should get involved in maintaining and reviewing them. I have the feeling that they are not properly reviewed.
In addition, I think that an xcm_config.rs diff between releases for both the Polkadot and Statemint runtimes is also valuable. It is still a manual process, but it can quickly spotlight obvious expected changes that might have an impact on the integration test results.
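For illustration, a minimal sketch of how such a diff could be produced; the tag names and the file path are placeholders, not real release tags or runtime paths.

```rust
use std::process::Command;

/// Hypothetical helper that shells out to git to show the xcm_config.rs diff
/// between two release tags. Adjust the tags and path to the actual releases
/// and runtime layout.
fn print_xcm_config_diff(old_tag: &str, new_tag: &str, path: &str) -> std::io::Result<()> {
    let output = Command::new("git")
        .args(["diff", old_tag, new_tag, "--", path])
        .output()?;
    print!("{}", String::from_utf8_lossy(&output.stdout));
    Ok(())
}

fn main() -> std::io::Result<()> {
    print_xcm_config_diff(
        "release-old",                     // placeholder for the previous release tag
        "release-new",                     // placeholder for the new release tag
        "path/to/statemint/xcm_config.rs", // placeholder path to the runtime's config
    )
}
```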
@NachoPal do you think that, to review it, a person should be proficient with XCM configs and familiar with all the involved chains (like Statemint) in order to identify whether some case is valid?
Not really. I think that person should understand the expected functionality/behaviour of Statemint and its interaction with Polkadot and other parachains. It is not necessary to know how that behaviour is achieved (the XCM config). Obviously, some XCM knowledge is going to be a plus, mainly to understand the Events outcomes.
I am talking here about reviewing the integration tests. As for reviewing the xcm_config.rs diffs, yes, that person should be proficient with XCM.
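To make that concrete: reviewing such a test mostly comes down to reading assertions over the recorded events. A minimal, self-contained sketch with made-up event names (not the real pallet events) could look like this.

```rust
/// Hypothetical event type standing in for a runtime's `RuntimeEvent`; the real
/// integration tests match on events emitted by the involved pallets.
#[derive(Debug, PartialEq)]
enum MockEvent {
    XcmAttempted { success: bool },
    AssetsIssued { amount: u128 },
}

/// What a reviewer mostly needs to judge: given the events recorded during a
/// test run, was the expected outcome of the case observed?
fn teleport_succeeded(events: &[MockEvent]) -> bool {
    events.iter().any(|e| matches!(e, MockEvent::XcmAttempted { success: true }))
        && events.iter().any(|e| matches!(e, MockEvent::AssetsIssued { .. }))
}

#[test]
fn expected_events_for_a_successful_teleport() {
    let events = vec![
        MockEvent::XcmAttempted { success: true },
        MockEvent::AssetsIssued { amount: 1_000 },
    ];
    assert!(teleport_succeeded(&events));
}
```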
Problem
XCM configs require a high confidence level for every release, but the possible issues and the review process are not well defined.
Context
Due to its power and API flexibility, XCM(P) leads to a rich and complex runtime configuration. In addition, the functionality of use cases like teleports depends on the XCM configuration and the call weights of every chain involved in the communication.
The risk of misconfiguration, and its potentially high cost, require a high confidence level in the correctness of the XCM configs for every release.
In this document (and the comments) we will propose how we can achieve this.
Part of the initiative: https://github.com/paritytech/cumulus/issues/1657
Proposal 1, automation
There are two main ways to verify the configs during the release process right now: a manual review of the config files, and integration tests. The first has some drawbacks:
The second has its own drawbacks:
If the drawbacks of the integration test solution can be solved, or at least controlled, the tests can help us achieve all the goals we defined and will require no manpower within the release process.
First we should try to list 'all cases' and see whether that is achievable:
TBD @muharem
Proposal 2, manual review
Of the drawbacks of the manual review listed above, there is one we cannot solve: possible human error. The other two require team members to be proficient with XCM configs and to get familiar with all valid XCM cases, which brings us back to the definition of 'all cases'.