aatifsyed opened 1 month ago
`generate-vectors` — I assume all of those specified as included in the spreadsheet? If so, this would need to be extendable to methods beyond the common node spec that are still important to test for our own sake, e.g., the Ethereum ones. A flag would probably do, say `--only-common`, with all available methods generated when it is absent. I'd also consider having a filter list, which, as I recall, you were not a big fan of, but they have proved quite useful.

`filecoin-common-node-api-tool` -> `fil-api-tool` — it's shorter and more extendable if the purpose of the tool evolves in the future.

`openrpc validate spec.json` — does this validate against missing calls as well? In the most extreme scenario, would error responses (adhering to the OpenRPC spec) for all methods still be considered okay, with the call returning `0`? I'm asking because we had this issue once and it flew completely under our radar.

All of that sounds good; however, I'm not convinced that the process of tool decomposition necessarily has to affect our internal tools, especially not the ones we use locally to sanity-check the results of our work. Complicated testing pipelines often lead to developers delegating their testing to the CI, which is slow and inefficient.
It'd be nice to avoid having to write, read and maintain yet another set of scripts that run all these steps. Hiding this complexity from the developer would be a huge win.
There is a nice solution, however, that might work for both standalone tools and a composed one: WASM. I've had some experience with tools that compile it on demand from within a Rust program, e.g.:

```rust
// Hypothetical API: compile the sub-tool to WASM on demand, then run it
// in-process instead of shelling out to a separate binary.
let tool = compile_on_demand("tool1");
tool.run();
```
That would then:
Very happy to help out on that front if that sounds feasible.
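Coming back to the `openrpc validate` exit-code question: one way to catch the degenerate "every response is a spec-conformant error" capture — which a schema-only validator could happily accept with exit code `0` — is a caller-side guard over the capture. A minimal sketch; the NDJSON capture shape and field names here are assumptions, following JSON-RPC 2.0:

```python
import json

def all_errors(capture_lines):
    """True if every JSON-RPC response in an NDJSON capture is an error.

    A schema-only validator could still exit 0 on such a capture, so this
    is a separate sanity check, not a replacement for spec validation.
    """
    responses = [json.loads(line) for line in capture_lines if line.strip()]
    return bool(responses) and all("error" in r for r in responses)

# A healthy capture and a degenerate all-errors capture (hypothetical data):
healthy = ['{"jsonrpc": "2.0", "id": 1, "result": {"Height": 100}}']
broken = ['{"jsonrpc": "2.0", "id": 1, "error": {"code": -32601, "message": "method not found"}}']
assert not all_errors(healthy)
assert all_errors(broken)
```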
There's growing interest from the Lotus folks in testing their RPC API, and I think `forest-tool api compare` can be the basis of that.

`filecoin-common-node-api-tool` uses a paradigm of *capture, then assert* when it comes to testing JSON-RPC endpoints. If we rearchitect our API comparisons around `filecoin-common-node-api-tool`, I think we'd have the benefit of a cleaner implementation too :)
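For illustration, the *capture, then assert* paradigm can be sketched roughly like this — the record shape and names are hypothetical, not the actual format of the tool:

```python
import json

# Phase 1: record raw request/response pairs to NDJSON.
def capture(requests, call):
    return [json.dumps({"request": req, "response": call(req)}) for req in requests]

# Phase 2: run assertions over the capture offline, so the same capture
# can be checked many ways without re-hammering the node.
def assert_over(capture_lines, predicate):
    return all(predicate(json.loads(line)) for line in capture_lines)

# A stand-in for a live node, for demonstration only.
def fake_node(req):
    return {"result": {"Height": 100}}

lines = capture([{"method": "Filecoin.ChainHead", "params": []}], fake_node)
assert assert_over(lines, lambda rec: "result" in rec["response"])
```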
Here's the breakdown I propose:
```shell
forest-tool json-rpc generate-vectors --snapshot SNAPSHOT > vectors.ndjson
```

new ✨ Outputs this format: https://github.com/ChainSafe/filecoin-common-node-api/blob/4df2547b9e322b2182864e70c764f5e7522263a3/tool/src/main.rs#L102-L109 (without the `response` member).

```shell
filecoin-common-node-api-tool json-rpc replay http://127.0.0.1:1234 < vectors.ndjson > lotus.ndjson
filecoin-common-node-api-tool json-rpc replay http://127.0.0.1:2345 < vectors.ndjson > forest.ndjson
```

Hammers the nodes and saves the responses.

```shell
filecoin-common-node-api-tool openrpc validate spec.json < lotus.ndjson
filecoin-common-node-api-tool openrpc validate spec.json < forest.ndjson
```

Checks for spec compliance.

```shell
forest-tool json-rpc validate forest.ndjson lotus.ndjson
```

new ✨ Does a semantic comparison. Can also be run with just one test subject, so Lotus can use this in their CI. We get introspection etc. for free.
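For illustration, the semantic comparison in the last step could pair up responses from the two captures and compare results structurally rather than textually, so that key order, whitespace, or differing request ids don't register as mismatches. A minimal sketch with assumed field names:

```python
import json

def semantic_diff(left_lines, right_lines):
    """Return indices of vector pairs whose results differ structurally.

    Parses each NDJSON line and compares only the "result" member,
    deliberately ignoring the JSON-RPC "id". Field names are assumptions;
    a real implementation would also handle error members and length
    mismatches between the two captures.
    """
    mismatches = []
    for i, (left, right) in enumerate(zip(left_lines, right_lines)):
        a, b = json.loads(left), json.loads(right)
        if a.get("result") != b.get("result"):
            mismatches.append(i)
    return mismatches

# Same result, different ids: semantically equal, textually different.
forest = ['{"id": 1, "result": {"Height": 100}}']
lotus = ['{"id": 7, "result": {"Height": 100}}']
assert semantic_diff(forest, lotus) == []
assert semantic_diff(forest, ['{"id": 7, "result": {"Height": 101}}']) == [0]
```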