Kouprin opened 4 years ago
All these tests sound very good to have. We already have 1. 2 should be simple to add in CI. 3-5 seem more expensive and should be in nightly. I think we don't need to write new infra though; most of it is already available. From your list:
Create user with some amount of Eth
create_near_accounts(N) (probably should be done by patching genesis), create_eth_accounts(N) (this is done when the ganache node starts)
- Init contracts and start nodes/relays
- Create user with some amount of Eth
- Send tx from Eth to Near
- Send tx from Near to Eth
- Disable/enable near/ganache/relays/watchdog
- Get status of the user
- Get status of the system
For these we have equivalent JavaScript functions; we can call them in an e2e test written in JS.
Overall, looks good. There are more scenarios that we might want to test using system testing infrastructure: https://app.zenhub.com/workspaces/nearcore-5cea2bcf78297c385cf0ec81/issues/near/rainbow-bridge-cli/338
Also, I suggest we do not introduce Python into bridge infrastructure, because it would require duplicating lots of code that we already currently have: serialization/deserialization code, figuring out Web3 interface, etc. I suggest we keep it in JS for now.
Here are my thoughts on why we should use Python:
I vote for JS because
We already want to move from JS to Rust (#150). I suggest getting rid of JS at some point, if possible (or leaving a simple wrapper instead).
The true blocker is on the Ethereum side. Although it's possible to use the bare-bones Eth JSON-RPC in Rust, the most common and most battle-tested way is via web3js, so we cannot drop JS anyway.
(Not sure) I expect our devs are more familiar with Python than JS. Personally, I'm more familiar with Python.
This is true, but we can stick to the "Python-equivalent subset" of Node.js, which I think can be more developer-friendly than Python. E.g., for a few major use cases:
Also, Node.js is very reliable and at the speed of Golang for async I/O; Python isn't as reliable given the same amount of effort. Python libs are also not as reliable and need additional error handling.
I agree with all of @ailisp's arguments. Lots of code is already available in JS. A very large part of the JS code is about data format manipulations and interactions with non-robust interfaces, like web3js. If we moved to Python we would have to reimplement it. Also, while working with web3js we had to learn all of its quirks and build workarounds for them. If we moved to web3.py we would have to re-learn the quirks of another library.
What was the reason for moving away from Python when we did that? I don't remember, @nearmax
The reason is that there was too much glue code between Python and JS, which made it harder to maintain. So if we bring Python back we are going to have lots of glue code again, specifically for the testing suite.
The true blocker is on the Ethereum side. Although it's possible to use the bare-bones Eth JSON-RPC in Rust, the most common and most battle-tested way is via web3js, so we cannot drop JS anyway.
This is true. web3js is the standard, while other re-implementations like web3.py are not.
Also, Node.js is very reliable and at the speed of Golang for async I/O; Python isn't as reliable given the same amount of effort. Python libs are also not as reliable and need additional error handling.
This is also true. Parallelizing CPU computations with Python is a PITA. Nayduck tests might require CPU-heavy test fixtures, like header and proof preparation.
@ailisp @nearmax Thank you for the explanation! Let's start with JS then.
So here is my suggestion for structuring things (it also covers a bunch of other related issues):
The principle to follow is that we can test things at every level. E.g.:
```
lib:
  index.js               // exports everything from src
  src/
    borsh.js             // ideally use nearAPI.utils.serialize instead of our own borsh version, but I see there is a bunch of customization
    borshify-proof.js
    utils.js             // helpers and other tools go here
    proof-extractor.js
    contracts.js         // turn EthOnNearProverContract and other classes into a method `async createEthProverContract({params})`
    contract-management.js // functions to deploy ethereum and near client & prover contracts
  test/
    contracts.test.js    // test that contracts get deployed via contract management and can be used via the contracts api
    proof-extractor.test.js
    borsh / borshify tests
    ..
```
Because relayers right now cannot live in lib, keep them in cli. But any logic (like iterating over blocks, contract management, transformations, etc.) should go to lib.
```
cli:
  bin/
    rainbow-cli.js
  middleware/
    key-store.js
  services/
    eth2near-relay.js // relayer of Eth blocks to NEAR
    near2eth-relay.js // relayer of NEAR blocks to Eth
    watchdog.js       // watchdog for validity of NEAR blocks on ETH
  test/
    eth2near.test.js
    near2eth.test.js
```
Local testing:
- one command to run a default local near node
- one command to run default ganache
- run any test - should work
Remote testing:
- export NEAR_ENV=testnet
- run any test - should work
Additionally, in the specific connectors, there should be similar logic, e.g. rainbow-token-connector:
```
rainbow-token-connector/
  bridge-token-factory/  // rust contract with its own tests
  erc20-connector/       // solidity contract with its own tests
  src/
    index.js             // usage library for connectors / deploying contracts
  test/
    token-bridge.test.js // test with mock provers that connectors work
```
@ilblackdragon
Leverage NEAR's key management. Do similar thing for ethereum.
Could you be more specific here? What would be a more appropriate way of leveraging NEAR key management than using the near-api-js key stores, https://github.com/near/rainbow-bridge-lib/blob/master/init/near-contracts.js#L49 ?
Replace maybe create account, etc with near-api-js APIs for all this
AFAIR, in most places where we do not use near-api-js it is because either near-api-js did not have the functionality or it was not robust enough. @Kouprin please be extremely careful replacing this code with near-api-js, because you might be sacrificing robustness. This is where running your own instance of the bridge on NEAR testnet <> Ropsten and NEAR testnet <> Rinkeby is going to be useful, because it will expose you to real-life robustness issues.
Switch to use yargs / configs
How is yargs better than commander? Also, we are already using configs.
- Don't use classes where classes are not needed. Remove extra folders. If a class is just 1 method, or 1 initializer and 1 run - it should not be a class.
- Add utilities like remove0x; also use web3.utils, which has a rich set of things
Agree
- Replace maybe create account, etc with near-api-js APIs for all this
I'm against this too; near-api-js is not robust. We used to use the high-level near-api-js APIs for everything, but it caused at least 5 issues, and eventually we had to wrap it with low-level sendJsonRpc, retries and error handling. For example, createAccount/deployContract/contract function calls can fail on or after submitting the txn and give you a timeout error, so we don't know whether or not the txn was successfully submitted. Then, if we simply retry, we could double-send the txn with an increased nonce, and the txn really gets executed by the node twice because the nonce is different. With the high-level near-api-js API it's not possible to retry with the same nonce. We fixed this with sendJsonRpc broadcast_tx_async, which never times out and gives you the txHash, and then we retry with txStatus (a sketch of this pattern is below). Some related issues:
https://github.com/near/near-api-js/issues/360
https://github.com/near/rainbow-bridge-cli/issues/229
https://github.com/near/rainbow-bridge-cli/issues/149
https://github.com/near/rainbow-bridge-cli/issues/142
https://github.com/near/rainbow-bridge-cli/issues/258
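For illustration, here is a minimal sketch of that broadcast_tx_async + txStatus retry pattern, assuming near-api-js's JsonRpcProvider and an already-signed transaction; the function name and retry parameters are illustrative, and the actual code in rainbow-bridge-lib may differ:

```js
const nearAPI = require('near-api-js')
const { base_decode } = nearAPI.utils.serialize

// provider is a nearAPI.providers.JsonRpcProvider; signedTx is a
// nearAPI.transactions.SignedTransaction (encode() borsh-serializes it).
async function sendAndWait(provider, signedTx, senderId, { attempts = 10, delayMs = 3000 } = {}) {
  // broadcast_tx_async returns the tx hash immediately, without waiting for
  // execution, so a timeout can no longer leave us guessing whether the txn
  // was submitted.
  const txHash = await provider.sendJsonRpc('broadcast_tx_async', [
    Buffer.from(signedTx.encode()).toString('base64'),
  ])

  // Retry only the status query; never re-sign and re-send, which would bump
  // the nonce and could execute the transaction twice.
  for (let i = 0; i < attempts; i++) {
    try {
      return await provider.txStatus(base_decode(txHash), senderId)
    } catch (err) {
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
  throw new Error(`Transaction ${txHash} still not executed after ${attempts} status checks`)
}
```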
- Leverage NEAR's key management. Do similar thing for ethereum.
The current near-api-js keyStore and web3's utils seem good; what specific key management requirements do we need?
- Switch to use yargs / configs
Current setting of commander and configstore for arg and config - is there a reason to switch to yargs and configs?
- Use truffle artifacts to locate artifacts like abi/bin instead of managing them manually.
It's good to do if we don't decide to move away from truffle. @mikedotexe's research on truffle alternatives looks very attractive to me.
- contracts should be tested by themselves first
- then test that connectors work with each other, with mocked provers
- then test that relayers can work independently
- test the watchdog separately, sending invalid headers via utils
We have the first two. We don't yet have unit tests for the relayers and the watchdog.
Because relayers right now cannot live in lib, keep them in cli. But any logic (like iterating over blocks, contract management, transformations, etc.) should go to lib.
Yeah, this is better than the current structure.
Local testing: one command to run a default local near node, one command to run default ganache, run any test - should work
We have these.
Remote testing: export NEAR_ENV=testnet, run any test - should work
We also have this, but it's not done by setting NEAR_ENV; instead it's done by setting nearNodeUrl in ~/.rainbow/config.json.
RE: near-api-js:
RE: Key management:
`nearAPI.connect(.. { keyStore })`, where keyStore is input into the library, allows web applications to override the key store. Also, for web we need more of a WalletAccount abstraction, because we probably should sign transactions that move funds via a pop-up vs a local function-call key.
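A minimal sketch of that injection pattern, assuming near-api-js (network settings are placeholders; older near-api-js versions take the key store under `deps: { keyStore }`):

```js
const nearAPI = require('near-api-js')

// keyStore is supplied by the caller: a CLI can pass e.g.
// new nearAPI.keyStores.UnencryptedFileSystemKeyStore(`${require('os').homedir()}/.near-credentials`),
// while a web app can pass a browser-backed store or go through WalletAccount.
async function connectWithInjectedKeyStore(keyStore) {
  return nearAPI.connect({
    networkId: 'testnet',                    // placeholder settings
    nodeUrl: 'https://rpc.testnet.near.org',
    keyStore,                                // older versions: deps: { keyStore }
  })
}
```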
@mikedotexe wasn't researching alternatives to truffle to switch anything of ours - it was work to see which frameworks we need to support for EVM. I agree that buidler has nice features for debugging. I think truffle was adding them as well: https://www.trufflesuite.com/docs/truffle/getting-started/debugging-your-contracts#overview
You can do `truffle test --debug` and use `debug()`.
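For example (the contract and method names below are placeholders, not the actual bridge contracts):

```js
// Run with: truffle test --debug
const EthProver = artifacts.require('EthProver') // placeholder contract

contract('EthProver', () => {
  it('steps through a call interactively', async () => {
    const prover = await EthProver.deployed()
    // With --debug, truffle injects a global debug() helper that opens the
    // in-test debugger on the wrapped transaction.
    await debug(prover.proveLog('0x00')) // proveLog is a placeholder method
  })
})
```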
Switch to use yargs / configs
Current setting of commander and configstore for arg and config - is there a reason to switch to yargs and configs?
I don't really know the difference, but we should not have this file: https://github.com/near/rainbow-bridge-lib/blob/master/config/index.js. It looks like we are implementing something that has already been implemented time and time again. This file is not tested as far as I can see and, if nothing else, it has clear issues with error propagation. It might have other bugs as well.
we have the first two.
I see a single test "should be ok" in all contracts:
I don't see tests that check that a connector works on its own. Can you point me to them?
We have these.
I don't actually see tests; I see deployment and usage scripts. They also involve a strange on-disk state machine, which took @Kouprin a fair bit of time to realize is happening (strange, because no real use case can rely on this, as most usage will not have access to a disk).
Here is an example of the integration tests that I suggest (obviously not complete and without edge cases / error handling):
We're just missing an easy way to install neard via apt/brew (I guess it can be done via nearup, unless we changed that).
There is an example of good system testing infrastructure which is used in nearcore: https://github.com/nearprotocol/nearcore/tree/master/pytest.
We need to build system testing infrastructure so we have the ability to create and run custom tests which exercise the RB system as a whole. Here are some simple scenarios for such tests:
Some of those tests can be executed in CI; some of them are nightly ones. This infrastructure should completely replace the current way CI tests are written. After the infra is introduced, sh-scripts will be used only to check the validity of this infra and of external things like npm packages.
Here is the high-level functionality: