popzxc opened 5 days ago
As a user of both foundry and foundry-zksync, I'm looking forward to a solution to this!

My 2ct: 1 and 3, or even a combination of the two, could make sense imo (so specifying binaries as part of the profile). Using different profiles on different chains is already something people do a lot, therefore imo

> The drawback I can see here is that users may potentially need up to N*M profiles, where N stands for the profiles they normally have (default/CI)

is not a real drawback. Different EVMs aside, it's already the case that e.g. on Linea you can only run paris while on mainnet you might want to run cancun.

In an ideal scenario, imo, I should be able to run a script or all tests without actively choosing a profile. It would be great if, in a multichain repo, I could just run `forge test` and each test would automatically run in the appropriate environment. There should be the possibility to have inline configs (or similar) for the profile, similar to how we can inline-config fuzzing, so we can have tests for different chains side by side and execute them without custom scripting to select the correct env.
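For reference, Foundry's existing inline configuration is written as natspec comments whose payload is a TOML-style key/value pair; a hypothetical per-test network/profile override in the same spirit might look like the sketch below (the `network_family` key is made up for illustration and does not exist in Foundry today):

```toml
# Existing inline config, placed as a natspec comment above a Solidity test:
#   /// forge-config: default.fuzz.runs = 100
#
# A hypothetical per-test override in the same spirit, so tests targeting different
# chains can sit side by side in one suite (not an existing Foundry setting):
#   /// forge-config: default.network_family = "zk_stack"
```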
Component
Other (please describe)
Describe the feature you would like
Problem
Right now, Foundry provides wide coverage of the Web3 ecosystem, but it comes with a few nuances: some networks are supported only through dedicated binaries (e.g. foundry-zksync), while others require CLI flags (e.g. `anvil --optimism`). This creates a mixed environment for users, who have to choose binaries or CLI flags, often in different ways, which can be troublesome. The issue will likely become even more complex with several companies now working on their own stacks. I can imagine that eventually features like OP's supersim, and similar ones for ZKsync/Polygon/Arbitrum, would be wanted out of the box (especially once interop becomes an industry standard). Based on our interactions with teams building on ZKsync, not being able to easily reuse the same setup is a big adoption barrier, even if it's "kind of" integrated (e.g. even having to pass the `--zksync` flag was reported as an inconvenience, and understandably so).

Solution
I propose creating a unified way for users to explicitly specify which network they want to use in upstream Foundry, as well as a way for developers to create "hooks" into a particular implementation. It would mean that users always invoke `forge`/`cast`/`anvil`/etc., but based on some form of configuration (covered below) the tools can modify their behavior to match the network's expectations, e.g. forward execution to the `foundry-zksync` binaries or imply the `--optimism` flag.

Technically, I see several options to achieve this, but they revolve around a single prerequisite: all the tools in the Foundry suite must load some configuration before actual execution. Let's assume that we have a variable `network_family`, with supported options like `zk_stack`, `op_stack`, `starknet`. Handling it might vary. For example, if we're invoking `forge` with `network_family=zk_stack`, we simply forward execution to the `forge-zksync` binary with the same arguments. If we're invoking `anvil` with `network_family=op_stack`, it implies the `--optimism` CLI flag.
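For concreteness, a minimal sketch of what such a variable could look like; `network_family` and its values are the hypothetical proposal above, not an existing Foundry option, and where the key actually lives is exactly what the options below differ on:

```toml
# Hypothetical key, shown at the top level of foundry.toml purely for illustration.
# zk_stack  -> forge/cast/anvil forward execution to the corresponding *-zksync binaries
# op_stack  -> flags such as `anvil --optimism` are implied automatically
# starknet  -> execution is forwarded to the Starknet tooling (snforge/sncast)
network_family = "zk_stack"
```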
Option 1 - `foundry.toml` profiles

Foundry already has a similar mechanism for altering behavior: profiles. We can add a `network_family` variable there, and then users can reuse the same workflows they have by altering the `FOUNDRY_PROFILE` variable. The drawback I can see here is that users may potentially need up to `N*M` profiles, where `N` stands for the profiles they normally have (default/CI) and `M` stands for the networks they support (l1/op/zksync). On the other hand, it is not unlikely that profiles for different networks will be different anyway, especially for ZKsync and Starknet.
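As a rough illustration of both the mechanics and the `N*M` concern, a hypothetical `foundry.toml` under Option 1 might look like this (again, `network_family` is the proposed key, not an existing one):

```toml
# Two "base" profiles (default, ci) x two networks (l1, zksync) = four sections.
[profile.default]
network_family = "l1"

[profile.zksync]
network_family = "zk_stack"

[profile.ci]
network_family = "l1"

[profile.ci-zksync]
network_family = "zk_stack"
```

Selection would stay exactly as it is today, e.g. `FOUNDRY_PROFILE=ci-zksync forge test`.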
Option 2 - `FOUNDRY_NETWORK_FAMILY` env variable

In this option, there will be one more variable to choose a network family. For now, it looks like we don't need to add anything to `foundry.toml` in this case, but in the future it can be extended to have something similar to profiles (e.g. `[network_family.zksync]` and `[network_family.optimism]` sections). I'm not sure what the use cases will be there, but it may be relevant for interop (e.g. supersim configuration). The main drawback here is that we now have two environment variables, which means higher cognitive complexity.
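To make the possible future extension concrete, here is a hypothetical sketch of such sections (neither `FOUNDRY_NETWORK_FAMILY` nor the `[network_family.*]` tables exist today, and the commented keys are placeholders):

```toml
# The family would be selected at invocation time, e.g. FOUNDRY_NETWORK_FAMILY=zksync forge test;
# foundry.toml would only carry optional per-family settings.

[network_family.zksync]
# e.g. ZKsync-specific defaults could live here

[network_family.optimism]
# e.g. supersim/interop-related settings could live here
```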
Option 3 - Less upstream support

If having logic to handle the differences of particular chains feels troublesome, there is a more lightweight approach: we can introduce a `binary_mappings` variable, so that e.g. for ZKsync it would be `{ "forge" = "forge-zksync", "cast" = "cast-zksync" }` and for Starknet it would be `snforge`/`sncast`. This variable would simply tell which binary execution should be forwarded to, with the same arguments. This way no "custom" logic is added upstream, though it feels less extensible for networks with upstream support.

If (hopefully) we decide that this proposal makes sense and agree on a particular option, the ZKsync community will be happy to submit PRs for the implementation. We see it as a first step towards #feat(compatibility): add zkSync support, as well as greater integrity of the ecosystem.

Additional context
No response