Open PaulRBerg opened 1 month ago
FYI I've started working on a draft PR for this
Nice write-up. I agree with the overall suggestions and ideas from the OP.
I just want to add that, beyond the information (ABIs, contract addresses, broadcasts, etc.) that the repo will provide, I think we could take this even further. My idea is to make this repo a central point for everything related to deployments across all protocols—including the deployment scripts themselves (which means we'd need to add a `foundry.toml` file).

We'd also be installing all protocol repos (`lockup`, `flow`, `airdrops`, etc.) here via `package.json`, so we can implement a unified `DeployProtocol` script to handle all contract deployments in one go.

One potential problem is dealing with different deployment profiles—for example, `lockup` requires fewer `optimizer_runs`, while `airdrops` can use a much higher value. But we can handle this using different Foundry profiles as a pre-step before running the deployment script to generate the initcode. Precompiles could either be part of this repo or stay in their respective repos—we could refactor the `DeployOptimized` testing utils to contain the bytecode directly instead of reading it from `out-optimized`.
### foundry.toml
```toml
[profile.lockup]
src = "src/lockup"
optimizer_runs = 500
[profile.flow]
src = "src/flow"
optimizer_runs = 1000
[profile.airdrops]
src = "src/airdrops"
optimizer_runs = 100_000_000
```
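The profile pre-step could be driven from the shell via Foundry's `FOUNDRY_PROFILE` environment variable, which selects the matching `[profile.<name>]` section. A sketch of the assumed workflow (the script path and name are hypothetical, not an existing file):

```shell
# Build each protocol with its own optimizer profile before deploying.
# FOUNDRY_PROFILE picks the [profile.<name>] section from foundry.toml.
FOUNDRY_PROFILE=lockup forge build
FOUNDRY_PROFILE=flow forge build
FOUNDRY_PROFILE=airdrops forge build

# Then run the unified deployment script (hypothetical name).
forge script script/DeployProtocol.s.sol --broadcast
```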
### src/lockup/Precompiles.sol
```solidity
/// Import to generate the bytecode
import { LockupNFTDescriptor } from "@sablier/lockup/src/LockupNFTDescriptor.sol";
import { SablierBatchLockup } from "@sablier/lockup/src/SablierBatchLockup.sol";
import { SablierLockup } from "@sablier/lockup/src/SablierLockup.sol";
library Precompiles {
    function lockupInitcode() internal pure returns (bytes memory) {
        return hex"..."; // bytecode elided
    }
}
```

I really think centralizing everything here is the right call in terms of maintenance costs and ease of use. The idea is mainly inspired by how Uniswap handles it (they have a single repo for all contract deployments), and interestingly, their new `v4-core` doesn't even contain any deployment scripts.
We could rename this repo to `evm-deployments` and keep Solana-related broadcasts in the `solsal` repo, as I initially suggested (we won't have to deal with multichain deployments there, so it makes sense to keep them there). We could also move the `multichain-deployer` logic into this repo. That way, we could repurpose that repo into something else—maybe a `multi-scripts` repo—and use it for scripts like this one.

@sablier-labs/evm RFF — I tried to keep my message concise here, but I highly recommend checking out Uniswap's repo. Not necessarily to do a deep dive, but just to get a general sense of how they're structuring things and what patterns they're following.
`lockupInitcode` won't be that straightforward. Have you considered libraries in precompile generation?
There could be scenarios where we deploy a new release of only one repo. We would have to take that into account as well, in case we plan to move all the deployment-related things here.
> `lockupInitcode` won't be that straightforward. Have you considered libraries in precompile generation?
I don't understand your point. We would have something automated (similar to the `update-precompiles.sh` script) to update the bytecode/initcode/precompile (the name is not important). We can place them in a lib/contract, whichever we find makes more sense.
> There could be scenarios where we deploy a new release of only one repo. We would have to take that into account as well, in case we plan to move all the deployment-related things here.
ofc, we can have scripts to deploy all protocols at once or just a specific one—the idea is simply to centralize everything here.
> I don't understand your point. We would have something automated (similar to the `update-precompiles.sh` script) to update the bytecode/initcode/precompile (the name is not important). We can place them in a lib/contract, whichever we find makes more sense.
I am not sure if deploying lockup using bytecode would work. The lockup bytecode requires placeholders to be replaced by the deployed libraries' addresses. Foundry does that internally (either by deploying the libraries or identifying the onchain addresses if already deployed) when we deploy the contract. But in the case of precompiles, I am not sure whether Foundry would be able to deploy Lockup or not.
Just throwing this idea out there, as I remember facing some problems with bytecode when working on the singleton PR.
> I am not sure if deploying lockup using bytecode would work. The lockup bytecode requires placeholders to be replaced by the deployed libraries' addresses. Foundry does that internally (either by deploying the libraries or identifying the onchain addresses if already deployed) when we deploy the contract. But in the case of precompiles, I am not sure whether Foundry would be able to deploy Lockup or not.
We were able to fix this in `DeployOptimized`.
It was easy in DeployOptimized, because of local deployments. In case of production, the logic would be to:
"identify if library bytecode has changed. If not, use the onchain addresses of previously deployed library. If yes, redeploy the libraries and save those addresses."
IIRC, when libraries are already deployed, due to matching bytecode, Foundry doesn't even return an address. I am not saying it's not possible, just that it won't be easy.
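For context on the placeholder issue: unlinked Solidity bytecode (compiler 0.5.0 and later) marks each library reference with a placeholder of the form `__$<34 hex chars>$__`, derived from a hash of the library's fully qualified name. A small sketch of how a deploy script could guard against pushing such bytecode on-chain as-is (function names are illustrative):

```typescript
// Unlinked Solidity bytecode contains placeholders like
// __$5b83fdb55b4ba3988bf3a22bd2826d6d70$__ where a library address belongs.
const LIBRARY_PLACEHOLDER = /__\$[0-9a-fA-F]{34}\$__/;

// Returns true if the bytecode still references at least one unlinked library.
function hasUnlinkedLibraries(bytecode: string): boolean {
  return LIBRARY_PLACEHOLDER.test(bytecode);
}

// A precompile-based deploy should refuse (or link first) when placeholders remain.
function assertLinked(bytecode: string): void {
  if (hasUnlinkedLibraries(bytecode)) {
    throw new Error("Bytecode still references unlinked libraries");
  }
}
```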
If it turns out to be too complicated, we could just keep the default optimizer runs for `lockup` at 500 and deploy it normally via `new`. Then we'd use precompiles only for `airdrops` and `flow`, since those don't rely on public libraries.
@andreivladbrg your suggestion is interesting, thanks for sharing your thoughts.
I prefer to keep issues small and isolated, or, in case of complicated tasks, create meta issues that track multiple other subissues.
Could you please create a discussion for your proposal, and then hide your comments in this issue?
This particular issue should be exclusively about the TypeScript spec of this package, and the first version should be as simple as possible. That is, it should contain only the deployment addresses and maaaybe the subgraph URLs. The first version won't even contain the ABIs or the broadcasts.
What you're suggesting is a major overhaul, and we need to first debate it properly.
> That is, it should contain only the deployment addresses and maaaybe the subgraph URLs. The first version won't even contain the ABIs or the broadcasts.
I am not sure I understand this. We already have the broadcasts and ABIs in this repo.
Having them in the repo is one thing. Exporting them in a neat way in a TypeScript package is another thing. The latter takes effort (TypeScript types, TypeScript rules for exactly what is included and not included, etc.). Also, since these are meant to be used in production, we might want to sanitize the ABIs before including them in the public package.
Let's apply the KISS principle and ship a minimal version of this package, test that out, and then we can think of further improvements.
I haven't reviewed the code you started yet, but I think all we really need is a simple TS `process` function that takes inputs like chain, version, etc., and looks up the addresses from the repo. No need to hardcode addresses again in multiple places—just rely on the broadcasts.
Asked Claude for a function:
```ts
import * as fs from 'fs';
import * as path from 'path';

interface ReturnValue {
  internal_type: string;
  value: string;
}

interface Returns {
  [key: string]: ReturnValue;
}

interface JsonData {
  returns: Returns;
}

/**
 * Fetches and formats return values from a protocol JSON file
 * @param protocol The protocol name
 * @param version The version number (without 'v' prefix)
 * @param chain The chain name
 * @returns Formatted string of contract names and addresses
 */
function fetchReturns(protocol: string, version: number, chain: string): string {
  try {
    // Construct the file path
    const filePath = path.join(protocol, `v${version}`, `${chain}.json`);

    // Read and parse the JSON file
    const fileContent = fs.readFileSync(filePath, 'utf8');
    const jsonData: JsonData = JSON.parse(fileContent);

    // Extract the returns object
    const returns = jsonData.returns;
    if (!returns) {
      throw new Error(`No 'returns' section found in ${filePath}`);
    }

    // Format the output
    let result = '';
    for (const key in returns) {
      if (returns.hasOwnProperty(key)) {
        const contractType = returns[key].internal_type.replace('contract ', '');
        const contractAddress = returns[key].value;
        result += `${contractType}: ${contractAddress}\n`;
      }
    }

    return result.trim();
  } catch (error) {
    if (error instanceof Error) {
      throw new Error(`Error processing returns: ${error.message}`);
    }
    throw error;
  }
}

export default fetchReturns;
```

### Usage

```ts
// Example usage
const result = fetchReturns('sablier', 1, 'ethereum');
console.log(result);
// Output:
// SablierBatchLockup: 0xaD2f0228369D71605cd19c33FfA2Dde85A2FE477
// SablierLockup: 0xF56b79523FD0b4A6c9bf4e6F7a3Ea45dC0fB5bBC
// LockupNFTDescriptor: 0xDd695E927b97460C8d454D8f6d8Cd797Dcf1FCfD
```
We need to hard-code the addresses because we do not want to export the broadcasts and the artifacts. As I've said before, we need to sanitize them before making them available publicly.
> We need to hard-code the addresses because we do not want to export the broadcasts and the artifacts.
I can’t say I fully agree, they contain all the info one might need. What’s the problem with including them? We’d just need a simple getter function to process the JSON-based ones as needed.
> As I've said before, we need to sanitize them before making them available publicly.
I am not sure what you mean by this.
> What's the problem with including them?
https://github.com/sablier-labs/deployments/issues/21#issuecomment-2730587165
> I am not sure what you mean by this.
This package is meant to be consumed in a TypeScript environment, meaning we cannot just dump the JSONs and the ABIs in the exported package. I mean, we could do that, but that would be a shitty design. We should create bespoke Types for the artifacts, so that they can be used in Viem environment (e.g. see Razvan's comment here).
But again — let's apply KISS and ship a simple version at first.
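To illustrate the bespoke-types point (an illustrative sketch, not the final design): exporting an ABI with an `as const` assertion preserves the literal types, so tools like viem/abitype can infer typed contract bindings at compile time, which a plain JSON dump cannot offer. The export name below is hypothetical; the `withdraw` entry is just a sample fragment:

```typescript
// Hypothetical sanitized ABI export. `as const` keeps the literal types so
// viem/abitype can infer function names and argument types from it.
export const sablierLockupAbi = [
  {
    type: "function",
    name: "withdraw",
    stateMutability: "nonpayable",
    inputs: [
      { name: "streamId", type: "uint256" },
      { name: "to", type: "address" },
      { name: "amount", type: "uint128" },
    ],
    outputs: [],
  },
] as const;
```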
> meaning we cannot just dump the JSONs and the ABIs in.. but that would be a shitty design
I disagree — they don't need to worry about what's included in the package or not. They'd simply use something like `getLockupABIForV2` or `getLockupAddressesForV2("mainnet")` — not those exact names, but you get my point. If it's not clear, I can put together a sample at some point.
We already have 7 lockup releases, and we’ll keep adding more. Duplication isn't sustainable, both in the short and long term
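A minimal sketch of what such a getter could look like (all names and addresses below are illustrative placeholders, not real deployments; in practice the registry would be generated from the broadcasts rather than maintained by hand):

```typescript
type Address = `0x${string}`;

// Hypothetical generated registry: version -> chain -> contract name -> address.
const LOCKUP_DEPLOYMENTS: Record<string, Record<string, Record<string, Address>>> = {
  v2: {
    mainnet: {
      SablierLockup: "0x0000000000000000000000000000000000000001",
      LockupNFTDescriptor: "0x0000000000000000000000000000000000000002",
    },
  },
};

// Look up all v2 lockup contract addresses for a given chain.
function getLockupAddressesForV2(chain: string): Record<string, Address> {
  const deployment = LOCKUP_DEPLOYMENTS.v2[chain];
  if (!deployment) {
    throw new Error(`No v2 lockup deployment on chain "${chain}"`);
  }
  return deployment;
}
```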
Context
See the full rationale behind this proposal here: Package for listing deployment addresses and subgraph endpoints?
Spec
`SablierChain`, `SablierContract`, and `SablierRelease` TypeScript Types
My proposed design:
```typescript
type SablierChain = {
  explorerURL: string;
  id: number;
  name: string;
  nativeCurrency: {
    name: string;
    symbol: string;
    decimals: number;
  };
};

// Also used for libraries
type SablierContract = {
  address: string;
  name: string;
  chain: SablierChain;
};

type Indexers =
  | { envio: string; thegraph: string | null }
  | { envio: string | null; thegraph: string };

type SablierRelease = {
  contracts: SablierContract[];
  product: "airdrops" | "flow" | "lockup";
  indexers: Indexers;
  version: string;
};

type SablierContracts = SablierContract[];
type SablierReleases = SablierRelease[];
```
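To show how these types compose, here is a hypothetical release entry (the chain metadata is real Ethereum mainnet data; the contract address and indexer endpoint are placeholders). The type definitions are repeated from the proposal so the snippet is self-contained:

```typescript
// Types repeated from the proposed design above.
type SablierChain = {
  explorerURL: string;
  id: number;
  name: string;
  nativeCurrency: { name: string; symbol: string; decimals: number };
};

type SablierContract = { address: string; name: string; chain: SablierChain };

type Indexers =
  | { envio: string; thegraph: string | null }
  | { envio: string | null; thegraph: string };

type SablierRelease = {
  contracts: SablierContract[];
  product: "airdrops" | "flow" | "lockup";
  indexers: Indexers;
  version: string;
};

// Real chain metadata for Ethereum mainnet.
const mainnet: SablierChain = {
  explorerURL: "https://etherscan.io",
  id: 1,
  name: "Ethereum Mainnet",
  nativeCurrency: { name: "Ether", symbol: "ETH", decimals: 18 },
};

// Placeholder address and indexer endpoint, for illustration only.
const lockupRelease: SablierRelease = {
  product: "lockup",
  version: "v2.0.0",
  indexers: { envio: "https://indexer.example/lockup", thegraph: null },
  contracts: [
    {
      name: "SablierLockup",
      address: "0x0000000000000000000000000000000000000001",
      chain: mainnet,
    },
  ],
};
```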
Consuming Repos
TBD