buffalojoec opened 10 months ago
Thanks again for taking the time to offer that feedback! Allow me to index your points with letters so we can refer to them easily going forward.
[A] Regarding createJoeProgram vs getJoeProgram.
I agree, the latter is more consistent with the rest of the web3.js library. I have made the change in Kinobi (I just need to regenerate the code when I release the new version).
[B] Regarding creating a package for the shared.ts file.
Yes, that’s my goal. In fact, that file was much bigger to begin with. I gradually extracted its logic into new Web3.js packages such as @solana/accounts and @solana/programs. However, the remaining logic is really tied to the generated code and is simply there to avoid regenerating the same code over and over again. We’ll just have to name the package something very specific like @solana/kinobi-helpers. I also think it’s best to extract that shared code as late as possible since it’s likely to change as we iterate on the generated API.
[C] Regarding making everything a defined type.
That is kind of how things work right now. The code for rendering defined types is exactly the same as the code generating account data and instruction data. It’s just a case of rendering that data in different files as opposed to having a big library of defined types.
However, the Kinobi tree itself is structured a little bit differently for reasons I’ll explain in a minute. Inside a program, an instruction, an account and a defined type each have their own node as they each have their own sets of requirements. Within each of these nodes, their data is defined as such:
type DefinedTypeNode = { type: TypeNode, /*...*/ }
type AccountNode = { data: StructTypeNode, /*...*/ }
type InstructionNode = { arguments: InstructionArgumentNode[], /*...*/ }
A TypeNode is a type helper that is a union of all possible types — e.g. both a StructTypeNode and a NumberTypeNode are valid TypeNodes.
When we generate these types and their codecs we simply do something like:
generateTypeAndCodecs(definedType.type);
generateTypeAndCodecs(account.data);
generateTypeAndCodecs(structTypeNodeFromInstructionArgumentNodes(instruction.arguments));
What you’re suggesting is the following:
type DefinedTypeNode = { type: TypeNode, /*...*/ }
type AccountNode = { data: DefinedTypeLinkNode, /*...*/ }
type InstructionNode = { data: DefinedTypeLinkNode, /*...*/ }
The DefinedTypeLinkNode exists and allows us to reuse defined types within any type.
There are several reasons why the former approach is preferred to the latter:

- Instruction data is not only used to decode instructions but also to create them. Meaning it has very unique needs that would not fit in a typical StructTypeNode. For instance, you can tell Kinobi that a particular instruction argument should default to the PDA bump of a provided instruction account. These special values are called ContextualValueNodes and cannot be used everywhere.
- Accounts already use the StructTypeNode to define their data. I could in theory relax that to a TypeNode and you could have, for instance, myAccount.data being a simple number. The issue with that is some of the other information present in the AccountNode may need access to a particular field of the account data. For instance, the FieldDiscriminatorNode tells Kinobi that the configured field should be used to distinguish this account from another one. If the account data is a NumberTypeNode, it makes this impossible. That being said, we could add a validation rule that prevents that invariant from happening.
- Finally, it is possible for a DefinedTypeLinkNode to reference a type outside of the current tree using the importFrom attribute. Whilst this is good for code generation as we can compose generated clients together, it’s pretty bad for explorers unless they also have access to the Kinobi tree of the imported type.

I’m hoping this explanation helped more than raising more questions haha.
[D] Regarding renaming safeFetchX functions.
That’s good feedback! I’ve never been a big fan of the safe prefix but couldn’t come up with something better. I really like nullable so I’m guessing you mean something like fetchNullableNonce.
Another idea this gave me is to return a MaybeNonce account instead of Nonce | null to fully leverage all the MaybeAccount helpers. Then we could call that function fetchMaybeNonce. What do you think?
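For illustration, here's a minimal sketch of what that could look like. The MaybeAccount shape is simplified here and fetchMaybeNonce is just the assumed generated name:

// Simplified sketch; not the actual @solana/accounts types.
type Nonce = { authority: string; blockhash: string };
type MaybeAccount<TData> =
  | { exists: true; address: string; data: TData }
  | { exists: false; address: string };

// The generated fetcher would return a MaybeAccount instead of Nonce | null.
declare function fetchMaybeNonce(rpc: unknown, address: string): Promise<MaybeAccount<Nonce>>;

async function example(rpc: unknown, address: string) {
  const maybeNonce = await fetchMaybeNonce(rpc, address);
  if (maybeNonce.exists) {
    console.log(maybeNonce.data.authority); // narrowed to the existing variant
  }
}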
[E] Regarding resolving custom program errors on the user side.
My plan was to keep track of the generated programs in an array and use the resolveTransactionError method from the @solana/programs package like so:
// Keep track of all programs used in your dApp in a store somewhere.
const programs = [
  getSplSystemProgram(),
  getSplComputeBudgetProgram(),
  getSplAddressLookupTableProgram(),
];

try {
  await sendAndConfirmTransaction(transaction);
} catch (error) {
  // Resolve the error if possible.
  const resolvedError = resolveTransactionError(error, transaction, programs);
  // If not, just throw.
  if (error === resolvedError) throw error;
  // Otherwise, do some custom things based on the resolved error.
  if (
    resolvedError.name === "SplSystemProgramError" &&
    resolvedError.code === SplSystemProgramErrorCode.RESULT_WITH_NEGATIVE_LAMPORTS
  ) {
    notifyUser("Your wallet doesn't have enough SOL mate.");
  }
}
Note that the resolved error can come from a variety of programs since transactions contain multiple instructions which can each CPI into other programs. For the error to be resolved, the failing program should be part of the programs array above.
An improvement we could make is offering a generated function that returns an array of all programs inside the generated client:
// Before.
const programs = [
  getSplSystemProgram(), // From spl-core
  getSplComputeBudgetProgram(), // From spl-core
  getSplAddressLookupTableProgram(), // From spl-core
  getCandyMachineCoreProgram(), // From mpl-candy-machine
  getCandyGuardProgram(), // From mpl-candy-machine
];

// After.
const programs = [
  ...getSplCorePrograms(),
  ...getMplCandyMachinePrograms(),
];
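For instance, the generated helper could look something like this (an assumed sketch, not actual generated output):

// Hypothetical generated helper; the getters and module path are assumed.
import {
  getSplSystemProgram,
  getSplComputeBudgetProgram,
  getSplAddressLookupTableProgram,
} from './programs';

export function getSplCorePrograms() {
  return [
    getSplSystemProgram(),
    getSplComputeBudgetProgram(),
    getSplAddressLookupTableProgram(),
  ];
}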
[F] Regarding how instruction accounts and arguments default values work.
Kinobi doesn’t only define types, it can also define values. There are two categories of values:

- ValueNodes such as NumberValueNode or PublicKeyValueNode represent hard-coded values that need to be represented in whichever language is rendered. For instance, you can use them to set token.amount to 1 by default when minting, or a systemProgram instruction account to 1111..1111, so the end-user doesn’t have to fill in that information.
- ContextualValueNodes such as AccountValueNode or ConditionalValueNode only make sense in the context they are defined in. For instance, when setting the default value of the instruction account owner to accountValueNode("authority"), you’re telling Kinobi that the owner account should default to the authority account when not provided. This reduces the amount of inputs the user must provide and offers a better developer experience.

With all that useful information in the tree, we can have fun with the renderers.
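To make that concrete, here is a rough sketch of what those default values could look like on the nodes themselves (shapes simplified and assumed):

// Simplified, assumed node shapes; for illustration only.
const amountArgument = {
  kind: 'instructionArgumentNode',
  name: 'amount',
  // A hard-coded ValueNode: default token.amount to 1 when minting.
  defaultValue: { kind: 'numberValueNode', number: 1 },
};

const ownerAccount = {
  kind: 'instructionAccountNode',
  name: 'owner',
  // A ContextualValueNode: default the owner account to the authority account.
  defaultValue: { kind: 'accountValueNode', name: 'authority' },
};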
[G] Regarding teaming up with Anchor for the IDL spec.
Yes, and not just Anchor; there are plenty of use cases out there, and having a spec that stays generic enough whilst suiting most of these cases is important.
However, the Kinobi tree itself is its own standard outside of the IDL. It can be fed any IDL and will absorb any information it can. For any additional information, such as the ContextualValueNodes mentioned above, the Kinobi config file can be used to apply visitors to the tree and shape it as you wish.
My ideal scenario is that eventually, we grow the current IDL spec (e.g. using Shank for vanilla programs) so that the Kinobi config file is no longer necessary. But I think it’s important for the Kinobi tree to act as an experimental ideal spec that we can play with and iterate on before we move the IDL spec closer to it.
Also note that, if with runtime v2 we’re able to get yet another definition of a program, we can simply create a parser that creates a Kinobi tree from that and we get all the Kinobi benefits for free.
[H] Regarding the validator.cjs config file.
This is just the Amman configuration file. Amman is a wrapper on top of solana-test-validator that offers helpful features for developers.
[I] Regarding non-generated templates.
You’ve asked if things like configs/scripts are generated. They’re not. The only things generated by the Rust or JavaScript renderers are the generated folders under their respective client folders. The reason for this is that program maintainers may want to configure things differently, and they may even want to provide manually written code to go alongside, or expand on, the generated code.
My ideal plan here for the future is to have a template generator that can set up these environments with good default values based on some chosen options — e.g. Anchor vs vanilla Solana; choose your clients: Rust, JS, Python, PHP, etc.; generate CI or not; etc. Or, as you said, "wen npx create-solana-repository" haha.
The closest thing to that right now is the solana-project-template repository, which Febo and I created and keep maintaining to make it super easy for us to bootstrap new Solana projects.
[J] Regarding the kinobi.cjs config file.
The purpose of the Kinobi config file is to:

1. […]
2. […]
3. Render the clients using visitors such as the RenderRustVisitor, RenderJavaScriptVisitor, etc.

[K] Regarding defining some Solana tooling versions in Rust.
I’m not too sure what you meant by this. Would you mind clarifying? Was it maybe because you thought these package.json and Cargo.toml files were generated?
[L] Regarding using Shank for core programs.
Yes, if we’re using vanilla Solana to write them (which I think we should), then Shank is a great tool for generating IDLs from them. It’s also a tool over which we have a great deal of control, and we can add to it to bring the IDL spec closer to the Kinobi tree.
[M] Regarding eliminating Metaplex references in the READMEs.
For sure, this solana-core-programs playground was a copy/pasta from the mpl-toolbox repository and the READMEs need to be rewritten.
[A], [B]: Sounds good!
[C]:
- Instruction data is not only used to decode instructions but also to create them. Meaning it has very unique needs that would not fit in a typical StructTypeNode. For instance, you can tell Kinobi that a particular instruction argument should default to the PDA bump of a provided instruction account. These special values are called ContextualValueNodes and cannot be used everywhere.
I see, so you'd rather configure the instruction data with user-defined customizations, whereas the StructTypeNode is more of a layout.
- Accounts already use the StructTypeNode to define their data. I could in theory relax that to a TypeNode and you could have, for instance, myAccount.data being a simple number. The issue with that is some of the other information present in the AccountNode may need access to a particular field of the account data. For instance, the FieldDiscriminatorNode tells Kinobi that the configured field should be used to distinguish this account from another one. If the account data is a NumberTypeNode, it makes this impossible. That being said, we could add a validation rule that prevents that invariant from happening.
Yeah, again here, considering the configurations involved, maybe it does make more sense. I was coming from more of an IDL perspective on this suggestion. It sounds like it's much more valuable to have these customizations than just a bucket of types and pointers.
- Finally, it is possible for a DefinedTypeLinkNode to reference a type outside of the current tree using the importFrom attribute. Whilst this is good for code generation as we can compose generated clients together, it’s pretty bad for explorers unless they also have access to the Kinobi tree of the imported type.
How does this play out if my account state uses a struct from another crate?
[D]:
Another idea this gave me is to return a MaybeNonce account instead of Nonce | null to fully leverage all the MaybeAccount helpers. Then we could call that function fetchMaybeNonce. What do you think?
Yeah! I think this jives well with the API you laid down in @solana/accounts.
[E]:
My plan was to keep track of the generated programs in an array and use the resolveTransactionError method from the @solana/programs […]
Ahhh, I do remember seeing this in one of your PRs. Nice! That example looks great.
An improvement we could make is offering a generated function that returns an array of all programs inside the generated client
What about generating, within the error module, that list you mentioned for the workspace programs, and then just a very simple, light wrapper that drives resolveTransactionError so they don't ever have to build that array and can just reuse it all around try...catch statements across their client? That way they're not hauling around the array, too.
[F]: Thanks for this explanation, it's always nice to glean more insight into how all of this works under the hood!
[G]:
My ideal scenario is that eventually, we grow the current IDL spec (e.g. using Shank for vanilla programs) so that the Kinobi config file is no longer necessary. But I think it’s important for the Kinobi tree to act as an experimental ideal spec that we can play with and iterate on before we move the IDL spec closer to it.
This is a good approach. I would say if you have anything you've been thinking about for a while, or added to Shank IDLs, that isn't in Anchor IDLs, we could strike up some conversation on that.
Also note that, if with runtime v2 we’re able to get yet another definition of a program, we can simply create a parser that creates a Kinobi tree from that and we get all the Kinobi benefits for free.
Yep! It sounds like there will be some BTF-parsing tooling coming out of that effort, and we could build parsing on top of that which feeds perfectly into Kinobi.
[I]:
My ideal plan here for the future is to have a template generator that can set up these environments with good default values based on some chosen options — e.g. Anchor vs vanilla Solana; choose your clients: Rust, JS, Python, PHP, etc.; generate CI or not; etc. Or, as you said, "wen npx create-solana-repository" haha.
That would be interesting. You could even take over the npx create-solana-dapp project DevRel has seen some interest in.
[J]: Cool, I think 2) was the one I was a bit fuzzy on!
[K]: Yeah, I think I was conflating this with what would make much more sense in workspace configs, but some config file to fix the Solana CLI version and only use that in tests/CI (if desired) would be pretty cool. Not sure if Amman supports this or if it can be included in the Amman file. We do this manually in Web3.js with the whole setup-test-validator.sh kit and caboodle.
[C]:
How does this play out if my account state uses a struct from another crate?
In this case, the generated IDL will create a pointer to a missing DefinedTypeNode which you can fix later on in your Kinobi config file. Namely, you would add a new DefinedTypeNode to a program inside your Kinobi tree such that its name matches the missing import.

However, if the imported crate belongs to a program that’s part of its own Kinobi tree, then you can simply update the DefinedTypeLinkNode — i.e. the pointer — by setting its importFrom attribute to the tree it belongs to.
Now, this is an arbitrary string that represents a Kinobi tree in the wild — e.g. splCore for all the core programs. That means it is up to the various renderers to interpret this importFrom string however they want. Typically, this is done by accepting an optional mapping from a “Kinobi identifier” to an actual module or crate. For instance, this is how you would tell the JavaScript renderer how to import the splCore and mplTokenMetadata clients:
kinobi.accept(
  k.RenderJavaScriptVisitor(path, {
    dependencyMap: {
      splCore: '@solana/spl-core',
      mplTokenMetadata: '@metaplex-foundation/mpl-token-metadata',
    },
  })
);
Whilst this all works, this approach currently has the following limitations:

- Until we spec program names — e.g. com.solana.spl-core — and/or offer an on-chain Kinobi tree registry, this will, unfortunately, mean nothing to explorers.

My ideal solution for this in the future relies on having an on-chain registry of ProgramNodes such that the names of the programs are spec’d like com.solana.address-lookup-table. Then instead of linking to a set of programs, we can link to a specific on-chain program either by name or by address. Then, right before building the clients, Kinobi would download the trees of all dependent programs in a dedicated importedPrograms array of its RootNode. It will then trim the content of these imported programs to exactly what they need — i.e. remove unused DefinedTypeNodes — and include the trimmed imported programs in the tree definition so that explorers and renderers have all the information they need within the tree.
[D]:
Nice! I’ve added this to my to-do list.
[E]:
What about generating […] a very simple, light wrapper that drives resolveTransactionError?
Do you mean generating a function like resolveSplCoreTransactionError? I think that’s a good idea but it might not always be enough to resolve the error.
For instance, say I generate a client for the Token Metadata program which itself CPIs into the Token program. Now I’ll have to use both resolveMplTokenMetadataTransactionError and resolveSplTokenTransactionError, and even resolveSplCoreTransactionError in case the error happens in the System program.
try {
  // Send transaction.
} catch (error) {
  // Try each client's resolver until one of them resolves the error.
  let resolvedError = resolveMplTokenMetadataTransactionError(error, tx);
  if (resolvedError === error) {
    resolvedError = resolveSplTokenTransactionError(error, tx);
  }
  if (resolvedError === error) {
    resolvedError = resolveSplCoreTransactionError(error, tx);
  }
  // Now we've checked for all possible programs.
}
I think it’s probably easier and less prone to errors to just do the following instead:
try {
  // Send transaction.
} catch (error) {
  const resolvedError = resolveTransactionError(error, tx, [
    getMplTokenMetadataProgram(),
    getMplTokenProgram(),
    ...getSplCorePrograms(),
  ]);
  // Now we've checked for all possible programs.
}
P.S.: That’s why the Umi framework has got a ProgramRepositoryInterface. Perhaps when I refactor Umi to be a thin layer on top of the new Web3.js, it can help people manage their plugged-in programs a bit better.
[I]:
Yes, I’ve seen that a lot of nice work has gone into the create-solana-dapp template generator, so it’s definitely worth adding to it as opposed to building our own separate one. The only difference for us, I’d say, is that program maintainers (like Metaplex) may not actually need a dApp and may just want to expose client libraries to their users. But that’s always something we can handle in the options of the generator, like:
# What would you like to generate?
1. A JavaScript application with program clients baked in.
2. A JavaScript application with detached program clients.
3. Program clients only.
[J]:
That’s a good question. I’m not sure if Amman supports fixing the Solana version but, if not, that’s always something we could PR in. This won’t be enough for CI though, since we’ll also need the fixed Solana version when building the environment. In the Solana project template I mentioned in my previous message, we handle that via a committed .env file inside the .github folder. This works with Amman out of the box since it will use whatever default Solana version runs in your environment.
[C]:
My ideal solution for this in the future relies on having an on-chain registry of ProgramNodes such that the names of the programs are spec’d like com.solana.address-lookup-table. Then instead of linking to a set of programs, we can link to a specific on-chain program either by name or by address. Then, right before building the clients, Kinobi would download the trees of all dependent programs in a dedicated importedPrograms array of its RootNode. It will then trim the content of these imported programs to exactly what they need — i.e. remove unused DefinedTypeNodes — and include the trimmed imported programs in the tree definition so that explorers and renderers have all the information they need within the tree.
Are you thinking that developers would send a transaction to this program to register their client when publishing to NPM? They may only have to do this once, since you're really only interested in the name of the client. I'm not sure how you'd verify in this instruction that the client is in fact generated with Kinobi and is thus compatible with the visitors.
Another thing to mention is that Program Runtime v2 is built around using Solana as an on-chain Cargo registry, in which on-chain programs can be imported as dependencies directly from the blockchain. This aligns really well with that approach, but since we're dealing with off-chain clients, we'd have to get a bit more crafty. Regardless, laying this down in the time running up to PRv2 could be extremely valuable.
[E]:
Do you mean generating a function like resolveSplCoreTransactionError? I think that’s a good idea but it might not always be enough to resolve the error.
No, I meant that if you have, say, 4 programs in your generated client, you could have one error.ts file that gets generated, where a list is created within one composite error-resolving function. Basically the exact same thing as the example you wrote in your first reply, but already pre-defined and not requiring me to build that list.
My plan was to keep track of the generated programs in an array and use the resolveTransactionError method from the @solana/programs like so:
I could be mistaken about what you meant here. Does the developer, when defining one or more try...catch statements, have to compose this programs array in order to use the resolveTransactionError function? If so, we could create the programs array inside error.ts and offer a resolveTransactionErrorFromMyClient (whatever), and that way I just feed it the error, no array.
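Something like this, say, where all the names are made up and I'm assuming resolveTransactionError keeps the signature from your example:

// Hypothetical generated error.ts; names are illustrative.
import { resolveTransactionError } from '@solana/programs';
import {
  getSplSystemProgram,
  getSplComputeBudgetProgram,
  getSplAddressLookupTableProgram,
} from './programs'; // assumed generated module

// Built once by the generator so users never assemble it by hand.
const programs = [
  getSplSystemProgram(),
  getSplComputeBudgetProgram(),
  getSplAddressLookupTableProgram(),
];

export function resolveTransactionErrorFromMyClient(error: unknown, transaction: unknown) {
  return resolveTransactionError(error, transaction, programs);
}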
[I]: Worth exploring for sure, but I also don't think it's a bad thing if we decided to roll a different one for what you're looking to do with clients and not dApps.
[J]: Hmmmmmm perhaps we can explore this in the core programs repo as we go along.
[C]:
Yeah, I'd like to wait to see the full potential of runtime v2 before thinking of a more concrete solution, but essentially the idea is to have an on-chain registry that contains information such as: […]

And then, every time you push a significant update to your program, you'd export your Kinobi tree as a JSON file, store it somewhere and send a transaction to the registry program in order to append a new version to your program definition.

Anyway, that's the rough idea. It'll likely be very different, and I'm hoping runtime v2 can take a lot of the responsibility away.
[E]:
I think we've got a bit of a misunderstanding here haha. What I was trying to say is, even if we had a generated function like resolveTransactionErrorFromMyClient, it would only contain the programs defined inside your client. The reality is that your error may have been thrown by a program outside of the ones defined in your client — even if you sent the transaction to one of these programs — because of CPI calls. So, the end-user will likely end up doing something like: call resolveTransactionErrorFromMyClient, then if it's not resolved, call resolveTransactionErrorFromMyOtherClient, etc.
[C]: Sounds like an NFT 👀
[E]: Gotcha. I see what you're saying now. Maybe it's not worth including then, but I think the use-case capturing net would increase in size a bit with resolveTransactionErrorFromMyClient, I just don't know how much. Could be minimal. Up to you my friend!
[C]: lmao, I read that at the pub last night and almost spat my beer. I mean, seeing your registered Kinobi tree as an NFT is an option lol. NFTs don't have the monopoly on off-chain URI pointers though! 😤
[E]: I think the current way is good enough to lay the foundations of program error resolution. Then, we can imagine a light framework on top of Web3.js with a plugin system like Umi which would improve the developer experience greatly on that. For instance, clients could export a generic plugin function that expects a program repository and registers it for us.
const umi = createUmi()
  .use(splCorePlugin())
  .use(splTokenPlugin())
  .use(mplTokenMetadataPlugin());

try {
  // Send transaction...
} catch (error) {
  const resolvedError = umi.programs.resolveError(error, tx);
}
[E]: Alright, sounds good. Let's table the error stuff for now. I see the issue you've linked. Nice!
Hey! Here's a detailed review of some of my thoughts going through the generated code from Kinobi in this repository.
I'm going to attempt to use line separators to isolate thought points.
Program Getters
Do we like createJoeProgram() or getJoeProgram()?

Shared Helpers
Would it make more sense to have a package that houses all of this shared stuff? And then the generated code just imports it, like generated programs do with @solana/addresses. https://github.com/lorisleiva/solana-core-programs/blob/main/clients/js/src/generated/shared/index.ts

Accounts and Types in IDL or Generated Modules
Anchor set the pattern for the account-type relationship, and a lot of people want it changed, including Acheron. In Anchor IDLs, any struct that was annotated with #[account] is inferred to be the account state layout of a program-owned account. As a result, Anchor places this layout under "accounts" in the IDL, not "types". However, you may choose to use some struct, let's say JoePdaData, as a data structure for some other purpose, outside of specifically account state. I'm curious if we have any interest in making all types land in "types" and then having accounts just point to their respective type for their associated state? On a related note, this concept can also apply to instruction data, but the instructions themselves are a repeatable interface, so we can leverage an InstructionDecoder/InstructionEncoder. This could be overkill, but you'd end up with a slew of program-defined types, and then instructions and accounts just point to these types where used. Like a canonical Rust program, generated types (structs, enums, etc.) would be layouts that can be used anywhere, for anything.
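A rough sketch of what that could look like in an IDL (the exact field names here are hypothetical, not an existing Anchor or Kinobi spec):

// Hypothetical IDL shape: all layouts live under "types" and accounts
// merely point at the type that describes their state.
const idl = {
  types: [
    {
      name: 'JoePdaData',
      type: { kind: 'struct', fields: [{ name: 'bump', type: 'u8' }] },
    },
  ],
  accounts: [
    // The account references its state type instead of inlining the layout.
    { name: 'JoePda', data: { defined: 'JoePdaData' } },
  ],
};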
Encoder/Decoder Syntax

I still love the encoder/decoder syntax. Man.
Safe Fetch
These are slick and useful, but does the keyword safe convey the value provided by these? In my opinion, something like nullable makes more sense, even though it looks significantly less pretty. One could make the argument safe actually applies more to the ones with assertions.

Custom Program Errors
The error setup is nice and robust, I dig it. Have you thought of how we might introduce a generic function (like in the "shared" tooling) that can interpret a transaction simulation error and convert to some error from your package's list of generated program errors? Something like:
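(Roughly what I have in mind; the helper name and shapes below are made up, purely to illustrate:)

// Illustrative only: the helper name and shapes are assumptions, not an existing API.
type GeneratedProgramError = { name: string; code: number; message: string };

// Given a simulation error of the shape { InstructionError: [index, { Custom: code }] },
// look the custom code up in the package's list of generated program errors.
function resolveCustomProgramError(
  simulationError: { InstructionError?: [number, { Custom?: number }] },
  generatedErrors: GeneratedProgramError[],
): GeneratedProgramError | null {
  const customCode = simulationError.InstructionError?.[1]?.Custom;
  if (customCode === undefined) return null;
  return generatedErrors.find((e) => e.code === customCode) ?? null;
}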
The Generator is Smart
I'm really curious how the generator knows to build out this logic?
IDL Interfaces
Side note: is now a good time to team up with Anchor and establish an IDL interface? If there are ideas you have for expanding on IDLs/trees, we might want to make sure it always implements some interface (which may be defined already) to make sure wallets and other tooling don't have to freak out trying to keep up.
We can also work in conjunction with Anchor to publish upgrades to the interface where mutually beneficial. (I can see you have a few new items in your IDLs)
Validator Config
I think I know what this does, but don't remember us talking about it on the codegen call. Love it though! https://github.com/lorisleiva/solana-core-programs/blob/main/configs/validator.cjs
Test Shell Scripts
Are these generated? https://github.com/lorisleiva/solana-core-programs/blob/main/configs/scripts/client/test-js.sh
BytesOnChain Configs
I can set up a config like this if I want to track bytes created on chain? Is that the primary purpose of this config entry?
Solana Tools Toml
Should we define some Solana tooling versions in the Kinobi/Amman config? i.e. solana-program, @solana/web3.js, @solana/spl-token? Maybe even CLI compatibility?

We could even go so far as to leverage what would basically be a Loris-version of rust-toolchain.toml and configure the proper Solana toolkit for CLI, test validator, etc. for local testing within the repository.

I think my limited rudimentary knowledge of Amman is probably showing here in that last msg, but you get the idea. Seems like you've likely got a lot of this stuff in here already, I just can't tell!
Summary
Overall I love it. I think everything is looking nice.
I also dig the repository layout. A couple random parting questions:

- wen npx create-solana-repository?
- […]