petermetz opened this issue 3 years ago
@petermetz : I may have a hint on "Something we need to work out is how to manage contract deployment of contracts that need to be on the ledger but are a part of Cactus in the sense that they deliver a feature of Cactus rather than being in user-land."
If I understood your concern correctly, many Solidity smart contracts have an "infrastructure" setup, including contract update/migration capability and contract management capability (see, for example, this). So, a solution could be defining an interface that can bootstrap those functionalities. Specifically, I might want to issue a contract update transaction via Cactus.
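A minimal sketch of the kind of bootstrap interface described above; all names here are hypothetical and not an existing Cactus API:

```typescript
// Hypothetical contract-management interface a plugin could implement so that
// Cactus can issue, e.g., a contract update transaction on the user's behalf.
export interface IContractManagement {
  // Deploy the initial version of the managed contract to the ledger.
  deploy(artifact: unknown): Promise<{ contractAddress: string }>;

  // Issue a contract update/migration transaction, e.g. re-pointing a proxy
  // contract at a newly deployed implementation.
  update(newImplementationAddress: string): Promise<{ transactionHash: string }>;

  // Administrative safety switch exposed by many "infrastructure" contracts.
  setPaused(paused: boolean): Promise<void>;
}
```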
On step 4: still working on that!
> @petermetz : I may have a hint on "Something we need to work out is how to manage contract deployment of contracts that need to be on the ledger but are a part of Cactus in the sense that they deliver a feature of Cactus rather than being in user-land."
> If I understood your concern correctly, many Solidity smart contracts have an "infrastructure" setup, including contract update/migration capability and contract management capability (see, for example, this). So, a solution could be defining an interface that can bootstrap those functionalities. Specifically, I might want to issue a contract update transaction via Cactus.
@RafaelAPB I would love to claim that that's what I thought of, but you are a few steps ahead of me. :P I read the linked content just now and it's been very illuminating. What is described there as "migration" I usually refer to as "disaster recovery" in architect speak. It's something we definitely need to take care of in advance, since anyone running a business based on a contract developed with Cactus will have these same questions about what happens if disaster strikes.
Taking it a bit further, we could also do RTO and RPO analysis [1] of these contracts (more architect speak), which could also be a great addition to the academic research paper; I wouldn't mind volunteering to hash that out in more detail. I'm thinking it would be very useful research for establishing what sort of business continuity guarantees one can expect if their business is run on a DLT/smart contract. Thanks for inspiring me to have these thoughts, as usual. ;-)
[1] Recovery Point Objective = Data Risk. RPO refers to the maximum acceptable amount of data loss an application can undergo before causing measurable harm to the business.
Recovery Time Objective = Downtime. RTO states how much downtime an application can experience before there is a measurable business loss.
> On step 4: still working on that!
Thank you for continuing to do so! I'm hoping that I'll be able to join back in there soon, maybe as early as next week. As I was typing the initial content for this issue I had a eureka moment regarding how to integrate non-Typescript files as resources into the project files in a generic and flexible way (the whole encoding-into-JSON/base64 mechanism that I described is that idea). I'll work on that and will try to make it a project-wide feature of the build system. We'll have to be careful with artifact sizes of course, but I believe it to be our best course of action as long as we are vigilant not to include 20-megabyte binaries, for example.
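A minimal sketch of that build step, assuming Node.js APIs and hypothetical file names; the artifact shape (a "files" map of path to base64 string) is an assumption, not a settled format:

```typescript
// Build-time script: encode a non-Typescript resource (here, Go chaincode
// source) to base64 and emit it into a JSON file that Typescript can import.
import { readFileSync, writeFileSync } from "fs";

const sourcePath = "./MyCoolHtlcChainCode.go"; // hypothetical resource file
const goSourceBase64 = readFileSync(sourcePath).toString("base64");

const artifact = { files: { [sourcePath]: goSourceBase64 } };
writeFileSync(
  "./src/main/json/MyCoolHtlcChainCode.json",
  JSON.stringify(artifact, null, 2),
);
```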
> If I understood your concern correctly, many Solidity smart contracts have an "infrastructure" setup, including contract update/migration capability and contract management capability (see, for example, this). So, a solution could be defining an interface that can bootstrap those functionalities. Specifically, I might want to issue a contract update transaction via Cactus.
This makes a lot of sense. There are a ton of extra steps that need to be done the same way for all contracts when they are deployed on an EVM-based blockchain (Factories, Registries, Proxies), plus helper contracts/libraries which are leveraged by most contracts (Access control, Pausability, SafeMath, Context)...
We could create a separate plugin for EVM deployments where a Factory and the Factory manager accounts (the accounts in charge of migrations) are defined in its configuration, and which handles all deployments and contract presence queries in a deterministic manner (using the CREATE2 opcode so we can compute contract addresses deterministically; see the sketch below).
Decoupling functionality this way would reduce development time for EVM-based plugins and would allow us to manage all migrations in a single place.
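A minimal sketch of the deterministic address computation such a plugin could rely on, assuming ethers.js v5; the factory address, salt, and bytecode inputs are hypothetical:

```typescript
import { utils } from "ethers";

// Pre-compute the address a Factory contract would produce via CREATE2, so
// the plugin can query contract presence before (or instead of) deploying.
function predictCreate2Address(
  factoryAddress: string, // the Factory contract that executes CREATE2
  salt: string, // 32-byte hex value chosen per contract/migration
  creationBytecode: string, // hex-encoded contract creation code
): string {
  const initCodeHash = utils.keccak256(creationBytecode);
  return utils.getCreate2Address(factoryAddress, salt, initCodeHash);
}

// Usage sketch: if provider.getCode(expectedAddress) returns "0x", the
// contract is not deployed yet and the plugin can run the migration;
// otherwise the migration can be skipped.
```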
Added an issue to discuss this: #552
Is your feature request related to a problem? Please describe.
We don't have atomic swap support for cross-ledger transactions.
Describe the solution you'd like
HTLCs implemented for each ledger we have a connector for, plus the plumbing to make it all work with the plugin architecture in a way that is as seamless as possible for developers who'll use Cactus to run atomic transactions and to test applications that depend on atomic transactions for their own, higher-level use cases.
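A minimal sketch, with hypothetical names (not an existing Cactus API), of the common interface each ledger-specific HTLC plugin could implement so higher-level code can run atomic swaps without knowing ledger details:

```typescript
export interface IHtlcPlugin {
  // Lock funds under a hash lock; they can only be claimed with the preimage.
  newContract(req: {
    receiver: string;
    hashLock: string; // e.g. SHA-256 hash of a secret chosen by the initiator
    timeoutSeconds: number; // after this, the sender can reclaim the funds
    amount: string;
  }): Promise<{ htlcId: string }>;

  // Claim the locked funds by revealing the secret (the preimage of hashLock).
  withdraw(req: { htlcId: string; secret: string }): Promise<void>;

  // Reclaim the funds after the timeout elapsed without a withdrawal.
  refund(req: { htlcId: string }): Promise<void>;
}
```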
Contract Deployment
Something we need to work out is how to manage contract deployment of contracts that need to be on the ledger but are a part of Cactus in the sense that they deliver a feature of Cactus rather than being in user-land.
In a way, the contracts are similar to an SQL schema of an RDBMS, so what we could do is have a migrate() method exposed on the common interface definition of plugins that are tied to specific contracts (such as the HTLC plugins). This migrate() method would do the same thing as schema migration methods do in any run-of-the-mill business application that has automated schema migration built in: deploy the latest version of the contract to the ledger.
Note: The definition of "latest version of the contract" should be a compile-time constant of the plugin, where the contract artifacts are encoded as JSON/base64 in such a way that Typescript files can import them at compile time and know the shape of the JSON files. An example for chaincode is provided below (this mechanism is already operational for Web3 ledgers, but not for others, yet).
/packages/cactus-plugin-htlc-fabric/src/main/json/MyCoolHtlcChainCode.json
A JSON file such as the above could be consumed like this:

```typescript
// "files" is the map imported from the JSON artifact above
const goSourceBase64 = files["./MyCoolHtlcChainCode.go"];
const goSource = Buffer.from(goSourceBase64, "base64");
// deploy the chaincode to the Fabric ledger somehow (we have a PoC of
// this working by SSH-ing onto the operating system running the Fabric ledger).
```
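A minimal sketch of how the artifact JSON itself could be shaped and consumed with a compile-time-checked type; the shape is an assumption inferred from the snippet above, not a confirmed format:

```typescript
// With "resolveJsonModule" enabled in tsconfig.json, the JSON artifact can be
// imported directly and Typescript checks its shape at compile time.
import artifact from "../json/MyCoolHtlcChainCode.json";

// Hypothetical artifact shape: relative file paths mapped to base64 contents,
// e.g. { "files": { "./MyCoolHtlcChainCode.go": "cGFja2FnZSBtYWluCg==" } }
const files: { [relativePath: string]: string } = artifact.files;

const goSource = Buffer.from(
  files["./MyCoolHtlcChainCode.go"],
  "base64",
).toString("utf8");
```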
Describe alternatives you've considered
Trusted relays and notary schemes. We should implement those as well, in the form of plugins, so that people can choose whichever one is best for their use cases (each method has its pros and cons, and we want to avoid prescribing the choices wherever possible).
Additional context
In the old codebase we already had a relay mechanism implemented, but it didn't yet support atomic swaps, just regular data transfer. We'll need to bring that back eventually as well. For now it remains deleted from the main tree because of a bunch of CVE issues related to dependencies that we didn't have time to fix, but the code for the contracts we had there can definitely be reused.
cc: @takeutak @sfuji822 @hartm @jonathan-m-hamilton