One way to overcome the docker workflow issue may be to package ocean inside tdexd, but we may lose control by having two processes in the same container.
That being said, we can make the migration of tdexd data (trades, webhooks, etc.) automatic and "force" the developer to run ocean.
I would not mix migration code into the tdex-daemon repo but create a separate repo. I don't think we would benefit much from automating this and dockerizing the new binary together with tdex/ocean; on the contrary, I think it would complicate the deployment pipeline. I think the migration process should be more of a manual one in which the new migration binary, once run, would produce the "artifacts" necessary to migrate from version x to version y, like a badger/sql dump and a datadir zip that would be easy to import (not sure how badger backup/restore works).
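For reference, badger does ship a built-in backup/restore pair (`DB.Backup`/`DB.Load`). A minimal sketch of how such a dump artifact could be produced and re-imported, assuming the old stores are plain badger DBs and badger v3 - the paths here are made up:

```go
package main

import (
	"log"
	"os"

	badger "github.com/dgraph-io/badger/v3"
)

func main() {
	// Dump the old store into a single backup file (the "artifact").
	src, err := badger.Open(badger.DefaultOptions("./old-datadir/db"))
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	out, err := os.Create("trades.bak")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// since=0 means a full, non-incremental backup.
	if _, err := src.Backup(out, 0); err != nil {
		log.Fatal(err)
	}

	// Restoring into a fresh datadir is the symmetric operation.
	dst, err := badger.Open(badger.DefaultOptions("./new-datadir/db"))
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()

	in, err := os.Open("trades.bak")
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	// 256 caps the number of pending writes during the load.
	if err := dst.Load(in, 256); err != nil {
		log.Fatal(err)
	}
}
```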
> the new migration binary, once run, would produce the "artifacts" necessary to migrate
Agreed, I think this has been the idea all along: we migrate the storage schemas (ACK on supporting badger only for now), and operators will manually need to set up the processes that consume them.
> I don't think we would benefit much from automating this and dockerizing the new binary together with tdex/ocean; on the contrary, I think it would complicate the deployment pipeline.
I agree with this; I also want to add that it is not recommended to run multiple services in a single Docker container.
> I would not mix migration code into the tdex-daemon repo but create a separate repo.
I think the consequences of this choice mostly impact the dockerized version of tdexd. If we move the "migrator" to an external repo, the operator would need to download the images of ocean, the new version of the daemon, and this new service; if we kept this binary here instead, we could embed it into the tdexd image - just like we did for the cli, for example.
For those who run the daemon as a standalone binary, instead, there's no real difference: they would need to either compile the migrator or download the binary from the gh release page. The operation is the same regardless of whether the migrator lives in this repo or in another one. I do think it's worth keeping it here for convenience - the operator already has to download ocean, let's avoid making them download a third binary.
We currently lack a way to migrate an existing provider from `v0.9.1` to the new `v1`. At the moment this can be partially achieved by starting up the required services and interacting with them to restore the wallet and markets. Trade info is not restored in this process.
But this is not really a migration from one version to another, because it is done at runtime; it's more of a "start clean and restore what you can".
The purpose here, instead, is to "translate" all the data stored in the "old" format to the new one and basically prepare the datadirs of the services to be started in v1 (ocean, tdexd).
My proposal is to implement a standalone binary like `cmd/migrations` that reads a `v0.9.1` datadir, translates everything into the new format, cleans the datadir and adds 2 new folders: ocean's datadir - with all wallet data - and tdex-daemon's one - with all metadata like markets, fees, prices, trades, etc. Ideally, once this service finishes its job, it should be enough for the operator to start `oceand` and `tdexd` using the datadirs created by the migration binary.

From the operator's POV this operation should look very simple, while under the hood it actually does quite complex stuff. I think this could be the right approach, but I want your feedback guys @sekulicd @tiero.
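A rough sketch of what the entrypoint of such a binary could look like - all names here (`readV091Datadir`, `writeOceanDatadir`, `writeTdexdDatadir`) are hypothetical placeholders, not actual functions of the repo:

```go
package main

import (
	"flag"
	"log"
	"path/filepath"
)

// oldState is a stand-in for whatever a v0.9.1 datadir contains:
// wallet material on one side, markets/trades/webhooks on the other.
type oldState struct {
	wallet   []byte
	metadata []byte
}

func main() {
	datadir := flag.String("datadir", ".tdex-daemon", "path to the v0.9.1 datadir")
	flag.Parse()

	state, err := readV091Datadir(*datadir)
	if err != nil {
		log.Fatalf("reading old datadir: %s", err)
	}

	// Split the old state into the two new datadirs so that oceand and
	// tdexd can be started right away, pointing at them.
	if err := writeOceanDatadir(filepath.Join(*datadir, "oceand"), state.wallet); err != nil {
		log.Fatalf("writing ocean datadir: %s", err)
	}
	if err := writeTdexdDatadir(filepath.Join(*datadir, "tdexd"), state.metadata); err != nil {
		log.Fatalf("writing tdexd datadir: %s", err)
	}
}

func readV091Datadir(path string) (*oldState, error) {
	// Here: open the old badger stores, macaroons, pubsub state, etc.
	return &oldState{}, nil
}

func writeOceanDatadir(path string, wallet []byte) error {
	// Here: write wallet data in the format oceand expects.
	return nil
}

func writeTdexdDatadir(path string, metadata []byte) error {
	// Here: write markets, fees, prices, trades in the new format.
	return nil
}
```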
Considerations
- A dedicated `pkg/migration` with its own go.mod/go.sum would be in charge of reading all the data from the old version of the datadir - therefore isolating all the old deps in this external package. Practically, this would be a merge of the old `repoManager`, `pubsub` and `macaroon` services (did I forget some?) that write to the datadir. Within `cmd/migration`, instead, we would translate the structs coming from this external pkg to the new format, prepare the new datadirs and flush all the old content (everything's been read, not needed anymore). A toy sketch of this translation step is shown after this list.
- Why migrate only from `v0.9.1` to `v1` and not from any `0.x`? Because every bump of version (from `0.x` to `0.y`) coincides with a breaking change, even at the domain level. `v0.9.1` is the last "old" version of the daemon, meaning that versions after that one are v1 or higher, so the operator must make sure to run exactly `v0.9.1` to migrate to `v1`.
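To make the `pkg/migration`/`cmd/migration` split concrete, here is a toy example of the translation step - both struct shapes below are invented for illustration and do not come from the repo:

```go
package main

import "fmt"

// v091Trade mimics a record as the isolated pkg/migration (with its own
// go.mod pinning the old dependencies) would return it.
type v091Trade struct {
	ID     string
	Status int
}

// v1Trade mimics the new daemon's domain model.
type v1Trade struct {
	ID     string
	Status string
}

// toV1 is the kind of pure translation function cmd/migration would
// apply to every struct read from the old datadir.
func toV1(t v091Trade) v1Trade {
	status := "undefined"
	switch t.Status {
	case 0:
		status = "proposal"
	case 1:
		status = "accepted"
	case 2:
		status = "completed"
	}
	return v1Trade{ID: t.ID, Status: status}
}

func main() {
	fmt.Printf("%+v\n", toV1(v091Trade{ID: "abc", Status: 2}))
}
```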