Closed louisinger closed 3 months ago
Dockerfile may need an update? SQLite isn't embedded?

`ARK_DB_TYPE=sqlite` should be enough to switch to the sqlite DB (besides migrations).
> Dockerfile may need an update? SQLite isn't embedded? `ARK_DB_TYPE=sqlite` should be enough to switch to the sqlite DB (besides migrations).
Exactly, we need to copy the migration files inside the container as well. We can do that later, but let's test a docker image locally first.
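Something along these lines in the Dockerfile would bring the migration files into the image (the source and destination paths here are guesses, not the actual repo layout):

```dockerfile
# copy the SQLite migration files next to the binary
# (paths are hypothetical; adjust to the repo layout)
COPY internal/infrastructure/db/sqlite/migration/ /app/migration/
```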
This PR is not ready to be merged. The repositories are not correctly handling concurrent requests (SQLITE_BUSY errors while running the integration tests). WIP
> Exactly, we need to copy the migration files inside the container as well. We can do that later, but let's test a docker image locally first.
The migrations are hardcoded currently. We need to move to sqlc for better handling of migrations and query definitions.

cc @louisinger
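With sqlc, queries live in annotated `.sql` files and the Go code is generated from them. A minimal example of the annotation format (the table and query names are made up for illustration, not the actual schema):

```sql
-- name: GetRoundWithId :one
SELECT * FROM round WHERE id = ? LIMIT 1;

-- name: UpsertRound :exec
INSERT INTO round (id, starting_timestamp) VALUES (?, ?)
ON CONFLICT(id) DO UPDATE SET starting_timestamp = excluded.starting_timestamp;
```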
We have always used this Go package in other Go projects so far. You have to bring the migration files inside the docker container.
To avoid concurrent writes you should:

a) refactor the tables so that, if two components are likely to write at the same time, they write to two separate tables;
b) use a mutex on a holder structure so there is only one writer at a time;
c) if you get a SQLITE_BUSY error, simply handle it in the code and keep retrying until the write goes through.
> To avoid concurrent writes you should a) refactor the tables so that, if two components are likely to write at the same time, they write to two separate tables b) use a mutex on a holder structure so there is only one writer at a time c) if you get a SQLITE_BUSY error, simply handle it in the code and keep retrying until the write goes through.
https://github.com/ark-network/ark/pull/180/commits/8f993926a3a9330a4933b1ab06666083bd644f7a fixes the SQLITE_BUSY errors. Still, the integration tests are hanging (but they no longer return SQLITE_BUSY errors).
Why don't we have an events table in SQLite as well? Then we'd have a single DB for all the repos, no?
> To avoid concurrent writes you should a) refactor the tables so that, if two components are likely to write at the same time, they write to two separate tables b) use a mutex on a holder structure so there is only one writer at a time c) if you get a SQLITE_BUSY error, simply handle it in the code and keep retrying until the write goes through.
>
> https://github.com/ark-network/ark/pull/180/commits/8f993926a3a9330a4933b1ab06666083bd644f7a fixes the SQLITE_BUSY errors. Still, the integration tests are hanging (but they no longer return SQLITE_BUSY errors).
But if we don't allow more than one open connection, does that mean we can serve a single user and a single request at a time? (writes, that is? I guess reads are not counted toward SQLITE_BUSY errors?)
> Why don't we have an events table in SQLite as well? Then we'd have a single DB for all the repos, no?
It seems that we never read from the event DB (we only push events without reading them back once inserted). That's why we decided to re-use the badger implementation. cc @altafan
> But if we don't allow more than one open connection, does that mean we can serve a single user and a single request at a time? (writes, that is? I guess reads are not counted toward SQLITE_BUSY errors?)
Yes, readers are not counted; several processes may read at the same time.
Ok, but if we can only open one connection at a time, what will happen if 100 users join a round (and each needs to write)?
This PR adds a new db type `sqlite` in config, associated with `RoundRepository` and `VtxoRepository` implementations in the `infrastructure/db/sqlite` package. It closes #177.

@altafan @tiero please review