-
### What happened?
Consider using one of the continuous profiling tools instead of on-demand profiling:
- https://github.com/profefe/profefe (storage: BadgerDB, S3, Google Cloud Storage and ClickH…
-
## Objective
Ensure that Shannon scales both on-chain & off-chain.
## Origin Document
This issue is intended to be a living document to keep track of all related efforts.
## Identified iss…
okdas updated 3 weeks ago
-
Currently dymint uses only `github.com/dgraph-io/badger/v3`.
We should aim to include at least `goleveldb` as an additional (maybe default?) option.
From tendermint, for reference:
```
# Database ba…
```
-
During our investigation of why the size of our database was endlessly growing, even when no data was being written to cete, we figured out that there is an important design flaw in how BadgerDB and R…
-
Implement the `DB` interface (from `tendermint/tendermint/libs/db` package) using BadgerDB (https://github.com/dgraph-io/badger) so it can be used as a backend in the IAVL store (and also the TM store…
-
The transaction implementation in BadgerDB is critical for the Kernel consensus, and we can't make the core consensus implementation rely on such a large external project which we have no deep underst…
-
We're currently in the process of upgrading the RN key-value store from `badgerdb/v2` to `badgerdb/v3`. However, it became apparent that the read latency of this library may be too high for the amount o…
-
Hello,
I suspect that under certain conditions the Upsert method creates duplicate records.
I have tried reproducing the issue but haven't managed to yet - I will try more and update here with f…
-
```
// WriteBlock writes one or more blocks to the underlying writer.
func (s *Flusher) WriteBlock(blocks []block.Block, schema typeof.Schema) error {
if s.writer == nil || len(blocks) == 0 …
-
I am thinking of RocksDB or ClickHouse to implement the metadata store.
But RocksDB stores the newest data in the higher levels and the oldest data in the lower levels, which is not good for our search.