-
### Celestia Node version
v0.13.2
### OS
docker/kubernetes
### Install tools
_No response_
### Others
_No response_
### Steps to reproduce it
1. Monitoring and alerting systems to detect when…
-
```
15:54:14.919 [xtdb.compactor-5] ERROR xtdb.compactor - Error running compaction job.
org.apache.arrow.vector.util.OversizedAllocationException: Memory required for vector is (2147483648), whic…
```
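The size in the error is telling: 2147483648 bytes is exactly 2^31, one byte past the largest value a signed 32-bit integer can hold, which is (by default, as far as we understand it) the ceiling the Java Arrow implementation enforces on a single vector buffer. A quick stdlib-Python check of the arithmetic:

```python
# The failing request from the log above, in bytes.
required = 2_147_483_648

# Java's signed 32-bit ceiling. Assumption: Arrow's default per-buffer
# allocation cap equals Integer.MAX_VALUE; the exact property for
# raising it is not shown in this log.
INT32_MAX = 2**31 - 1

print(required == 2**31)     # the request is exactly 2 GiB
print(required > INT32_MAX)  # one byte over the 32-bit limit
```

So the compaction job is not running out of machine memory; it is asking for a single vector just large enough to overflow a 32-bit size.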
-
The Scylla logs no longer show compaction information by default, leaving us to query the compaction history table.
However, there is valuable information that doesn't make it into the co…
-
Background:
Running a delete operation produces many files that still contain the deleted row data; as time goes on, the store accumulates stale data and becomes slower and slower, s…
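The background above can be sketched with a toy LSM-style store: deletes append tombstones rather than reclaiming space, so the files keep growing with stale entries until a compaction pass rewrites them. A minimal stdlib-Python illustration (the names `put`, `delete`, and `compact` are ours, not any engine's API):

```python
# Toy append-only store: deletes write tombstones instead of
# reclaiming space, so the log accumulates stale records.
log = []  # list of (key, value) pairs; value=None marks a delete

def put(key, value):
    log.append((key, value))

def delete(key):
    log.append((key, None))  # tombstone; space is NOT reclaimed

def compact():
    """Rewrite the log, keeping only the latest live value per key."""
    latest = {}
    for key, value in log:
        latest[key] = value
    return [(k, v) for k, v in latest.items() if v is not None]

for i in range(100):
    put(i, f"v{i}")
for i in range(90):
    delete(i)

print(len(log))        # 190 entries, mostly stale
compacted = compact()
print(len(compacted))  # 10 live entries survive compaction
```

Until `compact` runs, every read has to skip over the 180 dead entries, which is exactly the "slower and slower" effect described above.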
-
- [x] Fix for crawlers
- [x] Fix the setDensityMaskedParallelogram on L155 in ssSCspec
- [x] Need to set cultivatorDecompactionDelta in ssSC. Get the value from the active cultivator.
- [x] Increa…
-
## My Environment
* __ArangoDB Version__: 3.12.0 Enterprise
* __Deployment Mode__: Single Server
* __Deployment Strategy__: systemd
* __Infrastructure__: own
* __Opera…
-
```bash
TEST_TMPDIR=/tmp/db1 ./db_bench_900 --benchmarks=fillrandom --num=12345678 --write_buffer_size=51200 --max_bytes_for_level_base=52428800 --level0_file_num_compaction_trigger=4 --statistics=1
…
-
We have several issues about adding a "copy" utility that compacts arrays, i.e. truncates all sliced arrays, child arrays, and buffers to just the part that is needed to represent the data (http…
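The pinning problem such a utility would solve is easy to demonstrate with Python's own zero-copy slicing: a `memoryview` slice, like a sliced Arrow array, keeps the entire underlying buffer alive until you materialize a compacted copy. This is a stdlib analogy, not Arrow's API:

```python
# A large backing buffer and a tiny zero-copy slice of it.
big = bytearray(10_000_000)  # ~10 MB
view = memoryview(big)[:16]  # zero-copy: still references `big`

# The 16-byte slice pins the whole 10 MB buffer in memory.
print(view.obj is big)       # True: the full buffer is kept alive

# "Compacting" means materializing just the needed part, so the
# original buffer can be freed once other references drop.
compacted = bytes(view)
print(len(compacted))        # 16 bytes, independent of `big`
```

A compacting copy utility would do the analogue of `bytes(view)` recursively over sliced arrays, child arrays, and their buffers.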
-
**Describe the problem**
We use a default of 3 cores per store to run compactions (see `COCKROACH_ROCKSDB_CONCURRENCY`). For multi-store setups, with insufficient cores, that may be far too many. I…
-
Hi Team,
I am new to Grafana Tempo; we want to get traces from our backend APIs into the Grafana Tempo dashboard.
As of now we have configured Grafana Tempo (2.5.0) on K8s using Helm charts. We are a…