-
I run a Java service with `-Xms1G -Xmx2G`. My MapDB file is 1.4G. Compaction fails with `java.lang.OutOfMemoryError: Java heap space`. While the solution here would be to increase -Xmx to probably 4G i…
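A minimal sketch of the heap-size workaround described above, assuming the service is launched as a plain jar (the jar name is a placeholder, only the `-Xmx` value changes):

```shell
# Raise the max heap so compaction has room for the ~1.4G store;
# -Xms1G keeps the initial heap as before, -Xmx4G is the suggested ceiling.
java -Xms1G -Xmx4G -jar my-service.jar   # my-service.jar is a placeholder
```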
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version: 2.4-latest
- Deployment mode(standalone or cluster):both
- MQ type…
-
Compaction and image-creation are handled by a single thread. That easily becomes a bottleneck, causing e.g. compaction to fall behind. This is easy to see with `pgbench -s1000 -i`, for example. Make it…
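As a repro sketch (the database name is an assumption), the bulk load that makes the single compaction/image-creation thread fall behind:

```shell
# Initialize pgbench at scale factor 1000 (~100M rows in pgbench_accounts);
# the resulting write burst outpaces a single compaction thread.
pgbench -i -s 1000 bench_db   # bench_db is a placeholder database
```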
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version: master/2.4
- Deployment mode(standalone or cluster):
- MQ type(roc…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version:
- Deployment mode(standalone or cluster):
- MQ type(rocksmq, pulsa…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version: 2.4 latest
- Deployment mode(standalone or cluster): both
-…
-
Ultimate boss @easel shared a great insight: instead of using sha256 for compacted keys, allow well-known keys:
`rpk topic config -c "redpanda.well-known-compaction-key: {uint64, uuid}"`…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version: 2.4-20240911-134422ee-amd64
- Deployment mode(standalone or cluster…
-
**Description**: I am trying to maintain OHLC stock data at 1-minute and 5-minute intervals in my in-memory RAM (application). As I am using Redis TimeSeries' compaction policy to calculate OHLC, how can I get an ev…
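For context, OHLC downsampling with RedisTimeSeries compaction is usually set up as one `TS.CREATERULE` per component; a minimal sketch assuming a running Redis with the TimeSeries module (key names and the 1-minute bucket are assumptions):

```shell
# Raw tick series plus four 1-minute compactions, one per OHLC component.
redis-cli TS.CREATE price:raw
for agg in first max min last; do            # open, high, low, close
  redis-cli TS.CREATE "price:1m:$agg"
  redis-cli TS.CREATERULE price:raw "price:1m:$agg" AGGREGATION "$agg" 60000
done
```

The 5-minute series would repeat the same pattern with a 300000 ms bucket.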
-
After importing a large volume of data into Tendis, we found that Tendis's memory usage is far higher than the configured RocksDB BlockCache size. The relevant query and configuration information is as follows:
# Server
redis_version:2.3.6-rocksdb-v5.13.4
redis_git_sha1:532b9a95
redis_git_dirty:0
redis_build_id:13480377323411654960
redi…