Closed mostolog closed 7 years ago
Hi, it seems easy to implement this once the output structure of the JSON report is decided. If you can help me with that, I can add it quickly.
Hi
I don't know if I'll be useful, but here are my two cents. First, help me understand the output of: `rma -s myredis -b all > _all.txt`
```
Match : 0% 0/1 [00:00<?, ?it/s]
Match : 100% 1/1 [00:00<00:00, 1079.34it/s]
```
If this is progress shown during execution, it shouldn't end up in the report output at all; it should be written to stderr so it doesn't pollute a redirected file.
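As a minimal sketch (in Python, assuming nothing about RMA's internals) of keeping progress off the captured report — the function name here is hypothetical:

```python
import sys

def report_with_progress(items):
    """Emit progress to stderr and results to stdout, so that
    `script > report.txt` captures only the report."""
    results = []
    for i, item in enumerate(items, 1):
        # Progress goes to stderr: visible on the terminal, absent from the file.
        print(f"Processing {i}/{len(items)}", file=sys.stderr)
        results.append(item.upper())
    return results

print(report_with_progress(["a", "b"]))  # stdout: ['A', 'B']
```

(For what it's worth, tqdm — which these progress bars look like — writes to stderr by default, so this may just be a matter of not redirecting it.)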
```
Aggregating keys by pattern and type
Apply rules
```
Useful for stdout, but should be omitted in JSON.
```
Processing keys: 0% 0/1 [00:00<?, ?it/s]
Processing List patterns: 0% 0/1 [00:00<?, ?it/s]
Processing List patterns: 100% 1/1 [00:01<00:00, 1.66s/it]
```
Same as first comment.
Server stat

| Stat | Value |
|:-----|------:|
| Total keys in db | 1 |
| RedisDB key space overhead | 96 |
| Used hash-max-ziplist-value | 64 |
| Used set-max-intset-entries | 512 |
| Used zset-max-ziplist-value | 64 |
| Used list-max-ziplist-entries | 512 |
| Used hash-max-ziplist-entries | 512 |
| Used zset-max-ziplist-entries | 128 |
| Used list-max-ziplist-value | 64 |
| Info used_memory_peak_human | 146.72M |
| Info used_memory_lua | 38912 |
| Info used_memory_peak | 153846712 |
| Info mem_fragmentation_ratio | 1.22 |
| Info mem_allocator | jemalloc-3.6.0 |
| Info used_memory_rss | 68235264 |
| Info used_memory | 55815136 |
| Info used_memory_human | 53.23M |
Here are the elements arranged alphabetically, with some notes/questions about them:
```json
{
  "clustered" : false,
  "clusterName" : null,
  "nodes" : [{
    "clusterRole" : null,
    "info" : {
      "mem_allocator" : "jemalloc-3.6.0",
      "mem_fragmentation_ratio" : 1.22,
      "used_memory" : 55815136,
      "used_memory_human" : "53.23 MB",
      "used_memory_lua" : 38912,
      "used_memory_peak" : 153846712,
      "used_memory_peak_human" : "146.72 MB",
      "used_memory_rss" : 68235264
    },
    "name" : "myredis",
    "used" : {
      "hash-max-ziplist-entries" : 512,
      "hash-max-ziplist-value" : 64,
      "list-max-ziplist-entries" : 512,
      "list-max-ziplist-value" : 64,
      "set-max-intset-entries" : 512,
      "zset-max-ziplist-entries" : 128,
      "zset-max-ziplist-value" : 64
    },
    "redisOverhead" : 96,
    "totalKeys" : 1
  }]
}
```
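To make the proposal concrete, here is a sketch of how such a report could be assembled and serialized. The field names are taken from the schema above; the builder function itself is hypothetical:

```python
import json

def build_report(info, used, name, total_keys, overhead):
    # Hypothetical builder mirroring the proposed schema above.
    # `info` and `used` are dicts of the values RMA already collects.
    return {
        "clustered": False,
        "clusterName": None,
        "nodes": [{
            "clusterRole": None,
            "info": info,
            "name": name,
            "used": used,
            "redisOverhead": overhead,
            "totalKeys": total_keys,
        }],
    }

report = build_report(
    info={"used_memory": 55815136, "used_memory_human": "53.23 MB"},
    used={"hash-max-ziplist-entries": 512},
    name="myredis",
    total_keys=1,
    overhead=96,
)
print(json.dumps(report, indent=2, sort_keys=True))
```

Since everything in the structure is a plain dict/list/scalar, `json.dumps` handles it directly with no custom encoder.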
As I'm not an expert on Redis, fields like redisOverhead or zset-max-ziplist-value are totally unknown to me, as is the difference between the info and used objects.
Keys by types
Omit.
| Match | Count | Type | % |
|:------|------:|:-----|--:|
| mylist:mytype | 1 | list | 100.00% |
```json
[
  {
    "name": "mylist:mytype", "type": "list", "usage": "100"
  }
]
```
Processing list key stats
Guess what...?
| Match | Count | Useful | Real | Ratio | Encoding | Min | Max | Avg |
|:------|------:|-------:|-----:|------:|:---------|----:|----:|----:|
| mylist:mytype | 1 | 10 | 64 | 6.40 | embstr [100.0%] | 10 | 10 | 10 |
| Total: | 1 | 10 | 64 | 0.00 | | 0 | 0 | 0 |
```
{
  "lists" : {
    "name" : "mylist:mytype",
    ...
  },
  "total" : {
    "count" : 1,
    ...
  }
}
```
Could you elaborate on those columns? Are those usage in bytes? Items in the queue?
List stat
What's the difference between key stats and list stats? AFAIK I have just one key, and it's listed in both key stats and list stats.
| Match | Count | Avg Count | Min Count | Max Count | Stdev Count | Value mem | Real | Ratio | System | Encoding | Total |
|:-----------|--------:|------------:|------------:|------------:|--------------:|------------:|---------:|--------:|---------:|:--------------------|---------:|
| mylist:mytype | 1 | 123021 | 123021 | 123021 | 0 | 42452924 | 48903808 | 1.15 | 3149824 | linkedlist [100.0%] | 52053632 |
| Total: | 1 | 0 | 0 | 0 | 0 | 42452924 | 48903808 | 0.00 | 3149824 | | 52053632 |
Done in 1.58691 seconds
If this information is useful, the resulting JSON should be divided into two objects: the retrieved data and the query metadata:
```
{
  "request" : { "duration" : 1.58691 },
  "response" : {
    "clustered" : ...
  }
}
```
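A sketch of that envelope, with the duration measured around the query (the `run_with_metadata` wrapper is hypothetical, just to illustrate the split):

```python
import json
import time

def run_with_metadata(query):
    """Wrap the retrieved data in a `response` object and the query
    metadata (here just the duration) in a `request` object."""
    start = time.monotonic()
    data = query()  # whatever callable produces the report body
    duration = time.monotonic() - start
    return {
        "request": {"duration": round(duration, 5)},
        "response": data,
    }

out = run_with_metadata(lambda: {"clustered": False})
print(json.dumps(out))
```

This keeps timing (and potentially the command-line arguments, server address, etc.) out of the data itself, so consumers can ignore the metadata entirely.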
One thing I didn't notice at first, but that IMHO deserves to be fixed:
The utility seems to be "summarizing" the lists, e.g.:

`myfamily:mylist:mytype1` and `myfamily:mylist:mytype2`

are reported as

`myfamily:mylist:*`
Again IMHO, it would be much better to have separate entries for each key. Regards
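To illustrate the two reporting styles, here is a toy sketch in Python. The grouping rule (collapsing the last `:`-separated segment into `*`) is a guess at what the tool does, not RMA's actual heuristic:

```python
from collections import defaultdict

keys = ["myfamily:mylist:mytype1", "myfamily:mylist:mytype2"]

# Pattern-style summary (what the tool currently reports):
# keys sharing a prefix are collapsed into one "prefix:*" entry.
patterns = defaultdict(list)
for key in keys:
    prefix = key.rsplit(":", 1)[0]
    patterns[prefix + ":*"].append(key)
print(dict(patterns))

# Per-key entries (the suggested alternative): one record per key.
per_key = [{"name": k} for k in keys]
print(per_key)
```

Both views could even coexist in the JSON output — the aggregated pattern plus the list of keys it covers.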
Thanks!
Hi
The current output format seems human-readable-friendly, but hard to process in scripts. Have you considered adding a JSON output format to this tool?
Regards
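To illustrate why JSON would help: with a machine-readable report, a consumer script becomes trivial. The field names below follow the structure proposed above in the thread, so they are purely hypothetical:

```python
import json

# Stand-in for `json.load(open("_all.json"))` on a real report.
raw = '{"response": {"nodes": [{"name": "myredis", "totalKeys": 1}]}}'
report = json.loads(raw)

for node in report["response"]["nodes"]:
    print(node["name"], node["totalKeys"])  # prints: myredis 1
```

The same extraction against the current text output would require fragile column parsing.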