-
## Description
When requesting token metrics from an endpoint running an LMI container with a vLLM engine, **non-zero** values are returned for tokenThroughput, totalTokens, and tokenPerRequest (**as…
-
This will require careful consideration: we don't want to complicate the user interface with bells and whistles that add no value, but perhaps we could find a way to cleanly add those additiona…
-
# Environment Details
* Helidon Version: 4.1.x
* Helidon SE or Helidon MP: MP
* JDK version:
* OS:
* Docker version (if applicable):
----------
## Problem Description
Tracking issue for MP …
-
### Component(s)
processor/metricsgeneration
### Describe the issue you're reporting
**Context**
Currently the metrics generation processor simply grabs the first datapoint's value from the metric…
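To make the distinction concrete, here is a minimal sketch (illustrative names only, not the processor's actual types) contrasting the current first-datapoint behavior with one possible alternative that considers every datapoint:

```python
# A metric may carry several datapoints, e.g. one per attribute set.
datapoints = [10.0, 5.0, 7.0]

def first_value(dps):
    """Current behavior: only the first datapoint's value is used."""
    return dps[0]

def summed_value(dps):
    """One possible alternative: aggregate across all datapoints."""
    return sum(dps)

print(first_value(datapoints))   # 10.0
print(summed_value(datapoints))  # 22.0
```

Whether summing, averaging, or a per-attribute-set calculation is the right choice would depend on the metric's semantics.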
-
##### ISSUE TYPE
* Other
##### COMPONENT NAME
~~~
VMWARE
~~~
##### CLOUDSTACK VERSION
~~~
4.18
~~~
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
In m…
-
### Is your feature request related to a problem? Please describe
vmstorage lacks reporting of resource usage information. Some info like cache size, number of concurrent requests, disk IO, C…
-
**Describe the bug**
The query
```
{ false } | rate()
```
returns data even though `{ false }` should match nothing:
![image](https://github.com/grafana/tempo/assets/2272392/8ab049bb-90a1-…
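For contrast, assuming TraceQL's static boolean filters behave as documented, the two queries below should differ: the first matches every span and should produce series, while the second matches no spans and should return no data:

```
{ true }  | rate()
{ false } | rate()
```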
-
I use this command to evaluate on nuscenes with your pretrained weight:
```
python tools/test.py projects/configs/coocc_nusc/coocc_multi_r101_896x1600.py nusc_multi_r101_896x1600.pth --eval=bbox
```
-
The metrics available under host:port/service/metrics/prometheus do not conform to the Prometheus format.
For example, the following metric does not contain a '_sum' value to be able to calculate t…
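For context, Prometheus summaries and histograms expose a `metric_sum` and `metric_count` pair, and an average is typically derived from their deltas between scrapes (in PromQL, `rate(metric_sum[..]) / rate(metric_count[..])`). A minimal sketch of that arithmetic, with made-up scrape values:

```python
# Two consecutive scrapes of a hypothetical metric's _sum and _count
# (values are invented for illustration).
sum_t0, count_t0 = 120.0, 40
sum_t1, count_t1 = 150.0, 50

# Average observed value between the two scrapes:
avg = (sum_t1 - sum_t0) / (count_t1 - count_t0)
print(avg)  # 3.0
```

Without a `_sum` series exposed, this calculation is not possible on the Prometheus side.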
-
During training, the parameters I use are: accumulation_steps=None, amp_opt_level='O1', base_lr=0.01, batch_size=6, cache_mode='part', cfg='./configs/swin_tiny_patch4_window7_224_lite.yaml', dataset='…