fmperf-project/fmperf
Cloud Native Benchmarking of Foundation Models
Apache License 2.0 · 21 stars · 10 forks
Issues
#36 Documentations for Open Access · GhaziSyed · closed 2 weeks ago · 0 comments
#35 Resolved issue with environment variables · GhaziSyed · closed 2 weeks ago · 0 comments
#34 generator function fix · GhaziSyed · closed 2 weeks ago · 0 comments
#33 Fix paths that are broken when sweep is executed · maxdebayser · closed 3 weeks ago · 1 comment
#32 Make the base directory for requests configurable · maxdebayser · closed 2 months ago · 1 comment
#31 Move the imports of the tgis grpc stubs to where they are used · maxdebayser · closed 2 months ago · 4 comments
#30 Error in generating tokens with example_vllm.py · GhaziSyed · closed 2 weeks ago · 1 comment
#29 Could we package fmperf more like a Python-library? · tdoublep · opened 4 months ago · 1 comment
#28 Support benchmarking of vLLM advanced features · jvlunteren · closed 4 months ago · 0 comments
#27 Support benchmarking of vLLM advanced features · tdoublep · closed 4 months ago · 1 comment
#26 Add code option to workload generator · tdoublep · opened 4 months ago · 1 comment
#25 adding dump of combined results · bringlein · closed 4 months ago · 0 comments
#24 Unit tests for verifying imports · GhaziSyed · closed 4 months ago · 7 comments
#23 Enabling prometheus support with service monitors · rohanarora · opened 5 months ago · 0 comments
#22 Enable ITL, TTFT, E2E latency computation using mean rather than median · tdoublep · opened 5 months ago · 0 comments
#21 Updates to Kind with GPUs setup · rohanarora · opened 5 months ago · 2 comments
#20 Adding corresponding import for the realistic workload specification · rohanarora · closed 5 months ago · 1 comment
#19 Add heterogeneous data-driven workload generator · mal-zurich · closed 5 months ago · 0 comments
#18 Add some notebooks showing how to plot the data etc · tdoublep · opened 5 months ago · 1 comment
#17 Add linting action (black) · tdoublep · closed 5 months ago · 0 comments
#16 Rework names of workload specifications. · tdoublep · closed 5 months ago · 1 comment
#15 Replace local image with quay.io image · tdoublep · closed 5 months ago · 0 comments
#14 Support for vLLM --enable-chunked-prefill option · jvlunteren · closed 5 months ago · 0 comments
#13 fmperf crashes when vLLM server is started using --enable-chunked-prefill option · jvlunteren · closed 5 months ago · 0 comments
#12 Readme fix · GhaziSyed · closed 5 months ago · 0 comments
#11 Remove transformers dependency · tdoublep · opened 5 months ago · 0 comments
#10 fix to issue: vllm model caching #2 · GhaziSyed · closed 5 months ago · 0 comments
#9 Adding the fix for issue: New workloadspec classes #1, and updating e… · GhaziSyed · closed 5 months ago · 0 comments
#8 make energy metrics as input variables · rinana · closed 5 months ago · 2 comments
#7 Energy metrics as input variables · rinana · closed 5 months ago · 1 comment
#6 removing artifcatory id · GhaziSyed · closed 5 months ago · 0 comments
#5 Add support for server deployment on MIG · yuezhu1 · opened 5 months ago · 7 comments
#4 More Model API support is needed · wangchen615 · opened 5 months ago · 1 comment
#3 GitHub actions need to be configured for testing code and automatic benchmarking · wangchen615 · opened 5 months ago · 3 comments
#2 vllm model caching · GhaziSyed · closed 5 months ago · 0 comments
#1 New workloadspec classes · GhaziSyed · closed 5 months ago · 0 comments