FMInference / FlexLLMGen
Running large language models on a single GPU for throughput-oriented scenarios.
Apache License 2.0 · 9.18k stars · 548 forks
Create requirements.txt #59
Closed · Bazla24 · closed 1 year ago
merrymercy commented 1 year ago
Closed due to inactivity. Feel free to reopen.
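The issue body is not shown, but the request is for a pinned dependency file. Below is a minimal sketch of what such a requirements.txt might contain; the package names and version bounds are assumptions based on the project running Hugging Face models on PyTorch, not taken from the repository, and should be checked against the actual imports in the source tree before use.

```text
# Hypothetical requirements.txt sketch for FlexLLMGen (not from the repository).
# Package names and version bounds are assumptions; verify against the project's imports.
torch>=1.12
transformers>=4.24
numpy
tqdm
attrs
pulp
```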