EmbeddedLLM / JamAIBase

The collaborative spreadsheet for AI. Chain cells into powerful pipelines, experiment with prompts and models, and evaluate LLM responses in real-time. Work together seamlessly to build and iterate on AI applications.
https://www.jamaibase.com/
Apache License 2.0

Heads up: Infinity:0.0.42 now supports multiple models #5

Closed by michaelfeil 2 months ago

michaelfeil commented 5 months ago

Example:

port=7997
# Embedding model
model1=michaelfeil/bge-small-en-v1.5
# Reranking model
model2=mixedbread-ai/mxbai-rerank-xsmall-v1
volume=$PWD/data

docker run -it --gpus all \
 -v $volume:/app/.cache \
 -p $port:$port \
 michaelf34/infinity:latest \
 v2 \
 --model-id $model1 \
 --model-id $model2 \
 --port $port
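
Once the container is up, each model is addressed by its model id in the request body. A minimal sketch of querying both, assuming Infinity's default OpenAI-style /embeddings route and its /rerank route (check the server's /docs page for the exact paths in your version):

# Embeddings from model1
curl http://localhost:$port/embeddings \
 -H "Content-Type: application/json" \
 -d '{"model": "michaelfeil/bge-small-en-v1.5", "input": ["hello world"]}'

# Reranking with model2
curl http://localhost:$port/rerank \
 -H "Content-Type: application/json" \
 -d '{"model": "mixedbread-ai/mxbai-rerank-xsmall-v1", "query": "what does Infinity serve?", "documents": ["Infinity serves embedding and reranking models.", "Bananas are yellow."]}'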

Have fun!

jiahuei commented 5 months ago

Looks great! Thanks for the heads up 😀

jiahuei commented 2 months ago

Closing for now; we currently assign one GPU per model by launching separate Infinity instances.
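
Roughly, that setup looks like two pinned instances, one per GPU. The ports and device indices below are illustrative rather than our exact deployment:

# One Infinity instance per GPU, one model each (illustrative values)
docker run -d --gpus '"device=0"' \
 -v $PWD/data:/app/.cache \
 -p 7997:7997 \
 michaelf34/infinity:latest \
 v2 --model-id michaelfeil/bge-small-en-v1.5 --port 7997

docker run -d --gpus '"device=1"' \
 -v $PWD/data:/app/.cache \
 -p 7998:7998 \
 michaelf34/infinity:latest \
 v2 --model-id mixedbread-ai/mxbai-rerank-xsmall-v1 --port 7998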