Status: Closed (ArshaanNazir closed this 7 months ago)
Running embedding benchmarking with CLI
**Set the API keys:**

```bash
python -m langtest config set OPENAI_API_KEY=<KEY>
python -m langtest config set HUGGINGFACEHUB_API_TOKEN=<HF_TOKEN>
```

**Example usage (single model):**

```bash
python -m langtest benchmark embeddings --model TaylorAI/bge-micro --hub huggingface
```

**Example usage (multiple models):**

```bash
python -m langtest benchmark embeddings --model "TaylorAI/bge-micro,TaylorAI/gte-tiny,intfloat/e5-small" --hub huggingface
```
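The multi-model form passes one comma-separated string to `--model`. A minimal Python sketch of how such an argument can be split into individual model IDs before each is benchmarked in turn (the splitting logic here is an illustrative assumption, not necessarily langtest's actual implementation; the model names are taken from the example above):

```python
# Hypothetical sketch: split a comma-separated --model argument into
# individual model IDs, stripping any stray whitespace around commas.
models_arg = "TaylorAI/bge-micro,TaylorAI/gte-tiny,intfloat/e5-small"
models = [m.strip() for m in models_arg.split(",")]
print(models)
# Each ID in `models` would then be benchmarked separately against the hub.
```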