digitalfabrik / integreat-chat


Evaluate MiniLLM Performance #10

Open dasgoutam opened 7 months ago

dasgoutam commented 7 months ago

Given a consistent retrieval mechanism, evaluate the performance of MiniLLMs.

Select a list of MiniLLMs to evaluate and compare their performance.

At a later stage, the best-performing MiniLLM can be compared against larger models (higher parameter counts). A rough comparison sketch follows.
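
A minimal sketch of what such a comparison could look like, assuming the models are served through a local Ollama instance; the candidate model tags, the sample question/context pair, and the scoring step are placeholders, not part of the repository:

```python
"""Sketch: compare several small LLMs on identical question/context pairs.
Assumes a local Ollama server; model tags and samples are hypothetical."""
import requests

CANDIDATE_MODELS = ["llama3.2:1b", "llama3.2:3b", "qwen2.5:1.5b"]  # hypothetical list
OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

# Fixed retrieval output so every model answers from the same context.
EVAL_SAMPLES = [
    {
        "question": "How do I register my address?",
        "context": "Residents must register at the local registration office within two weeks of moving.",
    },
]


def answer(model: str, question: str, context: str) -> str:
    """Ask one model to answer a question using only the given context."""
    prompt = (
        "Answer the question using only the context.\n\n"
        f"Context: {context}\n\nQuestion: {question}"
    )
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]


for model in CANDIDATE_MODELS:
    for sample in EVAL_SAMPLES:
        print(model, "->", answer(model, sample["question"], sample["context"]))
        # Scoring (human rating, LLM-as-judge, etc.) would be added here.
```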

svenseeberg commented 5 months ago

Currently implemented in Google Colab.

svenseeberg commented 1 month ago

Maybe we can use llama3.2:3b to classify the incoming messages as "question"/"not a question".

svenseeberg commented 1 month ago

> Maybe we can use llama3.2:3b to classify the incoming messages as "question"/"not a question".

https://github.com/digitalfabrik/integreat-chat/blob/main/integreat_chat/core/settings.py#L54-L56
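
A minimal sketch of the classification idea, not the repository's implementation: it calls llama3.2:3b through a local Ollama server and treats the reply as a binary label. The prompt wording and endpoint are assumptions.

```python
"""Sketch: label an incoming message as a question or not, using llama3.2:3b
via a local Ollama server. Prompt and endpoint are assumptions."""
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint


def is_question(message: str) -> bool:
    """Return True if the model classifies the message as a question."""
    prompt = (
        "Classify the following chat message. Reply with exactly one word: "
        '"question" or "statement".\n\nMessage: ' + message
    )
    response = requests.post(
        OLLAMA_URL,
        json={"model": "llama3.2:3b", "prompt": prompt, "stream": False},
        timeout=60,
    )
    response.raise_for_status()
    return "question" in response.json()["response"].strip().lower()


print(is_question("Where can I find a German course?"))  # expected: True
```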