Open Maxim12313 opened 2 weeks ago
Ways to measure how well the model is performing:
- Perplexity
- Human Eval (optional)
This is on feat/evaluation: https://github.com/MichiganDataScienceTeam/F24-mini-copilot/tree/feat/evaluation
I will be gone this weekend, so I'm just opening this issue now so anyone can get started.
Perplexity is done; human eval remains if anyone is interested.
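For anyone picking this up, here's a minimal sketch of how perplexity can be computed for a causal LM, i.e. the exponential of the average per-token negative log-likelihood over a held-out set. This is not the code on the `feat/evaluation` branch; the model name, function name, and sample texts below are placeholders for illustration.

```python
# Minimal perplexity sketch (assumes a Hugging Face causal LM checkpoint and a
# list of held-out text/code strings; these names are not from the repo).
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def perplexity(model_name: str, texts: list[str], device: str = "cpu") -> float:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).to(device).eval()

    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(text, return_tensors="pt").to(device)
            # Passing labels=input_ids makes the model return the mean
            # cross-entropy over the shifted targets (seq_len - 1 tokens).
            out = model(**enc, labels=enc["input_ids"])
            n_predicted = enc["input_ids"].size(1) - 1
            total_nll += out.loss.item() * n_predicted
            total_tokens += n_predicted

    # Perplexity = exp(average negative log-likelihood per token)
    return math.exp(total_nll / total_tokens)


# Example usage (placeholder model and sample):
# print(perplexity("gpt2", ["def add(a, b):\n    return a + b"]))
```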