igor-sosnowicz opened 1 week ago
Additionally, the new examples could be used to create test items measuring the performance of the QG models.
TinyLlama 2.1 1.1B was tested. Unfortunately, both the quality of its answers (based on human evaluation) and its performance are terrible.
Phi 1.5 1.3B fails with a torch.OutOfMemoryError on an Nvidia GTX 1660 Super.
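One possible workaround for the OutOfMemoryError, sketched below as an untested assumption: loading Phi 1.5 with 4-bit quantization via the Hugging Face `transformers` and `bitsandbytes` integration, which may let the 1.3B model fit in the 6 GB of VRAM on a GTX 1660 Super. The model id `microsoft/phi-1_5` and the quantization settings are illustrative, not part of this issue.

```python
def load_phi_quantized(model_id: str = "microsoft/phi-1_5"):
    """Sketch: load Phi 1.5 in 4-bit so it may fit in 6 GB of VRAM.

    Assumptions: `transformers`, `accelerate`, and `bitsandbytes` are
    installed, and a CUDA device is available. Untested on this issue's
    hardware.
    """
    # Deferred imports: the heavy dependencies are only needed at call time.
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        BitsAndBytesConfig,
    )

    # Quantize the weights to 4 bits to cut the memory footprint roughly 4x
    # compared to fp16.
    bnb_config = BitsAndBytesConfig(load_in_4bit=True)

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,
        device_map="auto",  # place layers on the GPU automatically
    )
    return tokenizer, model
```

Whether the quantized model's answer quality is acceptable for QG would still need the same human evaluation that was applied to TinyLlama.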
The original Gemma 2 is a gated model, so access to it is restricted. Using a different version of the model is possible.
However, that requires the llama-cpp-python runtime.
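A minimal sketch of what using such a variant through llama-cpp-python could look like, assuming a locally downloaded GGUF-quantized Gemma 2 file (the path, prompt, and parameters below are hypothetical, not from this issue):

```python
def generate_question(model_path: str, material: str) -> str:
    """Sketch: generate one quiz question from a learning material
    with a GGUF-quantized model via llama-cpp-python.

    Assumptions: llama-cpp-python is installed
    (`pip install llama-cpp-python`) and `model_path` points to a
    local GGUF file, e.g. a quantized Gemma 2 variant.
    """
    # Deferred import: only needed when the function is actually called.
    from llama_cpp import Llama

    llm = Llama(
        model_path=model_path,  # hypothetical local GGUF file
        n_ctx=2048,             # context window for the learning material
        n_gpu_layers=-1,        # offload all layers to the GPU if available
    )
    prompt = (
        "Write one quiz question about the following material:\n"
        f"{material}\n"
    )
    out = llm(prompt, max_tokens=128)
    # llama-cpp-python returns an OpenAI-style completion dict.
    return out["choices"][0]["text"]
```

The same harness could then be pointed at the new, longer learning materials to compare QG quality across the candidate models.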
Compile a database of learning materials that contains longer and more substantial texts.