Configs: The `configs` collection is added to store the `AutoConfig` document. For now, this information is used to determine a model's max length.
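A minimal sketch of the lookup this enables, assuming the `AutoConfig` document is stored as a plain dict keyed by model name (the collection access and field names here are hypothetical stand-ins, not the actual schema):

```python
# Stand-in for a document fetched from the configs collection; the
# max_position_embeddings field mirrors a typical AutoConfig attribute.
configs = {
    "my-model": {"model_type": "llama", "max_position_embeddings": 4096},
}

def get_max_length(model_name: str) -> int:
    """Return the model's max length from its stored AutoConfig document."""
    doc = configs[model_name]
    return doc["max_position_embeddings"]

print(get_max_length("my-model"))  # 4096
```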
For now, the tokenizer endpoint in the sidecar is still used. As for the hash, it is currently matched against the tokenizer's hash; in the future this should be separated and generate its own signature.
EvaluatorLM: OpenAI stopped supporting the `echo` parameter, while vLLM keeps it. We will follow vLLM for now. This change impacts the `get_result` function that was defined for OpenAI; we will keep `get_result` so that it remains compatible with vLLM.
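A hedged sketch of what a `get_result`-style parser for an echoed completion can look like, assuming the legacy OpenAI/vLLM completion response shape where `echo=True` prepends the prompt to `text` and reports the first prompt token's logprob as `None`; the response below is a hand-built stand-in, not real server output:

```python
def get_result(response: dict) -> tuple[str, list[float]]:
    """Extract text and per-token logprobs from an echoed completion."""
    choice = response["choices"][0]
    text = choice["text"]  # includes the echoed prompt when echo=True
    # The first token of an echoed prompt carries no logprob (None), so
    # drop None entries before any downstream scoring.
    logprobs = [
        lp for lp in choice["logprobs"]["token_logprobs"] if lp is not None
    ]
    return text, logprobs

# Hand-built stand-in for a vLLM completion response with echo=True.
fake = {
    "choices": [
        {
            "text": "Hello world!",
            "logprobs": {"token_logprobs": [None, -0.5, -1.2]},
        }
    ]
}
text, lps = get_result(fake)
print(text, lps)  # Hello world! [-0.5, -1.2]
```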
Random seeds: In the new version, the Python, NumPy, and PyTorch seeds were segregated. In our code we generate the seeds randomly; in future versions we can include the seeds as part of the request.
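One way to sketch this: draw an independent seed per library, seed each one, and keep the drawn values so a later version could accept or echo them in the request. The function name and dict keys are illustrative; the NumPy/PyTorch calls are commented out to keep the sketch dependency-free:

```python
import random

def draw_seeds() -> dict[str, int]:
    """Draw one independent seed per library, since Python, NumPy,
    and PyTorch seeding are segregated in the new version."""
    rng = random.SystemRandom()  # OS entropy, unaffected by random.seed()
    return {name: rng.randrange(2**32) for name in ("python", "numpy", "torch")}

seeds = draw_seeds()
random.seed(seeds["python"])
# np.random.seed(seeds["numpy"]) and torch.manual_seed(seeds["torch"])
# would go here; returning `seeds` lets a future request pin them.
```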
Tokenizer signature: Fix a bug when saving the tokenizer and its signature.
SQL query: It now applies a `BETWEEN` over an ID range instead of a list of IDs.
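The shape of the change, sketched with an in-memory SQLite table (the table and column names are illustrative, not the actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE samples (id INTEGER PRIMARY KEY, text TEXT)")
con.executemany(
    "INSERT INTO samples VALUES (?, ?)",
    [(i, f"row {i}") for i in range(10)],
)

# Before: one placeholder per id, e.g. WHERE id IN (3, 4, 5, 6)
# After: a single contiguous range with two bound parameters
rows = con.execute(
    "SELECT id FROM samples WHERE id BETWEEN ? AND ? ORDER BY id", (3, 6)
).fetchall()
print([r[0] for r in rows])  # [3, 4, 5, 6]
```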
Close #89