This actually works as intended, but the backend logging might need to be adjusted in logging.yaml to fulfil the new requirement:
loggers:
  benchmark.run:
    handlers: [ console ]
  backends:
    level: DEBUG
+   handlers: [ console, file_handler ]
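For reference, this works because Python's logging hierarchy propagates records from child loggers to their ancestors' handlers. A minimal sketch of that mechanism, assuming the config is loaded via logging.config.dictConfig and the backend module obtains its logger by name (handler and formatter names here are illustrative, not taken from the repo):

import logging
import logging.config

# Illustrative dict mirroring the logging.yaml excerpt above;
# the formatter/handler details are assumptions, not the repo's actual config.
LOGGING_CONFIG = {
    "version": 1,
    "formatters": {
        "plain": {"format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "plain"},
        "file_handler": {
            "class": "logging.FileHandler",
            "filename": "clembench.log",
            "formatter": "plain",
        },
    },
    "loggers": {
        "benchmark.run": {"handlers": ["console"]},
        "backends": {"level": "DEBUG", "handlers": ["console", "file_handler"]},
    },
}

logging.config.dictConfig(LOGGING_CONFIG)

# A module named backends.huggingface_local_api would typically do:
logger = logging.getLogger("backends.huggingface_local_api")

# The record propagates up to the "backends" logger and is emitted by
# both the console handler and the file handler (i.e. clembench.log).
logger.info("Loading huggingface model config and tokenizer: llama-2-7b-chat-hf")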
Outputs:
2024-03-04 13:48:07,060 - backends.huggingface_local_api - INFO - Loading huggingface model config and tokenizer: llama-2-7b-chat-hf
to console and clembench.log when calling
def test_get_model_for_huggingface_local_logs_infos(self):
    load_model_registry()
    get_model_for("llama-2-7b-chat-hf")
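As a side note, the log output itself could be asserted without depending on the configured handlers, e.g. with unittest's assertLogs on the "backends" logger hierarchy. This is only a sketch under the assumption that load_model_registry and get_model_for are importable from backends; it is not code from the repo:

import unittest

from backends import load_model_registry, get_model_for  # assumed import path


class HuggingfaceLocalLoggingTest(unittest.TestCase):
    def test_get_model_for_huggingface_local_logs_infos(self):
        load_model_registry()
        # assertLogs captures records emitted under the "backends" logger,
        # independently of which handlers logging.yaml attaches to it.
        with self.assertLogs("backends", level="INFO") as captured:
            get_model_for("llama-2-7b-chat-hf")
        self.assertTrue(
            any("Loading huggingface model config" in line for line in captured.output)
        )


if __name__ == "__main__":
    unittest.main()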
I committed the change above. Does it work now for you, @Gnurro?
Yes, the backend info-level logging works as I'm used to again.
After the model registry/v1.0beta update, none of the logger calls in backends/huggingface_local_api.py work anymore. This is an issue for properly running the benchmark using that backend and for testing added models. The logging code in huggingface_local_api.py has not changed, so I suspect the logger initialization or the passing of the name argument in backends/__init__.py is the cause.
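For context, this kind of breakage typically happens when the logger name being passed around no longer lies under the configured "backends" hierarchy, so records stop propagating to the configured handlers. A purely illustrative sketch of that failure mode (the actual code in backends/__init__.py may look quite different):

import logging

# Configure only the "backends" logger, as logging.yaml does.
backends_logger = logging.getLogger("backends")
backends_logger.setLevel(logging.DEBUG)
backends_logger.addHandler(logging.StreamHandler())

# A child logger under "backends" propagates to the configured handler:
logging.getLogger("backends.huggingface_local_api").info("visible")

# A logger created under a different name (e.g. if a bare model name were
# passed instead of the module name) sits outside the "backends" hierarchy,
# so this INFO record is dropped (root's default level is WARNING and no
# handler of its own is attached):
logging.getLogger("llama-2-7b-chat-hf").info("silently dropped")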