clp-research / clembench

A Framework for the Systematic Evaluation of Chat-Optimized Language Models as Conversational Agents and an Extensible Benchmark

[backend] Missing Huggingface backend logging #63

Closed: Gnurro closed this issue 6 months ago

Gnurro commented 7 months ago

After the model registry/v1.0beta update, none of the logger calls in backends/huggingface_local_api.py produce output anymore. This gets in the way of properly running the benchmark with that backend and of testing newly added models. The logging code in huggingface_local_api.py has not changed, so I suspect the cause is the logger initialization, or how the name argument is passed, in backends/__init__.py.
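
For illustration only, a minimal sketch of the kind of name mismatch suspected here; the get_logger helper and the file contents are assumptions, not the actual clembench code:

import logging

# Hypothetical sketch of backends/__init__.py: if a logger factory like this
# stopped qualifying names under the "backends" namespace after the refactor ...
def get_logger(name: str) -> logging.Logger:
    return logging.getLogger(name)

# ... then a module logger created in backends/huggingface_local_api.py like this
logger = get_logger("huggingface_local_api")  # no "backends." prefix

# would no longer inherit the handlers configured for the "backends" logger
# in logging.yaml, and its INFO records would silently go nowhere.
logger.info("Loading huggingface model config and tokenizer: %s", "llama-2-7b-chat-hf")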

phisad commented 7 months ago

This actually works as intended; the backend logging just needs to be adjusted in logging.yaml to fulfil the new requirement:

loggers:
  benchmark.run:
    handlers: [ console ]
  backends:
    level: DEBUG
+   handlers: [ console, file_handler ]
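
As a hedged aside on why this works: Python logger names form a dot-separated hierarchy, so the child logger backends.huggingface_local_api propagates its records to the handlers attached to the backends logger configured above. A minimal sketch of applying such a config with the standard library; the file name logging.yaml and the use of dictConfig are assumptions about how clembench wires this up, and a complete config also needs a top-level version: 1 plus the handler and formatter definitions omitted from the snippet above:

import logging
import logging.config

import yaml

# Load the YAML config and hand it to the stdlib dictConfig machinery.
with open("logging.yaml") as f:
    logging.config.dictConfig(yaml.safe_load(f))

# Any logger under the "backends" namespace now propagates to the
# console and file handlers configured for "backends".
logging.getLogger("backends.huggingface_local_api").info(
    "Loading huggingface model config and tokenizer: %s", "llama-2-7b-chat-hf"
)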

Outputs:

2024-03-04 13:48:07,060 - backends.huggingface_local_api - INFO - Loading huggingface model config and tokenizer: llama-2-7b-chat-hf

to the console and to clembench.log when calling

def test_get_model_for_huggingface_local_logs_infos(self):
    load_model_registry()
    get_model_for("llama-2-7b-chat-hf")
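
A possible follow-up, sketched here as an assumption rather than existing clembench test code: the same check could be made self-verifying with unittest's assertLogs, so the test fails if the backends logger stops emitting instead of relying on reading the console:

import unittest

# Assumed import, mirroring the snippet above; the actual module path may differ.
from backends import load_model_registry, get_model_for

class HuggingfaceLocalLoggingTest(unittest.TestCase):
    def test_get_model_for_huggingface_local_logs_infos(self):
        load_model_registry()
        # Capture records emitted anywhere under the "backends" logger hierarchy.
        with self.assertLogs("backends", level="INFO") as captured:
            get_model_for("llama-2-7b-chat-hf")
        self.assertTrue(
            any("Loading huggingface model" in message for message in captured.output)
        )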
phisad commented 6 months ago

I committed the change above. Does it work for you now, @Gnurro?

Gnurro commented 6 months ago

Yes, the backend info-level logging works as it used to again.