GoogleCloudPlatform / localllm


Log output from serving models for easier debugging #18

Closed · bobcatfish closed this 7 months ago

bobcatfish commented 7 months ago

llama-cpp-python uses uvicorn, and it turns out there's a different way to start the model server that invokes uvicorn directly, which makes it possible to pass logging configuration to uvicorn. Unfortunately, logs that come directly from llama_cpp get written to stderr, and that isn't configurable (https://github.com/abetlen/llama-cpp-python/blob/ae71ad1a147b10c2c3ba99eb086521cddcc4fad4/llama_cpp/_logger.py#L30).
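For context, the hook this exposes is uvicorn's `log_config` parameter, which accepts a dict or a path to a config file. A minimal sketch of the idea (not the actual localllm code; the FastAPI app and the config path are illustrative assumptions):

```python
# Minimal sketch, not the actual localllm code: starting the server by
# invoking uvicorn directly so that a logging config can be passed in.
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

if __name__ == "__main__":
    # uvicorn.run accepts log_config as a dict or a path to a config
    # file; this is the knob the default start path doesn't expose.
    uvicorn.run(
        app,
        host="127.0.0.1",
        port=8000,
        log_config="/etc/localllm/logging_config.json",  # hypothetical path
    )
```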

This change installs the llm tool in the workstations image with a default logging configuration file that writes logs to /var/log/localllm.log. I don't love how different the story is if you run the tool directly (you have to go out of your way to get the logging), but this seems okay for now at least.

The content of the logging config is from https://gist.github.com/liviaerxin/d320e33cbcddcc5df76dd92948e5be3b
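For reference, a config along these lines does the routing. This is a sketch in Python dictConfig form rather than the actual file; the formatter string and log levels are assumptions, while the uvicorn logger names and the /var/log/localllm.log target come from the description above:

```python
# Illustrative dictConfig-style logging configuration that writes to
# /var/log/localllm.log, in the spirit of the linked gist.
import logging.config

LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "default": {
            "format": "%(asctime)s %(levelname)s %(name)s %(message)s",
        },
    },
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "formatter": "default",
            # Requires write permission to /var/log in the container.
            "filename": "/var/log/localllm.log",
        },
    },
    # Route uvicorn's own loggers to the file instead of the console.
    "loggers": {
        "uvicorn": {"handlers": ["file"], "level": "INFO", "propagate": False},
        "uvicorn.access": {"handlers": ["file"], "level": "INFO", "propagate": False},
    },
    "root": {"handlers": ["file"], "level": "INFO"},
}

if __name__ == "__main__":
    logging.config.dictConfig(LOGGING_CONFIG)
    logging.getLogger(__name__).info("logging configured")
```

The same structure, serialized to JSON, is what can be handed to uvicorn via `log_config`.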

Fixes #16

bobcatfish commented 7 months ago

thanks for the review @jerop!! i made the changes you suggested and then... FORGOT TO PUSH THEM, so i'll add them into #19 instead