This change installs the llm tool in the workstations image with a default logging configuration file that writes logs to /var/log/localllm.log. I don't love how different the story is if you run the tool directly (you have to go out of your way to get the logging), but this seems okay for now at least.
llama-cpp-python uses uvicorn under the hood, and it turns out there's a different way to start the model server that invokes uvicorn directly, which makes it possible to pass logging configuration to uvicorn. Unfortunately, logs emitted directly by llama_cpp are written to stderr and that's not configurable (https://github.com/abetlen/llama-cpp-python/blob/ae71ad1a147b10c2c3ba99eb086521cddcc4fad4/llama_cpp/_logger.py#L30).
The content of the logging config is from https://gist.github.com/liviaerxin/d320e33cbcddcc5df76dd92948e5be3b
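For reference, here's a minimal sketch of the kind of config involved, expressed as a Python `logging.config.dictConfig` dict that sends records to the log file from this change. The actual config file follows the gist linked above, so the handler and formatter names here are illustrative, not the shipped values.

```python
import logging.config

# Illustrative dictConfig-style logging config; the real config file in this
# change follows the gist linked above, so treat these names as placeholders.
LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "default": {
            "format": "%(asctime)s %(levelname)s %(name)s %(message)s",
        },
    },
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "filename": "/var/log/localllm.log",  # path from this change
            "formatter": "default",
            "delay": True,  # don't open the file until the first record
        },
    },
    "root": {"level": "INFO", "handlers": ["file"]},
}

# Applied at startup with: logging.config.dictConfig(LOGGING_CONFIG)
```

A config in this shape is what uvicorn can consume via its `log_config` argument (or `--log-config` flag when a file is passed on the command line).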
Fixes #16