defenseunicorns / leapfrogai

Production-ready Generative AI for local, cloud native, airgap, and edge deployments.
https://leapfrog.ai
Apache License 2.0

No logs in `*-model` pods after it starts up #1081

Open vanakema opened 1 month ago

vanakema commented 1 month ago

Environment

  1. OS and Architecture: N/A
  2. App or Package Name: vllm, text-embeddings
  3. App or Package Version: v11
  4. Kubernetes Distribution: N/A
  5. Kubernetes Version: N/A

Steps to reproduce

  1. Tail the text-embeddings pod logs
  2. Add a document to an assistant so it triggers indexing on that document
  3. Observe that there are no logs output to indicate that a request came in at all

Expected result

Log output appears in the text-embeddings pod indicating that an embedding request was received and processed.

Actual result

No log output appears in the pod after startup, even while requests are being handled.

Additional Context

I investigated this and found that logging statements do exist in the code, but their output is not making it to the pod logs. My suspicion is that the logger is misconfigured.
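For reference, a common cause of this symptom in Python services is a library installing its own handler (or setting `propagate=False`) before the app configures logging, so records never reach a stdout handler and `kubectl logs` shows nothing. A minimal sketch of a fix, assuming the model server uses the standard `logging` module (the function and logger names here are illustrative, not the project's actual code):

```python
import logging
import sys

def configure_logging(level: int = logging.INFO) -> None:
    """Route all log records to stdout so the container runtime captures them.

    force=True (Python 3.8+) removes any handlers a library installed
    earlier, which is a frequent cause of "missing" pod logs.
    """
    logging.basicConfig(
        stream=sys.stdout,
        level=level,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
        force=True,
    )

configure_logging()
# Hypothetical logger name for illustration:
logging.getLogger("text_embeddings").info("embedding request received")
```

If this is the issue, calling something like this once at process startup (before the serving framework configures its own loggers) should make request logs visible via `kubectl logs`.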

justinthelaw commented 1 month ago

This only occurs in text-embeddings, correct? vLLM and llama-cpp-python should be logging metrics and params of each generation request.