ahuang11 opened 5 days ago
Attention: Patch coverage is 0%, with 25 lines in your changes missing coverage. Please review.
Project coverage is 60.94%. Comparing base (2c83a1b) to head (1c86fd3).
| Files with missing lines | Patch % | Lines |
|---|---|---|
| lumen/ai/coordinator.py | 0.00% | 23 Missing :warning: |
| lumen/ai/llm.py | 0.00% | 2 Missing :warning: |
Prior to this PR, since `Llama.invoke` wasn't truly async, no loading indicator appeared and it was unclear whether Lumen was doing anything; this PR changes that.
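The fix can be sketched roughly like this (a minimal illustration, not Lumen's actual API; `blocking_invoke` is a hypothetical stand-in for the synchronous model call): offloading the blocking work to a thread keeps the event loop free so the UI can render a loading indicator.

```python
import asyncio
import time


def blocking_invoke(prompt: str) -> str:
    # Hypothetical stand-in for a synchronous model call; blocks its thread.
    time.sleep(0.1)
    return f"response to {prompt!r}"


async def invoke(prompt: str) -> str:
    # Run the blocking call in a worker thread so the event loop stays
    # responsive (e.g. to animate a loading indicator) while it runs.
    return await asyncio.to_thread(blocking_invoke, prompt)


async def main() -> str:
    return await invoke("hello")


print(asyncio.run(main()))
```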
Also pins the version and makes minor tweaks to the config.
Seems promising with Qwen; I couldn't get it running with Mistral 7B.