Closed brandomr closed 10 months ago
Attention: 111 lines in your changes are missing coverage. Please review.

Comparison is base (2c32c25) 81.78% compared to head (9fb50e2) 84.90%. Report is 85 commits behind head on main.
Files | Patch % | Lines |
---|---|---|
worker/operations.py | 80.73% | 84 Missing :warning: |
worker/utils.py | 88.44% | 23 Missing :warning: |
api/utils.py | 75.00% | 3 Missing :warning: |
lib/settings.py | 96.42% | 1 Missing :warning: |
:umbrella: View full report in Codecov by Sentry.
Addresses #167
@Free-Quarks note that the LLM-assisted code-to-AMR extraction isn't explicitly tested separately, but we could do so down the road. Right now the test harness works like this:

1. It checks whether `dynamics` exist for the code in the scenario and, if so, uses those.
2. Otherwise it falls back to the default code-to-AMR extraction.

Since #2 defaults to using the LLM assistant in our `knowledge-middleware` service, it will basically just do that. We could set it up so that for a given test scenario you can "turn off LLM assistance", but then we'd need to duplicate the scenario (e.g. have 2 `SIDARTHE Code` scenarios). We'd do this by adding `llm_assisted.txt`, which would just be a `bool` flag for the scenario in the scenario directory, and reading that into the test harness runner. Let me know what you think.

@YohannParis tagging you as reviewer just for your visibility--no changes are needed on the HMI end; the default code-to-AMR behavior (if no dynamics are specified) is to use LLM-assisted mode now :)
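For concreteness, the proposed `llm_assisted.txt` flag could be read by the test harness runner roughly like this. This is only a sketch of the idea: the helper name and the exact parsing rules are hypothetical, and only the filename `llm_assisted.txt` and the "default to LLM assistance" behavior come from the discussion above.

```python
from pathlib import Path


def llm_assistance_enabled(scenario_dir: Path) -> bool:
    """Hypothetical helper: read the proposed llm_assisted.txt bool flag.

    If the file is absent, fall back to True, matching the current
    default of LLM-assisted code-to-AMR extraction.
    """
    flag_file = scenario_dir / "llm_assisted.txt"
    if not flag_file.exists():
        return True  # default: LLM assistance on
    # Treat common truthy spellings as True; anything else disables it.
    return flag_file.read_text().strip().lower() in ("true", "1", "yes")
```

A scenario directory would then opt out of LLM assistance by dropping in an `llm_assisted.txt` containing `false`, and the runner would branch on this flag when building the extraction request.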