guidance-ai / guidance

A guidance language for controlling large language models.
MIT License

[Test] Environment variable for selected model #883

Closed riedgar-ms closed 3 weeks ago

riedgar-ms commented 4 weeks ago

Make debugging tests with a particular LLM easier by adding a GUIDANCE_SELECTED_MODEL environment variable.

We have the --selected_model command line argument for pytest, which loads a designated LLM in a fixture (currently defaulting to gpt2cpu when not set). However, when debugging tests within VS Code, setting that command line argument is non-trivial, and our workaround was to patch the selected_model_name() fixture (which reads the command line argument). This change makes the command line argument default to the GUIDANCE_SELECTED_MODEL environment variable, falling back to gpt2cpu if that is not set.
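A minimal sketch of the pattern described above, assuming a conftest.py that registers the pytest option; the option and fixture names follow the PR description, but the exact implementation in the repo may differ:

```python
# Hypothetical conftest.py sketch: the --selected_model pytest option
# defaults to the GUIDANCE_SELECTED_MODEL environment variable, and
# falls back to "gpt2cpu" when the variable is unset.
import os

# Assumed name of the environment variable, per the PR description
SELECTED_MODEL_ENV_VARIABLE = "GUIDANCE_SELECTED_MODEL"


def default_selected_model() -> str:
    # Environment variable wins; otherwise keep the old gpt2cpu default
    return os.getenv(SELECTED_MODEL_ENV_VARIABLE, "gpt2cpu")


def pytest_addoption(parser):
    # Register --selected_model with the env-derived default, so both
    # `pytest --selected_model=...` and the env variable work
    parser.addoption(
        "--selected_model",
        action="store",
        default=default_selected_model(),
        help="Name of the LLM to load in the model fixture",
    )
```

With this in place, a VS Code debug session only needs `GUIDANCE_SELECTED_MODEL` set in its environment (e.g. via launch.json `"env"`) instead of patching the fixture.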

codecov-commenter commented 4 weeks ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 59.76%. Comparing base (ad6fbb2) to head (0b0f028).


Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #883      +/-   ##
==========================================
+ Coverage   57.17%   59.76%   +2.59%
==========================================
  Files          63       63
  Lines        4588     4591       +3
==========================================
+ Hits         2623     2744     +121
+ Misses       1965     1847     -118
```


hudson-ai commented 4 weeks ago

Looks good to me.

Side note, but do we (or do we want to) expose an interface similar to get_model in the main guidance namespace? It may be a nice QoL change so that users can just swap out strings rather than changing import statements when they want to use a different model. Just a thought.
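The string-swapping idea above could be sketched roughly as follows; none of these registry or loader names are confirmed by the source, and the loaders here are stand-in stubs rather than real guidance model classes:

```python
# Hypothetical illustration of a get_model-style factory in the main
# guidance namespace: a "backend:name" string is dispatched to the
# appropriate loader, so users change a string instead of an import.
from typing import Any, Callable, Dict

# Illustrative registry; real backends would construct Models objects
_LOADERS: Dict[str, Callable[[str], Any]] = {
    "transformers": lambda name: f"<Transformers model {name}>",
    "llama_cpp": lambda name: f"<LlamaCpp model {name}>",
}


def get_model(model_spec: str) -> Any:
    """Load a model from a 'backend:name' spec, e.g. 'transformers:gpt2'."""
    backend, _, name = model_spec.partition(":")
    loader = _LOADERS.get(backend)
    if loader is None:
        raise ValueError(f"Unknown backend {backend!r}")
    return loader(name)
```

Combined with the environment variable from this PR, tests could then resolve the selected model name straight to a loaded model without touching import statements.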