Open cedricvidal opened 8 months ago
It's a feature ask rather than an extension bug: no `stop` parameter is described in open_model_llm.yaml.
@dans-msft @Adarsh-Ramanathan Please help to take a look, thanks.
Hello @Joouis, thank you for your reply. Let me clarify: I classified this as a bug because, as it stands, the tool offers a completion API option in its drop-down menu but cannot in practice be used to consume the Llama 2 7b completion model (as opposed to the chat model). I believe this would affect any completion model, but I haven't tried others.
As a workaround, I’m using a Python node.
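A workaround along those lines can be sketched as a plain Python node that calls the deployed completion endpoint directly and passes `stop` in the request payload. The payload shape, endpoint URL, and API key below are assumptions based on a typical Azure ML online endpoint, not details confirmed in this thread; adjust them to your deployment's schema:

```python
import json
import urllib.request


def build_completion_request(prompt: str, stop: list[str],
                             max_new_tokens: int = 256) -> dict:
    # Payload shape assumed for a common text-generation endpoint schema;
    # the key point is that "stop" is passed, which the built-in
    # Open Model LLM tool does not expose for the completion API.
    return {
        "input_data": {
            "input_string": [prompt],
            "parameters": {
                "max_new_tokens": max_new_tokens,
                "stop": stop,
            },
        }
    }


def call_completion(endpoint_url: str, api_key: str, payload: dict):
    # Hypothetical endpoint call; endpoint_url and api_key come from
    # your own deployment, not from this issue.
    req = urllib.request.Request(
        endpoint_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Wrapping `build_completion_request` plus `call_completion` in a `@tool`-decorated function makes this usable as a promptflow Python node in place of the built-in tool.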
@Joouis, I think this has been incorrectly assigned - based on other openLLM bugs in this repo, the correct owner is probably @youngpark.
@youngpark could you please take a look at this Open Model LLM issue?
Describe the bug The Open Model LLM tool doesn't allow specifying the 'stop' parameter when the api is 'completion'. This parameter is important when using the completion API to control when the model stops generating tokens.
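For context on why `stop` matters, its semantics can be illustrated with a small hypothetical helper (not promptflow code): generation is truncated at the earliest occurrence of any stop sequence.

```python
def apply_stop(text: str, stop: list[str]) -> str:
    """Truncate generated text at the earliest occurrence of any
    stop sequence, mimicking what a completion API does server-side
    when the 'stop' parameter is passed."""
    cut = len(text)
    for seq in stop:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```

For example, with `stop=["\nQuestion:"]` a completion model prompted in a Q&A format stops after its first answer instead of hallucinating further question/answer turns, which is exactly the control the tool currently doesn't offer.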
How To Reproduce the bug Steps to reproduce the behavior and how frequently the bug occurs: reproduced every time.
The screenshot below shows that the 'stop' parameter cannot be set when using the 'completion' api.
Screenshots
Environment Information
- Promptflow version (`pf -v`): 1.6.0
- Operating system (macOS): 13.6.4 (22G513)
- Python version (`python --version`): 3.11.8
- VS Code version: 1.87.2
- Prompt flow VS Code extension version: v1.14.0