microsoft / promptflow

Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
https://microsoft.github.io/promptflow/

[BUG] [VSCode Extension] Open LLM Tool doesn't allow specifying the 'stop' parameter when api is 'completion' #2444


cedricvidal commented 8 months ago

Describe the bug The Open LLM Tool doesn't allow specifying the 'stop' parameter when the api is 'completion'. This parameter is important when using the completion api to control when the model stops generating tokens.
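For context, here is a minimal sketch of how 'stop' is typically passed to an OpenAI-compatible completion endpoint; the base_url, api_key, and model name below are placeholders for illustration, not Open LLM Tool internals:

```python
# Minimal sketch, assuming an OpenAI-compatible completion endpoint.
# The base_url, api_key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://<your-endpoint>/v1", api_key="<key>")

response = client.completions.create(
    model="llama-2-7b",  # hypothetical deployment name
    prompt="Q: What is Prompt flow?\nA:",
    max_tokens=128,
    # 'stop' cuts generation off at these sequences; without it, a
    # completion model tends to keep generating past the answer.
    stop=["\nQ:", "\n\n"],
)
print(response.choices[0].text)
```

This is exactly the knob that the tool's 'completion' api currently doesn't expose.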

How To Reproduce the bug Reproduced every time:

The screenshot below shows that the 'stop' parameter cannot be set when using the 'completion' api.

[Screenshot 2024-03-21 at 9:07 PM: the 'stop' parameter is not available for the 'completion' api]

Screenshots

  1. On the VSCode primary side bar > the Prompt flow pane > quick access section, find the "install dependencies" action. Click it and attach the screenshots there.

[Screenshot 2024-03-21 at 9:09 PM]

Environment Information


Joouis commented 7 months ago

It's a feature request rather than an extension bug. There is no stop parameter described in open_model_llm.yaml.

@dans-msft @Adarsh-Ramanathan Please help to take a look, thanks.

cedricvidal commented 7 months ago

Hello @Joouis, thank you for your reply. Let me clarify: I classified this as a bug because, as it stands, the tool offers a 'completion' option in the api drop-down menu, but that option cannot be used in practice to consume the Llama 2 7B completion model (as opposed to the chat model). I believe this would affect any completion model, but I haven't tried others.

As a workaround, I’m using a Python node.
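For anyone hitting the same limitation, here is a rough sketch of such a workaround node. The endpoint URL, key, and request schema are assumptions about a generic completion deployment, not promptflow internals; adjust them to match your endpoint:

```python
# Rough sketch of a Python node workaround, not the official tool.
# ENDPOINT_URL, API_KEY, and the payload schema are assumptions.
import requests

from promptflow import tool

ENDPOINT_URL = "https://<your-endpoint>/v1/completions"  # placeholder
API_KEY = "<key>"  # placeholder

@tool
def complete(prompt: str, stop: list = None) -> str:
    """Call a completion endpoint directly so 'stop' can be passed."""
    payload = {
        "prompt": prompt,
        "max_tokens": 256,
        # The whole point of the workaround: forward 'stop' sequences,
        # which the Open LLM Tool's 'completion' api does not expose.
        "stop": stop or ["\n\n"],
    }
    headers = {"Authorization": f"Bearer {API_KEY}"}
    response = requests.post(ENDPOINT_URL, json=payload, headers=headers)
    response.raise_for_status()
    return response.json()["choices"][0]["text"]
```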

Adarsh-Ramanathan commented 7 months ago

@Joouis, I think this has been incorrectly assigned - based on other openLLM bugs in this repo, the correct owner is probably @youngpark.

chjinche commented 7 months ago

@youngpark could you please take a look at the issue of open model llm?