protectai / vulnhuntr

Zero shot vulnerability discovery using LLMs
GNU Affero General Public License v3.0

Move response format from general user-prompt to LLM class #27

Open dguerri opened 1 week ago

dguerri commented 1 week ago

The response template is currently appended to the user_prompt before that prompt is processed by each LLM class. Some LLM APIs natively support a structured response format (e.g., OpenAI's ChatGPT and Google's Gemini), and those models can sometimes get confused by the JSON dump embedded in the user prompt.
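
For context, the current flow looks roughly like this (a minimal sketch; the function name, schema fields, and prompt wording are illustrative, not the repo's actual identifiers):

```python
import json

# Hypothetical stand-in for the response template that gets dumped
# into the prompt text today.
RESPONSE_TEMPLATE = {
    "scratchpad": "string",
    "analysis": "string",
    "confidence_score": "int",
}

def build_user_prompt(code: str) -> str:
    # The JSON dump rides along inside the prompt itself, regardless of
    # whether the target API supports structured output natively.
    return (
        f"Analyze the following code:\n{code}\n\n"
        f"Respond using this JSON schema:\n{json.dumps(RESPONSE_TEMPLATE, indent=2)}"
    )
```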

One solution is to let each LLM class add the response format itself, for example by implementing user-prompt creation in the base class and allowing subclasses to specialize that behaviour. A sketch of that idea follows.
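
A minimal sketch of the proposal, assuming a base `LLM` class; the class and method names below are illustrative, not the project's actual API:

```python
import json

class LLM:
    """Base class: owns user-prompt construction."""

    def create_user_prompt(self, prompt: str, response_schema: dict) -> str:
        # Default behaviour: embed the schema in the prompt text, for
        # APIs with no native structured-output support.
        return (
            f"{prompt}\n\n"
            f"Respond using this JSON schema:\n{json.dumps(response_schema)}"
        )

    def chat(self, prompt: str, response_schema: dict) -> str:
        raise NotImplementedError


class OpenAILLM(LLM):
    """Subclass for an API with native response-format support."""

    def create_user_prompt(self, prompt: str, response_schema: dict) -> str:
        # Keep the prompt clean; the schema is conveyed through the API's
        # response_format parameter instead (see chat()).
        return prompt

    def chat(self, prompt: str, response_schema: dict) -> str:
        user_prompt = self.create_user_prompt(prompt, response_schema)
        # e.g. client.chat.completions.create(..., response_format={...})
        ...
```

With this split, providers that support structured output (OpenAI's `response_format`, Gemini's response schema in its generation config) pass the schema through the API, while everything else falls back to the base-class behaviour of embedding it in the prompt.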