py-why / pywhy-llm

Experimental library integrating LLM capabilities to support causal analyses
MIT License

Added support for user-specified prompt templates #15

Open amit-sharma opened 10 months ago

amit-sharma commented 10 months ago

The current API for suggesters depends on hard-coded prompt templates in prompts.py. However, users may want to tweak those templates slightly or supply their own. To make this simple, this PR exposes the prompt template as an argument. Conventions followed:

  1. In each suggester method, all arguments before the prompt_template argument are the special template variables that should be included in the prompt template.
  2. Each variable should be included as "{{variable}}", as required by the guidance library. (Future TODO: add an input checker that checks the validity of a user-specified prompt template.)

To enable this functionality, some argument names were changed to ensure consistency with the variable names in the guidance prompt templates. This PR also fixes a few bugs where the wrong prompt was used or an invalid variable name was referenced.
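To illustrate the two conventions above, here is a minimal sketch (not the library's actual code) of how a user-specified template using `{{variable}}` placeholders could be validated and rendered. The function name and the template text are hypothetical; it also hints at what the future-TODO input checker might do by rejecting templates whose variables are not supplied:

```python
import re

def render_template(prompt_template: str, **variables) -> str:
    """Substitute {{variable}} placeholders in a guidance-style template.

    Raises ValueError if the template references a variable that was
    not passed in, mimicking the input checker mentioned in the PR.
    """
    # Collect every {{name}} placeholder in the template.
    placeholders = set(re.findall(r"\{\{(\w+)\}\}", prompt_template))
    missing = placeholders - variables.keys()
    if missing:
        raise ValueError(f"Template variables not provided: {sorted(missing)}")
    result = prompt_template
    for name in placeholders:
        result = result.replace("{{" + name + "}}", str(variables[name]))
    return result

# Hypothetical suggester-style template: the special variables come
# before prompt_template in the method signature, per convention 1.
template = "Does {{treatment}} cause {{outcome}}? Answer yes or no."
print(render_template(template, treatment="smoking", outcome="lung cancer"))
```

A caller who forgets one of the special variables (say, `outcome`) would get an immediate `ValueError` instead of a silently malformed prompt.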

emrekiciman commented 10 months ago

@amit-sharma , I'm pretty hesitant to expose the guidance prompt template as part of the API, for a couple of reasons: (1) the guidance library API has changed significantly, and the latest version uses a Pythonic interface that does not support string templates; (2) other experimental implementations of the protocols might use approaches that are not consistent with a single prompt template. E.g., maybe there is a better way of prompting that requires multiple prompts and calls?

What do you think about including the prompt template not in the suggester protocols but in the initialization for each protocol implementation? Then prompts can be provided in an implementation-specific fashion?