At the moment, the commands generated by the LLM are quite hacky. See the examples below:
user input:
change the colour of alpha helices to red
response:
// select and color alpha helices in red
viewer.setStyle({ss: "h"}, {cartoon: {color: "red"}});
user input:
change the color of cysteines to red
response:
viewer.setStyle({resn: "CYS"}, {cartoon: {color: "red"}});
user input:
change their display style to stick
response:
viewer.setStyle({resn: "CYS"}, {stick: {color: "red"}});
user input:
colour carbons blue
response:
viewer.setStyle({elem: "C"}, {stick: {color: "blue"}});
user input:
change colour of hydrogens to red
response:
viewer.setStyle({elem: "H"}, {sphere: {color: "red"}});
Although some of these examples might be useful for few-shot prompting, this approach will certainly not generalize well.
The task is to work out how to avoid prompt leaking when a handful of examples are included in the prompt.
Some possible solutions might be:
reducing the number of examples
selecting the examples more carefully
explicitly instructing the model not to regurgitate the prompt (a rough sketch of this is shown below)
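As a rough illustration of the third option, the sketch below assembles a prompt in JavaScript that shows the examples but explicitly marks them as format illustrations only; the example pairs, the buildPrompt helper, and the exact wording of the instruction are placeholders, not the project's current prompt.

// Hypothetical few-shot pairs; the contents and names are illustrative only.
const fewShotExamples = [
  { input: "change the colour of alpha helices to red",
    output: 'viewer.setStyle({ss: "h"}, {cartoon: {color: "red"}});' },
  { input: "colour carbons blue",
    output: 'viewer.setStyle({elem: "C"}, {stick: {color: "blue"}});' },
];

// Assemble a prompt that presents the examples as format illustrations,
// so the model is less likely to echo them back verbatim.
function buildPrompt(examples, userInput) {
  const exampleBlock = examples
    .map((ex) => `User: ${ex.input}\nCommand: ${ex.output}`)
    .join("\n\n");
  return [
    "You translate user requests into single 3Dmol.js commands.",
    "The examples below only illustrate the output format.",
    "Do not copy the examples verbatim; respond only to the new request at the end.",
    "",
    exampleBlock,
    "",
    `User: ${userInput}`,
    "Command:",
  ].join("\n");
}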
Ideally, this issue would also produce a standardised way of managing examples for few-shot prompting, so that one does not have to edit text files by hand but can instead test out tens of few-shot prompts quickly; one possible shape for this is sketched below.
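As a starting point, the sketch below keeps each candidate example set in its own JSON file of { input, output } pairs and loads every set in a loop; the prompts/ directory, the file naming, and the helper functions are all assumptions for illustration, not an existing part of the project.

// Hypothetical layout: prompts/*.json, each file an array of { input, output } pairs.
const fs = require("fs");
const path = require("path");

function loadExampleSet(file) {
  return JSON.parse(fs.readFileSync(file, "utf8"));
}

function renderExamples(examples) {
  return examples
    .map((ex) => `User: ${ex.input}\nCommand: ${ex.output}`)
    .join("\n\n");
}

// Iterate over every stored set so tens of variants can be compared quickly,
// instead of hand-editing a single text file between runs.
const promptDir = "prompts";
for (const file of fs.readdirSync(promptDir).filter((f) => f.endsWith(".json"))) {
  const examples = loadExampleSet(path.join(promptDir, file));
  console.log(`=== ${file} (${examples.length} examples) ===`);
  console.log(renderExamples(examples));
}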