j6k4m8 opened 1 year ago
I'll work on version 0.1 prompts, porting over what I already have in langchain
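For a sense of what those might look like, here's a rough sketch of one such prompt in LangChain's PromptTemplate style (the template text and variable names below are placeholders, not the actual v0.1 prompts being ported):

from langchain.prompts import PromptTemplate

# Placeholder template; the real v0.1 prompts would be ported from the existing langchain code.
bullets_to_prose = PromptTemplate(
    input_variables=["context", "bullets"],
    template=(
        "You are helping edit an academic paper.\n"
        "Surrounding text:\n{context}\n\n"
        "Rewrite these bullet points as flowing prose, preserving all technical content:\n{bullets}"
    ),
)

prompt_text = bullets_to_prose.format(context="...", bullets="- point one\n- point two")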
Awesome — assign yourself here for tracking purposes?
Wishlist of assistant functions (not all of these are feasible):
I imagine that some of these will be most easily done by having some sort of:
class Agent:
    def can_answer(self, user_request, context):
        # Ask the LLM whether the request matches this agent's capability
        return llm("Does this look like a request to convert bullets to prose?", context, user_request)

    def do_edit(self, user_request):
        ...
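Roughly, dispatch over a set of such agents could then look like this (the dispatch loop and the agents list are illustrative assumptions, following the Agent interface sketched above):

def dispatch(user_request, context, agents):
    # Route the request to the first agent that claims it can handle it
    for agent in agents:
        if agent.can_answer(user_request, context):
            return agent.do_edit(user_request)
    # No agent claimed the request; leave it for manual handling
    return None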
Yeah that would be cool but sounds like step 2 to me, where step 1 is to implement them in a way that works when triggered "manually"
+1!
Right now, the LLM will sometimes respond with a trailing "@user: i did it!" or whatever, sometimes it'll strip the newline, sometimes it'll dup the whole line... A more specific prompt, and more guidance, might do the trick.
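Until the prompt is tightened up, a rough post-processing guard might also help. Something along these lines (the function name and the regex are guesses at the failure modes above, not anything that exists in the repo yet):

import re

def clean_llm_edit(original_line: str, llm_output: str) -> str:
    lines = llm_output.splitlines()
    # Drop trailing chatty acknowledgements like "@user: i did it!"
    while lines and re.match(r"^@\w+:", lines[-1].strip()):
        lines.pop()
    # If the model repeated the original line back-to-back, keep a single copy
    deduped = []
    for line in lines:
        if deduped and line == deduped[-1] == original_line.rstrip("\n"):
            continue
        deduped.append(line)
    cleaned = "\n".join(deduped)
    # Restore the trailing newline if the original line ended with one
    if original_line.endswith("\n") and not cleaned.endswith("\n"):
        cleaned += "\n"
    return cleaned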