paolorechia / learn-langchain

MIT License

langchain autogpt examples without openai embeddings and faiss vector store #17

Closed unoriginalscreenname closed 1 year ago

unoriginalscreenname commented 1 year ago

So I appear to have the basic integration working with oobabooga, but I've been struggling a bit with some of your agent examples. I took a look at the LangChain AutoGPT example here to see what some of the differences were: https://python.langchain.com/en/latest/use_cases/autonomous_agents/autogpt.html

It does a few things differently than your files, and it seems not to require the big prompt templates you are using. I have yet to successfully get any of the example agents in this repo to do anything more complex than a basic single instruction. However, these examples are still tied to the OpenAI embeddings and OpenAI APIs.

I was trying to figure out how to convert this example from OpenAI embeddings to sentence transformers and Chroma for the memory. These might be really great examples to re-implement in your setup, to show people how to use it with local models instead of all the OpenAI stuff.

I'm kind of close to getting it to work. I've tried porting your Chroma example over to replace the FAISS version, but I've mostly failed.
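For reference, the substitution being discussed can be sketched roughly like this, assuming the 2023-era LangChain API (the import paths, model name, and the idea of passing the retriever as AutoGPT's memory follow the LangChain docs example, not code from this repo):

```python
# Sketch: replacing the OpenAI-embeddings + FAISS memory from the LangChain
# AutoGPT example with sentence-transformers + Chroma, so nothing calls the
# OpenAI API. Assumes `pip install langchain chromadb sentence-transformers`.

def build_local_retriever(collection_name: str = "autogpt-memory"):
    """Return a Chroma-backed retriever that embeds locally."""
    # Imports are deferred so the sketch stays self-contained.
    from langchain.embeddings import HuggingFaceEmbeddings
    from langchain.vectorstores import Chroma

    # Runs entirely on the local machine via sentence-transformers.
    embeddings = HuggingFaceEmbeddings(
        model_name="sentence-transformers/all-MiniLM-L6-v2"
    )
    vectorstore = Chroma(
        collection_name=collection_name,
        embedding_function=embeddings,
    )
    # In the docs example, this retriever is what gets passed as the
    # `memory` argument in place of the FAISS-backed one.
    return vectorstore.as_retriever()
```

The rest of the AutoGPT example should be unchanged; only the embedding function and vector store are swapped out.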

paolorechia commented 1 year ago

Hi, which model are you using?

WizardLM 7b unquant yields the best results for me.

Sadly, the most complex instructions it gets right consistently are the simplified Chuck Norris joke and the cat jokes.

Anything much more complex results in failures.

I was playing with some new prompt flows; I wanted to get AutoGPT-like tooling for coding, but nothing too promising yet.

I really like your initiative and would love to take a look there too - sadly, I'm also somewhat low on time this week.


paolorechia commented 1 year ago

I built an example: https://github.com/paolorechia/learn-langchain/pull/18

It runs, but sadly the model's performance is not good enough, e.g., the model gets stuck in nonsense. I don't think it's currently easy to run AutoGPT on local models. If you find a good way, please share. Otherwise, I'm closing the ticket.

unoriginalscreenname commented 1 year ago

Ah, yeah. It's weird... I'm getting a lot of back and forth with itself. It's not even passing in the initial task; it's just making up both the Human and the Assistant parts. I'm not sure what that's about. I'll play around with it. But this approach does seem like it's working!

paolorechia commented 1 year ago

Which one, AutoGPT or my agent examples?

If you’re using the Zero Shot ReAct agent, one thing that really wrecks performance is not setting up the stop tokens correctly. It’s a bit hard to debug, and it has happened to me in the past. Any chance that you modified them?
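To illustrate why this matters: in the ReAct loop the model must halt right after emitting its "Action Input:" line so the framework can run the tool and append the real "Observation:". A local model without a stop sequence keeps generating and hallucinates the observation, or fake Human/Assistant turns. When the backend can't stop server-side, truncating client-side is a workable fallback. This helper is illustrative, not code from the repo:

```python
# Truncate a model completion at the earliest stop sequence, mimicking
# what a properly configured backend would do server-side.

def truncate_at_stop_tokens(text: str, stop: list) -> str:
    """Cut `text` at the first occurrence of any stop sequence."""
    cut = len(text)
    for token in stop:
        idx = text.find(token)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]


# A completion where the model ran past its turn and invented an observation:
raw = (
    "Thought: I should look this up\n"
    "Action: Search\n"
    "Action Input: chuck norris jokes\n"
    "Observation: <hallucinated tool result>\n"
    "Thought: ..."
)
clean = truncate_at_stop_tokens(raw, stop=["Observation:"])
# `clean` now ends right after the Action Input line, where the agent
# executor expects to take over and insert the real tool output.
```

If the agent is "talking to itself", checking whether the raw completions contain text past the first `Observation:` is a quick way to confirm the stop tokens aren't being applied.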


unoriginalscreenname commented 1 year ago

You should check out this repo: https://github.com/flurb18/babyagi4all-api

This appears to work with the local model: the back and forth and instructions work as expected. Really interesting.

paolorechia commented 1 year ago

Awesome, I’ll definitely check it out when I get some time :)


unoriginalscreenname commented 1 year ago

I feel like there's some issue with how you have the HTTPBaseLLM class set up. It's got to have something to do with the prompt. I think you're on the right track with a generalized wrapper for ooba, though.
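For context, the general shape of such a wrapper is a thin HTTP client around text-generation-webui's API. The endpoint path, payload fields, and response shape below are assumptions based on the 2023-era legacy ooba API, not code from this repo; the key point is that the stop sequences must be forwarded to the backend:

```python
# Minimal sketch of an HTTP wrapper around a local text-generation-webui
# (oobabooga) server, similar in spirit to the repo's HTTPBaseLLM.
import json
import urllib.request

OOBA_URL = "http://localhost:5000/api/v1/generate"  # assumed default


def build_payload(prompt: str, stop=None) -> dict:
    """Build the request body. Keeping this pure makes it easy to verify
    that stop sequences are actually forwarded to the backend."""
    return {
        "prompt": prompt,
        "max_new_tokens": 512,
        "stopping_strings": stop or [],
    }


def generate(prompt: str, stop=None) -> str:
    """POST the prompt to the local server and return the completion."""
    req = urllib.request.Request(
        OOBA_URL,
        data=json.dumps(build_payload(prompt, stop)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```

If the wrapper drops the `stop` argument instead of passing it through, the agent exhibits exactly the self-conversation symptom described above, so that's one concrete thing to check.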

paolorechia commented 1 year ago

Hi, what makes you think so?

It’s possible, of course. Do you think it could be in the output parsing? You mentioned before that it only works well for a single instruction, right?
