yoheinakajima / babyagi

https://babyagi.org/
20.37k stars · 2.67k forks

llama confusion #130

Closed · iplayfast closed this issue 1 month ago

iplayfast commented 1 year ago

Can the readme be updated to explain how to interface with llama? I've tried `./babyagi.py -l llama -m models/ggml-vicuna-13b-4bit.bin` and set in `.env`: `OPENAI_API_KEY=` and `OPENAI_API_MODEL=llama`.

As far as I know there is no OPENAI_API_KEY, as it is run locally on my computer.

MikoAL commented 1 year ago

Run `mklink "llama/main" C:\Users\User\Desktop\Projects\llama\llama.cpp\build\bin\Release\main.exe` in cmd.

Remember to create the folder "llama" and copy the file "main.exe" into it.

NOTE: I am a confused idiot and this may be a completely wrong interpretation of what to do, as I have been running into problems with the OPENAI_API_KEY as well

ai8hyf commented 1 year ago

What I did was changing this line `cmd = ["llama/main", "-p", prompt]` to point to my llama model. However, to run fully locally, you also need an embedding model like SBERT, because the default embedding model is OpenAI's ada model (cheap, but it still costs money). I haven't been able to run things fully locally, but I think I am very close.

UPDATE: LangChain has updated the solution for babyAGI: https://python.langchain.com/en/latest/use_cases/agents/baby_agi.html

UPDATE2: My very rusty implementation with LangChain+llama+babyAGI https://github.com/ai8hyf/babyagi/blob/main/langChain-llama.py
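The edit described above can be sketched like this (a hedged sketch; `LLAMA_BIN`, `MODEL_PATH`, and `run_llama` are illustrative names, not part of babyagi — adjust the paths to your own build and model file):

```python
import subprocess

LLAMA_BIN = "llama/main"                         # compiled llama.cpp binary
MODEL_PATH = "models/ggml-vicuna-13b-4bit.bin"   # local model weights

def run_llama(prompt: str, n_predict: int = 256) -> str:
    """Call a local llama.cpp binary instead of the OpenAI API."""
    cmd = [LLAMA_BIN, "-m", MODEL_PATH, "-p", prompt, "-n", str(n_predict)]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout
```

Note this only replaces the completion calls; the embedding calls still go to OpenAI unless you also swap in a local embedding model.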

Tobias-GH-Schulz commented 1 year ago

@ai8hyf it would be great if you could share how you made it run fully locally as soon as you get it done :) thx

another-ai commented 1 year ago

> What I did was changing this line `cmd = ["llama/main", "-p", prompt]` to point to my llama model. However, to run fully locally, you also need an embedding model like SBERT, because the default embedding model is OpenAI's ada model (cheap, but it still costs money). I haven't been able to run things fully locally, but I think I am very close.

I've already tried that and got this error:

`raise ApiValueError('Unable to prepare type {} for serialization'.format(obj.__class__.__name__))`
`pinecone.core.client.exceptions.ApiValueError: Unable to prepare type ndarray for serialization`
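The ApiValueError above happens because the Pinecone client JSON-serializes vectors, and a numpy ndarray (e.g. a raw SBERT embedding) is not JSON-serializable. A hedged sketch of the usual fix, converting the array to a plain list before upserting (`to_serializable` is an illustrative helper, not part of babyagi):

```python
import numpy as np

def to_serializable(vector):
    """Convert a numpy embedding to a plain Python list that Pinecone can serialize."""
    if isinstance(vector, np.ndarray):
        return vector.tolist()
    return list(vector)

embedding = np.ones(4, dtype="float32")   # stand-in for a real SBERT embedding
payload = to_serializable(embedding)
print(type(payload).__name__)  # list
```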

ai8hyf commented 1 year ago

> @ai8hyf it would be great if you could share how you made it run fully locally as soon as you get it done :) thx

Just check this out: https://python.langchain.com/en/latest/use_cases/agents/baby_agi.html I think it solves all the problems here.

ai8hyf commented 1 year ago

> What I did was changing this line `cmd = ["llama/main", "-p", prompt]` to point to my llama model. However, to run fully locally, you also need an embedding model like SBERT, because the default embedding model is OpenAI's ada model (cheap, but it still costs money). I haven't been able to run things fully locally, but I think I am very close.
>
> I've already tried that and got this error:
>
> `raise ApiValueError('Unable to prepare type {} for serialization'.format(obj.__class__.__name__))`
> `pinecone.core.client.exceptions.ApiValueError: Unable to prepare type ndarray for serialization`

Check this implementation using LangChain: https://python.langchain.com/en/latest/use_cases/agents/baby_agi.html

CRCODE22 commented 1 year ago

> @ai8hyf it would be great if you could share how you made it run fully locally as soon as you get it done :) thx
>
> Just check this out: https://python.langchain.com/en/latest/use_cases/agents/baby_agi.html I think it solves all the problems here.

It creates more problems, because it still needs the OpenAI API and does not explain how to use local llama models instead.

ai8hyf commented 1 year ago

> @ai8hyf it would be great if you could share how you made it run fully locally as soon as you get it done :) thx
>
> Just check this out: https://python.langchain.com/en/latest/use_cases/agents/baby_agi.html I think it solves all the problems here.
>
> It creates more problems, because it still needs the OpenAI API and does not explain how to use local llama models instead.

I put together a very rough solution for langchain+llama+babyAGI. See my fork here: https://github.com/ai8hyf/babyagi/blob/main/langChain-llama.py

francip commented 1 year ago

I'd love to get the llama code updated. The initial hack was based on building and running llama.cpp under Linux, so there is no support for Windows. Also, the embeddings are still based on OpenAI's Ada, so an OpenAI key is still needed.

But if you have improvements that fix those, I'd love to get a PR to integrate.

ai8hyf commented 1 year ago

> I'd love to get the llama code updated. The initial hack was based on building and running llama.cpp under Linux, so there is no support for Windows. Also, the embeddings are still based on OpenAI's Ada, so an OpenAI key is still needed.
>
> But if you have improvements that fix those, I'd love to get a PR to integrate.

Sorry, I don't have a Windows dev machine right now, so I don't really know the situation on Windows. The embeddings also come from the local llama.cpp model (4096 dimensions). The only third-party API used in my code snippet was the SerpAPIWrapper. I basically took LangChain's official example (https://python.langchain.com/en/latest/use_cases/agents/baby_agi.html) and changed the models and embeddings to llama.cpp.

dany-on-demand commented 1 year ago

> @ai8hyf it would be great if you could share how you made it run fully locally as soon as you get it done :) thx
>
> Just check this out: https://python.langchain.com/en/latest/use_cases/agents/baby_agi.html I think it solves all the problems here.
>
> It creates more problems, because it still needs the OpenAI API and does not explain how to use local llama models instead.
>
> I put together a very rough solution for langchain+llama+babyAGI. See my fork here: https://github.com/ai8hyf/babyagi/blob/main/langChain-llama.py

For whatever reason, I had to set `embedding_size = 5120` for your script to work

ai8hyf commented 1 year ago

> @ai8hyf it would be great if you could share how you made it run fully locally as soon as you get it done :) thx
>
> Just check this out: https://python.langchain.com/en/latest/use_cases/agents/baby_agi.html I think it solves all the problems here.
>
> It creates more problems, because it still needs the OpenAI API and does not explain how to use local llama models instead.
>
> I put together a very rough solution for langchain+llama+babyAGI. See my fork here: https://github.com/ai8hyf/babyagi/blob/main/langChain-llama.py
>
> For whatever reason, I had to set `embedding_size = 5120` for your script to work

ooof, I think that's because you are using a 13b model? Mine was based on the 7b model.
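The 4096 vs. 5120 mismatch comes from the model size: 7B llama models emit 4096-dimensional embeddings, 13B models emit 5120, and the FAISS index must match. Rather than hard-coding `embedding_size`, one hedged option is to probe the embedder at startup (`detect_embedding_size` is an illustrative helper; with LangChain you would pass something like `LlamaCppEmbeddings(...).embed_query`):

```python
def detect_embedding_size(embed_query) -> int:
    """Return the dimensionality of whatever embedder is in use.
    embed_query: any callable mapping a string to an embedding vector."""
    return len(embed_query("probe"))

# Stand-in embedder mimicking a 13B llama model's output size.
fake_13b_embedder = lambda text: [0.0] * 5120
print(detect_embedding_size(fake_13b_embedder))  # 5120
```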

EinGeist commented 1 year ago

> @ai8hyf it would be great if you could share how you made it run fully locally as soon as you get it done :) thx
>
> Just check this out: https://python.langchain.com/en/latest/use_cases/agents/baby_agi.html I think it solves all the problems here.
>
> It creates more problems, because it still needs the OpenAI API and does not explain how to use local llama models instead.
>
> I put together a very rough solution for langchain+llama+babyAGI. See my fork here: https://github.com/ai8hyf/babyagi/blob/main/langChain-llama.py

First I would love to thank you for this implementation.

I have tried it, but it didn't generate any response at all. I'm using the gpt4all-lora-quantized-ggml.bin model.

Any idea why is that?