Open qwaszaq opened 4 months ago
Great project. Was trying to get this going with Ollama and local LLM but Ollama doesn't utilize an API key to put in the secrets file. Is there a work around or a way get this working?
Did you try installing ollama model and serving it. Then using eda gpt to choose the model? Is it not working?
Ollama is being served. Initially, I got an error about a missing secrets.toml file after launching Streamlit, so I created an empty file. After launching EDA-GPT, I get this on the home screen:
KeyError: 'st.secrets has no key "HUGGINGFACEHUB_API_TOKEN". Did you forget to add it to secrets.toml or the app settings on Streamlit Cloud? More info: https://docs.streamlit.io/streamlit-cloud/get-started/deploy-an-app/connect-to-data-sources/secrets-management'
On the EDA GPT screen, a slightly different error appears:
KeyError: 'st.secrets has no key "TAVILY_API_KEY". Did you forget to add it to secrets.toml or the app settings on Streamlit Cloud? More info: https://docs.streamlit.io/streamlit-cloud/get-started/deploy-an-app/connect-to-data-sources/secrets-management'
You need to fill in the API keys for the services you are using, otherwise it will error. If you select HuggingFace but don't have an API key, it will raise an error. If you use the search function so the LLM can search the internet, the Tavily API will throw an error. I suggest you create these keys; they are free.
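For anyone hitting the same errors, here is a minimal `.streamlit/secrets.toml` sketch. The `HUGGINGFACEHUB_API_TOKEN` and `TAVILY_API_KEY` names are taken from the error messages above; `OPENAI_API_KEY` is an assumed name for the OpenAI entry, so check the project's own config for the exact keys it reads:

```toml
# .streamlit/secrets.toml -- every key the app looks up must exist,
# even if you leave services you don't use as empty strings.
OPENAI_API_KEY = ""
HUGGINGFACEHUB_API_TOKEN = ""
TAVILY_API_KEY = ""
```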
I should add the links for creating all the API keys to the README. That way anyone can use it.
OK, I was not aware I needed to create keys for all the services in order to use Ollama locally. I never get to the point where I can select any specific service. I added an OpenAI key to the secrets file but still get the HuggingFace error.
Added a Hugging Face token, still an error. Just to confirm: do you need a token for every service listed? The error is for a different one. Sorry for all the questions; I'm starting to think this project is over my head. Thank you for your help.
Since I made the project, I know how to handle it; from a user's point of view it's difficult. Let's connect somewhere and tell me all the problems. I will fix those and push a new version. Are you on WhatsApp?
Sent you an email; I am on WhatsApp.
Bingo! Got it. When I created the secrets.toml file, I did not put in all the blank entries it was looking for, so Streamlit was erroring. I entered all the lines with empty keys and it worked fine. Thanks for your help.
That's great. I feel like I should turn it into a web app or a package instead, so you can integrate the workflow into your own app. Since I had little time, I used Streamlit for a quicker interface. Tell me if Ollama works after you run it on your device; otherwise I'll fix it.
It's working just fine. I've been messing with the config to try to push some larger datasets through and see what happens. Very nice project.
Thank you. It works well on good or decent systems; with low processing power it's challenging to optimize. I tried to use all available CPU cores to process the files when creating embeddings. It can handle 100-200 pages of dense text decently. Not sure about 1000 pages; you can try it and let me know.
How are you adding the Ollama server URL? I don't see any documentation.
Thank you for taking an interest. It uses LangChain's Ollama integration. I don't have space to run Ollama models on my device, so it would be great if you could test the Ollama feature. You need to run the app and choose an Ollama model you have downloaded.
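Since the app just talks to whatever Ollama server is already running locally, a quick stdlib-only check can confirm the server is up and list the pulled models before you pick one in the UI. This is a sketch; the default port 11434 and the `/api/tags` endpoint come from Ollama's REST API:

```python
import json
import urllib.request
import urllib.error

def list_ollama_models(base_url="http://localhost:11434"):
    """Return names of locally pulled Ollama models, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=3) as resp:
            data = json.load(resp)
    except (urllib.error.URLError, OSError):
        return None  # server not running or wrong URL/port
    return [m["name"] for m in data.get("models", [])]
```

If this returns `None`, start the server with `ollama serve` (and `ollama pull <model>` for the model you want) before choosing Ollama inside EDA-GPT.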