jonfairbanks / local-rag

Ingest files for retrieval augmented generation (RAG) with open-source Large Language Models (LLMs), all without 3rd parties or sensitive data leaving your network.
GNU General Public License v3.0

TypeError: Object is not an iterable and could not be converted to one. Object: False #40

Closed RMasamune closed 6 months ago

RMasamune commented 8 months ago

When I open localhost:8051 in my browser, it shows the error output below. I am using Docker on Windows.

```
TypeError: Object is not an iterable and could not be converted to one. Object: False
Traceback:
File "/.venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 542, in _run_script
    exec(code, module.__dict__)
File "/home/appuser/main.py", line 31, in <module>
    sidebar()
File "/home/appuser/components/sidebar.py", line 16, in sidebar
    settings()
File "/home/appuser/components/tabs/settings.py", line 23, in settings
    st.selectbox(
File "/.venv/lib/python3.10/site-packages/streamlit/runtime/metrics_util.py", line 397, in wrapped_func
    result = non_optional_func(*args, **kwargs)
File "/.venv/lib/python3.10/site-packages/streamlit/elements/widgets/selectbox.py", line 198, in selectbox
    return self._selectbox(
File "/.venv/lib/python3.10/site-packages/streamlit/elements/widgets/selectbox.py", line 237, in _selectbox
    opt = ensure_indexable(options)
File "/.venv/lib/python3.10/site-packages/streamlit/type_util.py", line 670, in ensure_indexable
    it = ensure_iterable(obj)
File "/.venv/lib/python3.10/site-packages/streamlit/type_util.py", line 661, in ensure_iterable
    raise TypeError(
```
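For context, the traceback boils down to `st.selectbox` receiving `False` instead of a list of model names. The sketch below (illustrative only, not the project's actual code) reproduces the same error when a failed model lookup returns `False`:

```python
# Minimal sketch (assumption: not the real Local RAG code) of how this
# TypeError arises when the Ollama model lookup fails and returns False.
import streamlit as st

def get_models():
    # Placeholder for the Ollama model-list call; returns False on failure here.
    return False

models = get_models()

# Passing False (not a list) to st.selectbox raises:
# TypeError: Object is not an iterable and could not be converted to one. Object: False
st.selectbox("Model", options=models)
```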

karnikkanojia commented 8 months ago

I experienced the same error. It just means your application can't reach your Ollama instance. @RMasamune

jonfairbanks commented 8 months ago

If running within Docker on Windows, you may need to set additional parameters in your docker-compose. See the bottom of the setup guide for more details: https://github.com/jonfairbanks/local-rag/blob/develop/docs/setup.md
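For reference, the Windows-specific docker-compose addition typically looks like the snippet below (the service name is illustrative; follow the linked setup guide for the project's exact configuration):

```yaml
# Illustrative docker-compose override; the service name "local-rag" is an
# assumption -- see the project's setup guide for the exact values.
services:
  local-rag:
    extra_hosts:
      # Lets the container reach services on the Windows/WSL host,
      # e.g. an Ollama server listening on the host's port 11434.
      - "host.docker.internal:host-gateway"
```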

RMasamune commented 8 months ago

Sorry for the inconvenience. I added the extra host in the YML and restarted the application, but it still has this problem. When I open localhost:11434 in my browser, it says "Ollama is running", and the process listening on 11434 is wslrelay.exe (which means it was started by Docker, because Docker throws an error when I try to terminate it). Is there a possible misconfiguration in my Docker Desktop or some other setting I need to try?

jonfairbanks commented 8 months ago

The stack trace indicates that the app was not able to get a list of models from Ollama. Either there is a communication problem between Local RAG and Ollama, or you have not yet downloaded any models in Ollama.
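A quick way to check both possibilities from the host is to query Ollama's `/api/tags` endpoint, which lists the locally downloaded models (a sketch; adjust the URL if Ollama runs elsewhere):

```python
# Quick connectivity/model check against Ollama's REST API (sketch).
# Adjust OLLAMA_URL if Ollama is not listening on localhost:11434.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/tags"  # lists locally pulled models

with urllib.request.urlopen(OLLAMA_URL, timeout=5) as resp:
    data = json.load(resp)

models = [m["name"] for m in data.get("models", [])]
if not models:
    print("Ollama is reachable, but no models are downloaded (run `ollama pull <model>`).")
else:
    print("Available models:", models)
```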

raetsch commented 7 months ago

For me, the fix was to enter the external IP address instead of localhost. I am running Ollama in a container on the same (Linux) machine.

CosmicMac commented 7 months ago

> If running within Docker on Windows, you may need to set additional parameters in your docker-compose. See the bottom of the setup guide for more details: https://github.com/jonfairbanks/local-rag/blob/develop/docs/setup.md

Then set the Ollama Endpoint to http://host.docker.internal:11434 instead of http://localhost:11434 in the Local RAG settings.
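To confirm the container can actually reach Ollama through host.docker.internal before changing the setting, something like the following can be run inside the Local RAG container (the compose service name below is illustrative):

```python
# Sketch: verify host.docker.internal:11434 is reachable from inside the container.
# Run via e.g. `docker compose exec <service> python check_ollama.py`
# (the service name depends on your compose file).
import urllib.request

ENDPOINT = "http://host.docker.internal:11434"

try:
    with urllib.request.urlopen(ENDPOINT, timeout=5) as resp:
        print(resp.read().decode())  # Ollama's root endpoint replies "Ollama is running"
except OSError as exc:
    print(f"Could not reach {ENDPOINT}: {exc}")
```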