Closed: Yuvraj-Dhepe closed this 2 months ago
cmd_line_opt.txt: the command-line output with the debug option enabled
This is an issue with your Dask/distributed version. Looking at the backtrace
File "~/micromamba/envs/audio_dl/lib/python3.9/site-packages/distributed/worker_memory.py", line 56, in <module>
WorkerDataParameter: TypeAlias = Union[
and at the code (https://github.com/dask/distributed/blob/main/distributed/worker_memory.py#L56), you can find the issue tracked here: https://github.com/dask/distributed/issues/8349
Upgrading to Python 3.9.2 (or later) seems to be the solution.
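If my reading of dask/distributed#8349 is right, only Python 3.9.0 and 3.9.1 are hit by the typing bug behind that TypeAlias line, so a quick version check tells you whether an environment needs the upgrade. A minimal sketch (the helper name is mine, not part of any library):

```python
import sys

def typealias_bug_affected(version_info=sys.version_info):
    # Python 3.9.0 and 3.9.1 ship a typing bug that makes
    # `WorkerDataParameter: TypeAlias = Union[...]` in
    # distributed/worker_memory.py fail at import time; 3.9.2+ is fine.
    return (3, 9, 0) <= tuple(version_info[:3]) < (3, 9, 2)

print("affected" if typealias_bug_affected() else "OK")
```

Running this inside the micromamba environment from the traceback (Python 3.9.0, per the troubleshoot output below) would print "affected".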
Perfect, thanks @jhgoebbert! It worked out.
I was able to use the chat option; however, the /learn option seems to throw errors.
/learn chat handler resolved in 18 ms.
2024-07-27 22:13:28,026 - distributed.worker - WARNING - Compute Failed
Key: embed_chunk-b36c3fe5-b881-4826-9ca9-d4ba836341a3
State: executing
Function: execute_task
args: ((<function embed_chunk at 0x7f1439475ca0>, Document(metadata={'path': '~/work/Projects/Audio_DL/dcase2023-audio-retrieval/utils/data_utils.py', 'sha256': b'\xa9!\x9d\x88e\xef\x966R\xbc\xb0\x8d\x9bf\xfb$\x1am\xa9\xd3\xc1\xf8PJ\xa1\x84-\x82\xc7N\x10\x87', 'extension': '.py'}, page_content='import os\nimport pickle\nfrom ast import literal_eval\n\nimport h5py\nimport nltk\nimport numpy as np\nimport pandas as pd\nimport torch\nimport torch.nn.functional as F\nfrom torch.utils.data import Dataset\n\nstopwords = nltk.corpus.stopwords.words("english")\n\n\nclass Vocabulary(object):\n\n def __init__(self):\n self.key2vec = {}\n self.key2id = {}\n self.id = 0\n\n def add_key(self, key, key_vector):\n if key not in self.key2id:\n self.key2vec[key] = key_vector\n self.key2id[key] = self.id\n self.id += 1\n\n def __call__(self, key):\n return self.key2id[key]\n\n def __len__(self):\n return len(self
kwargs: {}
Exception: 'ValueError("Error raised by inference endpoint: HTTPConnectionPool(host=\'localhost\', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError(\'<urllib3.connection.HTTPConnection object at 0x7f142a5f1b50>: Failed to establish a new connection: [Errno 111] Connection refused\'))")'
Traceback: ' File "~/micromamba/envs/audio_dl/lib/python3.9/site-packages/jupyter_ai/document_loaders/directory.py", line 165, in embed_chunk\n embedding = em.embed_query(content)\n File "~/micromamba/envs/audio_dl/lib/python3.9/site-packages/langchain_community/embeddings/ollama.py", line 224, in embed_query\n embedding = self._embed([instruction_pair])[0]\n File "~/micromamba/envs/audio_dl/lib/python3.9/site-packages/langchain_community/embeddings/ollama.py", line 199, in _embed\n return [self._process_emb_response(prompt) for prompt in iter_]\n File "~/micromamba/envs/audio_dl/lib/python3.9/site-packages/langchain_community/embeddings/ollama.py", line 199, in <listcomp>\n return [self._process_emb_response(prompt) for prompt in iter_]\n File "~/micromamba/envs/audio_dl/lib/python3.9/site-packages/langchain_community/embeddings/ollama.py", line 170, in _process_emb_response\n raise ValueError(f"Error raised by inference endpoint: {e}")\n'
2024-07-27 22:13:28,026 - distributed.worker - WARNING - Compute Failed
Key: embed_chunk-c20dc18c-5eac-4fc8-b0c8-8bc59710e107
State: executing
Function: execute_task
args: ((<function embed_chunk at 0x7f1439475ca0>, Document(metadata={'path': '~/work/Projects/Audio_DL/dcase2023-audio-retrieval/utils/data_utils.py', 'sha256': b'\xa9!\x9d\x88e\xef\x966R\xbc\xb0\x8d\x9bf\xfb$\x1am\xa9\xd3\xc1\xf8PJ\xa1\x84-\x82\xc7N\x10\x87', 'extension': '.py'}, page_content='class AudioTextDataset(Dataset):\n\n def __init__(self, **kwargs):\n self.audio_data = kwargs["audio_data"]\n self.text_data = kwargs["text_data"]\n self.text_vocab = kwargs["text_vocab"]\n self.text_level = kwargs["text_level"]\n\n def __getitem__(self, index):\n item = self.text_data.iloc[index]\n\n audio_vec = torch.as_tensor(self.audio_data[item["fid"]][()])\n\n text_vec = None\n\n if self.text_level == "word":\n text_vec = torch.as_tensor([self.text_vocab(key) for key in item["tokens"] if key not in stopwords])\n\n elif self.text_level == "sentence":\n text_vec = torch.as_tensor([self.text_vocab(
kwargs: {}
Exception: 'ValueError("Error raised by inference endpoint: HTTPConnectionPool(host=\'localhost\', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError(\'<urllib3.connection.HTTPConnection object at 0x7f142a5f3670>: Failed to establish a new connection: [Errno 111] Connection refused\'))")'
Traceback: ' File "~/micromamba/envs/audio_dl/lib/python3.9/site-packages/jupyter_ai/document_loaders/directory.py", line 165, in embed_chunk\n embedding = em.embed_query(content)\n File "~/micromamba/envs/audio_dl/lib/python3.9/site-packages/langchain_community/embeddings/ollama.py", line 224, in embed_query\n embedding = self._embed([instruction_pair])[0]\n File "~/micromamba/envs/audio_dl/lib/python3.9/site-packages/langchain_community/embeddings/ollama.py", line 199, in _embed\n return [self._process_emb_response(prompt) for prompt in iter_]\n File "~/micromamba/envs/audio_dl/lib/python3.9/site-packages/langchain_community/embeddings/ollama.py", line 199, in <listcomp>\n return [self._process_emb_response(prompt) for prompt in iter_]\n File "~/micromamba/envs/audio_dl/lib/python3.9/site-packages/langchain_community/embeddings/ollama.py", line 170, in _process_emb_response\n raise ValueError(f"Error raised by inference endpoint: {e}")\n'
These are my Jupyter AI settings; I also do have the embedding model running at the configured host and port.
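When the traceback shows [Errno 111] Connection refused, it can help to confirm the endpoint is actually reachable from the same machine and network namespace that runs the JupyterLab server (this matters especially with Docker or WSL, where "localhost" differs between contexts). A small stdlib-only check, with the URL being whatever you entered in the Jupyter AI settings:

```python
import urllib.request
import urllib.error

def can_reach(base_url, timeout=3):
    # Ollama answers plain HTTP on its root path; a refused connection
    # here is exactly the [Errno 111] seen in the worker traceback.
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

# adjust the URL to match your Jupyter AI settings
print(can_reach("http://localhost:11434"))
```

If this prints False from the environment running JupyterLab while the model is demonstrably up, the request is being sent to the wrong host or port.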
If you check the stack trace carefully, you can see that this is more of an issue with LangChain / Docker. A quick look there leads, for example, to this: https://github.com/langchain-ai/langchain/issues/19074 ("The issue is with the docker compose configuration.")
Hi,
I'm encountering a similar issue with Ollama running behind a reverse proxy. Interestingly, I haven't configured port 11434 at all and I'm still having this error:
ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fac48882d80>: Failed to establish a new connection: [Errno 111] Connection refused'))
It seems that jupyter-ai is ignoring the base_url specified in the interface settings and is instead using the hardcoded default base_url: str = "http://localhost:11434":
~$ find . -type f -name "*.py" | xargs grep "localhost:11434" --color
./.conda/pkgs/langchain-community-0.2.12-pyhd8ed1ab_0/site-packages/langchain_community/llms/ollama.py: base_url: str = "http://localhost:11434"
./.conda/pkgs/langchain-community-0.2.12-pyhd8ed1ab_0/site-packages/langchain_community/embeddings/ollama.py: base_url: str = "http://localhost:11434"
./.conda/envs/MyPyEnv/lib/python3.12/site-packages/langchain_community/llms/ollama.py: base_url: str = "http://localhost:11434"
./.conda/envs/MyPyEnv/lib/python3.12/site-packages/langchain_community/embeddings/ollama.py: base_url: str = "http://localhost:11434"
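For context on what the grep above shows: that string is only a class-level default, so by itself it is harmless; the real question is whether jupyter-ai forwards the configured base_url when it constructs the client. A minimal pure-Python sketch of the default-vs-override pattern (FakeOllamaSettings is a hypothetical stand-in, not the actual LangChain class):

```python
from dataclasses import dataclass

@dataclass
class FakeOllamaSettings:
    # stand-in for the hardcoded default found by the grep above;
    # a class default is only a fallback and is replaced whenever
    # the caller passes base_url explicitly
    base_url: str = "http://localhost:11434"

default = FakeOllamaSettings()
overridden = FakeOllamaSettings(base_url="http://myproxy:8080")
print(default.base_url, overridden.base_url)
```

So if the configured value never reaches the constructor, the fallback "localhost:11434" is used, which matches the behavior described here.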
Update: it seems this issue is addressed in #902.
This was raised in issue #902 and is fixed by PR #904; closing this issue.
Description
I installed the Jupyter AI extension in a mamba environment using pip install jupyter-ai; JupyterLab was installed automatically with this command. However, when I start JupyterLab with
jupyter lab --no-browser
I get the following errors in the terminal.
Reproduce
1) Create a simple mamba environment: mamba create -f environment.yaml
4) Start the jupyter lab server:
jupyter lab --no-browser
5) Then open the chat widget to check whether chat works
Expected behavior
Context
Linux <user-name> 5.15.150.1-microsoft-standard-WSL2+ #1 SMP Sun Apr 7 22:57:26 CEST 2024 x86_64 x86_64 x86_64 GNU/Linux
Troubleshoot Output
$PATH: /home//micromamba/envs/audio_dl/bin
/home//.local/bin
/home//.bun/bin
/home/linuxbrew/.linuxbrew/bin
/home/linuxbrew/.linuxbrew/sbin
/home//micromamba/condabin
/usr/local/cuda-12.4/bin
/usr/local/sbin
/usr/local/bin
/usr/sbin
/usr/bin
/sbin
/bin
/usr/games
/usr/local/games
/usr/lib/wsl/lib
/mnt/c/Program Files/Microsoft MPI/Bin/
/mnt/c/WINDOWS/system32
/mnt/c/WINDOWS
/mnt/c/WINDOWS/System32/Wbem
/mnt/c/WINDOWS/System32/WindowsPowerShell/v1.0/
/mnt/c/WINDOWS/System32/OpenSSH/
/mnt/c/Program Files (x86)/NVIDIA Corporation/PhysX/Common
/mnt/c/Program Files/NVIDIA Corporation/NVIDIA App/NvDLISR
/mnt/c/Program Files/usbipd-win/
/mnt/d/WSL/work/Git/cmd
/mnt/c/Program Files/PowerShell/7/
/mnt/c/Users//AppData/Local/Microsoft/WindowsApps
/mnt/c/Users//AppData/Local/Programs/Microsoft VS Code/bin
/mnt/c/Users//AppData/Local/Programs/oh-my-posh/bin
/mnt/c/Users//AppData/Local/GitHubDesktop/bin
/mnt/c/Users//AppData/Local/Programs/Ollama
/mnt/d/WSL/work/micromamba
/snap/bin
sys.path: /home//micromamba/envs/audio_dl/bin
/home//micromamba/envs/audio_dl/lib/python39.zip
/home//micromamba/envs/audio_dl/lib/python3.9
/home//micromamba/envs/audio_dl/lib/python3.9/lib-dynload
/home//micromamba/envs/audio_dl/lib/python3.9/site-packages
sys.executable: /home//micromamba/envs/audio_dl/bin/python3.9
sys.version: 3.9.0 | packaged by conda-forge | (default, Nov 26 2020, 07:57:39) [GCC 9.3.0]
platform.platform(): Linux-5.15.150.1-microsoft-standard-WSL2+-x86_64-with-glibc2.35
which -a jupyter: /home//micromamba/envs/audio_dl/bin/jupyter
pip list: Package Version
Command Line Output
Browser Output