severian42 / Mycomind-Daemon-Ollama-Mixture-of-Memory-RAG-Agents

Mycomind Daemon: A mycelium-inspired, advanced Mixture-of-Memory-RAG-Agents (MoMRA) cognitive assistant that combines multiple AI models with memory, RAG and Web Search for enhanced context retention and task management.

No module named 'llama_cpp_agent' #1

Open fahdmirza opened 1 month ago

fahdmirza commented 1 month ago

Hi, I am getting the following error even after installing llama_cpp_agent:

```
(moa) Ubuntu@0136-ict-prxmx50056:~/Mycomind-Daemon-Ollama-Mixture-of-Memory-RAG-Agents$ python omoa.py
Traceback (most recent call last):
  File "/home/Ubuntu/Mycomind-Daemon-Ollama-Mixture-of-Memory-RAG-Agents/omoa.py", line 13, in <module>
    from llama_cpp_agent.agent_memory.memory_tools import AgentCoreMemory, AgentRetrievalMemory, AgentEventMemory
ModuleNotFoundError: No module named 'llama_cpp_agent'
```

severian42 commented 1 month ago

Thanks for catching this! The installation for that module got lost between my different versions. It's been updated in requirements.txt, but you can just run `pip install llama-cpp-agent` to get it installed. The memory uses the https://github.com/Maximilian-Winter/llama-cpp-agent library.
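
For reference, a minimal sketch of the fix (assuming the `moa` conda env from your trace is active):

```bash
# Install the missing dependency directly
pip install llama-cpp-agent

# Or reinstall everything from the updated requirements file
pip install -r requirements.txt
```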

fahdmirza commented 1 month ago

No worries. Yes, I also tried that, and the installation of llama-cpp-agent fails:

```
Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [98 lines of output]
      scikit-build-core 0.9.8 using CMake 3.30.0 (wheel)
      Configuring CMake...
      loading initial cache file /tmp/tmpcbo960jh/build/CMakeInit.txt
      -- The C compiler identification is GNU 12.3.0

      ... reference to `GOMP_single_start@GOMP_1.0'
      /home/Ubuntu/miniconda3/envs/moa/compiler_compat/ld: vendor/llama.cpp/ggml/src/libggml.so: undefined reference to `omp_get_num_threads@OMP_1.0'
      collect2: error: ld returned 1 exit status
      ninja: build stopped: subcommand failed.

      *** CMake build failed
      [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
(moa) Ubuntu@0136-ict-prxmx50056:~/Mycomind-Daemon-Ollama-Mixture-
```

severian42 commented 1 month ago

Hmm, interesting. So sorry for the issues. When I get back to my desktop I can debug better; I never ran into this during development. I use a Mac M2, so not sure if that makes any difference. I'll work on it this evening and see if I can find a fix. One possible workaround is a direct install of llama-cpp-python, since llama-cpp-agent uses it internally.
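
In the meantime, here's a sketch of that direct-install idea. The OpenMP undefined references in your log point at the conda `compiler_compat` linker, so disabling OpenMP or skipping the build entirely may help; the `GGML_OPENMP` CMake option and the prebuilt-wheel index below are from the llama.cpp / llama-cpp-python docs as I remember them, so treat them as assumptions rather than a confirmed fix:

```bash
# Option 1: build llama-cpp-python with OpenMP disabled, sidestepping the
# GOMP/OMP undefined-reference link errors (GGML_OPENMP is a llama.cpp
# CMake option; older releases used LLAMA_*-prefixed flags instead)
CMAKE_ARGS="-DGGML_OPENMP=OFF" pip install --no-cache-dir llama-cpp-python

# Option 2: skip compiling entirely and pull a prebuilt CPU-only wheel
pip install llama-cpp-python \
  --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu

# Then install the agent library on top
pip install llama-cpp-agent
```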

fahdmirza commented 1 month ago

All good mate, thanks. I was just looking to review this awesome concept for my channel at https://www.youtube.com/@fahdmirza. I believe I have already covered your other tools like Nexus, Vodalus, etc. :)

severian42 commented 1 month ago

So I've been messing around trying to debug the issue, and I can't seem to replicate it on my end; everything installs fine from a fresh env on my setup. It could be something within the llama-cpp-python library itself that is breaking Ubuntu builds, but I see from your channel that you've already encountered that in the past and worked around it. I didn't implement any version control, but the project may need a `git init` on your system so the build can retrieve version information from the Git repository; this is just a wild guess on my end. I really want to get this squared away for you so you can give it a test run!
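
In case that version-metadata guess pans out, here's what I mean (cloning with git instead of downloading a zip, so build tooling that reads `.git` metadata can find it; again, an assumption, not a confirmed fix):

```bash
# Clone via git so the repo carries its .git metadata
git clone https://github.com/severian42/Mycomind-Daemon-Ollama-Mixture-of-Memory-RAG-Agents.git
cd Mycomind-Daemon-Ollama-Mixture-of-Memory-RAG-Agents
pip install -r requirements.txt
```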

Also, many thanks for covering my work on your channel! I appreciate the interest and your willingness to share it with your audience.