Closed mhioi closed 3 days ago
@mhioi thanks a lot for the kind words & feedback!
are you using Wayland?
also it seems like some version issue with cuda, not super familiar (i'm on mac), you probably have to change your drivers or something, would be good to document this
would love any PR to make the vercel ai chatbot more useful for people to try quickly 🙏
Thank you for the reply! I'm using bspwm as my WM, and I think the error occurs on the Firefox window (i.e. screenpipe probably couldn't capture Firefox on my setup). I'm currently handling the response from the screenpipe fetch; hopefully I can figure out a solution. Unfortunately I'm unfamiliar with TypeScript and Rust, so I'm trying to get the Vercel AI chatbot working with Ollama; as soon as I've accomplished that, I'll send the PR.
Hi! Yesterday I managed to solve both issues: the CUDA one and getting Vercel AI to work with Ollama. The CUDA issue was solved by downloading nvcc 11.8 and adding its path to $PATH:
here is the link for nvcc-11.8
and here is the path you should add:
export PATH=/path-to/downloaded-nvcc/cuda-<version>/bin:$PATH
export LD_LIBRARY_PATH=/path-to/downloaded-nvcc/cuda-<version>/lib:$LD_LIBRARY_PATH
Unfortunately I'm busy today, and the Ollama-Vercel code is still messy (it needs notes and comments). Also, the Vercel-Ollama setup works with text only and doesn't send the captured screenshots or videos (it needs some tweaks; the original ChatGPT Vercel AI chatbot doesn't have this feature either, I think). I'll try to clean up and document the code, then send the PR tomorrow. Since I'm new to GitHub, I'd appreciate it if you could give me some time to figure out the PR process 😁 Thanks a lot!
Hi,
Thanks to the Mediar-AI team, I have managed to get this piece of art running. However, while building the Screenpipe engine, I encountered several issues that I think you might want to know about. I tried the solutions I knew, but they didn't work.
While building the Screenpipe with the flag:
I received the following error:
I suspect this issue is related to the CUDA version, but I'm not entirely sure. To test it, I built without the features flag to see if it would work, planning to resolve the CUDA build issue after I got Screenpipe working.
After running Screenpipe, I encountered the following error intermittently:
I'm unsure why this is happening. I tested the functionality using curl (as shown in the examples section), and it seemed to work fine. I then wanted a web UI for interaction, so I turned to Vercel AI.
Because I don't have access to OpenAI, I tried changing the base_url, but that didn't work. Since I have Ollama running locally, I preferred to get Ollama working with Vercel AI instead of using OpenAI.
I modified the structure of lib/chat/actions.tsx based on a repository from Mr. Louis (ollama-ai-provider-fix). I managed to get Ollama working up to the point of querying the user prompt! Yay!
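For anyone attempting the same swap, here is a minimal sketch of what pointing the chatbot at a local Ollama server can look like. This is an assumption-laden illustration, not the code from my branch: it assumes the community `ollama-ai-provider` package and the Vercel AI SDK's `streamText` (v3-style API); the model name and base URL are examples.

```typescript
// Sketch only: assumes the community `ollama-ai-provider` package and
// the Vercel AI SDK (`ai`, v3-style API). Model name and baseURL are
// illustrative; a running Ollama server is required.
import { createOllama } from 'ollama-ai-provider';
import { streamText } from 'ai';

// Point the provider at the local Ollama server (default port 11434).
const ollama = createOllama({
  baseURL: 'http://localhost:11434/api',
});

export async function askLocalModel(prompt: string): Promise<void> {
  const result = await streamText({
    model: ollama('llama3'), // any model you have pulled locally
    prompt,
  });
  // Stream tokens back to the caller / UI as they arrive.
  for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
  }
}
```

The point of the change is confined to the provider: the rest of the chatbot's streaming logic can stay as it is, since both providers expose the same model interface to the SDK.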
However, I then encountered the following error:
In this example, the AI returned the queries as a string: ["meeting", "project", "deadline"], but it should have been an actual JSON array!
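Local models often emit tool arguments as a JSON-encoded string rather than a real array, so one workaround is to normalize the value before using it. A sketch, not the actual fix in the chatbot; `coerceQueries` is a hypothetical helper name:

```typescript
// Hypothetical helper: accept either a real array or a JSON string
// like '["meeting", "project", "deadline"]' and always return string[].
function coerceQueries(value: unknown): string[] {
  if (Array.isArray(value)) {
    return value.map(String);
  }
  if (typeof value === 'string') {
    try {
      const parsed = JSON.parse(value);
      if (Array.isArray(parsed)) return parsed.map(String);
    } catch {
      // Not valid JSON; fall through and treat it as a single query.
    }
    return [value];
  }
  throw new TypeError('queries must be an array or a JSON string');
}

console.log(coerceQueries('["meeting", "project", "deadline"]'));
// → [ 'meeting', 'project', 'deadline' ]
```

Calling this at the boundary where the model's tool-call arguments are received keeps the rest of the query pipeline agnostic to which shape the model produced.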
I have been trying to fix these issues for about three days (two days specifically for the third one), and I have no idea where the problem lies. I would greatly appreciate any guidance or solutions you could provide.
Again, thank you to the great team at Mediar-AI!