Luxadevi / Ollama-Colab-Integration

Jupyter Notebooks for Ollama integration
Apache License 2.0

Troubleshooting Kaggle Integration in Ollama Colab Integration Repository #4

Open Fatin-Ishraq opened 4 months ago

Fatin-Ishraq commented 4 months ago

Hello,

I am currently facing challenges with integrating the Ollama Colab Integration repository with Kaggle. Although the repository mentions compatibility with Kaggle.com, it lacks detailed instructions or notebooks for seamless integration.

Despite my efforts to adapt the code and directory structure to Kaggle's environment, I have encountered persistent issues in achieving the desired functionality. As a result, I am seeking the assistance of the repository creator to collaborate on resolving these issues and ensuring successful integration with Kaggle.

I am eager to work together with the creator to overcome these obstacles and optimize the repository's performance on Kaggle. If possible, I would appreciate direct contact through my GitHub account or any linked social media platforms to facilitate communication and collaboration.

Furthermore, I have established a separate GitHub repository titled "Ollama Colab Integration for Kaggle," housing the modified versions of the files. You can access it here.

Thank you for considering my request for assistance.

Warm regards, Md. Fatin Ishraq (facebook)

Luxadevi commented 4 months ago

Hey,

Cool to hear there are more Kagglers out there! We never really dropped Kaggle support, but the scope and use case shifted while working on the Companion part. Kaggle doesn't allow proxy-passing or exposing public endpoints via Cloudflare, blocks a lot of connections, and kills the notebook instance whenever you launch certain parts of Gradio or Streamlit. I use Kaggle with Ollama a lot myself, but in my experience, also running the UI within that environment hit a couple of bottlenecks. So we kept the old way of running Ollama within a notebook, defined in this Kaggle notebook:

https://github.com/Luxadevi/Ollama-Colab-Integration/blob/main/Old%20Version/ollama-publicV2kaggle.ipynb
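For reference, the core pattern in a Kaggle notebook cell usually looks roughly like the sketch below. This is only an outline under my assumptions (the official Ollama installer and "llama3" as a stand-in model name), not the exact contents of the linked notebook:

import subprocess, time

# Install Ollama via the official installer, then start the server in the background.
subprocess.run("curl -fsSL https://ollama.com/install.sh | sh", shell=True, check=True)
server = subprocess.Popen(["ollama", "serve"])
time.sleep(5)  # give the server a moment to come up on localhost:11434

# Pull a model; "llama3" is just an example name.
subprocess.run(["ollama", "pull", "llama3"], check=True)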

That notebook uses R0lf's NAT Python tunneling script, which doesn't get falsely flagged as a remote connection and generally stays up. Then you have the Ollama endpoint available, and what I personally do is run Companion somewhere on a Linux install, since we can kick it off with a one-liner:

curl https://raw.githubusercontent.com/Luxadevi/Ollama-Companion/main/install.sh | sh
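If you want to talk to the exposed endpoint directly instead of going through Companion, the standard Ollama REST API works like any local instance. A minimal sketch, assuming the stock /api/generate route, the requests library, and a placeholder URL where your tunneled address goes:

import json
import requests

OLLAMA_URL = "http://localhost:11434"  # replace with your tunneled public address

# Stream a completion from Ollama's generate endpoint.
resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?"},  # example model name
    stream=True,
    timeout=300,
)
for line in resp.iter_lines():
    if not line:
        continue
    chunk = json.loads(line)
    print(chunk.get("response", ""), end="", flush=True)
    if chunk.get("done"):
        break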

There are some old parts of this notebook that can still be optimized. I removed those myself, but my own copy is filled with other testing things and my variables, like the IP and ports for the backend, so I don't need to set that up again and again.

If you would like to try this another way, feel free; I'm always open to a PR if it works well generally. As said, I did run Streamlit over on Kaggle, but the all-around performance was too low when loading in big models, checking hashes and the like while running inference, and so on. You can also use Ngrok, as so many do, but Ngrok kicked me out or closed connections too many times to even consider it :)

I hope I informed you well enough with this; I will make the mentioned optimizations when I'm available.

Kind regards, Luxa

Luxadevi commented 4 months ago

Additional info - Installer:

Within the installer there are some options to make your install add or skip certain features; these can be passed with a flag, but they are not documented. I have moved on from building the UI part and am focusing on different projects right now, so not everything is as up to date as it could be, partly because Streamlit was not as well featured a while back as it is now. The last thing I want to mention: there will probably be an update soon, since my friend is building upon Companion and adding fine-tuning and more quantization options.