shivamordanny opened this issue 5 months ago
Hi shivamordanny,
There is a JetPack 5 version of the Docker container.
You can edit this line in the launch script for now (replace `jetrag:r36.3.0` with `jetrag:r35.4.1`) and give it a try, after you make sure you are on JetPack 5.
https://github.com/NVIDIA-AI-IOT/jetson-copilot/blob/main/launch_jetson_copilot.sh#L40
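The tag swap described above can also be scripted with `sed`. The snippet below demonstrates it on a stand-in file so it is self-contained; the variable name is an assumption, so on a real setup you would run the `sed` line against `launch_jetson_copilot.sh` and check the actual line it contains.

```shell
# Demonstrate swapping the JetPack 6 container tag for the JetPack 5 one.
# A stand-in file is used here; the CONTAINER_IMAGE variable name is assumed.
cat > launch_demo.sh <<'EOF'
CONTAINER_IMAGE="jetrag:r36.3.0"
EOF

# Replace the r36.3.0 tag with r35.4.1 in place
sed -i 's/jetrag:r36\.3\.0/jetrag:r35\.4\.1/' launch_demo.sh

grep jetrag launch_demo.sh   # prints CONTAINER_IMAGE="jetrag:r35.4.1"
```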
Hi shivamordanny, I just updated the launch scripts, so you don't need to manually edit them. Please go ahead and give it a try. A quick test on my Jetson Xavier NX running JetPack 5.1.1 shows that Jetson Copilot runs; however, Ollama seemed to run without GPU acceleration, so it was very slow. I will look into this issue separately.
So, the Ollama server in the `jetrag` container was configured correctly after all.
It was just that the tight memory of the Xavier NX allowed only a portion of the Llama3 model to be loaded into GPU memory, effectively forcing it to spend a long time on the CPU.
Here are some workarounds you can try:
1) `sudo init 3` (switch to text mode to free up the memory used by the desktop GUI)
2) `ollama pull phi3` (use the smaller Phi-3 model instead of Llama3)
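Both workarounds come down to the same memory pressure, so it can help to check available memory before picking a model. A minimal sketch; the 6000 MB threshold is a rough assumption, not a measured requirement (the Xavier NX has 8 GB shared between CPU and GPU):

```shell
# Pick an Ollama model based on currently available memory.
# The threshold below is an assumption for illustration only.
avail_mb=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
if [ "$avail_mb" -ge 6000 ]; then
  model="llama3"   # the full model has a chance of fitting in GPU memory
else
  model="phi3"     # fall back to a smaller model when memory is tight
fi
echo "Suggested model: $model"
# then: ollama pull "$model"
```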
Thank you Tokk for doing this. I was going to ask you the same questions for the Xavier. Dustin successfully backported the AI studio tool to r35.4 for the Xavier, and I will be testing it next week. I was definitely hoping to do the same for Copilot on my Xavier.
What add-ons would I need to make it work with the Xavier NX?