Closed by theurichde 3 weeks ago
Hi, @theurichde! Thanks for the detailed report!
It seems to be related to this issue from Web-LLM:
Could you please run the following two commands in the browser console and share the results here?
```javascript
await navigator.gpu.requestAdapter()
await navigator.gpu.requestAdapter({ powerPreference: 'high-performance' })
```
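To make the results easier to interpret, the two calls above can be wrapped in a small helper that prints which vendor each adapter belongs to. This is a hedged sketch for the browser DevTools console: the `adapter.info` fields (`vendor`, `description`) are assumptions based on recent Chrome builds (older builds exposed the now-removed `requestAdapterInfo()` method instead), and the guard makes it a no-op outside a WebGPU-capable browser.

```javascript
// Sketch: compare the adapter the browser picks by default with the one it
// picks when asked for high performance. Run in the browser DevTools console.
async function describeAdapters() {
  // Outside a WebGPU-capable browser there is nothing to inspect.
  if (typeof navigator === 'undefined' || !navigator.gpu) {
    return 'WebGPU not available';
  }
  const dflt = await navigator.gpu.requestAdapter();
  const perf = await navigator.gpu.requestAdapter({ powerPreference: 'high-performance' });
  return [
    `default: ${dflt?.info?.vendor ?? 'unknown'} ${dflt?.info?.description ?? ''}`,
    `high-performance: ${perf?.info?.vendor ?? 'unknown'} ${perf?.info?.description ?? ''}`,
  ].join('\n');
}

// In a browser with WebGPU this logs both adapters; elsewhere it logs
// 'WebGPU not available'.
describeAdapters().then(console.log);
```

If both lines report the same integrated GPU, the browser is not offering the dedicated card to WebGPU at all, which points to a driver/OS-level issue rather than a MiniSearch one.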
In that same thread [1], a user also posted a workaround for Windows:
You can force Chrome on Windows to use the more powerful GPU by going to Settings > Display > Graphics > Apps, adding Chrome, clicking Options, and setting it to use the dedicated GPU.
Another workaround is to use Ollama, or any other inference engine with an OpenAI-compatible API that you know works with your dedicated GPU, and connect to it through the 'Remote Server (API)' setting in the Menu.
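As a sketch of that workaround, the steps below assume Ollama is installed and use `llama3.2` only as an example model name; Ollama's server exposes an OpenAI-compatible API under `/v1` on its default port 11434.

```shell
# Download an example model (pick whichever model you actually use):
ollama pull llama3.2

# Start the server; it listens on localhost:11434 and serves an
# OpenAI-compatible API under /v1:
ollama serve

# From another terminal, verify the endpoint responds:
curl http://localhost:11434/v1/models
```

Then point the 'Remote Server (API)' setting in the Menu at `http://localhost:11434/v1`, so inference runs in Ollama (on the dedicated GPU) instead of in the browser.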
Running both commands shows only the Intel card, so this is indeed related to the WebLLM issue you shared. In the meantime, I will go with the Ollama approach.
Thank you! Keep up the good work 👋🏻
Bug description
Steps to reproduce
Expected behavior
I would like to be able to choose which GPU MiniSearch uses for inference.
Device info
Additional context
(btw: awesome project!)