Jeffser / Alpaca

An Ollama client made with GTK4 and Adwaita
https://jeffser.com/alpaca
GNU General Public License v3.0

Automatic Shutdown of Ollama for Improved Battery Life on Laptops with Discrete GPUs #264

Closed: maximmaxim345 closed this 2 months ago

maximmaxim345 commented 2 months ago

Is your feature request related to a problem? Please describe. Having this app open on laptops with discrete GPUs significantly reduces battery life. This is because ollama continuously runs in the background, preventing the GPU from entering sleep mode.

Describe the solution you'd like The internal ollama instance should automatically shut down when not in use. This would allow the GPU on NVIDIA Optimus laptops to completely power off, substantially increasing battery life.
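(To make the request concrete: the discrete GPU can only power down once the bundled 'ollama serve' subprocess exits, so the instance has to be stopped when idle and restarted lazily on the next request. Below is a rough sketch of that lifecycle, not Alpaca's actual code; class and method names are purely illustrative.)

```python
import subprocess

class LocalOllama:
    """Minimal sketch: the discrete GPU can only idle once this subprocess exits."""

    def __init__(self):
        self.process = None

    def ensure_running(self):
        # Start 'ollama serve' on demand instead of keeping it alive for the
        # whole session.
        if self.process is None or self.process.poll() is not None:
            self.process = subprocess.Popen(["ollama", "serve"])

    def stop(self):
        # Called after the idle timeout; frees VRAM and lets the dGPU enter
        # its low-power state.
        if self.process is not None and self.process.poll() is None:
            self.process.terminate()
            self.process.wait()
        self.process = None
```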

Describe alternatives you've considered

Additional context To the best of my knowledge, no other LLM desktop application implements such an option. This feature would be a distinctive advantage for Alpaca.

maximmaxim345 commented 2 months ago

I've been working on a solution to automatically shut down ollama when idle. Would you be open to a pull request for this feature? I'm happy to make any necessary adjustments.

Jeffser commented 2 months ago

Hi, thanks for the suggestion. I appreciate that you want to contribute, but I recommend you don't do that right now. I'm rewriting most of the app, including the instance manager, and I don't want you to have to recode everything once I make the changes.

maximmaxim345 commented 2 months ago

Sounds good, I understand. Let me know when the rewrite is complete and I'll gladly contribute then!

Jeffser commented 2 months ago

Hi, I just added this feature: https://github.com/Jeffser/Alpaca/commit/daf56c2de42120cac3bdbef92941b4d6e4964d36

[screenshot of the new idle timer preference]

The Ollama instance is shut down after X minutes of inactivity.
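(For context, a one-shot GLib timeout that is reset on every request is the usual way to express this kind of delayed shutdown inside a GTK main loop. The sketch below only illustrates that pattern, it is not the code from the commit above; instance.stop() is a hypothetical method.)

```python
from gi.repository import GLib

IDLE_MINUTES = 15       # illustrative; in Alpaca this value comes from the preference
idle_source_id = None   # id of the pending shutdown timeout, if any

def on_instance_used(instance):
    """Reset the countdown every time the local instance handles a request."""
    global idle_source_id
    if idle_source_id is not None:
        GLib.source_remove(idle_source_id)
    idle_source_id = GLib.timeout_add_seconds(
        IDLE_MINUTES * 60, shutdown_instance, instance)

def shutdown_instance(instance):
    """Fires once the idle period elapses without another request."""
    global idle_source_id
    instance.stop()            # hypothetical: terminates the local 'ollama serve'
    idle_source_id = None
    return GLib.SOURCE_REMOVE  # one-shot: do not reschedule
```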

maximmaxim345 commented 2 months ago

Hi, I just tested this on main. A value typed manually into 'instance_idle_timer' is not saved to server.json, so I could not set the timeout to 1 minute without editing server.json by hand. Otherwise this seems to work. Thanks!
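(For reference, this is roughly the wiring that would make a typed value persist, using a plain Gtk.SpinButton and an assumed location for server.json; the actual widget and config layout in Alpaca may differ.)

```python
import json
import os

from gi.repository import Gtk

# Assumed Flatpak config location; Alpaca's real path may differ.
CONFIG_PATH = os.path.expanduser(
    "~/.var/app/com.jeffser.Alpaca/config/server.json")

def save_idle_timer(spin_button: Gtk.SpinButton):
    """Write the current spin button value into server.json."""
    config = {}
    if os.path.exists(CONFIG_PATH):
        with open(CONFIG_PATH) as f:
            config = json.load(f)
    config["instance_idle_timer"] = spin_button.get_value_as_int()
    with open(CONFIG_PATH, "w") as f:
        json.dump(config, f, indent=2)

# 'value-changed' fires both for the +/- buttons and for a typed value once it
# is committed (Enter or focus-out):
# spin_button.connect("value-changed", save_idle_timer)
```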

Keep in mind that downloading a new model should inhibit the timer so it doesn't abort the ongoing download. (I could not test this since pulling models is currently broken.)
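(One simple way to express that constraint is an inhibit counter that long-running jobs such as model pulls hold while they run; the idle-timeout callback then skips the shutdown while the count is nonzero. Sketch only, with made-up names.)

```python
import threading

class IdleShutdownInhibitor:
    """Counts active long-running jobs (e.g. model pulls); the idle timer
    should only shut the instance down while the count is zero."""

    def __init__(self):
        self._count = 0
        self._lock = threading.Lock()

    def __enter__(self):
        with self._lock:
            self._count += 1
        return self

    def __exit__(self, *exc):
        with self._lock:
            self._count -= 1

    def inhibited(self):
        with self._lock:
            return self._count > 0

inhibitor = IdleShutdownInhibitor()

# In the pull code:
#     with inhibitor:
#         instance.request("POST", "api/pull", ...)   # long-running download
#
# In the idle-timeout callback:
#     if not inhibitor.inhibited():
#         instance.stop()
```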

Jeffser commented 2 months ago

> Keep in mind that downloading a new model should inhibit the timer so it doesn't abort the ongoing download. (I could not test this since pulling models is currently broken.)

Wait, what do you mean pulling models is broken?

maximmaxim345 commented 2 months ago

> Keep in mind that downloading a new model should inhibit the timer so it doesn't abort the ongoing download. (I could not test this since pulling models is currently broken.)

> Wait, what do you mean pulling models is broken?

This is on d74f535; I'm using GNOME Builder:

```
INFO    [model_widget.py | pull_model] Pulling model: phi3:mini-4k
Exception in thread Thread-4 (pull_model):
Traceback (most recent call last):
  File "/usr/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.11/threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "/app/share/Alpaca/alpaca/custom_widgets/model_widget.py", line 513, in pull_model
    response = self.ollama_instance.request("POST", "api/pull", json.dumps({"name": model_name}), lambda data: model.update(data))
               ^^^^^^^^^^^^^^^^^^^^
AttributeError: 'model_manager_container' object has no attribute 'ollama_instance'
```

Here is the full log

Jeffser commented 2 months ago

Yeah, my bad, I fixed it.