Describe the bug
Not sure if this is a twinny or an ollama issue, but the chat feature seems to stop working after the machine goes into standby.
Using an M1 MacBook Air running macOS Ventura 13.4.
To Reproduce
Steps to reproduce the behavior:
1. Start ollama with ollama run ...
2. VS Code + twinny running normally
3. Close the laptop lid
4. Open it back up
5. ollama is still responsive when used from the terminal
6. Sending a message through twinny in VS Code shows the loading indicator indefinitely
Expected behavior
Twinny chat should continue working
Restarting VS Code and ollama doesn't seem to fix the problem, so I'm wondering if something else is involved.
Ollama works fine through the CLI.
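Since the CLI keeps working while the extension hangs, one way to narrow this down is to poke the Ollama HTTP API directly after waking the machine, since twinny talks to that API rather than the CLI. A minimal sketch, assuming Ollama is serving on its default port 11434:

```shell
# Query the Ollama HTTP API after wake; /api/tags lists installed models
# and is a cheap way to see whether the server still answers over HTTP.
# --max-time 5 keeps curl from hanging indefinitely like the chat does.
curl --silent --max-time 5 http://localhost:11434/api/tags \
  && echo "API responding" \
  || echo "API unreachable"
```

If this also hangs or fails after standby while the CLI works, the problem is likely on the Ollama HTTP side rather than in twinny; if it responds normally, the hang is more likely in the extension's connection handling.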