-
### What feature or new tool do you think should be added to DevToys?
Tools like Copilot, ChatGPT or BingChat are truly helpful from a developer's perspective.
### Why do you think this is needed?
Ha…
-
### What happened?
Since version 2.3.1 it no longer sends the message history of a chat tab; it sends only the system prompt and the latest message.
This is the request of the second message:
```
[2…
-
In my setup I have an underpowered laptop where I do my coding and a beefy server to run ollama. The server is not on the internet and must be accessed via an SSH jump host. I use `LocalFor…
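For reference, this kind of setup can be sketched in `~/.ssh/config` with `ProxyJump` plus a `LocalForward` entry. All host names, the address, and the assumption that ollama listens on its default port 11434 are hypothetical:

```
# Hypothetical host names; adjust the address and port for your network.
Host gpu-server
    HostName 10.0.0.5                     # server's private address (assumed)
    ProxyJump jumphost                    # hop through the SSH jump host
    LocalForward 11434 localhost:11434    # expose remote ollama on the laptop
```

With this in place, `ssh gpu-server` makes the remote ollama reachable at `http://localhost:11434` on the laptop for as long as the connection is open.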
-
[LM studio's Llama 3 template](https://github.com/lmstudio-ai/configs/blob/main/llama3.preset.json):
```
system
{System}
user
{User}assistant
{Assistant}
```
The [official Llama 3 temp…
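Assuming the preset substitutes its placeholders verbatim, the rendering can be sketched in Python. Note that the official Llama 3 prompt format also wraps each role in special tokens (`<|start_header_id|>`, `<|end_header_id|>`, `<|eot_id|>`), which the excerpt above omits:

```python
# Minimal sketch: fill the LM Studio preset placeholders verbatim.
TEMPLATE = "system\n{System}\nuser\n{User}assistant\n{Assistant}"

def render(system: str, user: str, assistant: str = "") -> str:
    """Substitute the three placeholders into the preset template."""
    return (TEMPLATE
            .replace("{System}", system)
            .replace("{User}", user)
            .replace("{Assistant}", assistant))

prompt = render("You are helpful.", "Hi!")
```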
-
It only successfully generates content once.
After that, ollama does not seem to be triggered (RAM usage is normal), and the window stays empty forever with no debug message.
Other tools using o…
-
**Is your feature request related to a problem? Please describe.**
Currently, only the codellama, deepseekcoder, and stable-code models are supported for FIM. StarCoder has recently released version 2 of…
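For context, FIM support generally comes down to wrapping the prefix and suffix in model-specific sentinel tokens. A minimal sketch using the tokens published for StarCoder (whether StarCoder2 keeps the exact same strings should be verified against its tokenizer config):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt with StarCoder-style
    sentinel tokens; the model generates the code between them."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

prompt = build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
```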
-
**Describe the bug**
I run a Windows shell but code in Linux. When launching VSCode I select 'Connect to WSL'. From then on I am running in Linux. I am running ollama in a terminal window locally on …
-
### System Info
transformers 4.34.0 runs at ~370 ms/token, while 4.38.2 runs at ~990 ms/token. The model size is also slightly larger under 4.38.2.
### Who can help?
_No response_
### Information
- [X]…
-
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.31
- Python version: 3.10.9
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
…
-
Sometimes the models generate a trailing parenthesis even though it is already present in the source code.
Can this be fixed by prompt engineering?
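Besides prompt engineering, one workaround is to post-process the completion and trim any tail that duplicates the text already after the cursor. The function below is an illustrative sketch, not code from any project:

```python
def trim_overlap(completion: str, suffix_in_file: str) -> str:
    """Drop the longest tail of `completion` that the file's existing
    suffix already starts with, e.g. a duplicated closing parenthesis."""
    for i in range(len(completion)):
        # If everything from position i onward is already the start of
        # the text after the cursor, cut the completion there.
        if suffix_in_file.startswith(completion[i:]):
            return completion[:i]
    return completion
```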