-
### What happened?
The llama.cpp tokenizer for Phi-3 behaves oddly: re-tokenizing the same text over and over keeps adding whitespace to the first non-BOS token. This causes several issues:
…
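The reported whitespace growth is the classic failure mode of a SentencePiece-style round trip: tokenization prepends a dummy-space prefix, and if detokenization turns that prefix back into a real space, each cycle grows the leading whitespace. A toy sketch of that mechanism (these `tokenize`/`detokenize` functions are illustrative stand-ins, not llama.cpp's actual code):

```python
# Toy model of a SentencePiece-style tokenizer whose round trip is not
# idempotent: tokenize() adds a dummy-space prefix, and detokenize()
# turns it back into a real space, so whitespace accumulates.

def tokenize(text: str) -> list[str]:
    # Mark each word boundary with the metaspace character '▁',
    # including an implicit dummy prefix before the first word.
    return ["▁" + piece for piece in text.split(" ")]

def detokenize(tokens: list[str]) -> str:
    # Naive inverse: every '▁' becomes a space -- including the dummy
    # prefix, which is where the extra leading whitespace comes from.
    return "".join(tokens).replace("▁", " ")

text = "Hello world"
for _ in range(3):
    text = detokenize(tokenize(text))
    print(repr(text))  # one more leading space per round trip
```

Each pass adds exactly one leading space, matching the "keeps adding whitespace to the first non-BOS token" symptom.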
-
We've been testing running assistants with Ollama. This is of interest for some folks using agency-swarm with assistants, but it may be interesting to others as well (the Assistants API with local models).
…
phact updated 2 months ago
-
### What is the issue?
I'm getting the following error when testing the new 128k versions of phi3-medium:
```sh
$ ollama run phi3:14b-medium-128k-instruct-q4_0
Error: llama runner process has te…
```
-
### What happened?
The new Copilot+ PCs with Qualcomm Snapdragon X processors (in my case, a Surface 11 Pro with a Snapdragon X Plus and 16 GB RAM) are fast and run llama.cpp on the CPU without issues. The…
-
Hello,
many thanks for this very nice piece of work!
I couldn't get the finetune/finetune_lora scripts to run on a freshly launched Ubuntu EC2 instance without a substantial refactoring of the …
-
**Describe the bug**
Is there an equivalent C API or method to the Python `logits = generator.get_output("logits")` API that allows us to get the logit values of the output?
The documentation only…
-
Hi!
I do fine-tuning with full training, and get config.json, model.safetensors, special_tokens_map.json, tokenizer.json, training_args.bin, generation_config.json, preprocessor_config.json, toke…
-
Hi, I noticed that the following script produces different results depending on the backend. On my machine, the output is:
```julia
cpu: [18.0; 18.0; 18.0; 18.0; 18.0; 18.0; 18.0; 18.0; 18.0; 18.0…
```
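Divergence like this between a CPU and a parallel (e.g. GPU) backend usually comes down to floating-point non-associativity: a parallel reduction sums in a different order than a sequential loop, so rounded results can differ slightly. A minimal standalone Python illustration of the effect (not the Julia script's actual kernels):

```python
# Floating-point addition is not associative, so regrouping a sum --
# exactly what a parallel/tree reduction does -- can change the
# rounded result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # sequential, left-to-right order (CPU-style loop)
right = a + (b + c)  # regrouped order (parallel-reduction style)

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

Neither order is "wrong"; both are correctly rounded sums of a different grouping, which is why backend outputs can legitimately disagree in the low bits.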
-
### What is the issue?
Environment:
Ollama release 0.2.5, downloaded directly; GPU is an RX 570 (gfx803), Windows 10 64-bit.
Running `ollama run qwen2:1.5b` or `ollama run phi3` fails with an error. Do I need to rebuild it myself, or is my environment missing some dependency?
Log:
```
2024/07/19 14:10:24 routes…
```
-
I'm having problems compiling tmLQCD with SSE intrinsics on my computer. I run configure with the following arguments:
```
${dir}/srcs/tmLQCD/configure \
--prefix=$HOME/.usr \
--enable-mpi \
…
```