-
I can't figure out where the video output is supposed to come from. I launched all parts of the application, but it's still not clear to me.
I looked at the localhost addresses that open in the console, but they are emp…
-
### Bug description
Either I'm doing something dumb or QLoRA seems to be broken. I tried it with different models:
# LoRA (fine)
```
gemma_2 ~/litgpt litgpt finetune_lora --devices 1 --config co…
```
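For comparison, this is roughly what a QLoRA run looks like on top of the same LoRA command, as a sketch only: I'm assuming this litgpt version exposes a `--quantize` flag with a `bnb.nf4` option, and the config path below is a placeholder, not the one from this report.
```
# Hypothetical QLoRA variant of the LoRA command above (sketch, not the exact
# command from this report). --quantize "bnb.nf4" loads the base weights in
# 4-bit NF4 via bitsandbytes, assuming this litgpt version supports the flag.
litgpt finetune_lora --devices 1 --config <config.yaml> --quantize "bnb.nf4"
```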
-
### What is the issue?
Run `ollama show gemma:7b`: the reported parameter count is 9B, not 7B.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_
-
Hi, I've been trying to convert the Gemma model to MLX and can't understand why the converted model's size decreases more than expected (which I believe is the source of the error below when running in Xcode).…
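For reference, a sketch of the conversion step I assume is being used, with the `mlx_lm` converter; the model id, output path, and the `-q` flag are my assumptions, not taken from this report. If 4-bit quantization is enabled, a roughly 4x size drop versus fp16 weights is expected, which could account for part of the shrinkage.
```
# Sketch of a typical Gemma -> MLX conversion with the mlx_lm package
# (assumed workflow; the report may use a different script).
# Without -q the weights stay fp16; with -q they are quantized to 4-bit,
# so the output is expected to be roughly 4x smaller.
python -m mlx_lm.convert --hf-path google/gemma-2b-it --mlx-path ./gemma-2b-it-mlx -q
```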
-
I was using the `mistral` model for my PDF chatbot. With the arrival of the gemma model, I am trying to switch to it. But it gives me an issue: ***After embedding an external PDF document, when I ask a question,…
-
# Issue
When I press `shift` + `alt` + `w`, this extension repeats my code from the beginning of the file instead of completing it.
Before completion:
![Screenshot before t…
-
It would be great if it were possible to use local LLMs via Ollama, and not only Claude/OpenAI models.
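Not specific to this project, but for anyone experimenting in the meantime: Ollama exposes an OpenAI-compatible endpoint, so tools that allow overriding the OpenAI base URL can often be pointed at a local model already. A minimal sketch (model name assumed; requires `ollama serve` to be running and the model to be pulled):
```
# Assumes `ollama serve` is running locally and the model has been pulled,
# e.g. `ollama pull gemma:7b`. Ollama serves an OpenAI-compatible API on
# port 11434, so an OpenAI-style client can target it via the base URL.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gemma:7b",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```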
-
- [ ] [tabby/README.md at main · TabbyML/tabby](https://github.com/TabbyML/tabby/blob/main/README.md?plain=1)
# 🐾 Tabby
[![latest release](https://shield…
-
I tested google/gemma-1.1-2b-it on gsm8k with the following command:
```
CUDA_VISIBLE_DEVICES=3 lm_eval --model vllm \
    --model_args pretrained=gemma-1.1-2b-it,dtype=auto,gpu_memory_utilization=0.8, \…
```
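For readers unfamiliar with lm-evaluation-harness, a generic invocation of this kind typically looks like the sketch below; the flags shown are standard harness options and my own assumptions, not the reporter's exact (truncated) command.
```
# Generic lm-evaluation-harness run against a Hugging Face model via the vLLM
# backend (sketch; not the exact command from this report).
CUDA_VISIBLE_DEVICES=3 lm_eval --model vllm \
    --model_args pretrained=google/gemma-1.1-2b-it,dtype=auto,gpu_memory_utilization=0.8 \
    --tasks gsm8k \
    --batch_size auto
```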
-
`llm_inference` run locally is throwing the error below for `gemma-2b-it-cpu-int8.bin`. Is only the `gpu` backend type supported?
```
calculator_graph.cc:892] INVALID_ARGUMENT: CalculatorGraph::Run() fail…