-
Hi,
I am running the Gemma example in PyTorch and encountered the following issue:
Part of the example code
![image](https://github.com/google/gemma_pytorch/assets/54340185/88a671d6-6a1b…
-
When I run Ollama on my local PC with model gemma:2b I get a response.
My REST call works; below is a screenshot:
![image](https://github.com/OpenDevin/OpenDevin/assets/19372922/307fbce0-9599-48…
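For reference, a REST call like the one in the screenshot can be sketched against Ollama's `/api/generate` endpoint. This is a minimal example assuming a local Ollama server on the default port 11434; the prompt text is just a placeholder.

```python
import json
import urllib.request

# Request body for Ollama's /api/generate endpoint.
# "stream": False asks for a single JSON response instead of a stream.
payload = {
    "model": "gemma:2b",
    "prompt": "Why is the sky blue?",  # placeholder prompt
    "stream": False,
}
body = json.dumps(payload).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment to send the request to a running Ollama server:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```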
-
### What is the issue?
Mistral:7b works fine, so I suppose the issue is related to the model.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32
-
This is a .NET 8.0 console project. When I ran it, the following error occurred: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt.' Can any…
-
### System Info
Docker: ghcr.io/huggingface/text-generation-inference:1.4
Platform:
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [ ] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
First, I want to express my gratitude for this fantastic tool that provides a powerful way to visualize the workings of multimodal models. It’s incredibly helpful for understanding and analyzing compl…
-
In Table 1 of [Gemma Technical Report](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf) `Feedforward hidden dims` are listed as 32768 and 49152 for the 2B and 7B models, respe…
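One common reading of those numbers is that the report counts both halves of the gated (GeGLU) feed-forward, i.e. the gate projection plus the up projection. Assuming the Hugging Face configs' `intermediate_size` values of 16384 (2B) and 24576 (7B), the arithmetic works out:

```python
# Hugging Face config values (assumption): intermediate_size per model.
intermediate_size = {"2b": 16384, "7b": 24576}

# If the report counts gate + up projections of the GeGLU feed-forward,
# the listed "feedforward hidden dims" is twice intermediate_size.
reported = {name: 2 * size for name, size in intermediate_size.items()}
print(reported)  # {'2b': 32768, '7b': 49152} — matches Table 1
```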
-
chat-with-mlx works with Gemma-2b-it here, but errors with MoE 8x7B.
The full error is below:
```
taozhiyu@TAOZHIYUdeMBP chat-with-mlx % chat-with-mlx
Starting MLX Chat on port 7860
Sharing: False
Ru…
```
-
I was not able to run the example after I added the model.bin to my device.
I followed the details on the [fix](https://github.com/google/mediapipe/issues/5326) and added
in AndroidManifest.xml…