-
Update: I used to run Ollama on this Chromebook when TinyLlama came out, and it ran great.
### What is the issue?
![image](https://github.com/ollama/ollama/assets/13264408/e37d1a70-8d92-4281-88…
-
What interfaces does the config offer for connecting directly to Hugging Face, so that we don't need to download models manually, or so the code can download models for us automatically?
For example, i…
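For reference, a minimal sketch of the usual automatic-download pattern with the `transformers` library (the repo id `"gpt2"` and the helper function are illustrative, not from the original question): `from_pretrained()` resolves a Hub repo id, checks the local cache, and fetches the files from the Hub only on a cache miss, so no manual download step is needed.

```python
# Hedged sketch: load_from_hub and the "gpt2" repo id are example choices.
from transformers import AutoTokenizer, AutoModelForCausalLM

def load_from_hub(repo_id: str = "gpt2"):
    # Both calls check the local cache first and download from the
    # Hugging Face Hub automatically if the files are missing.
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    return tokenizer, model
```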
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [ ] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
### What is the issue?
I tried a 1xH100 box and got an error during installation. I got the same output from another, bigger 2xH100 box too:
```
root@C.11391672:~$ curl -fsSL https://ollama.com/instal…
```
-
I want to try DSPy with a local LLM served by vLLM. I followed the instructions from https://dspy-docs.vercel.app/docs/deep-dive/language_model_clients/local_models/HFClientVLLM The model was down…
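For context, a configuration sketch of the setup the linked docs describe — treat every name here as an assumption (the model id and port are placeholders, and the `HFClientVLLM` class exists only in the DSPy versions those docs cover): DSPy is pointed at a vLLM server that is already running separately.

```python
# Hedged config sketch, assuming a vLLM server is already serving the model
# on localhost:8080 (model id and port are placeholders).
import dspy

vllm_lm = dspy.HFClientVLLM(model="meta-llama/Llama-2-7b-hf",
                            port=8080, url="http://localhost")
dspy.settings.configure(lm=vllm_lm)
```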
-
Hi, as per this constructor:
```python
class RealDataset(Dataset):
    def __init__(self, data_dir='/home/ubuntu/workspace/zero123/3drec/data/real_images/'):
        self.meta = []
        for model_name in…
```
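Since the constructor is cut off, here is a self-contained sketch of the directory-scanning pattern it appears to follow. Everything beyond the `self.meta` list and the `for model_name in` loop is an assumption; the class is written without a `torch` import because `torch.utils.data.Dataset` only requires `__len__` and `__getitem__`, so a plain class illustrates the same contract.

```python
# Hedged sketch: RealDatasetSketch and the metadata fields are hypothetical,
# reconstructed from the visible part of the constructor above.
import os

class RealDatasetSketch:
    def __init__(self, data_dir):
        self.meta = []
        # One subdirectory per model, one entry per file inside it.
        for model_name in sorted(os.listdir(data_dir)):
            model_dir = os.path.join(data_dir, model_name)
            if not os.path.isdir(model_dir):
                continue
            for fname in sorted(os.listdir(model_dir)):
                self.meta.append({"model": model_name,
                                  "path": os.path.join(model_dir, fname)})

    def __len__(self):
        return len(self.meta)

    def __getitem__(self, idx):
        return self.meta[idx]
```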
-
I have tried to convert a Llama 2 model from .gguf to .bin:
```
~/llm_inferences/llama.cpp/models/meta$ ls
llama-2-7b.Q4_K_M.gguf
python3 export.py llama2_7b.bin --meta-llama /home/####/llm_inf…
```
-
### What happened?
`GGML_ASSERT: D:\a\llama.cpp\llama.cpp\ggml.c:12853: ne2 == ne02`
### Name and Version
```
version: 2965 (03d8900e)
built with MSVC 19.39.33523.0 for x64
```
### What operati…
-
```bash
num=1
gpu_index=$(($num - 1)) # Calculates the 0-indexed GPU number
# Setting the specific GPU to be visible to the torchrun command
# Run the PyTorch distributed script with specified …
```
-
Implement an Importer so that semantic models conformant to https://github.com/eclipse-esmf/esmf-semantic-aspect-meta-model can be used as semantic description for a submodel. Aspect Models are used f…