-
### What happened?
Training checkpoints for large models (num_channels greater than or equal to 912) become unreadable by PyTorch, so they can't be used to resume or fork runs.
Note - for now thi…
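Since recent versions of `torch.save` write checkpoints as a zip container by default, one quick, dependency-free triage step is to check whether the file is still a well-formed zip archive. A minimal sketch, assuming a local checkpoint path (the filename is hypothetical), not a substitute for actually loading the checkpoint:

```python
import zipfile

def checkpoint_looks_valid(path: str) -> bool:
    """Heuristic check: modern torch.save output is a zip archive.

    Returns False for truncated or corrupted files that PyTorch would
    likely fail to load. This does not validate the tensors inside, and
    legacy (pre-zip) pickle checkpoints will also return False.
    """
    return zipfile.is_zipfile(path)

# Usage (path is hypothetical):
# if not checkpoint_looks_valid("checkpoint_912ch.pt"):
#     print("checkpoint appears truncated/corrupted, or uses the legacy format")
```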
-
Our collection of corelm models is expanding, and we need an automated system to download them efficiently. We can integrate smaller models, such as n/s (and possibly m, if it isn't too large), direct…
-
I'm on an Apple Silicon Mac trying to convert a CoreML model for `large-v3-turbo-q5_0`.
What is needed in order to convert this model?
```
./models/generate-coreml-model.sh large-v3-turbo-q5_0
…
-
- An advanced type of language model built with deep-learning techniques on large volumes of text data.
- Capable of generating human-like text (Q&A, text-to-text).
- Concepts ranging from n-grams to neural networks are used. …
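The n-gram end of that spectrum is simple enough to sketch directly. A minimal bigram model (count word pairs, predict the most frequent follower) in Python, for illustration only:

```python
from collections import Counter, defaultdict
from typing import Optional

def train_bigrams(text: str) -> dict:
    """Count, for each word, how often each next word follows it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model: dict, word: str) -> Optional[str]:
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
```

Neural language models replace these raw counts with learned representations, which is what lets them generalize to word sequences never seen in training.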
-
# Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
I don't know if this is considered a bug or expected behavior.
When I compose two models using `onn…
-
I am working with a large optimization model using Plasmo.jl, but I encountered a performance issue when calling `set_to_node_objectives(graph)` on a graph with many nodes.
```julia
graph = Opti…
-
When running cog predict on large models (SDXL for example), users with slow internet connections, or far away from weight storage (Australia seems to be quite far from r8.im storage), experience time…
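A common client-side mitigation for slow or distant weight storage is to retry downloads with exponential backoff rather than failing on the first timeout. A minimal sketch of the pattern, not cog's actual implementation (the downloader name is hypothetical):

```python
import time

def with_retries(fn, attempts: int = 4, base_delay: float = 1.0, sleep=time.sleep):
    """Call fn(); on failure, wait base_delay * 2**i seconds, then retry.

    Re-raises the last error once all attempts are exhausted.
    """
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as err:  # in practice, catch timeout/network errors only
            last_err = err
            if i < attempts - 1:
                sleep(base_delay * (2 ** i))
    raise last_err

# Usage (hypothetical downloader):
# weights = with_retries(lambda: download_weights("https://example.com/sdxl.safetensors"))
```

Backoff alone doesn't fix a distant storage region, but it turns transient timeouts into eventual successes instead of hard failures.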
-
### Search before asking
- [X] I had searched in the [issues](https://github.com/eosphoros-ai/DB-GPT/issues?q=is%3Aissue) and found no similar issues.
### Operating system information
Linux
### P…
-
The stack tool cannot handle large models with a .pth extension downloaded from Meta; it throws an error at runtime. Does it have to use models downloaded from Hugging Face? Is this setup unreaso…
-
# 1. Ollama
## 1. use Ollama CLI:
```
ollama serve
ollama run llama2:7b, llama3, llama3:70b, mistral, dolphin-phi, phi, neural-chat, codellama, llama2:13b, llama2:70b
ollama list
ollama show
…