-
## Environments
```
!git clone https://github.com/time-series-foundation-models/lag-llama/
%cd lag-llama
!pip install -r requirements.txt --quiet
!huggingface-cli download time-series-foundat…
```
-
The complete code is on Colab.
It uses the data from Use case 1 in the GluonTS examples:
https://ts.gluon.ai/stable/tutorials/data_manipulation/pandasdataframes.html#Use-case-1---Loading-…
-
## Dataset used
[Dataset Link](https://www.kaggle.com/datasets/sydjaffy/historical-product-demand)
## Code I am using to load the df
```
import pandas as pd
import numpy as np
from gluonts.d…
```
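Since the snippet above is cut off, here is a hedged sketch of one way to get this dataset into GluonTS, following Use case 1 from the tutorial linked above (the file name and the `Product_Code`/`Date`/`Order_Demand` columns come from the Kaggle CSV and are assumptions here; adjust them if yours differ):
```
import pandas as pd
from gluonts.dataset.pandas import PandasDataset

df = pd.read_csv("Historical Product Demand.csv", parse_dates=["Date"])
df = df.dropna(subset=["Date"])
# the demand column is stored as strings; coerce and drop anything unparseable
df["Order_Demand"] = pd.to_numeric(df["Order_Demand"], errors="coerce")
df = df.dropna(subset=["Order_Demand"])
# aggregate duplicate (product, day) rows so each series has unique timestamps
df = df.groupby(["Product_Code", "Date"], as_index=False)["Order_Demand"].sum()

# long-format frame -> one series per Product_Code
dataset = PandasDataset.from_long_dataframe(
    df,
    item_id="Product_Code",
    timestamp="Date",
    target="Order_Demand",
    freq="D",
)
```
GluonTS expects a regular time index, so if the constructor complains about gaps, resample each series to daily frequency and fill the missing days first.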
-
Is there interest in implementing a rate limiter in the `pull` command? I'm open to working on this. This is the syntax I have in mind for now:
`ollama pull modelname --someflagname 1024`
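For illustration, a minimal sketch of the throttling idea in Python (ollama itself is written in Go, and the function name, chunk size, and use of `requests` here are my assumptions, not ollama's code): cap the average download rate by sleeping whenever the bytes received run ahead of the allowed byte budget.
```
import time
import requests

def throttled_download(url, path, max_bytes_per_sec):
    """Stream url to path, keeping the average rate under max_bytes_per_sec."""
    start = time.monotonic()
    received = 0
    with requests.get(url, stream=True) as resp, open(path, "wb") as out:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            out.write(chunk)
            received += len(chunk)
            # if we're ahead of the budget, wait until we're back under it
            expected = received / max_bytes_per_sec
            elapsed = time.monotonic() - start
            if expected > elapsed:
                time.sleep(expected - elapsed)
```
An average-rate cap like this is simpler than a token bucket and is usually enough for a long download; a token bucket would additionally smooth out short bursts.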
-
### What is the issue?
I consistently experience a hang.
Device information:
OS: Ubuntu; CPU: AMD Ryzen Threadripper 7960X (24 cores, 48 threads, 4.2 GHz base, 5.3 GHz turbo…
-
**Is your feature request related to a problem? Please describe.**
I'm one of the maintainers of the [llm](https://github.com/rustformers/llm) project, and we're looking for a robust, cross-platform …
-
Not an issue, but I can't see any discussion board for this project...
I've now got this working directly with the Ollama chat completion API endpoint, so it's possible to use it with Local LLM i…
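For anyone landing here, a minimal sketch of calling that endpoint with Python `requests` (the model name and the default host/port are assumptions; `"stream": False` asks the server for a single JSON object instead of a stream):
```
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",  # any locally pulled model tag
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,  # one JSON reply instead of streamed chunks
    },
)
resp.raise_for_status()
print(resp.json()["message"]["content"])  # the assistant's reply text
```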
-
I have an Intel Core i7-12700K (on Windows 10); it has 8 main «Performance» cores with hyperthreading and 4 «Efficient» cores, giving 16 + 4 = 20 logical cores in total.
The problem is, if I just …
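If the cut-off question is about keeping inference threads off the «Efficient» cores, one common workaround is to pin the process to the P-cores. A sketch using the third-party `psutil` package, under the assumption (typical on Alder Lake) that the OS enumerates the 16 hyper-threaded P-core logical CPUs first:
```
import psutil

p = psutil.Process()  # the current process
# restrict scheduling to logical CPUs 0-15
# (assumed to be the 8 P-cores x 2 hardware threads)
p.cpu_affinity(list(range(16)))
print(p.cpu_affinity())  # verify the new affinity mask
```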
-
First, I just wanted to say this project is very exciting and I plan to follow along as it matures, so thank you!
My request would be to have a comparison of Lag-Llama and Meta's [Pro…
-
```
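# load the Lightning checkpoint, mapping tensors onto Apple's MPS device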
ckpt = torch.load("lag-llama.ckpt", map_location=torch.device('mps'))
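# the checkpoint's saved hyperparameters include the model constructor kwargs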
estimator_args = ckpt["hyper_parameters"]["model_kwargs"]
```
**The code above runs fine, but then running the following code raises a …