-
I have tried zero-shot prediction on my own dataset, which is related to healthcare. However, I was not satisfied with the performance, so I performed fine-tuning.
Then things got weird as the …
-
First of all, thank you for this very interesting model and paper! However, I am a bit confused: does the model use only the predefined lags as covariates, or can we add our own covariates, e.g. season…
-
Hi, and thank you for your excellent contribution to the world of time series!
I am currently using Lag-Llama for fine-tuning, and was wondering if you have any rules of thumb for fine-tuning yet?
I …
-
Hey there,
I'm running
**LocalAI version:**
`docker run --rm -ti --gpus all -p 8080:8080 -e DEBUG=true -v $PWD/models:/models --name local-ai localai/localai:latest-aio-gpu-nvidia-cuda-12 -…
-
First off, thank you for your proposed model. I am working on a university project based on your work; the goal is to fine-tune Lag-Llama on a custom dataset. I have some questions about the train/val/test sp…
-
Trying to test the prediction with the minimal code from
https://github.com/marcopeix/time-series-analysis/blob/master/lag_llama.ipynb
https://medium.com/@odhitom09/lag-llama-an-open-source-base-m…
-
Hi, I wanted to fine-tune the model on my own dataset, but with my own custom loss. Could you give an example of how to do that?
Thanks
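Lag-Llama trains against the negative log-likelihood of its distribution head, so plugging in a custom loss generally means replacing that objective in your own training loop. As a minimal, hypothetical illustration (not part of the Lag-Llama or GluonTS API), here is one common forecasting loss, the pinball/quantile loss, in NumPy:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss for quantile level q in (0, 1).

    Under-prediction is penalized with weight q, over-prediction
    with weight (1 - q); the result is averaged over all points.
    NOTE: illustrative only -- not a Lag-Llama API function.
    """
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.maximum(q * diff, (q - 1.0) * diff)))
```

For q = 0.5 this reduces to half the mean absolute error; a training loop would typically sum it over the quantile levels of interest and backpropagate through the model's quantile predictions.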
-
In GluonTS version 0.14.4, without making additional modifications to Demo 1, the call to `get_lag_llama_predictions`:
`forecasts, tss = get_lag_llama_predictions(backtest_dataset, prediction_length, …`
-
Depends on: https://github.com/ggerganov/llama.cpp/issues/5214
The `llamax` library will wrap `llama` and expose common high-level functionality. The main goal is to ease the integration of `llama.…
-
Hello, thank you for sharing the initial code. I aim to replicate the experiment outlined in your paper before proceeding to fine-tune the model with additional datasets. Could you kindly assist me in…