-
Hi, I am very interested in your work on PipeInfer!
However, the current implementation does not seem to support multiple GPUs. Are there any upcoming plans or suggestions for integrating support for…
-
Hi there,
I have run into an interesting problem when testing the model on my own data.
The original dataset has 5835 rows, i.e., 5835 time series, and it includes 39 timesteps. I understand…
-
I have tried zero-shot prediction on my own dataset, which is related to healthcare. However, I was not satisfied with the performance, so I performed fine-tuning.
Then things got weird, as the …
-
Hi, and thank you for your excellent contribution to the world of time series!
I am currently using Lag-Llama for fine-tuning, and was wondering if you had any rules of thumb for fine-tuning yet?
I …
-
As discussed in https://github.com/ggerganov/llama.cpp/issues/71#issuecomment-1483907574
The idea is to achieve a naive implementation for infinite output generation using a strategy that simply cl…
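For context, a minimal, illustrative sketch of one such naive strategy: keep the first `n_keep` tokens (e.g. the prompt) plus the most recent tokens, and drop the middle of the window whenever it fills up. All names and sizes below are made up for illustration; this is not the actual llama.cpp implementation.

```cpp
#include <cstdio>
#include <vector>

// Illustrative only: a toy token buffer standing in for the real context window.
using token = int;

// When the context is full, keep the first n_keep tokens (e.g. the prompt)
// plus the most recent half of the remaining window, and drop the middle.
// The kept tokens would then be re-evaluated before generation continues.
static void shrink_context(std::vector<token> & ctx, size_t n_ctx, size_t n_keep) {
    if (ctx.size() < n_ctx) {
        return; // still room, nothing to do
    }
    const size_t n_recent = (n_ctx - n_keep) / 2;
    std::vector<token> next;
    next.insert(next.end(), ctx.begin(), ctx.begin() + n_keep); // prompt
    next.insert(next.end(), ctx.end() - n_recent, ctx.end());   // recent tail
    ctx.swap(next);
}

int main() {
    const size_t n_ctx  = 8; // tiny window so the effect is visible
    const size_t n_keep = 2; // pretend the first two tokens are the prompt

    std::vector<token> ctx;
    for (token t = 0; t < 20; ++t) {        // fake "generation" loop
        shrink_context(ctx, n_ctx, n_keep); // clear/shift before appending
        ctx.push_back(t);
    }

    for (token t : ctx) {
        std::printf("%d ", t); // prompt tokens survive, middle tokens are gone
    }
    std::printf("\n");
    return 0;
}
```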
-
I tried to install using the default `requirements.txt` with Python 3.10 and ran `lag-llama/train.py` with the default settings and `--seed 0`. The run failed.
Apparently, the error indicates the us…
-
First off, thank you for your proposed model. I am working on a uni project based on your work; the goal is to fine-tune lag-llama for a custom dataset. I have some questions about the train/val/test sp…
-
## 🐛 Bug
I'm interested in using mlc-llm to try new models on the OpenLLM Leaderboard. Since Qwen does not yet support multi-GPU inference, I tried multiple Qwen-72B-based llamafied models. I hope this hel…
-
Depends on: https://github.com/ggerganov/llama.cpp/issues/5214
The `llamax` library will wrap `llama` and expose common high-level functionality. The main goal is to ease the integration of `llama.…
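As a purely hypothetical illustration (none of these names come from the actual proposal), a high-level wrapper along these lines might expose a one-call facade over the low-level API, with the low-level calls stubbed out here:

```cpp
#include <cstdio>
#include <string>

// Hypothetical sketch of what a high-level wrapper interface could look like.
namespace llamax {
    // A one-call text generation facade. A real wrapper would forward to the
    // low-level llama API (model loading, tokenization, batched decoding);
    // here everything is stubbed for illustration.
    struct session {
        explicit session(const std::string & model_path) : path(model_path) {}

        std::string generate(const std::string & prompt, int max_tokens) {
            // stub: a real implementation would tokenize, decode, detokenize
            std::printf("[stub] model=%s prompt=%zu chars max_tokens=%d\n",
                        path.c_str(), prompt.size(), max_tokens);
            return "...";
        }

        std::string path;
    };
}

int main() {
    llamax::session s("model.gguf");
    s.generate("Hello", 16);
    return 0;
}
```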
-
Hi, thank you for your contribution. I have been trying to fine-tune your model on a univariate time series forecasting task with the C-MAPSS turbine datasets; the goal is to learn the trajectory pattern …