-
I have two questions about pretraining LLaMA-2 13B with litGPT:
Configuration for `epoch`, `max_tokens`, and `max_steps`: In `litgpt/config_hub/pretrain/config.yaml`, I see options for epoch, max_token…
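For reference, these fields typically live under the `train:` section of the config. A minimal sketch, with field names assumed from litgpt's `TrainArgs` and placeholder values (they generally act as alternative stopping criteria):
```yaml
# Hypothetical excerpt of litgpt/config_hub/pretrain/config.yaml;
# field names assumed from litgpt's TrainArgs, values are placeholders.
train:
  epochs: 1                # stop after this many passes over the dataset
  max_tokens: 3000000000   # or stop once this many tokens have been consumed
  max_steps:               # or stop after a fixed number of optimizer steps
```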
-
Hello author, after fine-tuning the diffusion model on my own dataset, the generated weight file is `.pth.tar` rather than `.pt`. How should I modify the subsequent anomaly-data synthesis step to load it?
![image](https://github.com/user-attachments/assets/dd1a059a-e561-4610-84eb-e3f90fb949ca)
![image](https://github.com/user-attachments/a…
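For what it's worth, `torch.load` ignores the file extension, so a `.pth.tar` file can usually be loaded directly. A minimal sketch; the `"state_dict"` key is an assumption based on common checkpoint layouts, not this repo's confirmed format:
```python
import torch

def load_diffusion_weights(model: torch.nn.Module, path: str) -> None:
    """Load weights from a .pt or .pth.tar file; torch.load ignores the extension."""
    ckpt = torch.load(path, map_location="cpu")
    # Many training scripts wrap the weights in a dict under "state_dict".
    if isinstance(ckpt, dict) and "state_dict" in ckpt:
        ckpt = ckpt["state_dict"]
    model.load_state_dict(ckpt)
```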
-
As recommended by @benjamincburns, the goal is to implement a Firebase Checkpoint Saver as a child of the abstract `BaseCheckpointSaver` class.
Currently, there are 3 such implementations in the lan…
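For orientation, a rough skeleton of what such a class might look like. The import path, method names, and signatures here are assumptions modeled on langgraph's built-in savers and may differ across versions; the Firestore wiring is hypothetical:
```python
# Rough skeleton only: import paths, method names, and signatures are assumed
# from langgraph's built-in checkpoint savers and may differ across versions.
from typing import Any, Iterator, Optional

from langgraph.checkpoint.base import BaseCheckpointSaver, Checkpoint, CheckpointTuple


class FirebaseCheckpointSaver(BaseCheckpointSaver):
    """Persists graph checkpoints in a Firebase/Firestore collection (hypothetical)."""

    def __init__(self, collection: Any) -> None:
        super().__init__()
        self.collection = collection  # e.g. firestore.Client().collection("checkpoints")

    def get_tuple(self, config: dict) -> Optional[CheckpointTuple]:
        # Fetch the latest checkpoint document for config["configurable"]["thread_id"].
        raise NotImplementedError

    def list(self, config: dict, **kwargs: Any) -> Iterator[CheckpointTuple]:
        # Yield stored checkpoints for the thread, newest first.
        raise NotImplementedError

    def put(self, config: dict, checkpoint: Checkpoint, metadata: dict, new_versions: Any) -> dict:
        # Serialize the checkpoint and upsert it under the thread_id key.
        raise NotImplementedError
```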
-
Hello and thank you for the great work! Is there any chance you will be uploading the trained model checkpoint for fine-tuning support?
-
I am very interested in your research results. When will your checkpoints be made public?
-
The code does not currently support loading from a checkpoint. To add this functionality, we would need to modify the surrogate config file by adding an extra argument, `load_checkpoint`.
```
if l…
```
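A minimal sketch of how that argument might be wired up; the `config.load_checkpoint` attribute and the `maybe_load_checkpoint` helper are hypothetical names for illustration, not the repo's actual code:
```python
import torch

def maybe_load_checkpoint(model: torch.nn.Module, config) -> None:
    """Restore surrogate weights when the config sets `load_checkpoint` to a path."""
    path = getattr(config, "load_checkpoint", None)
    if path:
        state = torch.load(path, map_location="cpu")
        # Unwrap checkpoints that store weights under a "state_dict" key.
        if isinstance(state, dict) and "state_dict" in state:
            state = state["state_dict"]
        model.load_state_dict(state)
```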
-
Dear team, I want to be able to use your model in a Node.js environment, and I was wondering whether the weights are available in TFJS format.
-
Hi,
Thank you for the great work and the dataset. I was wondering if you could release the checkpoints for both the RealBSR-RGB and RealBSR-Raw models, along with the predictions on the testing set for…
-
Although in issue #3 you said the pre-trained models were available, they have since been deleted from Dropbox. Could you please restore them?
-
Hi authors! Thanks so much for releasing such great work! I was wondering whether you could provide PyTorch checkpoints (as `.pth` files) instead of the ONNX checkpoints?