Closed m6129 closed 1 week ago
Hi!!
The acknowledgement is also present in the latest version of the preprint, in the acknowledgements section. We use the same compute in all versions of our paper: a single NVIDIA Tesla P100 GPU with 12 GB of memory, 4 CPU cores, and 24 GB of RAM. This was probably not stated clearly in the older Nov 2023 preprint.
In any case, I'd suggest referring to the newer version of the preprint for the details.
As for the PatchTST dataloader, I expect our model will not work with it, since we do not rely on the concept of patches. I'd suggest using our provided dataloaders (you can adapt any dataset as the tutorials suggest) to benchmark our model.
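To illustrate why a patch-based loader is a poor fit, here is a minimal sketch of the kind of plain sliding-window dataset most non-patch forecasting models consume. This is an assumption for illustration only: the class and parameter names (`WindowDataset`, `context_len`, `horizon`) are hypothetical and are not the repo's actual dataloader API; adapt your data via the provided tutorials instead.

```python
import numpy as np

class WindowDataset:
    """Hypothetical sliding-window dataset: each item is a (context, target)
    pair cut directly from the raw series, with no patching step."""

    def __init__(self, series, context_len, horizon):
        self.series = np.asarray(series, dtype=np.float32)
        self.context_len = context_len
        self.horizon = horizon

    def __len__(self):
        # Number of valid windows that fit entirely inside the series.
        return max(0, len(self.series) - self.context_len - self.horizon + 1)

    def __getitem__(self, i):
        # Context window followed immediately by the forecast horizon.
        x = self.series[i : i + self.context_len]
        y = self.series[i + self.context_len : i + self.context_len + self.horizon]
        return x, y

# Usage: wrap any 1-D series and iterate (x, y) pairs.
ds = WindowDataset(np.arange(10), context_len=4, horizon=2)
x, y = ds[0]  # x = [0, 1, 2, 3], y = [4, 5]
```

A PatchTST-style loader would additionally segment `x` into fixed-length patches before batching, which a patch-free model has no way to interpret.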
Hi! Just checking if this is resolved? @m6129
Thanks!!!
Hello, dear developer.
A question about the compute used: in the 20 November 2023 preprint you write:
whereas in the latest version of the preprint you state:
Could you explain this difference?
I also wanted to ask whether your model's predictions would be affected by using the data_loader from the PatchTST repository (https://github.com/yuqinie98/PatchTST/tree/main/PatchTST_supervised/data_provider)?