-
Hi Paranioar! Could you please include this recent work on video-language pretraining and noisy correspondence? Thanks a lot.
**(ICLR2024_Norton) Multi-granularity Correspondence Learning from L…
-
Congratulations on getting into ICLR 2024! May I ask: if I want to use this for prompt-based text summarization, is it enough to set up the dataset and the relevant evaluation metrics following the new-task section? Wishing you many more papers!
-
Hi! Really appreciate your project! I found your repo via the official openreview-python repo and have skimmed through your code. I noticed that you achieve the results without using the official API. So…
-
![image](https://github.com/GR1-Manipulation/GR-1/assets/33491471/f694caec-c52e-4fff-8b5e-e9ace5fda675)
Desperately begging you to open-source the weights and the model!
-
Congratulations on the paper being accepted to ICLR 2024! But when will the code be published?
-
If you do specify the file, pandoc will get caught in an infinite loop.
```
pandoc --version
pandoc.exe 2.9.2
Compiled with pandoc-types 1.20, texmath 0.12.0.1, skylighting 0.8.3.2

pandoc .\l…
```
-
I noticed that some models in `scripts` specify a batch size. What is the reason for doing so? Also, did the comparisons reported in the paper control for the same batch size across models?
-
Hi there, thanks for your fabulous contribution to the rapid progress of this domain!
Our work EmerNeRF might also be interesting to your audience; it has already been accepted to ICLR202…
-
Google's new model, TSMixer: An all-MLP architecture for time series forecasting, builds on the DLinear model and is further compared against Transformer-based time-series forecasting baselines. Could this model be incorporated into this library's unified framework to make performance comparisons convenient?
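Since the request concerns adding TSMixer, a minimal PyTorch sketch of the idea described in the paper (alternating time-mixing and feature-mixing MLPs with residual connections) may help scope the integration. Layer sizes and structural details here are illustrative assumptions, not Google's official implementation:

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """One TSMixer-style block: residual time-mixing followed by residual
    feature-mixing. Input/output shape: (batch, seq_len, n_channels)."""
    def __init__(self, seq_len: int, n_channels: int, hidden: int = 64):
        super().__init__()
        # Time-mixing MLP acts along the time axis, shared across channels.
        self.time_mlp = nn.Sequential(nn.Linear(seq_len, seq_len), nn.ReLU())
        # Feature-mixing MLP acts along the channel axis, per time step.
        self.feat_mlp = nn.Sequential(
            nn.Linear(n_channels, hidden), nn.ReLU(), nn.Linear(hidden, n_channels)
        )
        self.norm1 = nn.LayerNorm(n_channels)
        self.norm2 = nn.LayerNorm(n_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.norm1(x).transpose(1, 2)          # (batch, channels, time)
        x = x + self.time_mlp(y).transpose(1, 2)   # residual time mixing
        x = x + self.feat_mlp(self.norm2(x))       # residual feature mixing
        return x

class TSMixer(nn.Module):
    """Stack of mixer blocks plus a linear head projecting seq_len -> pred_len."""
    def __init__(self, seq_len: int, pred_len: int, n_channels: int, n_blocks: int = 2):
        super().__init__()
        self.blocks = nn.Sequential(
            *[MixerBlock(seq_len, n_channels) for _ in range(n_blocks)]
        )
        self.head = nn.Linear(seq_len, pred_len)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.blocks(x)                                    # (batch, seq_len, channels)
        return self.head(x.transpose(1, 2)).transpose(1, 2)   # (batch, pred_len, channels)

# Quick shape check: forecast 96 steps from a 336-step window of 7 channels.
model = TSMixer(seq_len=336, pred_len=96, n_channels=7)
print(model(torch.randn(8, 336, 7)).shape)  # torch.Size([8, 96, 7])
```

Because it is all-MLP, such a model could plausibly slot into the same train/evaluate loop as the existing linear and Transformer baselines.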
-
Dear author,
DLinear points out that Transformer-based methods usually perform best when seq_len=96, but the PatchTST paper explicitly states that a longer seq_len benefits the model. Shouldn't the baselines then be chosen with their best-performing hyperparameters? I compared the experimental results of PatchTST with seq_len set to 512; PatchTST basically outperforms both seq_len=96 and seq_len…
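As a self-contained toy illustration (not the repository's code) of why the look-back window matters, here is a small NumPy experiment that fits a least-squares linear forecaster on a noisy seasonal series for several seq_len values; the series, sizes, and the 96-step horizon are all assumptions for demonstration:

```python
import numpy as np

# Noisy seasonal series with period 200.
rng = np.random.default_rng(0)
t = np.arange(4000)
series = np.sin(2 * np.pi * t / 200) + 0.1 * rng.standard_normal(t.size)

def forecast_mse(seq_len: int, pred_len: int = 96) -> float:
    # Build (look-back window -> future window) training pairs.
    X = np.stack([series[i : i + seq_len] for i in range(3000)])
    Y = np.stack([series[i + seq_len : i + seq_len + pred_len] for i in range(3000)])
    # Fit a linear forecaster on the first half, evaluate on the second.
    W, *_ = np.linalg.lstsq(X[:1500], Y[:1500], rcond=None)
    return float(np.mean((X[1500:] @ W - Y[1500:]) ** 2))

for seq_len in (96, 336, 512):
    print(f"seq_len={seq_len}: test MSE={forecast_mse(seq_len):.4f}")
```

When the window is shorter than the dominant period, the forecaster sees less of the cycle, which is the intuition behind preferring each baseline's best-performing seq_len in a fair comparison.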