-
### Describe the feature
Similar to how journal names are converted to journal abbreviations: for example, if the conference name is International Conference on Machine Learning, I would like it to be automatically converted to ICML in the conference proceedings name (booktitle) field.
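A minimal sketch of one way such a conversion could work, assuming a plain lookup table keyed by the full proceedings name; the dictionary contents, function name, and fallback behavior below are illustrative assumptions, not part of the request itself.

```python
# Hypothetical sketch: map a full conference/proceedings name to its common
# abbreviation via a lookup table. Table contents are examples only.
CONFERENCE_ABBREVIATIONS = {
    "International Conference on Machine Learning": "ICML",
    "International Conference on Learning Representations": "ICLR",
    "Conference on Neural Information Processing Systems": "NeurIPS",
}

def abbreviate_booktitle(booktitle: str) -> str:
    """Return the abbreviation if the proceedings name is known, else keep it unchanged."""
    return CONFERENCE_ABBREVIATIONS.get(booktitle.strip(), booktitle)

print(abbreviate_booktitle("International Conference on Machine Learning"))  # ICML
```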
### Additional context
_No response_
-
As the title says: if the goal is to reconstruct features, why can't we prevent the model from producing the trivial all-ones solution (i.e., forcing the output to equal the input), which gives a loss of 0? Even if the model is very complex and the exactly trivial parameters are hard to reach, as the parameters approach that trivial solution the loss also approaches 0, and such parameters are meaningless. No matter how complex the intermediate architecture is, how can we make sure the model is learning the underlying patterns of the data rather than simply copying it? Could you help me with this question? Thanks!
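For concreteness, here is a minimal sketch of the situation the question describes, using a generic linear reconstruction model (a hypothetical illustration, not the model from this repository): when the weights sit close to the identity map, the reconstruction loss is already close to zero even though nothing about the data's structure has been learned.

```python
# Illustration of the concern above: a linear "reconstruction" whose weights
# are near the identity matrix reproduces the input almost exactly, so the
# reconstruction MSE is near 0 without learning any structure in the data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 16))                                 # a batch of 16-dim features

near_trivial_w = np.eye(16) + 1e-3 * rng.normal(size=(16, 16)) # weights close to identity

reconstruction = x @ near_trivial_w
mse = np.mean((reconstruction - x) ** 2)
print(f"reconstruction MSE near the trivial solution: {mse:.6f}")  # ~0
```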
-
Hi Tao,
Thank you for making a comprehensive list of literature on Flow matching!
- I'd like to recommend two works related to bridge matching:
**Diffusion Bridge Mixture Transports, Schrödin…
-
Hi Ethan,
Congrats on your paper’s acceptance at ICLR and thank you for sharing the great work! I’m reaching out with a question about Eqn. 2 in the paper. I was going through the 2D-SSM derivation…
-
Thanks for your great work at ICLR 2024; I am quite interested in it. Since I want to compare the performance of RL and LLM approaches, could you please also share the code for the GRAD algorithm?
-
Does the size of the input and output images have to be fixed, or can the original content image size be maintained?
-
Hi,
Thanks for this cool work. Would you mind adding our work on a diffusion-based image codec?
Idempotence and Perceptual Image Compression, ICLR 2024
Arxiv: https://arxiv.org/abs/2401.08920
Code:…
-
The entry for our paper Corrective Machine Unlearning has the following issues:
1. The author name is misspelled; it should be Goel et al., not Geol et al.
2. Codebase: https://github.com/drimpossible/cor…
-
Hi, thank you for sharing your work!
I am also interested in similar works.
Are you planning to conduct experiments such as a comparison with ANT [1] or a compatibility study with DTR [2]?
[1] A…