jabir-zheng / TCD

Official Repository of the paper "Trajectory Consistency Distillation"
https://mhh0318.github.io/tcd

On Plagiarism of "Trajectory Consistency Distillation" #13

Open Kim-Dongjun opened 3 months ago

Kim-Dongjun commented 3 months ago

We sadly found out that our Consistency Trajectory Models paper (CTM, ICLR 2024) was plagiarized by Trajectory Consistency Distillation (TCD)! See Twitter and Reddit.

We are deeply disappointed by the TCD authors' inappropriate reaction. Accordingly, we have reported their plagiarism to their affiliated universities, the Hugging Face team, and ICML. *Speaking for myself only

ilicnon commented 3 months ago

I would choose plagiarized poop that actually helps my QoL with SDXL models any day over some ML paper with a diffusion model that can only generate ImageNet images.

Also, notify the Hugging Face team for what? To remove those useful LoRAs, which are certainly more useful than your models?

Not affiliated with authors in any way BTW

advenTure423 commented 3 months ago

> I would choose plagiarized poop that actually helps my QoL with SDXL models any day over some ML paper with a diffusion model that can only generate ImageNet images.
>
> Also, notify the Hugging Face team for what? To remove those useful LoRAs, which are certainly more useful than your models?
>
> Not affiliated with authors in any way BTW

Well done! Another poop that just works on poop datasets like CelebA-HQ and CIFAR-10: https://arxiv.org/abs/2006.11239. By the way, "Not affiliated with authors in any way BTW", lol

ilicnon commented 3 months ago

> Well done! Another poop that just works on poop datasets like CelebA-HQ and CIFAR-10: https://arxiv.org/abs/2006.11239.

I didn't call CTM or the DDPM paper poop, but if you insist: the poop you're talking about would be nothing if Stability.AI or NAI hadn't come around and "plagiarized" it. Will someone, or Sony, or the CTM team actually release something based on CTM that is as useful as the LoRA released by TCD? I doubt it. In that case, the LoRA released by the TCD team is a greater net positive for the community than anything from CTM, and not removing such a net positive is the hill I will die on.

Still not affiliated with authors in any way.

mhh0318 commented 3 months ago

We staunchly oppose any form of plagiarism, as well as unwarranted accusations. At the current stage we will keep this issue open, but we sincerely hope the discussion will focus on the technical aspects of the matter.

MoonRide303 commented 3 months ago

@Kim-Dongjun I've looked at the TCD paper, and I see it clearly stated there that the proof comes from your work (CTM), which is listed in the references. Are you sure you didn't overreact a bit here?

advenTure423 commented 3 months ago

> @Kim-Dongjun I've looked at the TCD paper, and I see it clearly stated there that the proof comes from your work (CTM), which is listed in the references. Are you sure you didn't overreact a bit here?

It is actually a disgraceful trick, which amounts to saying: oh, I just borrowed this one part, and that's all I took from CTM. But the truth is that the core idea of TCD is nearly identical to CTM's. Given that TCD copied word for word in many places, its authors must have been well aware of CTM, so such behavior is obvious plagiarism. I would say TCD could have been a good technical report based on CTM, but the authors of TCD apparently did not plan it that way.

yoyololicon commented 3 months ago

> I would say TCD could have been a good technical report based on CTM, but the authors of TCD apparently did not plan it that way.

Totally agree. TCD offers some interesting information, but there should be more novelty if they want to submit it for peer review. Publishing papers is about contributing to science, not making products. Open science should focus on transparency and reproducible research, not on favouring any particular community.

advenTure423 commented 3 months ago

> Totally agree. TCD offers some interesting information, but there should be more novelty if they want to submit it for peer review.

It's not even about the lack of novelty. It is acceptable for a paper that lacks novelty to still be published, provided it genuinely contributes to some community. But it is disgraceful when a paper deliberately turns a blind eye to already-published work and takes all the credit for itself.

MoonRide303 commented 3 months ago

From the TCD paper (Appendix A, Related Works):

> Kim et al. (2023) proposes a universal framework for CMs and DMs. The core design is similar to ours, with the main differences being that we focus on reducing error in CMs, subtly leverage the semi-linear structure of the PF ODE for parameterization, and avoid the need for adversarial training.
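
For readers less familiar with the jargon: as I understand it (my own summary, not taken from either paper), the "semi-linear structure of the PF ODE" refers to the probability-flow ODE splitting into a linear drift term plus a nonlinear score term, so the linear part can be integrated exactly, as exponential-integrator samplers do. A generic sketch of that decomposition, under the usual drift/diffusion notation:

```latex
% Probability-flow ODE with drift coefficient f(t) and diffusion g(t):
%   dx_t/dt = f(t) x_t - (1/2) g(t)^2 \nabla_x \log p_t(x_t)
% The term f(t) x_t is linear in x_t; only the score term is nonlinear.
% Variation of constants integrates the linear part exactly:
\[
  x_t = e^{\int_s^t f(\tau)\,d\tau}\, x_s
        - \int_s^t e^{\int_\tau^t f(r)\,dr}\,
          \frac{g(\tau)^2}{2}\,\nabla_x \log p_\tau(x_\tau)\,d\tau .
\]
```

Parameterizations that exploit this exact linear part only need to approximate the remaining score integral, which is presumably the structure the quoted sentence is referring to.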

It doesn't quite look like turning a blind eye to me. It seems they gave credit (or at least tried to), and described how they improved on the CTM method. To my taste this should be stated more clearly in the introduction, not in the appendix at the end of the TCD paper. Reducing the computational cost of previously used methods can still be seen as a scientific improvement (IMO) - especially in ML / AI, where computational costs are often huge.

This is just my high-level attempt to understand the problem here, from a layman's perspective and without assuming bad faith. Instead of making a hasty judgement of my own, I would rather see the results of a cross-check by reviewers with a solid background in math and latent diffusion, who could fully understand all the mathematical details of both papers and evaluate whether the TCD method really is an improvement over CTM and a contribution to science, and to what degree.