Open canyuchen opened 6 months ago
Thank you very much for your detailed description. This really helps a lot!
We have added the two works to our survey and the GitHub Repository. If there are any other works we have missed, please let us know.
Thank you again for your great contributions to the Reasoning Community, and for your attention to our work. Wish you a good day!
Congratulations on your recent survey paper! I am impressed by its depth and comprehensiveness.
I would greatly appreciate it if you could consider citing our work [1][2] in the following places:

- "LLMs can contribute to the dissemination of misinformation, both intentionally and unintentionally" in "Section 6.2 Interpretability and Transparency", or the "Hallucinations" part of "Section 5 Discussion: Challenges, Limitations, and Risks"
- "Various intended attacks have been identified, including the ... disinformation" in "Section 6.1 Safety and Privacy"

You could also check out our project website: https://llm-misinformation.github.io/ Thanks a lot!
[1] Combating Misinformation in the Age of LLMs: Opportunities and Challenges. https://arxiv.org/abs/2311.05656
[2] Can LLM-Generated Misinformation Be Detected? https://arxiv.org/abs/2309.13788