zjunlp / Prompt4ReasoningPapers

[ACL 2023] Reasoning with Language Model Prompting: A Survey
MIT License

Some new papers with logical reasoning #6

Closed 14H034160212 closed 1 year ago

14H034160212 commented 1 year ago

Hi,

Thanks for the great work! We are the team from the Strong AI Lab, University of Auckland, New Zealand. Here are three papers on deductive logical reasoning and abductive logical reasoning. Please feel free to consider adding them to a future arXiv version of the paper.

Deductive Logical Reasoning

We construct logical-equivalence data augmentation for contrastive learning to improve language models' logical reasoning performance. We achieved #2 on the ReClor leaderboard (one of the hardest logical reasoning reading comprehension datasets, with data collected from the LSAT and GMAT), and we also outperformed other baseline models on several logical reasoning reading comprehension tasks and natural language inference tasks. Here are the details of the paper.

Our paper (Qiming Bao, Alex Yuxuan Peng, Zhenyun Deng, Wanjun Zhong, Neset Tan, Nathan Young, Yang Chen, Yonghua Zhu, Michael Witbrock, Jiamou Liu) "Contrastive Learning with Logic-driven Data Augmentation for Logical Reasoning over Text" [Paper link] [Source code] [Model weights] [Leaderboard].

Multi-Step Deductive Logical Reasoning

This paper from our lab has been published at IJCLR-NeSy 2022, a new conference that focuses specifically on learning and reasoning; Prof. Zhi-Hua Zhou is one of the co-organizers. The paper focuses on multi-step deductive reasoning and proposes a larger multi-step deductive reasoning dataset over natural language, called PARARULE-Plus, which addresses the reasoning-depth imbalance issue in the RuleTaker dataset. Our PARARULE-Plus dataset has been collected and merged by LogiTorch.ai and OpenAI/Evals.

Our paper (Qiming Bao, Alex Peng, Tim Hartill, Neset Tan, Zhenyun Deng, Michael Witbrock, Jiamou Liu) "Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation" has been accepted for presentation at the 2nd International Joint Conference on Learning & Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy-22) [Paper link] [Source code and dataset] [Presentation recording].

Abductive Logical Reasoning

This paper from our lab has been published in Findings of ACL 2022. The paper focuses on abductive logical reasoning and proposes a new abductive logical reasoning dataset over natural language, called AbductionRules, which helps transformers explain and generate the reason for a given observation. Our AbductionRules dataset has been collected by LogiTorch.ai.

Our paper (Nathan Young, Qiming Bao, Joshua Ljudo Bensemann, Michael J. Witbrock) "AbductionRules: Training Transformers to Explain Unexpected Inputs" has been accepted for publication in the Findings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL-22) [Paper link] [Source code].

GoooDte commented 1 year ago

Thank you for your attention to our work and for your essential contributions to deductive and abductive reasoning. We will consider incorporating your work into future versions as soon as possible. Looking forward to more excellent work from you in the future.

14H034160212 commented 1 year ago

Thanks @GoooDte!

zxlzr commented 1 year ago

Hi, we have updated the arXiv version.

14H034160212 commented 1 year ago

Thanks a lot! I just found a typo in the arXiv paper: the above papers focus on deductive reasoning and abductive reasoning, not deductive reasoning and inductive reasoning.

zxlzr commented 1 year ago

Thanks, we will fix it soon.

zxlzr commented 1 year ago

It has been fixed now.

14H034160212 commented 1 year ago

Thanks @zxlzr!