nuric / deeplogic

DeepLogic: Towards End-to-End Differentiable Logical Reasoning
https://arxiv.org/abs/1805.07433
BSD 3-Clause "New" or "Revised" License

Issue with figure generation #4

Closed: 14H034160212 closed this issue 4 years ago

14H034160212 commented 4 years ago

Hi,

Thanks a lot for your suggestion! I can now generate the figure after deleting the `plt.tight_layout()` call, but two questions remain. One is that the figure panels overlap. I double-checked the source code; there is a `self.figure.tight_layout()` inside PyCharm's `backend_interagg.py`. Here is the figure that I got. The other question is that I cannot view the figure in PyCharm's plotting plug-in and can only use the `--outf` argument to save the figure to disk. Do you have any idea about this issue?

[attached figure: OUTF]
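For reference, this is roughly the workaround I use to write the figure straight to disk with a non-interactive backend, bypassing the plot tool (a minimal sketch; the plotted data and the output filename are placeholders, not from the repository):

```python
# Minimal sketch: render with the non-interactive Agg backend and save the
# figure to disk instead of showing it in PyCharm's plot tool.
# The data and "attention_plot.png" are placeholders for illustration.
import matplotlib
matplotlib.use("Agg")  # must be selected before importing pyplot
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].plot([0, 1, 2], [0, 1, 4])
axes[1].plot([0, 1, 2], [4, 1, 0])
fig.savefig("attention_plot.png", dpi=150)
```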

Besides, how did you get Figure 2 in the paper? I am curious how to obtain the backward chaining and forward chaining plots from the IMA model with softmax attention. From the source code I found that `imasm` is used for backward chaining and `fwimarsm` for forward chaining. Here is the diagram that I got. Backward chaining is more evident than forward chaining: in the first forward chaining table, step 2 from the left should arguably show r(X):-s(X) rather than p(X):-q(X), since forward chaining proceeds from the fact upwards.

[attached figure: plot_dual_attention_imasm_fwimarsm]
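To check my understanding of the two orders, here is a toy sketch I wrote (my own illustration, not the repository's code) of forward versus backward chaining over the rule chain p(X):-q(X), q(X):-r(X), r(X):-s(X) with the fact s(a); the forward pass fires r(X):-s(X) first, which is why I expected it at step 2:

```python
# Toy illustration (mine, not the model): the two chaining orders over the
# rule chain p(X):-q(X), q(X):-r(X), r(X):-s(X) with the single fact s(a).
rules = [("p", "q"), ("q", "r"), ("r", "s")]  # (head, body) pairs

def forward_chain(fact):
    """Start from the known fact and fire rules until nothing new derives."""
    derived, changed = {fact}, True
    while changed:
        changed = False
        for head, body in rules:
            if body in derived and head not in derived:
                derived.add(head)  # s fires r(X):-s(X) first, then q, then p
                changed = True
    return derived

def backward_chain(goal, fact):
    """Start from the goal and unfold rule bodies down to the fact."""
    chain, head_to_body = [goal], dict(rules)
    while chain[-1] != fact:
        chain.append(head_to_body[chain[-1]])  # follow head -> body
    return chain

print(sorted(forward_chain("s")))  # ['p', 'q', 'r', 's'] all derived for X=a
print(backward_chain("p", "s"))    # ['p', 'q', 'r', 's'] in goal-first order
```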

Is the deepest reasoning depth 4? I see that some questions are finished before step 4. What if a question cannot be solved in 4 steps? Also, can the model solve more complex problems at test time? The model is trained by multi-task learning but tested on each task separately. I see there are different difficulty levels of questions, from easy to hard; what is the main difference between them? And can the model solve a problem that combines the properties of all 12 tasks?

Many thanks!

nuric commented 4 years ago

I'm not sure about the PyCharm-related issues; they don't sound like bugs in the source code, since you are able to reproduce the results and the diagrams. I would recommend looking at the PyCharm documentation and, as you did with `tight_layout`, adjusting the visualisation code.
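For example, something along these lines replaces `tight_layout()` with explicit spacing (a sketch only; tune the numbers and filename for your figures):

```python
# Sketch: control subplot spacing manually instead of calling tight_layout(),
# which clashes with the tight_layout() call inside backend_interagg.py.
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2)
for ax in axes.flat:
    ax.plot([0, 1], [0, 1])
fig.subplots_adjust(wspace=0.4, hspace=0.5)  # widen gaps between panels
fig.savefig("plot.png")  # placeholder filename
```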

The details of the forward chaining case are explained in the paper, at the end of the experiments section. There are certain training conditions under which that behaviour is achieved.

Yes, at training time the models are iterated at most 4 times to cover 4-step reasoning. Your questions about longer chains of reasoning at test time and about question difficulty are evaluated and discussed in the paper: see the analysis section, and the details of the easy, medium, and hard datasets are given in the experiments section.
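Conceptually, the iteration cap looks like this (a simplified sketch, not the repository's exact code; `step` stands in for one differentiable reasoning step of the network):

```python
# Simplified sketch of capped iterative reasoning, not the actual model code.
# `state` stands for the network's soft memory / attention state.
MAX_STEPS = 4  # reasoning depth the models are trained for

def reason(step, state):
    """Apply one soft reasoning step up to MAX_STEPS times."""
    for _ in range(MAX_STEPS):
        state = step(state)  # each call corresponds to one chaining step
    return state

# Trivial illustration with a dummy step function:
print(reason(lambda s: s + 1, 0))  # -> 4
```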