maxeeem opened this issue 5 months ago
@bowen-xu this is related to the conversation here. When writing it as a test case it actually passes, since the test looks at all of the derived tasks; but Console isn't reporting it, so somewhere in the control part there is a broken link.
```python
def test_multistep_1(self):
    tasks_derived = process_two_premises(
        '<a --> b>.',
        '<b --> c>.'
    )
    tasks_derived.extend(process_two_premises(
        '<c --> d>.',
        '<a --> d>?',
        200
    ))
    self.assertTrue(
        output_contains(tasks_derived, '<a --> d>. %1.00;0.73%')
    )
```
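For reference, `output_contains` just scans the full list of derived tasks for a matching line, roughly like the sketch below (my paraphrase of the test helper, not the actual PyNARS implementation), which is why the test can pass even when Console never reports the answer:

```python
# Sketch only: assumes each derived task renders to its Narsese line via str().
def output_contains(tasks_derived, target: str) -> bool:
    """True if any derived task prints as the target line."""
    return any(target in str(task) for task in tasks_derived)
```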
@bowen-xu Hm... I just tried ConsolePlus and got the right answer....
Which is the right console to use: Console, ConsoleMC, or ConsolePlus? Should we continue maintaining all of them, or focus on updating the basic one, since there's clearly some inconsistency?
EDIT:
Actually, the results for ConsolePlus are not really consistent. It seems to report both answers and alternate between them. Something strange with the control mechanism, I think.
after running the first 2000 cycles...

```
.....
Input: <a --> d>?
0.90 0.90 1.00 IN    :<a --> d>?
Input: 10
INFO :Run 10 cycles.
...
0.96 0.90 0.64 ANSWER:<a --> d>. %1.000;0.287%
0.96 0.90 0.64 ANSWER:<a --> d>. %1.000;0.287%
0.96 0.90 0.64 ANSWER:<a --> d>. %1.000;0.287%
0.96 0.90 0.64 ANSWER:<a --> d>. %1.000;0.287%
0.96 0.90 0.64 ANSWER:<a --> d>. %1.000;0.287%
0.96 0.90 0.64 ANSWER:<a --> d>. %1.000;0.287%
0.96 0.90 0.64 ANSWER:<a --> d>. %1.000;0.287%
0.86 0.90 0.86 ANSWER:<a --> d>. %1.000;0.729%
0.96 0.90 0.64 ANSWER:<a --> d>. %1.000;0.287%
0.86 0.90 0.86 ANSWER:<a --> d>. %1.000;0.729%
```
@maxeeem `Console` is the default one. `ConsoleMC` was implemented by Tangrui (@MoonWalker1997) for MultiChannel, and it is outdated. `ConsolePlus` was implemented by @ARCJ137442 as an optional one.
> Hm... I just tried ConsolePlus and got the right answer....
> Which is the right console to use: Console, ConsoleMC, or ConsolePlus? Should we continue maintaining all of them, or focus on updating the basic one, since there's clearly some inconsistency?
>
> EDIT: Actually, the results for ConsolePlus are not really consistent. It seems to report both answers and alternate between them. Something strange with the control mechanism, I think.
@maxeeem I tried the old engine (GeneralEngine), and it output the answer within 500 cycles:
input:

```
<a --> b>.
<b --> c>.
<c --> d>.
<a --> d>?
500
```

output:

```
...
0.88 0.03 0.09 OUT   : <<$1-->a><=><$1-->d>>. %1.000;0.287%
0.88 0.03 0.10 OUT   : (&&, <#1-->a>, <#1-->d>). %1.000;0.403%
0.83 0.28 0.47 OUT   : <c-->c>?
0.98 0.90 0.86 ANSWER: <a-->d>. %1.000;0.729%
0.93 0.50 0.65 OUT   : <a-->d>. %1.000;0.297%
```
> I tried the old engine (GeneralEngine), and it output the answer within 500 cycles:
Right, very possible. And if you set `compositional_enabled` to `False`, then KanrenEngine also produces a result. However, this alone does not explain why the answer isn't produced with it enabled even though there is a derived statement with the correct truth value, or why ConsolePlus can give the right answer in some cases but not in others. I still think the control part requires our attention if the goal is to migrate to the new inference engine.
I would recommend reviewing the `inference_step` code to see if there is some issue we can identify.
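A minimal sketch of that comparison, assuming `compositional_enabled` is exposed as an engine attribute (the import path and the flag's exact location in the source are assumptions based on this thread, not the verified PyNARS API):

```python
# Sketch, not the verified PyNARS API: import path and flag location
# are assumptions; only the flag's name comes from this thread.
from pynars.NARS.InferenceEngine import KanrenEngine

engine = KanrenEngine()
engine.compositional_enabled = False  # disable compositional rules

# Rerun the four-premise example from above with this engine; with the
# flag off, the %1.00;0.73% answer is derived and reported.
```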
@bowen-xu I have another theory. Because the new engine produces many more derived statements, there are many more possible combinations, and some relevant items may be pushed out of memory, since we default to `n_memory = 100` in the `run_nars` method in `Console.py`. If you increase it to 1000, for example, and run the inference for 11000 steps like in the OpenNARS test case, we get the expected result of 73% confidence.
So to me this once again points to a control issue more than the inference engine, i.e. how we pick what to pass to the inference engine at every cycle and how we allocate resources.
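For concreteness, the experiment looks roughly like this (a sketch: the `Reasoner` constructor arguments come from this thread, while the input and cycle method names are assumptions rather than the verified API):

```python
# Sketch: enlarge memory so intermediate beliefs are not evicted, then run
# long enough for the multistep chain to complete. Method names are assumptions.
from pynars.NARS import Reasoner

nars = Reasoner(n_memory=1000, capacity=1000)  # default n_memory is 100
for line in ('<a --> b>.', '<b --> c>.', '<c --> d>.', '<a --> d>?'):
    nars.input_narsese(line)  # hypothetical input method
for _ in range(11000):        # cycle count from the OpenNARS test case
    nars.cycle()
# Expected: ANSWER <a --> d>. %1.00;0.73%
```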
@maxeeem
> `ConsolePlus` was implemented by @ARCJ137442 as an optional one.
As @bowen-xu said, `ConsolePlus` is an improved version of the default `Console`, written by me.
You can find more details on the enhancements in PR#27.
> Which is the right console to use: Console, ConsoleMC, or ConsolePlus? Should we continue maintaining all of them, or focus on updating the basic one, since there's clearly some inconsistency?
There is already an open issue about this: #35.
I'm not sure how these Console implementations will be handled; maybe they will eventually be merged into one.
> Actually, the results for ConsolePlus are not really consistent. It seems to report both answers and alternate between them. Something strange with the control mechanism, I think.
I checked the code in `Console.py` and `ConsolePlus.py`. One difference in reasoning behavior between the two consoles lies in the default values of the parameters `n_memory` and `capacity` used when building the reasoner: the defaults in `Console` are smaller than those in `ConsolePlus` (100 vs 500).
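In other words, the two consoles effectively construct their reasoners like this (only the 100-vs-500 defaults are from this thread; the import path and constructor signature are assumptions):

```python
from pynars.NARS import Reasoner  # assumed import path

nars_console      = Reasoner(n_memory=100, capacity=100)  # Console defaults
nars_console_plus = Reasoner(n_memory=500, capacity=500)  # ConsolePlus defaults
```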
I also checked the output-printing code in both consoles, where `NARSOutput` is defined and used (the code there is a little complicated; its main purpose is to adapt to terminals that do not support ANSI escape sequences). I think there is no functional difference between the two consoles in how they print outputs.
**Describe the bug**
When trying the multistep reasoning example from OpenNARS, the system is able to derive the correct conclusion, but Console reports it as a regular OUT rather than as an ANSWER.
**To Reproduce**
Steps to reproduce the behavior:
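Input the following Narsese (the example discussed in this thread) and run the given number of cycles:

```
<a --> b>.
<b --> c>.
<c --> d>.
<a --> d>?
500
```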
**Expected behavior**
The higher-confidence statement should be reported as the best available answer.