Closed: automenta closed this issue 2 years ago.
also i think the ! symbol for negation is an awful syntax choice, given that it already serves perfectly as goal punctuation. even NARS' original '(--,x)' is better, or my shorthand '--x'
The negation rule can be found here: https://github.com/opennars/OpenNARS-for-Applications/blob/master/src/NAL.h#L160
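For reference, that rule just inverts the frequency and keeps the confidence; a minimal sketch in Java (the Truth record here is illustrative, the actual rule is the C function linked above):

class TruthFunctions {
    record Truth(float freq, double conf) {}

    // Truth_Negation: f' = 1 - f, c' = c
    static Truth negation(Truth t) {
        return new Truth(1f - t.freq(), t.conf());
    }
}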
Hm, if negation is removed at input, an exception for goals would have to be made, since G! %f% and (-- G)! %1-f% are not the same! There are also some issues with introducing negation once it's removed, but this is solvable.
Regarding syntax: ONA supports the (! a) notation, the (-- a) notation, the original (--,a) negation, and (!,a).
look, it's something i don't not want. you understand my goal?
I understand the idea to remove negated concepts, which I think is valid. It would need more work though, so as not to lose the ability to learn to reach negated outcomes. It would need to unify the table structures of negated and non-negated statements into the concept (as <A ==> (-- B)> is not the same as (-- <A ==> B>)). But this would essentially assume that each concept needs an implicit negated version as well, while in practice only a very small portion (typically less than 5%) needs a negated version. Hence, this would actually reserve even more space for negations via table structures, rather than less.
Regarding the latter:
I admit it's a bit counter-intuitive, but goals are conceptually implication statements:
G! = <G ==> D>
Hence
(-- G)! = <(-- G) ==> D>
which is not the same as
(-- <G ==> D>)
in my view, <A ==> --B> IS the same as --<A ==> B>. this is another reduction that comes with the auto-unneg.
you can pretend that what i want is not what i don't not want, but there isn't much point in not doing what you don't want to not do
just signals oscillating between 0 and 1. how else do you control a motor.
The Victor 884 works by receiving a PWM signal input from a robot controller, such as the (full) Robot Controller, the Robovation controller, or a Vex Controller. Depending on the value of the PWM signal - with 0 being full reverse, 127 being neutral, and 254 being full forward - the Victor 884 adjusts the output of the motor accordingly. This achieves variable speed control for such applications as drivetrains, arms, or elevators.
...or beliefs
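a minimal sketch of the mapping described in that quote (assuming the 0..254 byte range given there; illustrative, not vendor code):

class Victor884 {
    // 0 = full reverse, 127 = neutral, 254 = full forward -> signed output in [-1, 1]
    static float pwmToOutput(int pwm) {
        return (pwm - 127) / 127f;
    }
}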
While this isn't the case with ==>, I actually agree that <A =/> --B> and (-- <A =/> B>) are equal, and we are talking about temporal implication tables, which indeed hold =/>.
In ONA, negation is currently only "unwound" via the inference rule above, but never introduced.
So when positive events are presented as
(-- P). :|: %0%
(-- Q). :|: %0%
it will actually derive both
P =/> (-- Q) %0%
P =/> Q %1%
while when they are presented in the natural way
P. :|:
Q. :|:
it will only derive
P =/> Q %1%
Maybe it should indeed do both, as that would allow it to learn how to prevent events by default rather than just how to realize them, and not only when it can perceive the negation. Whether that should happen in the same concept or not is a separate issue, but maybe it should (though it wouldn't gain much space, probably 10% at most, since the table space for the negated cases would still need to exist). I will experiment with this once the new version is out (almost done)!
Regarding the goals: here I disagree. "getting rich doesn't make me happy" = rich! %0% is simply a different thing than "not getting rich makes me happy" = (-- rich)!. Only the latter leads to "avoid getting rich" behavior, while in the former case the system simply doesn't care.
i've mentioned that i have a problem with the Deduction truth fn used in impl syllogism. i will explain it briefly here:
in Deduction, if either of the implications is negative, you will get a negative freq result with very low, if not zero, confidence.
what i discovered i wanted is to preserve the freq polarity of the ultimate postcondition. so i invented the 'Conduct' truth fn, which simply preserves the frequency but attenuates the confidence as appropriate. this allows a positive impl to interact with a negative impl and derive a negative impl with high confidence. i know this breaks the symmetry with NAL1's syllogistic rules, but maybe those need to be adjusted this way too (and i have tried).
this also makes sense for "strong deduction", the reaction of an event with an implication - which is not exactly "Deduction" either.
i don't know why you're confusing rules and desires here. whether you want X or not, that is your happiness. if happiness is something separate, then you need an implication to relate it to the goal.
You seem to ignore the actual formalization of G! as <G ==> D>, where D stands for "desired state".
"what i discovered i wanted is to preserve the freq polarity of the ultimate postcondition. so i invented the 'Conduct' truth fn which simply preserves the frequency but attenuates the confidence as appropriate. this allows a positive impl to interact with a negative impl and derive a negative impl with high confidence."
This sounds fundamentally wrong; deduction can't do that. Generally, you can't just make up truth functions as the wind blows. You have to consider how evidence actually works. You might want to have a look at NAL-1 again, where NAL deduction is introduced.
i absolutely can and did invent new truth functions. i didn't mention some of the others, like Divide, which is an ideal Decompose involving division - the opposite of intersection's multiplication.
Deduction can't do it because of its biased dependence on the multiplied frequencies in attenuating conf, which will be ZERO if either premise component is zero. that's why i said i replaced it with Conduct. this covers all the polarity cases without unnecessary redundant reified negation cluttering the entire memory.
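for contrast, standard NAL deduction (as in OpenNARS) computes roughly the following - a sketch reusing the Truth/truth helpers from the conduct snippet below:

/** standard NAL deduction: f = f1*f2, c = f1*f2*c1*c2;
    if either premise frequency is 0, the conclusion confidence collapses to 0 */
public static Truth deduction(Truth a, Truth b) {
    float f = a.freq() * b.freq();
    double c = f * a.conf() * b.conf();
    return truth(f, c);
}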
impl_conduct {
( S ==> X), ( X ==> P), --var(X) |- implSyl(S,P,1,1,i), (Belief:ConductPPX, Time:Union)
( X ==> P), ( S ==> X), --var(X) |- implSyl(S,P,1,1,o), (Belief:ConductPP, Time:Union)
( S ==> X), (--X ==> P), --var(X) |- implSyl(S,P,1,1,i), (Belief:ConductNPX, Time:Union)
(--X ==> P), ( S ==> X), --var(X) |- implSyl(S,P,1,1,o), (Belief:ConductPN, Time:Union)
}
Conduct is simply a strong form of Induction:
/** propagates the frequency of `dir`; attenuates confidence via `mag`:
    conf = mag.conf * dir.conf * mag.freq, so a weak or low-frequency
    `mag` premise weakens the conclusion without flipping its polarity */
public static Truth conduct(Truth dir, Truth mag) {
    float mf = mag.freq();
    double c = mag.conf() * dir.conf() * mf;
    return truth(dir.freq(), c);
}
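for a concrete example (assuming dir is the postcondition-side premise, per the rules above): given (S ==> X) %1.0;0.9% as mag and (X ==> P) %0.0;0.9% as dir, deduction yields f = 1*0 = 0 with c = 0 * 0.9 * 0.9 = 0, so the conclusion evaporates, while conduct yields f = 0 with c = 0.9 * 0.9 * 1.0 = 0.81 - a confident negative implication (S ==> P) %0.0;0.81%.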
why separate G from D? all i need is to compare belief and goal truth to determine whether the event=condition=state is satisfied or not, whether that satisfaction desires positive (f=~1) or negative (f=~0) or somewhere in between, and, if not satisfied, in what direction (freq difference) the belief would have to move.
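a minimal sketch of that comparison (names are illustrative, not Narchy's actual API):

class Satisfaction {
    /** 1 = belief matches the goal exactly, 0 = maximally unsatisfied */
    static float satisfaction(float beliefFreq, float goalFreq) {
        return 1f - Math.abs(beliefFreq - goalFreq);
    }

    /** signed direction the belief frequency would have to move */
    static float direction(float beliefFreq, float goalFreq) {
        return goalFreq - beliefFreq;
    }
}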
i have my doubts about classic NAL1 also, and made changes there too.
“Language creates spooks that get into our heads and hypnotize us.” – Robert Anton Wilson https://climateviewer.com/2014/04/06/the-anatomy-of-political-slavespeak/
i think you basically understand my position on this now. can you give one example showing what can be represented only in nars and not narchy? :jack_o_lantern:
I don't have the resources currently to do so, but might analyze these considerations again in the future. In the meanwhile, you could try to see if Narchy can get scores similar to ONA's in the included procedure learning tasks; I think this is the best empirical test of whether the basic truth functions lead to proper credit assignment and contextually applicable hypothesis usage. Also, feel free to read up on the discussion in https://groups.google.com/g/open-nars/c/ILfG8OFVxN8/m/33toHH4rrxYJ regarding deduction in NAL, or to join the discussion there. It's not up to ONA to build a new theory.
Closed for now as not in accordance with the theory implemented by ONA.
you make such good examples. are you sure you can't think of just 1 that illustrates a need for "low density" negation-cluttered memory? lol
While auto-unnegate is not the full solution, I actually agree with your idea to combine negated and non-negated concepts. I will implement this after the next release, because I also want to have precondition implication tables for avoidance learning, so as to essentially maintain both A =/> B and A =/> (-- B) in concept B.
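A rough sketch of what that could look like (hypothetical structure, not ONA's actual data layout):

import java.util.ArrayList;
import java.util.List;

// concept B holds implication tables for both polarities of its outcome
class Concept {
    final String term;                              // e.g. "B"
    final List<String> realize = new ArrayList<>(); // holds A =/> B
    final List<String> avoid = new ArrayList<>();   // holds A =/> (-- B)

    Concept(String term) { this.term = term; }
}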
Thanks for bringing this up btw., I think this will be useful.
A simple normalization step for negated input tasks that un-negates the term and inverts the truth frequency.
this should be functionally equivalent to the Truth_Negation rule - but without needing to create any separate concepts, ex: X vs. !X.
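a minimal sketch of the step (hypothetical types, not the actual API):

class NegationNormalizer {
    interface Term { boolean isNeg(); Term unneg(); }
    record Truth(float freq, double conf) {}
    record Task(Term term, Truth truth) {}

    // if the input term is a negation, strip it and invert the frequency
    static Task normalize(Task input) {
        if (input.term().isNeg()) {
            // functionally equivalent to Truth_Negation: f -> 1 - f, conf unchanged
            return new Task(input.term().unneg(),
                    new Truth(1f - input.truth().freq(), input.truth().conf()));
        }
        return input; // already un-negated; pass through
    }
}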
I was unable to test the effect of this code because I could find no example that actually makes use of negation, or, for that matter, any belief or goal with a frequency other than 1.0. assistance would be appreciated here. maybe this step is already being applied somewhere, but i doubt it - having grepped the code for occurrences of NEGATION, i cannot find it.
the lack of this normalization can be seen as a "combinatorial memory leak" in NARS implementations like ONA. while NAL clearly follows fuzzy-logic t-norm formulas, which generally preserve de morgan's laws under negation, there ought not to be separate concepts where the semantics are meant to be equivalent. this produces a redundant, explosive subset of negated 'shadow' concepts - beliefs and goals included - that could otherwise be collapsed into their unnegated core term.
there are other semantic symmetries like this that i could illustrate, but this is probably the most widely applicable and far-reaching. assuming negation is applied anywhere at all.
the performance effects of applying this normalization ought to be noticeable, for better or worse. maybe the shadow negation concepts, if they are meant to exist, provide some benefit, but it would not be without memory cost. so i included a variable, assignable via a configuration option, to toggle it for comparison.