These are the different formulations we tried for the DNF layer, but we ended up using only the WeightedDNF in the paper. `BaseDNF` sets up the permutation and computation, and the other classes override how the semi-symbolic layer is computed. `RealDNF` uses `soft_minimum` to compute conjunction, while the plain `DNF` uses regular multiplication. You can trace the differences in the `compute_conjunction`, `reduce_existential` and `compute_disjunction` functions.
With `RealDNF` the assumption is that truth values have no bounds (they are real numbers); with the plain `DNF` they are constrained to the fuzzy-logic [0, 1] range; and `WeightedDNF` constrains them to [-1, 1] using `tanh`.
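
To make the distinction concrete, here is a minimal, self-contained sketch of how the three conjunction steps could look. This is not the repository's actual code: the class and method names (`BaseDNF`, `compute_conjunction`, the toy weights) only mirror the ones mentioned above, and the real layers' formulas, weighting and bias handling differ.

```python
import numpy as np

class BaseDNF:
    """Shared setup; subclasses override how the conjunction is computed."""
    def compute_conjunction(self, x, w):
        raise NotImplementedError

class RealDNF(BaseDNF):
    """Unbounded real truth values; conjunction via a smooth (soft) minimum."""
    def compute_conjunction(self, x, w, temperature=0.1):
        # softmin(x) = -T * logsumexp(-x / T); approaches min(x) as T -> 0
        scaled = -x / temperature
        m = scaled.max()
        return -temperature * (np.log(np.exp(scaled - m).sum()) + m)

class DNF(BaseDNF):
    """Fuzzy-logic truth values in [0, 1]; conjunction is a plain product."""
    def compute_conjunction(self, x, w):
        return np.prod(x)

class WeightedDNF(BaseDNF):
    """Truth values in [-1, 1]; conjunction as tanh of a weighted sum.
    The actual layer also needs a bias term so that a single false
    literal can pull the whole conjunction towards -1."""
    def compute_conjunction(self, x, w):
        return np.tanh(w @ x)

x01 = np.array([0.9, 0.8, 0.95])   # toy fuzzy truth values in [0, 1]
xpm = 2 * x01 - 1                  # same values rescaled to [-1, 1]
w = np.ones_like(x01)              # toy uniform rule weights
print(RealDNF().compute_conjunction(x01, w))      # ~0.75, smooth approximation of min = 0.8
print(DNF().compute_conjunction(x01, w))          # 0.684, the plain product
print(WeightedDNF().compute_conjunction(xpm, w))  # tanh(2.3) ~ 0.98
```
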
Hope that helps.
Thanks for your reply!
Thanks for your great work!
What's the difference between the BaseDNF, the weighted layer, and the real-valued layer? I have not seen this explained in your paper.
Looking forward to your reply!