Closed: SrGonao closed this issue 10 months ago
Hm, I wonder if the `select_k_features` is not applying the selection mask to the `X_units` and `y_units`... Does it work without that set?
Also, I would consider lowering the `dimensional_constraint_penalty`. Sometimes when it is too harsh a penalty, it prevents the search from exploring efficiently.
Hm, it looks like I indeed remembered to apply the selection mask to the units as well: https://github.com/MilesCranmer/PySR/blob/57dd7d2f9a2b214599e482cd7335ab0e815e8f1c/pysr/sr.py#L1541-L1543
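For reference, the masking in question amounts to something like the following simplified sketch (the function name is illustrative, not PySR's actual code; see the linked lines for the real implementation):

```python
def select_units(X_units, y_units, selection):
    """Subset the per-feature units with the same index array used to
    subset the feature columns (illustrative sketch).

    `selection` is the list of column indices kept by feature
    selection; `y_units` is unaffected, since the target column is
    never dropped.
    """
    if selection is not None and X_units is not None:
        X_units = [X_units[i] for i in selection]
    return X_units, y_units
```

If this mask were not applied, the units would stay aligned with the original columns while the data kept only the selected k, which is exactly the kind of mismatch suspected above.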
Actually maybe I misunderstood the problem. It could already be working but maybe the output is unclear. Could you describe:

> The function chosen, with a loss of 0.8, is not dimensionally consistent with itself, and is not even close to J/V.

what you mean here with an example? Note that any constants found during the search actually have their own units. The string `[⋅]` basically means it can take on any units that make the equation work. (I'd consider adding an option to remove this "wildcard dimensions" functionality if you want.)
> Also, I would consider lowering the `dimensional_constraint_penalty`. Sometimes when it is too harsh a penalty, it prevents the search from exploring efficiently.

Thanks for the tip, I will try it. I had set it to such a high value because it is the one suggested in the dimensional-constraints example, but I guess in that case it was so high because the values were also very high.
> Actually maybe I misunderstood the problem. It could already be working but maybe the output is unclear. Could you describe:
>
> > The function chosen, with a loss of 0.8, is not dimensionally consistent with itself, and is not even close to J/V.
>
> what you mean here with an example? Note that any constants found during the search actually have their own units. The string `[⋅]` basically means it can take on any units that make the equation work. (I'd consider adding an option to remove this "wildcard dimensions" functionality if you want.)
I was thinking about exactly that: maybe this was not a bug, but just a misunderstanding on my part about the functionality. From what I gathered, there is no way to get the individual units of each constant term, which could be important.

About your suggestion, the removal of the wildcard, or at least a way to distinguish between unitless constants and constants with units (by giving them a chosen complexity, for instance), would be very applicable. Maybe I'm misunderstanding the algorithm backstage, but if any constant can take on any unit, then it seems it will most likely bypass the unit search (maybe in my case it was more pressing, because some of my variables were indeed constants).

This means that clearly this was not a bug, but just a misunderstanding on my part.
Right, as an example, the expression

"y[m s⁻² kg] = (M[kg] * 2.6353e-22[⋅])"

is actually dimensionally consistent, because the `⋅`, when solved, can take on the units of m s⁻².
However, the expression

"y[m s⁻² kg] = (M[kg] * 2.6353e-22[⋅] + m[kg])"

would not be dimensionally consistent, because there do not exist any units that could be inserted into the `⋅` to make this expression work.
So, you may be asking: why not show units in the `⋅` instead of just leaving it blank (and having the user figure it out afterwards)? The reason is basically that I found it is much faster to check dimensional consistency this way. If we were to solve for exactly which units should be used in each `⋅`, it would be a bit slower (not to mention sometimes there are multiple solutions). The reason is: for a dimensional check, you basically just have to trace from the leaves of the expression upwards, recording whether there is a "wildcard" dimension or not. But to get the specific units, you would have to first trace from leaves to root, and then from root back to leaves to fill in the units.

Since we need to evaluate dimensional consistency very rapidly, the tradeoff just did not seem worth it, compared to the user figuring out the units afterwards.
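The single bottom-up pass described above can be sketched as a toy Python model (illustrative only, not the actual Julia implementation): each node carries its dimensions, a `wildcard` flag for a free constant whose units are undetermined, and a `violates` flag for an unfixable inconsistency.

```python
from dataclasses import dataclass

@dataclass
class WQ:
    dims: tuple      # unit exponents, e.g. (m, s, kg)
    wildcard: bool   # free constant: units not yet determined
    violates: bool   # expression is dimensionally inconsistent

def mul(l, r):
    if l.violates: return l
    if r.violates: return r
    dims = tuple(a + b for a, b in zip(l.dims, r.dims))
    # A product involving a wildcard can still take on any units.
    return WQ(dims, l.wildcard or r.wildcard, False)

def add(l, r):
    if l.violates: return l
    if r.violates: return r
    if l.dims == r.dims:
        return WQ(l.dims, l.wildcard and r.wildcard, False)
    if l.wildcard and r.wildcard:
        return WQ(l.dims, True, False)
    if l.wildcard:
        return WQ(r.dims, False, False)  # wildcard forced to r's units
    if r.wildcard:
        return WQ(l.dims, False, False)  # wildcard forced to l's units
    return WQ(l.dims, False, True)       # hard mismatch: violation

def consistent_with(expr, target_dims):
    # A wildcard at the root can still be solved to the target units.
    return not expr.violates and (expr.wildcard or expr.dims == target_dims)
```

With `M = m = WQ((0, 0, 1), False, False)` (kilograms) and a free constant `c = WQ((0, 0, 0), True, False)`, checking against the target dimensions `(1, -2, 1)` (m s⁻² kg) reproduces both examples above: `mul(M, c)` is consistent, while `add(mul(M, c), m)` is not, and neither check ever needed to solve for the wildcard's concrete units.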
But maybe there is a fast way to do it, and we could display the units instead of `⋅`. The dimensional analysis portion of the code is fairly recent and I'm open to suggestions/changing it!
You can see the dimensional analysis code here:
For example, the code for the addition and subtraction operators is given here (`op` is either `+` or `-`):
```julia
@eval function $(op)(l::W, r::W) where {Q,W<:WildcardQuantity{Q}}
    l.violates && return l
    r.violates && return r
    if same_dimensions(l, r)
        return W($(op)(l.val, r.val), l.wildcard && r.wildcard, false)
    elseif l.wildcard && r.wildcard
        return W(
            constructor_of(Q)($(op)(ustrip(l), ustrip(r)), typeof(dimension(l))),
            true,
            false,
        )
    elseif l.wildcard
        return W($(op)(constructor_of(Q)(ustrip(l), dimension(r)), r.val), false, false)
    elseif r.wildcard
        return W($(op)(l.val, constructor_of(Q)(ustrip(r), dimension(l))), false, false)
    else
        return W(one(Q), false, true)
    end
end
```
You can see there are five branches (after first checking if either the left or right argument is already dimensionally invalid): the dimensions already match; both arguments are wildcards; only the left is a wildcard; only the right is a wildcard; or the dimensions mismatch with no wildcard available, which marks a violation.
The type used for "wildcard" quantities is this one:
"""
WildcardQuantity{Q<:AbstractQuantity}
A wrapper for a `AbstractQuantity` that allows for a wildcard feature, indicating
there is a free constant whose dimensions are not yet determined.
Also stores a flag indicating whether an expression is dimensionally consistent.
"""
struct WildcardQuantity{Q<:AbstractQuantity}
val::Q
wildcard::Bool
violates::Bool
end
`Q` is a quantity-like type. The quantity objects are from the units package DynamicQuantities.jl, but this `WildcardQuantity` is defined in SymbolicRegression.jl.
Also, as a quick and dirty way to avoid learned constants, you can use `complexity_of_constants=100`. Then constants are prohibitively expensive, so the search will avoid them.
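Combining the two tips from this thread, a configuration sketch might look like this (the values are illustrative; `dimensional_constraint_penalty` and `complexity_of_constants` are the `PySRRegressor` options discussed above):

```python
from pysr import PySRRegressor

model = PySRRegressor(
    # Softer than 10**5: too harsh a penalty can stop the search
    # from exploring efficiently.
    dimensional_constraint_penalty=10**3,
    # Quick and dirty way to avoid learned constants: make them
    # prohibitively expensive.
    complexity_of_constants=100,
)
```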
What happened?
Hi! I'm experimenting with dimensional constraints and seeing unexpected behaviour.

When initializing the model I set `dimensional_constraint_penalty=10**5` and `select_k_features=8`. Then, when fitting the model, I'm using a pandas DataFrame (with 8 columns), with `X_units=["mol","V","nm","nm","J / V^2 / nm","mol / nm^3","J"]` and `y_units=["J / V"]`.

The first 2 equations found by the model have, as expected, the penalty added to the loss, and they have the wrong units ([nm] and [nm] + [J / V ^ 2]). The next equation is a constant and does not have the penalty on its loss. All following equations also do not have the penalty, even though they are not in units of J / V (which in SI units should be A*s).

The same thing happens when setting the units to J / V ^ 2, for instance. Is this the expected behaviour, or am I missing something? The function chosen, with a loss of 0.8, is not dimensionally consistent with itself, and is not even close to J/V.
Thank you
Version
0.16.3
Operating System
Windows
Package Manager
pip
Interface
Jupyter Notebook
Relevant log output
No response
Extra Info
This is what I'm using to run it; the data here is random, just so it can be run. In my actual code I'm using real data.