Open pasquierjb opened 1 year ago
I think I'm inclined to call this not a bug, or at least "won't fix" on the pvlib side. The example is only one ulp away from returning a positive projection:
>>> import math
>>> irradiance.aoi_projection(surface_tilt-math.ulp(surface_tilt), surface_azimuth, solar_zenith, solar_azimuth)
6.938893903907228e-18
float64 precision for numbers of order 1 is ~2e-16, so I don't think we can call any values within 1e-16 of zero "incorrect" (in the IEEE-754 sense) here. I'm also skeptical that we could reasonably ensure sign consistency for these small numbers even if we wanted to. I think the onus is on pvfactors to be robust to this kind of annoying trait of limited precision calculations.
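For scale, a quick standard-library check (a sketch; `math.ulp` requires Python 3.9+):

```python
import math

# Spacing between 1.0 and the next representable float64 value
# is about 2.2e-16; dot products of order-1 terms carry no
# trustworthy information below that level.
eps = math.ulp(1.0)
print(eps)

# The ~7e-18 projection above is far below one ulp at 1.0,
# i.e. numerically indistinguishable from an exact zero.
print(6.938893903907228e-18 < eps)  # True
```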
From the documentation for aoi_projection:
Usage note: When the sun is behind the surface the value returned is negative. For many uses negative values must be set to zero.
I think it is a similar issue to values greater than 1 in https://github.com/pvlib/pvlib-python/issues/1185, addressed by clipping the result. A not-ideal but similar fix could be done by replacing results very close to zero with 0 using np.isclose().
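A minimal sketch of that idea (hypothetical helper, not part of pvlib's API; the atol of 1e-15 is an assumption):

```python
import numpy as np

def snap_small_to_zero(projection, atol=1e-15):
    """Hypothetical post-processing step: replace near-zero
    projection values with an exact 0.0 via np.isclose."""
    projection = np.asarray(projection, dtype=float)
    return np.where(np.isclose(projection, 0.0, atol=atol), 0.0, projection)

print(float(snap_small_to_zero(6.071532165918825e-17)))  # 0.0
print(float(snap_small_to_zero(0.5)))                    # 0.5
```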
One would expect aoi_projection to be exactly equal to 0 when the module is perfectly orthogonal to the sun.
import pvlib
zenith = 89.26778228223463
azimuth = 60.932028605997004
print(pvlib.irradiance.aoi_projection(zenith+90, azimuth, zenith, azimuth))
>> 6.071532165918825e-17
One would expect aoi_projection to be exactly equal to 0 when the module is perfectly orthogonal to the sun.
Mathematically yes, but what I'm saying is that I don't think the word "perfect" is generally meaningful in the context of floating point calculations. Here's a demonstration:
In [27]: (1e-15 + 90) == (-1e-15 + 90) == 90
Out[27]: True
Clearly this should evaluate to False in an abstract mathematical context, but these limited precision floats don't let us distinguish these values. This shows that we cannot say that +90 necessarily produces a perfectly orthogonal angle with floats -- otherwise we would be forced to accept that 90 is simultaneously perfectly orthogonal to 1e-15, -1e-15, and 0, which of course is nonsense.
I think really all we can say about this zenith+90 example is that the two vectors are orthogonal to within the precision afforded to us by the floating point type (~16 decimal digits for float64). And if we can't say that the inputs are perfectly orthogonal, then we can't say that this small but nonzero output value is "wrong" as long as it is consistent with the precision of the inputs.
I think it is a similar issue to values greater than 1 in https://github.com/pvlib/pvlib-python/issues/1185, addressed by clipping the result.
The underlying cause (limited precision floating point operations) is the same for #1185, but there is an important difference: in #1185 it is certain that values outside [-1, +1] are incorrect, and so clipping to that range can only improve the answer, while in this case I don't think we have sufficient evidence to claim incorrect results.
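For contrast, the #1185-style fix amounts to a simple clip (a sketch, not pvlib's actual code):

```python
import numpy as np

# Values outside [-1, 1] are certainly wrong for a cosine-valued
# projection, so clipping can only move them toward the true answer.
projection = np.array([1.0000000000000002, -1.0000000000000002, 0.25])
clipped = np.clip(projection, -1.0, 1.0)
print(clipped)  # clips to exactly 1.0, -1.0, and leaves 0.25 alone
```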
Since you're expecting a value in [-1, +1], you could round to a fixed number of decimal places to get both clean zeros and clean ones.
If the goal is to have aoi_projection(a, b, a+90, b) == 0 (which I remain unconvinced should be a goal) then yes. But it doesn't address the original bug report of inconsistent projection signs between the front and back of the module.
I would view this function as being more robust if it achieved that goal. Rounding would have to be to 3, 7, or 15 decimal places for float16, float32, and float64 respectively to have the same precision on all values in the range [-1, 1], and I believe that would address the original concern (as long as -0.0 is acceptable, which it should be because -0.0 == 0.0).
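A sketch of the rounding behavior described above, using the projection values quoted in this thread:

```python
import numpy as np

# Two near-zero projections from this thread, plus an ordinary value.
proj = np.array([-3.642919299551295e-17, 6.071532165918825e-17, 0.70710678])
rounded = np.round(proj, 15)

# The tiny values round to (signed) zero, and -0.0 == 0.0 holds,
# so both compare equal to zero; ordinary values are essentially untouched.
print(rounded[0] == 0.0)  # True
print(rounded[1] == 0.0)  # True
print(abs(rounded[2] - 0.70710678) < 1e-12)  # True
```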
I'm not proposing a change, but I would not object to such a change.
Rounding the projection won't change its sign, so if the signs are identical (and thus inconsistent) between the front- and rear-side projections like they are in the example at the top of this thread, they'll still be identical/inconsistent after rounding, which I think was the original concern. We would be returning -0.0 (instead of -3.642919299551295e-17) for both cases, but to resolve the inconsistency we would need to return 0.0 for one of them.
-0.0 would not solve the sign inconsistency, but it would solve the pvfactors issue: since -0.0 < 0 returns False, perez_diffuse_luminance would not recursively call the function again and again.
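This is easy to verify (standard IEEE-754 behavior):

```python
# -0.0 compares equal to +0.0 and is not less than zero, so a
# "sun behind the surface" check like `projection < 0` stays False.
print(-0.0 < 0)     # False
print(-0.0 == 0.0)  # True
```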
I think the onus is on pvfactors to be robust to this kind of annoying trait of limited precision calculations.
But maybe you are right, this should be done on their side.
Since pvlib now has its own version of pvfactors (solarfactors, if it hasn't been renamed), should this be solved on that side? I would be open to trying that.
should this be solved on that side?
Yes, I think so. I've opened https://github.com/pvlib/solarfactors/issues/6 to track it. Feel free to take a look and I'll be happy to help answer questions.
Describe the bug
Inconsistent sign in aoi_projection when the solar vector and the surface normal are almost orthogonal, for the front and the back side of a module. This leads to a recursion error when calling pvfactors_timeseries in bifacial modeling (see: https://github.com/SunPower/pvfactors/issues/146).

To Reproduce

Expected behavior
The results should be opposite (i.e. with a different sign) or both be equal to 0.

Versions:
pvlib.__version__: '0.9.2'
pandas.__version__: '1.5.2'

Additional context
To reproduce the error with bifacial modeling: