- clicking on a region (maybe later, selecting a region by dragging...) and
- filling out two coefficients for the reflectivity of the (a) visible and (b) infrared channels, as described in this post (see the sketch below)
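As a rough sketch of what each clicked target might collect (the field names here are hypothetical, not taken from existing Infragram or Image Sequencer code; the numbers are just the example values used in the tables below):

```js
// Hypothetical record for one calibration target: the pixel values come
// from the clicked region of the photo, and the reflectivity coefficients
// are the known values the user types in for that target material.
var target = {
  x: 120,                         // clicked position in the image
  y: 340,
  visiblePixel: 0.8,              // measured pixel value, 0-1 scale
  infraredPixel: 0.9,
  visibleReflectivity: 0.866963,  // known reflectivity entered by the user
  infraredReflectivity: 0.900327
};
```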
A calibration image might look like this (especially if we had an overlay as I've doodled on it) and might even be of a printed sheet that you line up with the guidelines:

(looking at selection of targets in this post)
This could be repeated as many times as we'd like, but we might start with a minimum of 2 targets, as Ned mentioned in the linked post above. Starting with 2 targets is a good initial step because the values could be used to map (or stretch, see this code) rather than doing a linear regression, just to take things one step at a time.
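For reference, the map (or stretch) idea is just a linear remap of one range onto another, in the spirit of Processing's map(). A minimal sketch (the linked code may differ in its details):

```js
// Linearly remap a value from one range (inMin-inMax) onto another
// (outMin-outMax), like Processing's map() function.
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * (value - inMin) / (inMax - inMin);
}

// e.g. map(0.5, 0, 1, 0.04748605, 0.86696300) stretches a 0-1 pixel value
// onto the known reflectivity range of the two targets.
```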
So for each of the 2 points clicked on, we would have values as follows (data from this post):
| | Visible reflectivity | Infrared reflectivity |
|---|---|---|
| Target paper 1 (printer paper) | 0.86696300 | 0.90032700 |
| Target paper 2 (tar paper) | 0.04748605 | 0.05665055 |
But by clicking on the image, we've collected a matching set of data through this particular camera, so we can see how differently it reads these targets (here, I just made up some data):
| | Visible reflectivity | Visible pixels in photo | Infrared reflectivity | Infrared pixels in photo |
|---|---|---|---|---|
| Target paper 1 (printer paper) | 0.86696300 | 0.8 | 0.90032700 | 0.9 |
| Target paper 2 (tar paper) | 0.04748605 | 0.2 | 0.05665055 | 0.3 |
Now, we can use the map() function above to adjust any value in the image, in two steps, one per channel (see the sketch below):

- every Visible pixel gets mapped from a range of 0.2-0.8 (see table above) to a range of 0.04748605-0.86696300
- every Infrared pixel gets mapped from a range of 0.3-0.9 to a range of 0.05665055-0.90032700
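Putting those two steps together, the calibration pass might look roughly like this, using the map() sketch above (the pixel layout and names are assumptions for illustration, not the actual Infragram code):

```js
// Apply the per-channel calibration to every pixel. `pixels` is assumed
// to be an array of {visible, infrared} values on a 0-1 scale; the input
// ranges come from the clicked targets in the table above, and the output
// ranges are the known reflectivities.
function calibrate(pixels) {
  return pixels.map(function (p) {
    return {
      visible: map(p.visible, 0.2, 0.8, 0.04748605, 0.86696300),
      infrared: map(p.infrared, 0.3, 0.9, 0.05665055, 0.90032700)
    };
  });
}
```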
(In a future version we could use a linear regression as Ned did, instead of just a simple map: https://github.com/Tom-Alexander/regression-js)
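If we go that route, a per-channel fit with regression-js could look something like this sketch. It assumes targets are stored as [measured pixel value, known reflectivity] pairs and uses the regression.linear API from the library's README (older versions of the library differ):

```js
var regression = require('regression');

// One [measured pixel value, known reflectivity] pair per target,
// fit separately for the visible and infrared channels.
var visiblePairs = [
  [0.8, 0.86696300], // printer paper
  [0.2, 0.04748605]  // tar paper
  // ...more targets as they are added
];

var visibleFit = regression.linear(visiblePairs);
var slope = visibleFit.equation[0];
var intercept = visibleFit.equation[1];

// Calibrate a visible pixel value of 0.5 with the fitted line.
var calibrated = slope * 0.5 + intercept;
```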
The visible and infrared values are then displayed and made usable in the usual NDVI equation of (B-R)/(B+R) (or equivalent).
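For completeness, the per-pixel NDVI step on the calibrated values is just the following (assuming a red-filtered camera, where the blue channel carries infrared and the red channel carries visible light, as in the ndvi{filter:red} step linked below):

```js
// NDVI from calibrated values: with a red filter, B is infrared and R is
// visible, so (B - R) / (B + R) is (infrared - visible) / (infrared + visible).
function ndvi(visible, infrared) {
  return (infrared - visible) / (infrared + visible);
}
```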
Finally, this whole thing could be in a tray that opens above the image. Later, it could be used in Image Sequencer as a step like this:
https://sequencer.publiclab.org/examples/#steps=dynamic{red:r*0.8|green:g|blue:b*0.4|monochrome%20(fallback):r%20%2B%20g%20%2B%20b},ndvi{filter:red}