terraref / computing-pipeline

Pipeline to Extract Plant Phenotypes from Reference Data
BSD 3-Clause "New" or "Revised" License

Convert hyperspectral exposure image to reflectance #88

Closed czender closed 7 years ago

czender commented 8 years ago

This is a draft algorithm to retrieve spectral reflectance based on my understanding of the (possibly soon-to-be) available inputs. Suggestions and corrections are welcome (the more specific, the better). For simplicity, the algorithm description currently omits the time dimension. It is implicit in all quantities below except rfl_wht.

NOTE: the proposal below has been migrated to documentation that supports LaTeX for easier reading: https://terraref.gitbooks.io/terraref-documentation/content/hyperspectral_data.html. However, the commenting system there is limited,

So please comment below on this issue, or propose changes to the algorithm text as a pull request: https://github.com/terraref/documentation/blob/master/hyperspectral_data.md


Inputs and Outputs

syntax: Variable(dimensions) [units]

$$ I_p(\lambda,y,x) = F_\lambda(\lambda)\, R_p(\lambda)\, C(\lambda) $$

Inputs: required known or measured inputs:

  1. uint16 xps_img(wavelength,y,x) [xps] = Exposure from experiment image (i.e., plants) (known, VNIR, SWIR sensors)
  2. uint16 xps_wht(wavelength,x) [xps] = Exposure from white reference sheet/panel (measured by VNIR, SWIR; sampling period? location?)
  3. float32 rfl_wht(wavelength) [fraction] = Reflectance of white reference (factory calibration) (assume time-constant?)
  4. float32 flx_dwn(wavelength) [W m-2 um-1] = Downwelling spectral irradiance (measured by environmental sensor; units?)

    Intermediate derived quantities:

  5. float32 cst_cnv(wavelength) [xps/(W m-2 um-1)] = Proportionality constant between reflected spectral flux and exposure (derived)
  6. float32 flx_upw(wavelength) [W m-2 um-1] = Upwelling spectral flux (derived, and possibly measured for closure?)

    Outputs

  7. float32 rfl_img(wavelength,y,x) [fraction] = Reflectance of image (i.e., plants)

    Proposed Algorithm to retrieve reflectance from measurements:

  8. Assume image exposure is linear in incident spectral flux; this implies
    • xps_wht=flx_dwn*rfl_wht*cst_cnv
  9. Derive proportionality constant from calibration:
    • cst_cnv=xps_wht/(flx_dwn*rfl_wht)
  10. Assume the proportionality constants for the calibration sheet and the plant image are identical; this implies
    • xps_img=flx_dwn*rfl_img*cst_cnv
  11. Derive plant reflectance from exposure:
    • rfl_img=xps_img/(flx_dwn*cst_cnv)
  12. (Optional) Derive upwelling spectral flux from reflectance and compare it to measured upwelling spectral flux (if available) for closure/validation
    • flx_upw=flx_dwn*rfl_img
  13. (Optional) Apply PAR-sensor SRF to downwelling irradiance, integrate, compare to measured PAR for closure (integration would require detailed information or assumptions about bandpass of each spectral channel).
  14. More?
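Steps 8-11 (plus the optional step 12) can be sketched numerically as follows. All shapes and values are made up for illustration; the real arrays come from the sensor files described above.

```python
import numpy as np

# Toy dimensions standing in for (wavelength, y, x)
n_wvl, n_y, n_x = 4, 2, 3
rng = np.random.default_rng(0)

xps_img = rng.uniform(1e3, 6e4, size=(n_wvl, n_y, n_x))  # plant exposure [xps]
xps_wht = rng.uniform(1e3, 6e4, size=(n_wvl, n_x))       # white-panel exposure [xps]
rfl_wht = np.full(n_wvl, 0.95)                           # panel reflectance [fraction]
flx_dwn = np.linspace(0.5, 2.0, n_wvl)                   # downwelling irradiance [W m-2 um-1]

# Step 9: proportionality constant from calibration (average panel over x)
cst_cnv = xps_wht.mean(axis=1) / (flx_dwn * rfl_wht)

# Step 11: plant reflectance from exposure
rfl_img = xps_img / (flx_dwn * cst_cnv)[:, None, None]

# Step 12 (optional): upwelling flux for closure against a measured value
flx_upw = flx_dwn[:, None, None] * rfl_img
```

Note that flx_dwn cancels algebraically (rfl_img = rfl_wht * xps_img / mean_x(xps_wht)), so the downwelling irradiance only matters for the optional closure checks in steps 12-13.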

Before implementing this, I would like feedback and/or sign-off by @dlebauer @nfahlgren @max-zilla @pless @solmazhajmohammadi and @LTBen.

Next steps

More assumptions, input measurements, and/or more sophisticated algorithms would incorporate these additional sources of information:

  1. BRDF (angular reflectance properties) of plant/leaves
  2. 3D orientation of reflective surfaces (plant/leaves)
  3. BRDF (angular reflectance) of calibration sheet
  4. Direct/diffuse partitioning of downwelling flux

    Notes

    • Units of exposure are vague. Exposure is similar to a photon counter modulated by the spectral response function (SRF) of the sensor. The units are denoted [xps] and range from [0..2^16-1] = [0..65535].
    • Add QA/QC tests for sensitivity and saturation in each band.
    • Cross-validate with other sensors with known spectral response functions.
    • NB: Three required inputs (xps_wht, rfl_wht, flx_dwn) are not ready to use. Their location, units, and/or sampling intervals are unknown. Only xps_img is ready. Please tell me how to get the other three in the notes below. (@LTBen, @markus-radermacher-lemnatec)
dlebauer commented 8 years ago

@ashiklom added some comments to the documentation https://terraref.gitbooks.io/terraref-documentation/content/hyperspectral_data.html

but the commenting system doesn't work well so I am pasting them here:

What are x and y here?

OK, I'm assuming that x and y correspond to pixels...

indeed, this appears to be the case here. Not to be confused with the fact that we are using an x,y coordinate system with units of meters east and north of the SW corner of the gantry (see terraref/documentation#7 and terraref/documentation#9).

why does the white reference only have an x dimension? If there is some averaging taking place, that's a good opportunity for calculating the uncertainty in the reflectance calibration.

"Wavelength" should be defined more precisely, if possible. Is this the wavelength of peak reflectance? Or the midpoint of the bandwidth? What are the bandwidths? This is also related to the optical point about spectral response functions (SRFs) -- unless we know what these are, I personally think wavelength should be treated as an effectively ordinal variable that should only be used in across-instrument comparisons (as well as comparisons with radiative transfer models) with caution.

Related to above -- I think step 6 is critical to using this data in a radiative transfer modeling context. I would actually suggest making it mandatory and, if SRFs are not available, to add a detailed flag to the data describing exactly the assumptions made in reporting "reflectance" at a given "wavelength" (e.g. assume Gaussian SRF with reported bandwidth indicating full-width half-maximum).
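The Gaussian-SRF fallback suggested above could be sketched as follows (the spectrum, band center, and FWHM are all hypothetical; a real SRF from the vendor should replace the Gaussian when available):

```python
import numpy as np

def gaussian_srf(wvl, center, fwhm):
    """Gaussian spectral response function; FWHM converted to sigma."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((wvl - center) / sigma) ** 2)

# Band-average a made-up high-resolution spectrum for one channel,
# assuming a 5 nm FWHM band centered at 550 nm.
wvl_hi = np.arange(400.0, 1000.0, 0.1)        # nm, fine grid
rfl_hi = 0.5 + 0.1 * np.sin(wvl_hi / 50.0)    # hypothetical spectrum
srf = gaussian_srf(wvl_hi, center=550.0, fwhm=5.0)
rfl_band = np.sum(srf * rfl_hi) / np.sum(srf)  # SRF-weighted band reflectance
```

The same weighting applied to a radiative-transfer-model spectrum would make model/instrument comparisons consistent with the assumed bandpass.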

czender commented 8 years ago

Good questions. x and y are currently pixels. When the transformation from pixels to geographic horizontal coordinates is available, we may change from x,y to lat,lon or some other system like the gantry x,y.

The white reference in theory has only a wavelength dimension, but in practice the one snapshot (called whiteReference) I know of is dimensioned wavelength by x; it's as if the camera scanned only one line.

Currently "wavelength" is the wavelength associated with each channel in the .bil image file. Would welcome higher resolution SRFs and/or bandpasses.

TinoDornbusch commented 8 years ago

Which of you is an expert in hyperspectral analysis? My current idea is to do the dark reference measurement before each scan and have the white target mounted on the gantry, such that one part (~10%) of the image is always "white". Any other ideas on that?

czender commented 8 years ago

Glad to know there is a dark reference available too. I can modify the calibration algorithm to account for that.
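One minimal way to fold a dark reference into the retrieval is to subtract the dark counts (at the same exposure time) from both the image and the white-panel exposures before forming the ratio. The numbers and variable names below are illustrative, not the pipeline's actual values:

```python
import numpy as np

# Toy per-band counts; in practice xps_drk would come from a dark scan
# taken at the same exposure time as the image and white-panel scans.
xps_img = np.array([30000.0, 42000.0, 18000.0])  # plant exposure counts
xps_wht = np.array([55000.0, 60000.0, 40000.0])  # white-panel exposure counts
xps_drk = np.array([900.0, 1100.0, 800.0])       # dark-reference counts
rfl_wht = 0.95                                   # nominal panel reflectance

# Dark-corrected retrieval: subtract dark counts from numerator and
# denominator before scaling by the panel reflectance.
rfl_img = rfl_wht * (xps_img - xps_drk) / (xps_wht - xps_drk)
```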

pless commented 8 years ago

My understanding of the Headwall VNIR sensor is that it has a grid of pixels. Along the x-axis the pixels all behave the same, but as you move up the sensor in y, the pixels are sensitive to different wavelengths. The "lens" of the camera may not look like this, but you can think of it as a slit that is capturing the world one row at a time.

This slit is rotated in order for the hyperspectral camera to see a whole image. Alternatively, the hyperspectral camera can be scanned over the ground.

The meta-data for one of the VNIR data captures on 4/7 has fields:

  "current Setting Startpos": "-70",

  "current Setting Stoppos": "70"

(although it also says "use rotating mirror": 0).

However this particular bit of data was captured, the reason that there is one row of white-balance is that the sensor has one pixel for each (wavelength, x) pair.

The "y" coordinate relates to how far the sensor has rotated or translated as the data is captured.
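The line-scan geometry described above can be sketched as stacking per-line (wavelength, x) frames along a y axis that indexes scan position (dimensions and the frame generator below are toy stand-ins, not the real sensor interface):

```python
import numpy as np

# Toy dimensions standing in for the real sensor (955 bands x 1600 samples)
n_wvl, n_x, n_lines = 8, 16, 5

def read_frame(line_idx):
    """Stand-in for one slit readout: a (wavelength, x) frame of counts."""
    rng = np.random.default_rng(line_idx)
    return rng.integers(0, 2**16, size=(n_wvl, n_x), dtype=np.uint16)

# Stack the per-line frames along y; y indexes scan position (mirror
# rotation or gantry translation), not a second spatial axis of the chip.
cube = np.stack([read_frame(i) for i in range(n_lines)], axis=1)
# cube has dimensions (wavelength, y, x), matching xps_img in the proposal
```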


TinoDornbusch commented 8 years ago

In Scanning mode the mirror is always at position 0 and does not move. Therefore you have the entry "use rotating mirror":0.

markus-radermacher-lemnatec commented 8 years ago

Danke für die Klarstellung, du hast genau recht, die Start/Stop Einstellungen sind irrrelevant, wenn der Spiegel nicht benutzt wird. -Markus

rmgarnett commented 8 years ago

I think that last comment was meant to be a private email, but I'll translate:

Thank you for the clarification. You're exactly right. The start/stop settings are irrelevant if the mirror is not being used.

dlebauer commented 8 years ago

@TinoDornbusch or @rjstrand have you been doing hyperspectral calibration / images of the white / dark references? If so, where are these?

TinoDornbusch commented 8 years ago

I have not had time to work on the data. I am bringing "home" data and we want to look at them next week.

dlebauer commented 8 years ago

Is it in the gantry data stream?

czender commented 8 years ago

Tino, on 4/28 you mentioned the need for a dark reference. My understanding is that this is the exposure (in "counts") measured when the white reference is covered by a black surface of known (factory-calibrated) reflectance. What about the exposure (in "counts") measured from the white reflector with no incident light? In other words, is there dark current/noise in the hyperspectral camera similar to the environmental logger?

TinoDornbusch commented 8 years ago

Charlie,

Dark current consists of photoelectric events (counts) triggered randomly on the sensor chip without incident radiation. Typically this is sensor-specific and temperature-dependent. I see your point about trying to estimate dark counts from other sensors without actual measurements. It is worth comparing dark references between sensors. Let's first see a time-course of dark counts during the day. If regular dark measurements are required, we need to implement a solution in the winter upgrade session.

Tino

Dr. agr. Tino Dornbusch

dlebauer commented 8 years ago

@czender visiting MAC last week, I learned the following:

  1. @TinoDornbusch has been taking measurements of a spectralon white reflectance panel. He can point out the dates / times / locations of these
  2. Some or all of the field of view will be in the shade. Thus, the downwelling spectrometer may be of limited use.
  3. The scanner box acts as a blackbody, though even at 150C the flux below 2000 nm is close to zero (based on this online calculator) and this should be captured by the white reflectance panel. (more an issue for the thermal IR camera but wanted to bring it up here).

@TinoDornbusch could you please point to some of the white reflectance measurements that you have taken? And can you clarify the extent to which these sensors will be imaging both sunlit and shaded leaves?

TinoDornbusch commented 8 years ago

@dlebauer please look for the MetadataKey in user_given_metadata:

"mission or scan": "3d_scan_4m"

Between 15.5.2016 and 30.5.2016 you should find the white target there.

TinoDornbusch commented 8 years ago

@czender There has been a cap on the fibreoptics for measurement of dark reference between 6.6.2016 4PM and 7.6.2016 9AM.

Please let me know whether you wish more dark measurements during off scan times. Please note that this requires climbing on top of the gantry.

czender commented 8 years ago

The latest logger data on Roger is from 6/3. I assume the dark logger runs will show up in 3-4 days, and we will look then...

czender commented 8 years ago
  1. Does this "TinoDornbusch has been taking measurements of a spectralon white reflectance panel." refer to both hyperspectral imagers?
  2. Are we to assume that any images with this metadata ("mission or scan": "3d_scan_4m") have the spectralon somewhere in the image?
  3. What is the wavelength-dependent factory-calibrated reflectance of the spectralon?
ghost commented 8 years ago

@czender -where are we with this? Are you waiting for feedback from someone?

czender commented 8 years ago

Yes, with regards to (1) I would like to know when/whether we have images of the spectralon for both imagers. (2) is a straightforward question. @TinoDornbusch can you answer? The answer to (3) may have some wavelength dependence, but still needs an answer. Not sure if the spectralon is the "99%" or "95%" (or some other) nominal reflectance. @TinoDornbusch or @markus-radermacher-lemnatec or @solmazhajmohammadi can you answer any/all of these questions?

solmazhajmohammadi commented 8 years ago

@smarshall-bmr is running the scans; he might know if all the "3d_scan_4m" scans/missions have the Spectralon in the image. I think the Spectralon is 95% nominal reflectance; Tino or Stuart can correct me if it is not.

smarshall-bmr commented 8 years ago

@solmazhajmohammadi is correct on both accounts. The 3d_scan_4m has the Spectralon target in it until today's scan (7/7) because the plants have overtopped the highest setting on the tripods. The Spectralon target is 95% nominal reflectance.

ghost commented 8 years ago

@czender - can this issue be closed?

czender commented 8 years ago

Not yet. We need the "white reference" reflectance from the Spectralon panel. Right now the reflectance we are using is based on fake numbers. Specifically, we need the wavelength-dependent, factory-calibrated reflectance of the Spectralon at all VNIR+SWIR wavelengths (currently we just use 95%), and then we need at least one new good image of a Spectralon panel from each camera. Reminding @solmazhajmohammadi and @TinoDornbusch.
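Once the factory certificate is in hand, applying it would amount to interpolating the tabulated panel reflectance onto the camera band centers. The certificate values below are hypothetical placeholders (roughly the 95-99% range typical of such panels), not the actual Spectralon data:

```python
import numpy as np

# Hypothetical certificate values -- NOT the real Spectralon calibration
wvl_cal = np.array([400.0, 700.0, 1000.0, 1700.0, 2500.0])  # nm
rfl_cal = np.array([0.988, 0.990, 0.989, 0.984, 0.960])     # fraction

# Interpolate the coarse certificate onto toy VNIR band centers
wvl_cam = np.linspace(400.0, 1000.0, 955)
rfl_wht = np.interp(wvl_cam, wvl_cal, rfl_cal)  # per-band panel reflectance
```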

TinoDornbusch commented 8 years ago

Got it... We are chasing the documentation for the Spectralon target.

solmazhajmohammadi commented 8 years ago

Documentation is available in: https://github.com/terraref/reference-data/issues/53#issuecomment-250442822

czender commented 8 years ago

@solmazhajmohammadi please send me a number where we can call you Monday 10 AM PT. @FlyingWithJerome and I will be reachable at 9498912429.

solmazhajmohammadi commented 8 years ago

@czender sounds good, you can reach me at 9063706659

solmazhajmohammadi commented 8 years ago

@czender @FlyingWithJerome We did the dark measurement for VNIR camera at the following exposure times: 20, 25, 30, 35, 40, 45, 50, and 55 ms Data are in the same hypercube format with (180-220) lines, 955 bands and 1600 pixel samples. You can use the average over lines. You can download them from here: https://drive.google.com/file/d/0B9h5V5JdLLXmSkdpTmd6QmN3dTQ/view?usp=sharing

@max-zilla Data has been uploaded in the gantry cache server in the following path "/gantry_data/VNIR-DarkRef/"

czender commented 8 years ago

Which piece of metadata is the exposure time? Is it frameperiod or exposure? Are the time units ms? If so, why do you say nm above?

"sensor_variable_metadata": { "current setting frameperiod": "50", "current setting exposure": "45",

solmazhajmohammadi commented 8 years ago

The dark measurement was done through the main (Headwall) software, so there are no associated JSON files for the dark measurements. The name of each folder gives the exposure time used for the dark measurement. It was a typo; corrected. In the regular scans, "current setting exposure" gives the exposure time in ms.

markus-radermacher-lemnatec commented 8 years ago

"exposure" is the image exposure time; "frameperiod" is the timespan between two images. The 5 ms difference is needed to handle the data.

czender commented 8 years ago

@markus-radermacher-lemnatec thank you for the clarification. @solmazhajmohammadi it is unfortunate that there are no JSON files for the calibrations. If there were, we could use the existing workflow to process the calibration files. Without them we must write a custom workflow for the calibration files.

czender commented 8 years ago

@markus-radermacher-lemnatec @solmazhajmohammadi I was told that dark counts for VNIR must be handled by using the calibrations Lemnatec just made, and that dark counts for SWIR are handled internally, so that no calibration files for SWIR would be necessary. Why can't dark counts for VNIR be done the same way as SWIR? i.e., be handled upstream of the hyperspectral workflow?

solmazhajmohammadi commented 8 years ago

Here are the raw files from the white Spectralon reference. The measurement was done at the following exposure times: 20, 25, 30, 35, 40, 45, 50, and 55 ms.

https://drive.google.com/file/d/0ByXIACImwxA7akhfLTdTS01vTTA/view?usp=sharing

The exposure time is included in the folder name; you can also check it in the file named "settings.txt". Data are 1600 samples, 955 bands, and 268-298 lines. The white reference is located in lines 60 to 100 and samples 600 to 1000. For the calibration, the dark current at the same sample, band, and exposure time needs to be subtracted from these data.

The following archive contains an extra file named "CorrectedWhite_raw". It includes only a single white pixel (one line, one sample) in 955 bands for each exposure time. The data are stored in a similar format but do not include the extra files like frameIndex, image, header, etc.

https://drive.google.com/file/d/0ByXIACImwxA7dVNHa3pTYkFjdWc/view?usp=sharing

Let me know if you have issues opening the files. The white reference scans were done at around 1 pm (one hour after solar noon). I don't see saturation at the 20 ms and 25 ms exposure times. @smarshall-bmr can you please run the scans with 20 ms?
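Reading the panel region out of such a BIL-interleaved raw file can be sketched with a memmap. The dimensions and slice indices below are toy stand-ins for the real file layout quoted above (268-298 lines x 955 bands x 1600 samples; panel at lines 60-100, samples 600-1000), and the file is synthesized so the sketch runs:

```python
import numpy as np

# Toy dimensions; the real files are ~268-298 lines x 955 bands x 1600 samples
n_lines, n_bands, n_samples = 12, 5, 40

# Synthesize a BIL-interleaved file (line, band, sample) so the sketch runs;
# in practice you would memmap the downloaded "raw" file instead.
rng = np.random.default_rng(0)
counts = rng.integers(500, 60000, size=(n_lines, n_bands, n_samples),
                      dtype=np.uint16)
counts.tofile("raw_demo.bil")

raw = np.memmap("raw_demo.bil", dtype=np.uint16, mode="r",
                shape=(n_lines, n_bands, n_samples))

# Toy region standing in for the quoted panel location
roi = raw[3:9, :, 10:30].astype(np.float64)

wht_avg = roi.mean(axis=(0, 2))  # per-band panel mean
wht_std = roi.std(axis=(0, 2))   # per-band spread -> calibration uncertainty
```

The per-band standard deviation over the panel is one way to get the reflectance-calibration uncertainty estimate raised earlier in this thread.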

dlebauer commented 8 years ago

@czender could you work with @solmazhajmohammadi and @craig-willis to make sure that these files are moved to an appropriate place (Roger / Clowder ) where they are available to the hyperspectral workflow and documented?

solmazhajmohammadi commented 8 years ago

@czender White reference data doesn’t have JSON file. @craig-willis Please move the following dataset which has similar format: White Reference data: https://drive.google.com/file/d/0ByXIACImwxA7akhfLTdTS01vTTA/view?usp=sharing Dark reference data: https://drive.google.com/file/d/0B9h5V5JdLLXmSkdpTmd6QmN3dTQ/view?usp=sharing

czender commented 8 years ago

@craig-willis @solmazhajmohammadi Here is the script I used to process the HS white and dark reference values

# White reference, single pixel (px1), created by Solmaz (NB: pre-compensated for dark counts)
# "Single pixel" (px1) values, can be compared to the difference (white-dark) of the following area-averaged calibrations
for drc in 2016_10_21_13_14_32_20ms 2016_10_21_13_16_20_30ms 2016_10_21_13_17_21_40ms 2016_10_21_13_18_22_50ms 2016_10_21_13_15_21_25ms 2016_10_21_13_16_47_35ms 2016_10_21_13_17_51_45ms 2016_10_21_13_18_52_55ms ; do
    /bin/rm -f ${HOME}/Downloads/VNIR_SpectralonRef_SinglePixel/${drc}/vnir*px1*
    xps_tm=$(echo ${drc} | cut -d '_' -f 7)
    ncks -O --trr_wxy=955,1,1 --trr typ_in=NC_USHORT --trr typ_out=NC_USHORT --trr ntl_in=bil --trr ntl_out=bsq --trr ttl="Spectralon target with nominal visible reflectance = 0.95, as exposed to VNIR single pixel, single scanline on 20161021 ~13:15 local time in ${drc}" --trr_in=${HOME}/Downloads/VNIR_SpectralonRef_SinglePixel/${drc}/CorrectedWhite_raw ~/terraref/computing-pipeline/scripts/hyperspectral/hyperspectral_dummy.nc ${HOME}/Downloads/VNIR_SpectralonRef_SinglePixel/${drc}/vnir_wht_px1_${xps_tm}.nc
done    

# White reference, image:
for drc in 2016_10_21_13_14_32_20ms 2016_10_21_13_16_20_30ms 2016_10_21_13_17_21_40ms 2016_10_21_13_18_22_50ms 2016_10_21_13_15_21_25ms 2016_10_21_13_16_47_35ms 2016_10_21_13_17_51_45ms 2016_10_21_13_18_52_55ms ; do
    /bin/rm -f ${HOME}/Downloads/VNIR_SpectralonRef_SinglePixel/${drc}/vnir*img* ${HOME}/Downloads/VNIR_SpectralonRef_SinglePixel/${drc}/vnir*cut* ${HOME}/Downloads/VNIR_SpectralonRef_SinglePixel/${drc}/vnir*avg*
    xps_tm=$(echo ${drc} | cut -d '_' -f 7)
    hdr_fl=${HOME}/Downloads/VNIR_SpectralonRef_SinglePixel/${drc}/raw.hdr
    ydm_nbr=$(grep '^lines' ${hdr_fl} | cut -d ' ' -f 3 | tr -d '\015')
    echo "Calibration file ${drc}/raw has ${ydm_nbr} lines"
    ncks -O --trr_wxy=955,1600,${ydm_nbr} --trr typ_in=NC_USHORT --trr typ_out=NC_USHORT --trr ntl_in=bil --trr ntl_out=bsq --trr ttl="Spectralon target with nominal visible reflectance = 0.95, as exposed to VNIR full image 1600 pixel and 268-298 lines on 20161021 ~13:15 local time in ${drc}. Spectralon is located in lines ~35-90 and samples (pixels) 600-1000." --trr_in=${HOME}/Downloads/VNIR_SpectralonRef_SinglePixel/${drc}/raw ~/terraref/computing-pipeline/scripts/hyperspectral/hyperspectral_dummy.nc ${HOME}/Downloads/VNIR_SpectralonRef_SinglePixel/${drc}/vnir_wht_img_${xps_tm}.nc
# Visual inspection shows following hyperslab matches Spectralon location and shape
  ncks -O -F -d x,600,1000 -d y,35,90 ${HOME}/Downloads/VNIR_SpectralonRef_SinglePixel/${drc}/vnir_wht_img_${xps_tm}.nc ${HOME}/Downloads/VNIR_SpectralonRef_SinglePixel/${drc}/vnir_wht_cut_${xps_tm}.nc
  ncwa -O -a x,y ${HOME}/Downloads/VNIR_SpectralonRef_SinglePixel/${drc}/vnir_wht_cut_${xps_tm}.nc ${HOME}/Downloads/VNIR_SpectralonRef_SinglePixel/${drc}/vnir_wht_avg_${xps_tm}.nc
done

# Dark reference:
# 7z l VNIR-DarkRef.7z # List files
# 7z e VNIR-DarkRef.7z # Extract all files to ${CWD}
# 7z x VNIR-DarkRef.7z # Extract files with full paths
for drc in 2016_10_19_02_58_32-20ms 2016_10_19_03_00_27-30ms 2016_10_19_03_01_51-40ms 2016_10_19_04_16_44-50ms 2016_10_19_02_59_35-25ms 2016_10_19_03_01_07-35ms 2016_10_19_04_16_16-45ms 2016_10_19_04_17_07-55ms ; do
    /bin/rm -f ${HOME}/Downloads/VNIR-DarkRef/${drc}/vnir*img* ${HOME}/Downloads/VNIR-DarkRef/${drc}/vnir*avg*
    xps_tm=$(echo ${drc} | cut -d '_' -f 6)
    xps_tm=$(echo ${xps_tm} | cut -d '-' -f 2)
    hdr_fl=${HOME}/Downloads/VNIR-DarkRef/${drc}/raw.hdr
    ydm_nbr=$(grep '^lines' ${hdr_fl} | cut -d ' ' -f 3 | tr -d '\015')
    echo "Calibration file ${drc}/raw has ${ydm_nbr} lines"
    ncks -O --trr_wxy=955,1600,${ydm_nbr} --trr typ_in=NC_USHORT --trr typ_out=NC_USHORT --trr ntl_in=bil --trr ntl_out=bsq --trr ttl="Dark counts as exposed to VNIR full image 1600 pixel and 182-218 lines on 20161019 ~3-4 AM local time in ${drc}." --trr_in=${HOME}/Downloads/VNIR-DarkRef/${drc}/raw ~/terraref/computing-pipeline/scripts/hyperspectral/hyperspectral_dummy.nc ${HOME}/Downloads/VNIR-DarkRef/${drc}/vnir_drk_img_${xps_tm}.nc
#   20161031 Dark image data fits well. No hyperslabbing necessary. Some wavelength-dependent (though unpredictable) structure in X. Little-to-no Y structure.
    ncwa -O -a x,y ${HOME}/Downloads/VNIR-DarkRef/${drc}/vnir_drk_img_${xps_tm}.nc ${HOME}/Downloads/VNIR-DarkRef/${drc}/vnir_drk_avg_${xps_tm}.nc
done
czender commented 8 years ago

The above script reduces the ~10 GB raw image files to a series of area-averaged white and dark reference data that we actually use in the HS workflow. xps_tm is the exposure time in ms. The minimal files to retain are these 17 kB files:

vnir_wht_avg_${xps_tm}.nc vnir_drk_avg_${xps_tm}.nc

The img files are essentially the same ~10 GB as the raw files. The cut files are each ~50 MB and contain only the portion of the image that contains the target. These files are averaged to produce the avg values used in the calibration. Keep as much of this as @dlebauer and @solmazhajmohammadi would like for provenance reasons.

craig-willis commented 8 years ago

@czender @solmazhajmohammadi

Thanks for the script details. I'm just now reading through this full thread and have a few questions/clarifications.

solmazhajmohammadi commented 8 years ago

@craig-willis yes sure it makes sense to collect them separately for now.

czender commented 8 years ago

I'm planning to store the area-averaged files in the hyperspectral workflow script directory, i.e., the same directory as hyperspectral_workflow.sh, so they can easily be found and modified until they are stable. It would be good to have a location not in the scripts directory for the other files.

czender commented 8 years ago

@yanliu-chn please update the NCO build/module on roger to 4.6.2-beta03 which contains some new features helpful to the hyperspectral calibration. Thank you.

yanliu-chn commented 8 years ago

Done. set default to 4.6.2-beta03. To use:

$ module purge
$ module load gdal-stack-2.7.10 nco
$ echo $NCO_HOME
/sw/nco-4.6.2-beta03
czender commented 8 years ago

@max-zilla or @yanliu-chn @craig-willis the hyperspectral workflow now requires that the eight VNIR calibration *.nc files just added to the HS scripts directory reside in the same directory as hyperspectral_workflow.sh. Since they are all in the same git directory, this should be automatic, but if extractors ever separate them, some paths will need to be modified. The HS workflow now requires NCO 4.6.2-beta03 or later. Maybe don't pull the new stuff until/unless you're sure these requirements are met.

solmazhajmohammadi commented 7 years ago

Please find the white spectralon measurement using SWIR camera here:

https://drive.google.com/drive/folders/0B9h5V5JdLLXmRHZJN0d1VXJLNVE?usp=sharing

Each folder contains one line of dark reference measurement. The raw file is the scan of the Spectralon target plus 8 calibrated color targets on top of it; please choose your region of interest only from the white target.
Please note that the same procedure applies for the SWIR camera as well (subtract the dark measurement from both the data and the white reference).

czender commented 7 years ago

I'll defer this until the camera is repaired and someone affirms that the SWIR spectralon measurements apply to the repaired camera. - https://github.com/terraref/reference-data/issues/50

max-zilla commented 7 years ago

Just updated the extractor for NCO 4.6.2-beta03 FYI.

yanliu-chn commented 7 years ago

Saw the issue. Thanks! This is consistent with the ROGER nco deployment. Charlie has now pushed a few new releases, let’s update when the next official version comes out. Thanks! -Yan

dlebauer commented 7 years ago

closing per suggestion from @czender - other issues e.g. #208 (uncertainty, reflectance, etc) cover outstanding calibration issues