I made some updates to ieLUTLinear. The main changes are:
1) The variable name gTable is changed to invGammaTable. This should make the meaning clear even to readers who skim the comments.
2) For consistency, if invGammaTable (previously gTable) is a scalar, it should now be the inverse gamma exponent, e.g. 1/2.2 instead of 2.2.
3) When the input is a 3x1 or Nx3 matrix (or N-by-nPrimaries for multi-primary displays), we treat the columns as separate channels and apply each channel's gamma accordingly, instead of applying the red gamma to all of them. If RGB is 1D, or 2D with some other size, we still use just the first column of the gamma table (see the sketch after this list).
4) Because point 3 can reshape the input RGB, I added code to make sure the output DAC values have the same size as the original input RGB.
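For reference, here is a minimal sketch of the behavior in points 2) and 3). The argument names match the function's, but the body is illustrative rather than the actual isetbio code, and it assumes linear input values in [0,1] and an nLevels-by-nPrimaries table:

```matlab
if isscalar(invGammaTable)
    % Point 2: a scalar is itself the inverse exponent.
    DAC = RGB .^ invGammaTable;              % e.g. invGammaTable = 1/2.2
else
    % Point 3: N x nPrimaries input, one table column per channel.
    nLevels = size(invGammaTable, 1);
    DAC = zeros(size(RGB));
    for ii = 1:size(RGB, 2)
        idx = round(RGB(:, ii) * (nLevels - 1)) + 1;   % quantize to table rows
        DAC(:, ii) = invGammaTable(idx, ii);
    end
end
```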
I updated the related tutorial scripts as well. It seems this function is not called anywhere else, so I hope the update is fine and does not break anything.
Thanks.
Best, HJ
On Sun Jan 04 2015 at 12:43:14 PM, David Brainard notifications@github.com wrote:
I read through ieLUTLinear. I wanted to make sure that expanding the columns of the inverse gamma wouldn't make it behave badly. That part looks fine: the function was already written to expect three columns, and its first move is to expand a one-column inverse gamma table to three columns. So that part is OK.
But I am quite worried about how it handles the dimensionality of the input RGB values to be linearized. It seems to expect these to be N by M by 3, and to convert each plane by the corresponding column of the inverse gamma table. When the input RGB is one- or two-dimensional, however, its behavior may be unexpected. There are no argument checks on the dimension of RGB, and the help text does not specify the expected input form.
This is not completely idle, since t_colorMatching and t_colorSpectrum exercise both of the worrisome cases: t_colorMatching passes a one-dimensional array to be linearized, and t_colorSpectrum passes an N by 3 array (with RGB across the columns). From my read of the code, in both cases the first column of the inverse table will be used for the inversion. That might be acceptable behavior for the one-dimensional case, but seems wrong for the N by 3 case. [For the one-dimensional case, the other obvious option, which I slightly prefer, is to invert the passed vector through all three columns so that the answer comes back as RGB, not as a single vector.]
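A minimal sketch of that preferred option, with illustrative names (rgbVec, invTable) rather than the actual code, assuming linear values in [0,1] and an nLevels-by-3 table:

```matlab
nLevels = size(invTable, 1);
idx = round(rgbVec(:) * (nLevels - 1)) + 1;   % one table row per input value
dac = invTable(idx, :);                       % comes back as N x 3 RGB
```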
I think this needs attention, but I don't want to touch this routine since a lot of other stuff may depend on it.
This looked good to me.
I went ahead and changed the resolution convention in ieLUTInvert so that one passes the number of levels in the inverse table. It's possible this is a mistake, depending on how you like to think about your LUT bit depth. The new convention decouples it a bit from the measured length of the input LUT.
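A sketch of the new convention, assuming the input gamma table gTable is a monotonic nIn-by-1 column spanning [0,1]; the interp1 details are illustrative, not necessarily the shipped code:

```matlab
nSteps   = 2048;                               % levels wanted in the inverse table
linVals  = linspace(0, 1, nSteps)';            % linear values to invert
dacIn    = linspace(0, 1, size(gTable, 1))';   % DAC axis of the input table
invTable = interp1(gTable, dacIn, linVals);    % inverse table, nSteps x 1
```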
I also cleaned up the comments a bit more in ieLUTInvert.
This all would be worth a review before we close the issue.
Thanks for updating the convention. It looks good. I made a small fix to ieLUTInvert: the main change is renaming the second parameter from resolution (which was reported as unused) to nSteps. Hope this is fine.
Best, HJ
That name change caused a problem, because nSteps was already used to mean the number of steps in the input table; the change broke a few things (e.g., the extrapolation value as well as the indices used in the input to the interpolation). I fixed these, I think. I also changed the default behavior to produce an inverse table with 2048 levels, which should be fine-grained enough for most applications.
I also modified displayGet(d,'inverse gamma') to optionally take the steps argument.
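A hedged usage sketch; 'LCD-Apple' is just an example display name, and the 2048-level default comes from the comment above:

```matlab
d        = displayCreate('LCD-Apple');            % example display structure
invTable = displayGet(d, 'inverse gamma');        % default resolution (2048 levels)
invTable = displayGet(d, 'inverse gamma', 4096);  % explicit number of steps
```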
I think this is OK now. At some point I will write a little validation that compares the behavior of this code to what the equivalent code in PTB produces.
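Something like the following could serve as that validation. PTB's InvertGammaTable is assumed as the reference here; the exact call and output scaling should be checked against the PTB source:

```matlab
nSteps = 2048;
gIn    = linspace(0, 1, size(gTable, 1))';       % DAC axis of the input table
invIse = ieLUTInvert(gTable, nSteps);            % isetbio, new convention
invPtb = InvertGammaTable(gIn, gTable, nSteps);  % PTB reference (assumed signature)
assert(max(abs(invIse(:) - invPtb(:))) < 1e-3, ...
       'isetbio and PTB inverse tables disagree');
```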
Thanks a lot for the updates. Looks good to me now.
Best, HJ
Closing for now, but will remember that writing a validation against PTB is a good idea for the future.