Closed by DavidBrainard 9 years ago
I moved s_initISET into isettools/scripts. The commit comment is:
I am not quite sure where this belongs. I would probably not call it a script, as we use it like a function. But a script it is, just not a script like many of the others. So I put it into isettools/scripts. It is the only thing in that directory. I guess conceptually, that directory should contain scripts that we are treating as functions. If all the other scripts in the main scripts directory go away, then this would be clear enough.
Maybe we should add the same functionality as a new script, ieInit, and slowly replace all the s_initISET calls until they are gone.
Brian
Stanford
In my endless attempt to avoid work, I added ieInit.m. In doing so, I noticed that it did stuff that wasn't necessary any more. I tried to clean it up.
I also noticed (maybe this should be a separate issue) that at some point sceneGet() was modified so that it no longer checked that the argument 'scene' was non-empty, which caused problems in closing windows. So I put the check back in, along with a comment. The current behavior is that when scene is empty, the returned val is also empty. Let me know if that causes some damage. validateFastAll still runs for me, so I am sticking it in and hoping for the best.
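The restored guard amounts to an empty-argument check that returns empty rather than erroring. A minimal sketch of that behavior (in Python for illustration; the actual sceneGet is a MATLAB function, and the names here are hypothetical):

```python
# Hypothetical sketch of the empty-argument guard described above.
# The real sceneGet is MATLAB; names and behavior here are illustrative only.
def scene_get(scene, param):
    """Return the requested parameter, or None when scene is empty."""
    if not scene:          # empty scene -> empty return value, no error
        return None
    return scene.get(param)
```

The point of the guard is that callers such as window-closing code can pass a possibly-empty scene and get an empty value back instead of an exception.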
validateFastAll hash of the data remains inconsistent with the Penn numerical data. But other things are very close and consistent.
Finally, I edited s_display2Scene with comments and such. It is still in xNeedsChecking. Not sure what to do with these scripts after they are checked. I think this is a good one to keep as a tutorial. But we should vote or something.
On Dec 31, 2014, at 2:35 PM, Brian Wandell notifications@github.com wrote:
validateFastAll hash of the data remains inconsistent with the Penn numerical data.
- Do you know whether the hash data is consistently inconsistent with the Penn data across different Stanford machines/OS X versions?
- I can regenerate the fast validation data with rounding to 11 or fewer significant digits, to see whether that fixes the issue.
Nicolas
If the full validations pass and the fast ones don't, then reducing the rounding-before-hashing precision to fewer significant digits seems like a good bet to me, or at least the first thing to try.
Yes, that is the situation. I just ran validateFullAll.
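The round-before-hash scheme under discussion can be sketched as follows. This is a hedged illustration only: the real mechanism lives in the MATLAB @UnitTest class, the function names here are made up, and MD5-over-repr is just a stand-in for whatever hash the framework actually uses.

```python
import hashlib

def round_sig(x, digits):
    """Round x to the given number of significant decimal digits."""
    if x == 0:
        return 0.0
    return float(f"{x:.{digits - 1}e}")

def fast_hash(values, digits=9):
    """Hash data after rounding, so sub-precision platform noise cancels out."""
    rounded = [round_sig(v, digits) for v in values]
    return hashlib.md5(repr(rounded).encode()).hexdigest()

# Two runs that differ only past the 9th significant digit
# hash identically once rounded to 9 digits.
a = [0.123456789012, 3.141592653589]
b = [0.123456789013, 3.141592653590]
```

This shows why the choice of digit count matters: any cross-machine difference smaller than the rounding granularity vanishes before hashing, while any larger difference changes the hash.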
As per my previous suggestion, rather than just dialing back the precision until it seems to work (which I referred to as "pulling out of a hat"), a principled choice seems safer to me. The reason is precisely what happened in this thread -- as more testing conditions are explored, the "dial back until it works" number is likely to change.
IEEE 754 32-bit single-precision floats carry 6-9 decimal digits of precision, depending on how you count:
http://en.wikipedia.org/wiki/Single-precision_floating-point_format
From this I think the magic number might be 9.
This is nice because we can imagine this precision actually surviving a text-file I/O cycle.
ej
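EJ's 6-9 range is the binary32 round-trip property: 6 decimal digits always survive conversion *into* a float, and 9 decimal digits are always enough to print a float and parse it back exactly (C's FLT_DECIMAL_DIG is 9). A quick check of the 9-digit claim, using struct to emulate single precision:

```python
import struct

def to_f32(x):
    """Round a Python double to the nearest IEEE 754 binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def roundtrips_at(x, digits):
    """True if printing x (as binary32) with `digits` significant decimal
    digits and parsing the text back recovers the identical binary32 value."""
    f = to_f32(x)
    text = f"{f:.{digits - 1}e}"   # decimal text with `digits` sig. digits
    return to_f32(float(text)) == f
```

With digits=9 this holds for every finite binary32 value, which is the "surviving a text file IO cycle" property EJ mentions.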
We have doubles, not singles. Anyway, I pushed 11-digit-rounded fast ground truth data before I got EJ's email. Brian, can you please retry after a git pull?
Thanks, Nicolas
What I was trying to say was that rather than keep coming up with numbers that work in the current testing situation but might not work in a future one, it might be worth going straight to a lowest common denominator that seems reliable. We would be less likely to have to change it, which will surely be a pain in the butt.
ej
OK. Thanks.
By the way, changing the precision is really simple. Just a constant in @UnitTest and a regeneration of the ground truth data. Takes less than 2 minutes with the current state of affairs.
Nicolas
Well ... thinking ahead: if we later find ourselves in a situation where something is causing problems and we think it's the precision, then to change and test it we have to regenerate all the ground truth data. But since we'd be having problems that made us question the precision in the first place, do we really want to regenerate all the ground truth data at that uncertain time? I see it taking more than 2 minutes if/when that kind of problem happens.
Please understand that I'm perhaps too-paranoid about such time-waster future scenarios, which is why I made the suggestion, since we're already dialing back the precision. In all likelihood your plan is fine. I'll shut up now!
ej
Thanks. I think it is good to have a lowest common denominator. At least this will allow us to look elsewhere if fast validation does not seem to work across different machines even at 9 decimal digits.
Nicolas
I read s_display2scene, tweaked just a little, renamed as t_display2scene, and moved to tutorials/display.
9 sig digits is fine w/ me for fast.
I am closing this thread, because it is very long. I will open a much shorter one that just notes that the FAST validations are not yet working on Brian's machine.
A) I moved all of the tutorials (in directory tutorials) into a subfolder xNeedChecking, and same with all scripts (in directory scripts). This was already set up in validationscripts, although I changed the name of the subfolder to match.
The idea is that we should eventually go through each tutorial/script/validation in the xNeedChecking folder, make sure it runs, clean it up, and then move it into the appropriate subdirectory directly under tutorials/scripts/validationscripts.
This way, we can be reasonably confident that when a user executes a tutorial/script/validation it will actually run. [Based on the very small n of about 5 tutorials I cleaned up last night and this morning, I'd say the current probability that one will run and be consistent with current isetbio usage is about 0.6, but in general the needed fixes are very small.]
B) I find that the cleanup and checking generally goes fast and can be done as recreational activity while watching TV. I am going to try to keep going, but others may join in the fun if they like. I don't think perfection should be the goal, but that at the very least they should all run without throwing an error.
D) Tutorials are conceptually distinct from validations: tutorials are meant to illustrate how to do things, either in isetbio or in terms of teaching underlying concepts. Tutorials should not save or compare to ground truth data, nor contain any UnitTest machinery. But they should be well commented and set up to publish nicely. I am not currently checking the latter as I work through them.
E) I do think we want a mechanism to autopublish all of our tutorials, using a scheme parallel to the one we use for validation scripts. Nicolas, can you set this up? There should be a publish-all-tutorials script that descends the tutorials subdirectories (except for xNeedChecking) and publishes them by category. We can use the same Info.txt scheme, as well as the same first-comment-line-is-summary scheme as for validations.
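The traversal part of (E) is simple to sketch. This is only an illustration of the descend-and-skip logic (in Python; directory names and the notion of a "publish" step are stand-ins for whatever the MATLAB framework actually does):

```python
import os

SKIP_DIRS = {'xNeedChecking'}   # unchecked tutorials are not published

def find_tutorials(root):
    """Return (category, script) pairs for every .m file under root,
    descending all subdirectories except those in SKIP_DIRS."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune skipped directories in place so os.walk never enters them.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        category = os.path.relpath(dirpath, root)
        for name in sorted(filenames):
            if name.endswith('.m'):
                found.append((category, name))
    return found
```

Each (category, script) pair would then be published to the matching output category, mirroring how the validation scripts are handled.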
F) I think that scripts may be a category we want to get rid of, with the idea that each script should either be well commented and become a tutorial, or be brought into the UnitTest mode and become a validation. Or some scripts may simply not belong in isetbio because they were really part of some substantive iset project that belongs outside the scope of the core isetbio repository itself. If candidates for the latter are encountered, perhaps they should be moved temporarily to a separate subdirectory of scripts ('xMoveOut') for such consideration.