Closed: weaverba137 closed this PR 1 year ago.
Basic functionality appears to work as advertised, making it easier to add units/comments to a FITS file. Thanks. I have several comments, followed by my opinions on the questions you posed in the original PR post, in approximate order of descending priority:
- `annotate_fits ... --test INFILE OUTFILE` writes OUTFILE, despite the `--test` flag whose doc says "Test mode; show what would be done but do not change any files".
- `annotate_fits` prints DEBUG-level messages regardless of `$DESI_LOGLEVEL`, even if `--verbose` is not set and `$DESI_LOGLEVEL=info`.
I went slightly beyond the scope of https://github.com/desihub/desiutil/issues/191 to support adding units to images (BUNIT), but this gets into tricky territory, because then do we also have to support adding units to compressed images? I'm leaning toward only supporting adding units to tables, i.e. strictly fits.BinTableHDU.
That sounds fine.
Comments on table columns are currently added using the `TTYPEnnn= 'COLUMN_NAME' / COMMENT` convention, but we could also support `TCOMMnnn= 'COMMENT'`. The latter is not officially part of the FITS standard, but might be in the near future.

For this PR, let's just stick with `TTYPEnnn= 'COLUMN_NAME' / COMMENT` and not expand the scope to `TCOMMnnn`.
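As a hedged illustration of the `TTYPEnnn` convention in `astropy.io.fits` (the column name and comment text here are made up):

```python
from astropy.io import fits

hdr = fits.Header()
# The comment rides on the TTYPEn card itself, rendered as e.g.
#   TTYPE1  = 'FLUX    '           / observed flux
hdr.set('TTYPE1', 'FLUX', 'observed flux')
print(hdr.comments['TTYPE1'])
```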
No matter which comment convention we choose, sometimes comments will be too long to fit. What do we do in that case? Just issue a warning? If we choose the `TCOMMnnn` convention, then a long comment can be split across multiple lines, but then the `LONGSTRN` keyword has to be present.

My preference for this PR: if a comment is too long, log an error (or critical) message and stop, unless a new option `--truncate-comments` is specified by the user. If that is too much of a hassle, just printing a warning message and moving on would be ok too, as long as there is some way to completely turn off adding comments while still adding units (see earlier bullet).
`annotate_fits()` adds a checksum to the HDU it is modifying, but should it only do so if a checksum is already present?

I think it is fine to add a `CHECKSUM` no matter what.
To what extent do we want to try to hide warnings about non-standard FITS units? As noted in https://github.com/astropy/astropy/issues/15313, the message is misleading, but there are also inconsistencies in how the warning is issued that may make it difficult to hide the warning.
Let's do a best effort to hide specific units warnings about cases that we know about and still want to consider valid (nanomaggie, nannomaggy, nanomaggys, nanomaggies, possibly others) while still generically letting warnings through to catch common mistakes like 'deg' vs. 'degree' vs. 'degrees'.
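A sketch of that selective-filtering approach using only the stdlib `warnings` machinery (the `emit_unit_warning` helper and its message text are invented stand-ins; the real warnings come from astropy):

```python
import warnings

def emit_unit_warning(unit):
    # Stand-in for the astropy units warning.
    warnings.warn(f"Unit {unit!r} did not parse as a standard FITS unit")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    # Suppress only the spellings we know about and still consider valid.
    for allowed in ('nanomaggie', 'nannomaggy', 'nanomaggys', 'nanomaggies'):
        warnings.filterwarnings('ignore', message=f".*{allowed}.*")
    emit_unit_warning('nanomaggies')  # filtered: known-acceptable
    emit_unit_warning('degrees')      # still warns: likely a mistake for 'deg'
print(len(caught))  # only the 'degrees' warning is recorded
```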
Thank you @sbailey, these are all good comments. I think that satisfies all the questions I had. It will probably take another day or so for final implementation.
@sbailey, in the current implementation, the `--test` option actually does nothing, so that explains your observation about files being written out with that flag set. I'm considering not supporting a `--test` mode at all, since ultimately the test is: does the output file have the correct headers? If the original file is not overwritten, the output file can be removed and regenerated as needed.
> `annotate_fits` prints DEBUG-level messages regardless of `$DESI_LOGLEVEL`, even if `--verbose` is not set and `$DESI_LOGLEVEL=info`.

When you made that observation, did you have `--test` set? Because despite what I said above, that will activate DEBUG messages.
`astropy.io.fits` automatically truncates comments to 46 characters and it emits a warning when it does so. I'm not sure if that's a `warnings.warn()` message or a `logging.warning()` message, but I don't think we need to go to extra effort to warn about truncated comments if there's already such a warning.
Additional detail on the last comment: As I mentioned, a warning about truncating a comment is already emitted, but the warning doesn't say which comment, so we could add that, or convert the warning to an exception while also providing more information about the comment that triggered it. I checked and the warning is a real `warnings.warn()`, so it can be caught.
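Since it is a real `warnings.warn()`, the stdlib filter can promote it to an exception; a minimal sketch (the message text here is a stand-in mirroring astropy's wording, not a call into astropy itself):

```python
import warnings

with warnings.catch_warnings():
    # Promote the truncation notice to a hard error so long comments
    # can't slip through silently.
    warnings.filterwarnings('error', message='Card is too long')
    try:
        # Stand-in for the warning astropy emits while writing the card.
        warnings.warn('Card is too long, comment will be truncated.')
    except UserWarning as w:
        print('caught:', w)
```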
> `annotate_fits` prints DEBUG-level messages regardless of `$DESI_LOGLEVEL`, even if `--verbose` is not set and `$DESI_LOGLEVEL=info`. When you made that observation, did you have `--test` set? Because despite what I said above, that will activate DEBUG messages.
Indeed, I was using `--test`, and I verified that without `--test` it follows the expected `$DESI_LOGLEVEL`. I think that is fine, though perhaps the `--test` help string should mention something like "implies `--verbose`".
> `astropy.io.fits` automatically truncates comments to 46 characters and it emits a warning when it does so. I'm not sure if that's a `warnings.warn()` message or a `logging.warning()` message, but I don't think we need to go to extra effort to warn about truncated comments if there's already such a warning. Additional detail on the last comment: As I mentioned, a warning about truncating a comment is already emitted, but the warning doesn't say which comment, so we could add that, or convert the warning to an exception while also providing more information about the comment that triggered it. I checked and the warning is a real `warnings.warn()`, so it can be caught.
astropy appears to issue a single "WARNING: VerifyWarning: Card is too long, comment will be truncated. [astropy.io.fits.card]" even if it is truncating multiple comments, which implies that it is a warnings.warn() and is perhaps the motivation for why it doesn't include which comment is being truncated. In this case, I think it would be very helpful to know which comment is being truncated so let's add that info with our own warning message.
Resummarizing what I think we want regarding comments:

- a `--disable-comments` option to update units but not comments
- by default, error on comments that are too long, with a `--truncate-comments` option that would override that default behavior and allow truncating comments, while still logging every key/comment that is too long so that you know which ones you might want to shorten and try again

i.e. make truncating comments "opt-in", and augment the logging to say which keys/comments are being truncated.
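That opt-in behavior could look something like the following sketch (a guess at a `check_comment_length` helper; the 46-character limit comes from the astropy truncation mentioned above):

```python
import logging

log = logging.getLogger("annotate")

def check_comment_length(comments, maxlen=46, error=True):
    """Report every over-length comment, then error unless truncation is opted in.

    comments : dict mapping column name -> comment string.
    Returns the number of comments that are too long.
    """
    too_long = {key: c for key, c in comments.items() if len(c) > maxlen}
    for key, comment in too_long.items():
        log.warning("Comment for %s is %d characters (limit %d); it would be truncated.",
                    key, len(comment), maxlen)
    if too_long and error:
        raise ValueError(f"{len(too_long)} column comments are too long!")
    return len(too_long)
```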
Thank you. As I noted above, it might be easier to eliminate `--test` altogether. Do you have thoughts on that?
@sbailey, et al., this is ready for final review. I removed the `--test` option entirely for reasons mentioned above.
Dropping `--test` is fine, but I added more logging to say what it is doing when `--verbose` is selected, as well as a final INFO log that it actually wrote the file. I see tests failed while I was writing this so I will also check on that after finishing this comment.
One remaining gotcha: I appreciate the concept of reporting all comments that are too long before exiting so that the user doesn't have to find, fix, and rerun one-by-one, but as currently implemented the code will error+exit for long comments on columns that aren't even in the input file, i.e. long comments that don't matter. I think it should pre-select only the columns that are in the input file and check only those. Since this is a little more surgery than just adding verbose logging, I'm checking with you first. How about something like:
```python
with fits.open(filename, mode='readonly') as hdulist:
    ...
    column_comments = dict()
    for i in range(1, 1000):
        ttype = f"TTYPE{i:d}"
        if ttype in hdu.header:
            colname = hdu.header[ttype]
            if comments and colname in comments:
                column_comments[colname] = comments[colname]
        else:
            break
    n_long = check_comment_length(column_comments, error=(not truncate))
    ...
```
Yes, that's fair to check only the comments on columns that are actually in the file. However, both that change and the change to the logging in `annotate_fits` will require some moderately extensive changes to the test suite.
Actually, I'm having second thoughts about that code snippet. Please do not attempt to insert that; I don't think it actually works with the way that function is planned out.
Among other things, raising an exception inside a `with` block could result in memory not being freed. So if you're importing and running the function inside a larger pipeline, rather than as a stand-alone script, that could be a big problem.
And yes, there are other places where exceptions can be raised inside the `with` block. This is making me rethink all of them.
OK, I fixed the tests I broke by changing the details of the log messages, but I'll leave it to you @weaverba137 to investigate how to only check the columns that actually appear in the data file rather than all of them in the csv/yaml file whether they are needed or not.
Ah, I just discovered another gotcha: independently of checking the comments, there is currently nowhere that the units are validated, either in `annotate_table()` or `annotate_fits()`. I don't think I'll be able to get something out this afternoon.
Rechecking latest version:
- `--allow-invalid-units` (or equivalent) so that if we find other cases like nanomaggy that don't pass the units checker but we want to force in anyway, we still have a way to do that without requiring a desiutil code update. This would be opt-in; having the current default of not allowing invalid units is good.

Sounds reasonable. I'll try to get that out tomorrow.
@sbailey, last suggested change is in. I will plan to merge later today.
This PR
Please test by running the script `annotate_fits`. Input column definitions can be in CSV format (like desidatamodel's `column_descriptions.csv`), YAML (like desitarget's `units.yaml`) or directly on the command line with a `COLUMN=value:COLUMN=value` notation. See `annotate_fits --help` for more information.

Additional questions that need to be answered before this PR can be completed:
- I went slightly beyond the scope of https://github.com/desihub/desiutil/issues/191 to support adding units to images (`BUNIT`), but this gets into tricky territory, because then do we also have to support adding units to compressed images? I'm leaning toward only supporting adding units to tables, i.e. strictly `fits.BinTableHDU`.
- Comments on table columns are currently added using the `TTYPEnnn= 'COLUMN_NAME' / COMMENT` convention, but we could also support `TCOMMnnn= 'COMMENT'`. The latter is not officially part of the FITS standard, but might be in the near future.
- No matter which comment convention we choose, sometimes comments will be too long to fit. What do we do in that case? Just issue a warning? If we choose the `TCOMMnnn` convention, then a long comment can be split across multiple lines, but then the `LONGSTRN` keyword has to be present.
- `annotate_fits()` adds a checksum to the HDU it is modifying, but should it only do so if a checksum is already present?
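The `COLUMN=value:COLUMN=value` notation mentioned above could be parsed along these lines (an illustrative sketch, not the actual desiutil implementation):

```python
def parse_column_spec(spec):
    """Split 'COLUMN=value:COLUMN=value' into a dict (illustrative only)."""
    result = dict()
    for item in spec.split(':'):
        column, _, value = item.partition('=')
        result[column] = value
    return result

print(parse_column_spec('RA=deg:DEC=deg:FLUX=nanomaggy'))
```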