STScI-Citizen-Science / MTPipeline

Pipeline to produce CR-rejected, AstroDrizzled PNGs of HST WFPC2 solar system data.

Setup AstroDrizzle for ACS, WFC3 #134

Closed ktfhale closed 10 years ago

ktfhale commented 10 years ago

Max has provided us with most of the configuration files for ACS and WFC. We currently have:

acs_center.cfg acs_wide.cfg acs_hrc_wide.cfg wfc3_center.cfg

and we should get wfc_wide.cfg tomorrow, although according to Max the only difference is the output dimensions of 5000x5000. Max also had the following comments:

The main thing I haven't considered is how these output sizes would slice up (choose an overlap, all slices equal size, etc), but we agreed that is less important right now.

and

Also, if you are redrizzling the WFPC2 images, recall we thought we should make square output there too: expand the shorter output dimension (y) to match the longer one (x).

I've created a new branch, more_drizzle, for making the necessary changes to run_astrodrizzle.py, and possibly imaging_pipeline.py, to get AstroDrizzle to run on the new inputs.
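
For reference, Max's square-output suggestion amounts to something like this (a sketch; the helper name is mine, not actual pipeline code):

```python
def square_output_shape(nx, ny):
    """Expand the shorter output dimension (usually y for WFPC2) to
    match the longer one, so the drizzled output is square."""
    side = max(nx, ny)
    return (side, side)
```

So, for example, a 4200x3900 output would be padded to 4200x4200.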

ktfhale commented 10 years ago

Looks like the following changes need to be made:

That's all I see at the moment! Seems fairly simple.

ktfhale commented 10 years ago

I've made the changes listed above. But running the pipeline on WFC data (and probably the others) produces a familiar critical exception in the log:

 <type 'exceptions.ValueError'> Undefined variable `jref' in string `jref$v8q14451j_idc.fits' 110

Also, this was printed for one of the WFC files:

Warning:  There are files with zero exposure time: keyword EXPTIME = 0.0
Warning:  Removing the following files from input list
     /Users/khale/testfiles/WFC/j9fw05s7q_flt.fits
No valid input, quitting ...

This file does indeed have an EXPTIME card of 0. It has plenty of data in it, however, so that's a little confusing.

EDIT: Wally reminded me that he had already helped me fix the first error for WFPC2: all of these lines are now in my .bash_profile:

export uref=/grp/hst/cdbs/uref/
export iref=/grp/hst/cdbs/iref/
export jref=/grp/hst/cdbs/jref/
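
A pre-flight check along these lines could make the failure clearer next time (a sketch; the helper is hypothetical, not in our code):

```python
import os

# The IRAF-style reference-file prefixes AstroDrizzle resolves via the
# environment. If one is missing we get the cryptic
# "Undefined variable `jref' in string `jref$...'" ValueError.
REQUIRED_REF_VARS = ('uref', 'iref', 'jref')

def check_ref_env():
    """Fail early, with a clear message, if any reference prefix is unset."""
    missing = [v for v in REQUIRED_REF_VARS if v not in os.environ]
    if missing:
        raise EnvironmentError(
            'Reference environment variable(s) not set: %s' % ', '.join(missing))
```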
ktfhale commented 10 years ago

AstroDrizzle is running on images from all six detectors! It seems to be going okay; the files are at least being written. I'm going to put some of the most common potentially problematic things I see in the standard output when I run the pipeline on these new input files, though.

For HRC:

Distortion model is not available: IDCTAB file jref$q692007bj_idc.fits not found

For WFC:

Distortion model is not available: IDCTAB file jref$v8q14451j_idc.fits not found

For SBC:

Distortion model is not available: IDCTAB file jref$s5d1409dj_idc.fits not found

and

WARNING: No cte correction will be made for this SBC data.

I only get this when running on SBC, which, not being a CCD, doesn't need CTE correction. Is it possible AstroDrizzle is somehow attempting it for the other detectors?

For UVIS:

Distortion model is not available: IDCTAB file iref$x5h1320ei_idc.fits not found

and

 Kw D2IMFILE exists in primary header but file /grp/hst/cdbs/irefx5o1555ji_d2i.fits not found

                     Detector to image correction will not be applied

For IR:

Distortion model is not available: IDCTAB file iref$w3m18525i_idc.fits not found

and

D2IMFILE keyword not found in primary header

I believe every one of the desired output files has been written, but I'm not sure how qualified I am to judge whether the files are what we want or not. I'll give it a gander.

ktfhale commented 10 years ago

I think the pipeline doesn't know not to reprocess the new outputs if they already exist. It seems happy to rerun AstroDrizzle on UVIS data, at least. I imagine there are updates we need to make to our table of expected output products?

acviana commented 10 years ago

Maybe? Can you provide an example?

ktfhale commented 10 years ago

This is a set of all the pipeline's inputs and outputs after running cr rejection and astrodrizzle. The pipeline seems to always retry the astrodrizzle step. Additionally, there's no _wide_ image here. Do we want one?

Inputs (UVIS files)

ib2k77rrq_cr_flt.fits
ib2k77rrq_flt.fits

Actual drizzle outputs:

ib2k77rrq_cr_sci1_single_mask.fits
ib2k77rrq_cr_single_sci.fits
ib2k77rrq_cr_single_wht.fits

and

ib2k77rrq_sci1_single_mask.fits
ib2k77rrq_single_sci.fits
ib2k77rrq_single_wht.fits

In contrast, the outputs expected by our output_file_dict are:

ib2k77rrq_flt_wide_single_sci.fits
ib2k77rrq_flt_center_single_sci.fits
ib2k77rrq_cr_flt_wide_single_sci.fits
ib2k77rrq_cr_flt_center_single_sci.fits

We expect both a _wide_single_sci.fits and a _center_single_sci.fits from each _flt.fits input image. It looks like we don't check for the presence of secondary products, like _wht.fits or _mask.fits files. What we're actually producing from these new inputs is just one _single_sci.fits image, with no _wide_single_sci or _center_sci. I'd go ahead and start changing things, but I want to confirm with Max that what we're producing from the new inputs is actually what we want.
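
For what it's worth, the completeness check I have in mind would look roughly like this (a sketch; the helper name is hypothetical, and the output names here use a simplified wide/center convention for illustration rather than our exact output_file_dict entries):

```python
import os

def astrodrizzle_outputs_exist(flt_path, output_dir):
    """Return True when every expected drizzle product for this input
    already exists, so the pipeline can skip rerunning AstroDrizzle."""
    basename = os.path.basename(flt_path).replace('_flt.fits', '')
    expected = ['{0}_{1}_single_sci.fits'.format(basename, cut)
                for cut in ('wide', 'center')]
    return all(os.path.exists(os.path.join(output_dir, name))
               for name in expected)
```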

ktfhale commented 10 years ago

I think there might be two problems. First, AstroDrizzle is only outputting one _sci.fits file, a _single_sci.fits file, when it seems we want two: a _wide_single.fits and a _center_single.fits file. But there's also potentially a problem with the function renaming_files(), and not just with the files coming out of AstroDrizzle: we're not renaming any of the new output files.

ktfhale commented 10 years ago

My talk with Max was very helpful. Because the renaming_files() script isn't working, we are only getting the _wide_single.fits image (although it's not named that because the renaming_files() script, again, isn't working), and that overwrites the previously written _center_single.fits image (because that's not named that either, because etc).

So I'm going to get renaming_files() working on the new inputs. Also, Max suggested we might try letting AstroDrizzle determine the dimensions for the wide images, since setting them ourselves can lead to huge off-pixel swaths in the outputs, as visible in these UVIS wide images below:

screen shot 2014-07-29 at 3 50 26 pm

ktfhale commented 10 years ago

The reason renaming_files() isn't working is that AstroDrizzle, for some reason, appears to strip the _flt out of the filenames when it spits out products. It keeps _c0m, but not _flt, and so the glob regex we use to find the outputs doesn't yield any results for the new inputs.

acviana commented 10 years ago

We use regex?

ktfhale commented 10 years ago

I thought Python's glob took regular expressions as its input argument, but apparently glob has its own wildcard notation, which is simpler than regular expressions. I should simply have said "glob search string".
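
To illustrate the difference (glob's matching is implemented by the stdlib fnmatch module):

```python
import fnmatch

# Shell-style wildcards: '*' matches any run of characters, '?' one
# character, '[...]' a character set -- much simpler than regex.
assert fnmatch.fnmatch('ib2k77rrq_flt.fits', '*_flt.fits')

# A regex written as a glob pattern quietly matches nothing: the '.'
# and '\' are treated as literal characters, not regex operators.
assert not fnmatch.fnmatch('ib2k77rrq_flt.fits', r'.*_flt\.fits')
```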

ktfhale commented 10 years ago

I've made changes to the filename handling in two places to accommodate the fact that AstroDrizzle keeps _c0m, but strips _flt, from its output filenames.

In run_astrodrizzle.py, I've made modifications so the filenames of AstroDrizzle's outputs are rewritten to include _wide_ or _center_ as appropriate.
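
Roughly the idea (a sketch, not the actual run_astrodrizzle.py code; it builds the new name from whatever rootname AstroDrizzle returns, so it works whether _flt was stripped or _c0m kept):

```python
import os

def rename_drizzle_output(output_path, cut):
    """Insert the cut label ('wide' or 'center') into an AstroDrizzle
    output name, e.g. 'ib2k77rrq_single_sci.fits' ->
    'ib2k77rrq_wide_single_sci.fits'."""
    dirname, filename = os.path.split(output_path)
    rootname = filename.replace('_single_sci.fits', '')
    new_name = '{0}_{1}_single_sci.fits'.format(rootname, cut)
    return os.path.join(dirname, new_name)
```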

In image_pipeline.py, I've made changes so that the pipeline products from the new _flt.fits inputs are in output_dict, so the pipeline will not try to run AstroDrizzle if the results are already present.

Note that the build has failed, but this is just because _flt_ is still included in the test's hard-coded expectation dictionary. Since we have decided that we don't want 'flt' in our output filenames past cosmic ray rejection, I'll edit the test.

ktfhale commented 10 years ago

These are the resulting _wide_single_sci.fits UVIS images if you don't set the output dimensions in the configuration files.

screen shot 2014-07-31 at 9 44 30 am

I can't see anything particularly wrong with them. Sometimes, though, I think there's been more than just a simple rotation.

screen shot 2014-07-31 at 9 47 28 am

That looks as if the image has been slightly... tilted? The actual image certainly doesn't seem exactly square anymore, and seems more like a parallelogram. Is this AstroDrizzle's built-in distortion correction? I suppose I'm wondering whether this is desirable or not. Of the 15 images above, the non-squareness is most noticeable in this image.

ktfhale commented 10 years ago

Here are some comparisons of the "wide" and center cuts, with the wide dimensions set automatically by AstroDrizzle. The wide images are on the left, the center images on the right.

screen shot 2014-07-31 at 10 46 51 am

I actually think that, pretty often, the AstroDrizzle wide images do a better job than our manually specified center cuts at their own jobs. Our specified center cut can often leave a sizable black border around the smaller subarray images, but that never happens when AstroDrizzle sets the dimensions. And we can't really make the center cut any smaller without making it too small for the larger subarrays, like the Jupiter image.

Overall, I think it might be better if we just used the "wide" images, with dimensions set by AstroDrizzle, at least for WFC3. I don't see much utility in the center images.

ktfhale commented 10 years ago

Testing AstroDrizzle on the WFC inputs, I found that AstroDrizzle fails on j9fw05s7q_flt.fits, due to it having an EXPTIME keyword of 0.0. We should do a search of the headers and see how prevalent this is.

acviana commented 10 years ago

Yeah, that would be good.

ktfhale commented 10 years ago

So I've run AstroDrizzle on inputs from all six detectors, and tried to compare the wide and center images for each. The wide images, with the dimensions set automatically by AstroDrizzle to include the whole image, are on the left, whereas the center images, with the dimensions we set manually, are on the right.

ACS/HRC

hrc

For HRC, there's almost no difference between the center slice and the wide slice with dimensions chosen by AstroDrizzle. Producing both seems like duplication of effort.

ACS/SBC

sbc

Same goes for SBC. Typically, the two slices are nearly identical.

ACS/WFC

wfc

WFC has a little bit more variation between the slices. It's often hard to tell in our faint WFC fields what the target is, but in other cases, especially when a subarray is used, it's fairly clear (as in the third- and second-to-last images). The center slices for those images unfortunately don't quite catch the targets in their entirety. For WFC, when the full CCD array is used, we can't trust the source to be in the center of the image, where the chip gap is. Overall, I still think the wide images are more useful.

WFC3/UVIS

uvis

As noted two comments above, I don't think the center slice for UVIS images offers us much that the wide slice does not. When the target is obvious in the center slice, it's just as obvious in the wide slice as well. And when the full chip is used, with the resulting chip gap in the center of the image, the center slice becomes much less useful.

WFC3/IR

ir

The center slice might be most valuable for IR. When the full CCD is used, there's no chip gap, and so the target is more likely to be near the center of the image, and thus within the center slice. The second image illustrates how a center slice can help clarify the target. However, I still wouldn't call the center slice essential.

You might have noticed the black dots that are abundant in the IR images. Thankfully, these are not due to overzealous CR rejection, but occur in AstroDrizzle. Here's one of the IR images before and after CR rejection, along with the wide-slice _sci and _wht files.

irblackdots

The biggest black circle is over the Death Star, which gives me hope that maybe AstroDrizzle knows what it's doing, and is just eliminating bad data? Still seems very harsh, though.

Overall, my feeling is that we should ditch outputting the center slice, and just use the dimensions AstroDrizzle chooses. This will also halve our drizzling computation time, although that's not too important. More significantly, I just feel that the center slices aren't very useful if we let AstroDrizzle choose the dimensions for the wide image.

ktfhale commented 10 years ago

My script to find how many of our files have an EXPTIME flag set to 0 hit some kind of IOError and crashed after looking at only 15,167 of our files. I'll get it running again, but so far we've got 80 files with an EXPTIME of 0. We need to figure out what to do with them, as they'll cause AstroDrizzle to crash.
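
For the rerun, something like this structure should survive individual unreadable files (a sketch; find_zero_exptime and the injected reader are hypothetical, not the actual script):

```python
def find_zero_exptime(paths, get_exptime):
    """Scan for the EXPTIME = 0.0 inputs that make AstroDrizzle quit.

    get_exptime is injected -- with astropy installed it could be
    lambda p: fits.getval(p, 'EXPTIME') -- so an unreadable file
    (like the IOError that killed the first run) is skipped instead
    of crashing the whole scan."""
    bad = []
    for path in paths:
        try:
            if get_exptime(path) == 0.0:
                bad.append(path)
        except IOError:
            continue
    return bad
```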

ktfhale commented 10 years ago

Alright. It seems we've decided not to use the center slices for the new images. However, for the sake of expediency in getting the pipeline running, we'll still make them, and modify the code to dispose of the center images at a later date.

As for the 80 images with an EXPTIME flag of 0, we're just going to ignore them. The pipeline will try to run on them, it will fail, and it will keep going. We should keep them in mind when we run any scripts in the future to check for completeness in our data products. Here's a list of the bad files:

/astro/mtpipeline/archive/wfc3/11573_URANUS/ib3805v2q_flt.fits
/astro/mtpipeline/archive/wfc3/11573_URANUS/ib3802tpq_flt.fits
/astro/mtpipeline/archive/wfc3/11573_URANUS/ib3802ttq_flt.fits
/astro/mtpipeline/archive/wfc3/11573_URANUS/ib3805v1q_flt.fits
/astro/mtpipeline/archive/wfc3/11573_URANUS/ib3802tsq_flt.fits
/astro/mtpipeline/archive/wfc3/12237_174567/ibjb04p0q_flt.fits
/astro/mtpipeline/archive/wfc3/12237_174567/ibjb04p1q_flt.fits
/astro/mtpipeline/archive/wfc3/12237_2005EF298/ibjb07jdq_flt.fits
/astro/mtpipeline/archive/wfc3/12237_2005EF298/ibjb07jcq_flt.fits
/astro/mtpipeline/archive/wfc3/12237_2005EF298/ibjb07jbq_flt.fits
/astro/mtpipeline/archive/wfc3/12468_04VY130/ibtp14dxq_flt.fits
/astro/mtpipeline/archive/wfc3/12468_04VY130/ibtp14dyq_flt.fits
/astro/mtpipeline/archive/wfc3/12468_04VY130/ibtp14e2q_flt.fits
/astro/mtpipeline/archive/wfc3/12468_04VY130/ibtp14e0q_flt.fits
/astro/mtpipeline/archive/wfc3/12468_08SO266/ibtp22weq_flt.fits
/astro/mtpipeline/archive/wfc3/12468_08SO266/ibtp22wgq_flt.fits
/astro/mtpipeline/archive/wfc3/12468_08SO266/ibtp22wdq_flt.fits
/astro/mtpipeline/archive/wfc3/12468_08SO266/ibtp22wiq_flt.fits
/astro/mtpipeline/archive/acs/11970_TITAN-PRE-INF-CONJ/jb9z04dbq_flt.fits
/astro/mtpipeline/archive/acs/11970_TITAN-PRE-INF-CONJ/jb9z04dcq_flt.fits
/astro/mtpipeline/archive/acs/11970_TITAN-PRE-INF-CONJ/jb9z04d6q_flt.fits
/astro/mtpipeline/archive/acs/11970_TITAN-PRE-INF-CONJ/jb9z04d7q_flt.fits
/astro/mtpipeline/archive/acs/11970_TITAN-PRE-INF-CONJ/jb9z04daq_flt.fits
/astro/mtpipeline/archive/acs/11970_TITAN-PRE-INF-CONJ/jb9z04d8q_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q301g4q_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q308lyq_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q301g3q_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q301fyq_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q301g1q_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q310mkq_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q310mhq_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q310mlq_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q308m0q_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q308m2q_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q308lvq_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q306l2q_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q306l7q_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q310mmq_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q306l6q_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q306l1q_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q308m1q_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q301fzq_flt.fits
/astro/mtpipeline/archive/acs/10805_URANUS/j9q306l4q_flt.fits
/astro/mtpipeline/archive/acs/10860_2002UX25/j9qs12hrq_flt.fits
/astro/mtpipeline/archive/acs/10860_2002UX25/j9qs12hsq_flt.fits
/astro/mtpipeline/archive/acs/10800_01OG109/j9rpa3ghq_flt.fits
/astro/mtpipeline/archive/acs/10800_01OG109/j9rpa3ggq_flt.fits
/astro/mtpipeline/archive/acs/10800_01OG109/j9rpa3giq_flt.fits
/astro/mtpipeline/archive/acs/10800_01OG109/j9rpa3gjq_flt.fits
/astro/mtpipeline/archive/acs/10514_ANY/j9fw33qzq_flt.fits
/astro/mtpipeline/archive/acs/10514_ANY/j9fw33r1q_flt.fits
/astro/mtpipeline/archive/acs/10514_ANY/j9fw05scq_flt.fits
/astro/mtpipeline/archive/acs/10514_ANY/j9fw33r4q_flt.fits
/astro/mtpipeline/archive/acs/10514_ANY/j9fw05s9q_flt.fits
/astro/mtpipeline/archive/acs/10514_ANY/j9fw05s7q_flt.fits
/astro/mtpipeline/archive/acs/10800_04PA112/j9rpd2n4q_flt.fits
/astro/mtpipeline/archive/acs/10545_IXION/j9fs14wcq_flt.fits
/astro/mtpipeline/archive/acs/10545_IXION/j9fs14wfq_flt.fits
/astro/mtpipeline/archive/acs/10545_IXION/j9fs14weq_flt.fits
/astro/mtpipeline/archive/acs/10545_IXION/j9fs14wdq_flt.fits
/astro/mtpipeline/archive/acs/10508_2002CR46B/j9f608yvq_flt.fits
/astro/mtpipeline/archive/acs/10508_2002CR46B/j9f608yxq_flt.fits
/astro/mtpipeline/archive/acs/10508_2002CR46B/j9f608yzq_flt.fits
/astro/mtpipeline/archive/acs/10508_2002CR46B/j9f608ywq_flt.fits
/astro/mtpipeline/archive/acs/10508_2002CR46B/j9f608z0q_flt.fits
/astro/mtpipeline/archive/acs/10508_2002CR46B/j9f608yyq_flt.fits
/astro/mtpipeline/archive/acs/10508_2002CR46B/j9f608yuq_flt.fits
/astro/mtpipeline/archive/acs/10508_2002CR46B/j9f608z1q_flt.fits
/astro/mtpipeline/archive/acs/10800_05EF304/j9rp26qaq_flt.fits
/astro/mtpipeline/archive/acs/10800_05EF304/j9rp26q9q_flt.fits
/astro/mtpipeline/archive/acs/10800_05EF304/j9rp26q8q_flt.fits
/astro/mtpipeline/archive/acs/10800_05EF304/j9rp26qbq_flt.fits
/astro/mtpipeline/archive/acs/10557_ASTEROID-7719/j9f304xcq_flt.fits
/astro/mtpipeline/archive/acs/10557_ASTEROID-7719/j9f304xbq_flt.fits
/astro/mtpipeline/archive/acs/10514_82075/j9fw05s6q_flt.fits
/astro/mtpipeline/archive/acs/10514_82075/j9fw05sbq_flt.fits
/astro/mtpipeline/archive/acs/10514_82075/j9fw05s8q_flt.fits
/astro/mtpipeline/archive/acs/10514_99RB216/j9fw33r0q_flt.fits
/astro/mtpipeline/archive/acs/10514_99RB216/j9fw33qyq_flt.fits
/astro/mtpipeline/archive/acs/10514_99RB216/j9fw33r3q_flt.fits

I figure we can keep this ticket open, for when I get around to getting rid of the center slice code.

ktfhale commented 10 years ago

Running AstroDrizzle over the weekend was more or less successful. I discovered that, due to a mistake on my part in the script that runs the pipeline, a few project folders weren't run. I've been trying to run the pipeline this morning, but I get this error:

Traceback (most recent call last):
  File "/tmp_mnt/stsciEWS6_x64/ssbx/python/lib/python2.7/site-packages/drizzlepac-2.0.0.dev33816-py2.7-linux-x86_64.egg/drizzlepac/util.py", line 219, in wrapper
    func(*args, **kwargs)
  File "/tmp_mnt/stsciEWS6_x64/ssbx/python/lib/python2.7/site-packages/drizzlepac-2.0.0.dev33816-py2.7-linux-x86_64.egg/drizzlepac/astrodrizzle.py", line 171, in run
    imgObjList, outwcs = processInput.setCommonInput(configobj)
  File "/tmp_mnt/stsciEWS6_x64/ssbx/python/lib/python2.7/site-packages/drizzlepac-2.0.0.dev33816-py2.7-linux-x86_64.egg/drizzlepac/processInput.py", line 190, in setCommonInput
    inmemory=virtual)
  File "/tmp_mnt/stsciEWS6_x64/ssbx/python/lib/python2.7/site-packages/drizzlepac-2.0.0.dev33816-py2.7-linux-x86_64.egg/drizzlepac/processInput.py", line 320, in createImageObjectList
    image.compute_wcslin(undistort=undistort)
  File "/tmp_mnt/stsciEWS6_x64/ssbx/python/lib/python2.7/site-packages/drizzlepac-2.0.0.dev33816-py2.7-linux-x86_64.egg/drizzlepac/imageObject.py", line 1163, in compute_wcslin
    wcslin = distortion.utils.output_wcs([chip_wcs],undistort=undistort)
  File "/usr/stsci/ssbx/python/lib/python2.7/site-packages/stwcs-1.2.1.dev33816-py2.7.egg/stwcs/distortion/utils.py", line 26, in output_wcs
    fra_dec = np.vstack([w.calc_footprint() for w in list_of_wcsobj])
AttributeError: 'HSTWCS' object has no attribute 'calc_footprint'

Searching through the logs from the weekend, I see AstroDrizzle never hit this error. I just tried rerunning the pipeline on some files which were successfully processed over the weekend, and it failed on those too. It seems like something has changed, like drizzlepac got updated or something.

ktfhale commented 10 years ago

Examination of the logs from over the weekend suggests we were using drizzlepac-1.1.16, whereas now when I run the pipeline the errors are occurring in drizzlepac-2.0.0. I'll try to figure out how to back up to the older version, maybe in setup.py (although it's not like I've rerun setup.py since this weekend, so I'm unsure if that can help). In the meantime, is there someone I can contact about this?

acviana commented 10 years ago

So there are 3 "flavors" of STPython at ST: ssb, ssbx, and ssbdev. ssbx is a weekly build and ssbdev is a nightly build. It's possible that something in ssbx was updated over the weekend. But I'm also wondering if you are pointing at different versions on your different machines.

If you really think something was switched on you try Megan Sosey in SSB (sosey at stsci, @sosey).

Also, just to amuse me, could you put a number on "more or less successful"?

ktfhale commented 10 years ago

This is all on science4. I haven't tried running AstroDrizzle today on my laptop.

I'll try Wally's script to see if I can quantify what's missing. We'll need to ignore the lack of png outputs.

sosey commented 10 years ago

Drizzlepac in dev is 2.0 now and dev also pulls astropy 1.0. X is also using drizzlepac 2.0 and astropy 0.4. I just ran astrodrizzle from X on a linux machine with wfc3 data without error, so I'm thinking the problem is your environment. Try using a fresh terminal, make sure ssbx is set and go again on a fresh dataset...I don't know how your scripts work, but make sure any output they produced before is deleted and try again fresh? Also make sure your PYTHONPATH doesn't have anything upfront that's pointing to older software.....

ktfhale commented 10 years ago

Looks like there are 190 input files for which AstroDrizzle outputs are missing. The majority are ACS files. They're scattered across various projects, and typically those projects have many other input files on which AstroDrizzle ran successfully.

80 of these are, predictably, the bad exposure time files. 1 is a file whose drizzle outputs I deleted to get drizzling working again on science4. 109 (not too many!) must have other problems. I'll figure them out after I figure out why AstroDrizzle doesn't seem to be working at all. The first thing I did was to start a fresh ssh and virtual environment instance. I'll check my python path.

sosey commented 10 years ago

Just adding information: that error for HSTWCS should be coming from astropy (which is where HSTWCS imports from), so that's something to check on your path.

acviana commented 10 years ago

Thanks @sosey!

ktfhale commented 10 years ago

Indeed, backdating to version 1.1.16 of drizzlepac changed nothing. I'm surprised that astropy could have changed on us, however, because we have that fixed to use version 0.3.2. Could some sub-version of that have changed?

EDIT: curiously, AstroDrizzle seems to be running just fine on my laptop. I'll try comparing the PYTHONPATH between the machines.
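
Something like this quick diagnostic, run on both machines, should show which copies of the packages Python actually resolves (hypothetical helper, not pipeline code):

```python
import sys

def report_environment(modules=('drizzlepac', 'astropy', 'stwcs')):
    """Print sys.path, then the version and file location of each
    package Python actually imports, to compare across machines."""
    print('sys.path:')
    for entry in sys.path:
        print('  ' + entry)
    for name in modules:
        try:
            mod = __import__(name)
            print('%s %s from %s' % (
                name, getattr(mod, '__version__', '?'), mod.__file__))
        except ImportError:
            print('%s: not importable' % name)
```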

sosey commented 10 years ago

ah - that's probably the problem. If you're forcing your pipeline to use astropy 0.3.2, there's probably an update in astrodrizzle which requires astropy 0.4. You should try taking your datasets and running them through the standard X versions to see if they still fail, and while you're at it, verify that the versions you think are getting imported really are... i.e. open a terminal, import the stuff you use, and hand-check the versions. It's a bad idea to force astropy to a lower version than what's in X unless you really know all your software dependencies. A quick look at the changelog for astropy 0.4 shows updates to the wcs package. Python will import packages in order from your PYTHONPATH, so it depends on how you're forcing.

ktfhale commented 10 years ago

I'm using setuptools in our code's setup.py file to fix astropy's version to 0.3.2. That's also where I tried fixing drizzlepac to version 1.1.16. I re-ran the setup script after making that change to it, of course, and the version number printed in the output error suggests that our code was indeed using version 1.1.16, and not version 2.0.0.

If there had been an update to astrodrizzle which required astropy version 0.4, wouldn't backdating to version 1.1.16 of astrodrizzle have solved the problem? That's the version of astrodrizzle that was running successfully on Saturday. Do you know whether it's possible that setting astrodrizzle's version number to 1.1.16 in setup.py does not completely guarantee that I'm running the same astrodrizzle as I was on Saturday?

ktfhale commented 10 years ago

Within my virtual environment on science4, after running setup.py with astropy and drizzlepac fixed to versions 0.3.2 and 1.1.16, respectively (a combination that I believed ran successfully on Saturday), my PYTHONPATH lists drizzlepac version 1.1.16 before drizzlepac version 2.0.0. Fixing the version of drizzlepac does indeed seem to be doing what I want it to.

ktfhale commented 10 years ago

I have stopped fixing the versions of astropy and drizzlepac on science4, and AstroDrizzle appears to run just fine. The reasons why we fixed astropy are documented here, but they are likely no longer relevant: the problem I think I noticed in the development release 0.4rc1 (and it is still mysterious why setuptools was grabbing a development release) was, I think, resolved.

I'll try no longer fixing them on my mac as well.

sosey commented 10 years ago

Setting install_requires in setup.py to a certain version only makes that the minimum version required, I believe (I could be wrong, I'm not a setuptools expert). setuptools will most likely grab and install the most recent version of the software past the required version if that version is not already available on your path. Also, when you run setup.py it installs to your default install area, which will be different on X and DEV, or if you point it somewhere else it will go there. So it's possible to install your package and have a different version of drizzlepac or astropy picked up at runtime, depending on your environment.

If you are always starting from an empty environment and explicitly installing only the versions of all the code you need each time, it's possible you are hitting the correct versions. However, if you are also referencing ssbx or ssbdev, then there are other software dependencies which might be more hidden and cause errors. I think that with drizzlepac 1.1.16, stwcs was still used instead of astropy.wcs. Going from 1.1.16 to 2.0 is a huge update, so just pointing drizzlepac to its older version without also making sure STWCS, or any other packages it references, point to their correct versions could also produce errors. The best thing is to install a single release package from Ureka, which should be guaranteed to have all the dependencies correctly included.
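
To illustrate the distinction with a sketch of a setup.py fragment (the package names and versions are just the ones discussed in this thread):

```python
from setuptools import setup

setup(
    name='mtpipeline',
    install_requires=[
        'astropy==0.3.2',      # '==' requests exactly this version
        'drizzlepac>=1.1.16',  # '>=' is only a floor; a newer release
                               # (e.g. 2.0.0) still satisfies it
    ],
)
```

Note that even an exact `==` pin only constrains the named package, not its own dependencies, which is the hidden-dependency problem described above.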

sosey commented 10 years ago

You might also check the easy-install.pth file in the lib/ of your install directory. I'm fuzzier on how all this works, but I think setuptools updates this file with installations and uses it to reference which installed version to pick up at runtime.

ktfhale commented 10 years ago

Thanks for all your advice, @sosey! This is very helpful. Now that I've stopped fixing the versions, things seem to be working now.

I just reran the pipeline on the 109 remaining mysteriously un-drizzled files. All but three of them were successfully processed. That is to say, all but 83 of our 25,055 input flt's or c0m's have now been successfully drizzled.

Other than the 80 EXPTIME = 0 files, these are the offending three:

/astro/mtpipeline/mtpipeline_outputs/wfc3/12537_ANY/ibu521krq_flt.fits
/astro/mtpipeline/mtpipeline_outputs/wfc3/12537_ANY/ibu521kvq_flt.fits
/astro/mtpipeline/mtpipeline_outputs/wfc3/12537_ANY/ibu521kyq_flt.fits

I haven't figured out why they don't like being drizzled. Plenty of others in their project are just fine with it. All of the images in this project seem kind of odd, though, like

screen shot 2014-08-04 at 4 48 02 pm

It's a Jackson Pollock! Anyway, the three failing files all look like this:

screen shot 2014-08-04 at 4 40 33 pm

Are those... dust grains? What's up with the columns? I don't know. The given error is:

Traceback (most recent call last):
  File "/tmp_mnt/stsciEWS6_x64/ssbx/python/lib/python2.7/site-packages/drizzlepac-2.0.0.dev33816-py2.7-linux-x86_64.egg/drizzlepac/util.py", line 219, in wrapper
    func(*args, **kwargs)
  File "/tmp_mnt/stsciEWS6_x64/ssbx/python/lib/python2.7/site-packages/drizzlepac-2.0.0.dev33816-py2.7-linux-x86_64.egg/drizzlepac/astrodrizzle.py", line 171, in run
    imgObjList, outwcs = processInput.setCommonInput(configobj)
  File "/tmp_mnt/stsciEWS6_x64/ssbx/python/lib/python2.7/site-packages/drizzlepac-2.0.0.dev33816-py2.7-linux-x86_64.egg/drizzlepac/processInput.py", line 113, in setCommonInput
    **configObj['STATE OF INPUT FILES'])
  File "/tmp_mnt/stsciEWS6_x64/ssbx/python/lib/python2.7/site-packages/drizzlepac-2.0.0.dev33816-py2.7-linux-x86_64.egg/drizzlepac/processInput.py", line 508, in process_input
    updatewcs=updatewcs, **workinplace)
  File "/tmp_mnt/stsciEWS6_x64/ssbx/python/lib/python2.7/site-packages/drizzlepac-2.0.0.dev33816-py2.7-linux-x86_64.egg/drizzlepac/processInput.py", line 672, in buildFileListOrig
    ivmlist, filelist = zip(*ivmlist)
ValueError: need more than 0 values to unpack

Anyway, I don't think it's all too concerning, given it's only three files. I'm going to move on to working on the pngs.

sosey commented 10 years ago

Glad it's working. Those 3 files are all in the same visit (you can tell by the name). It looks like they were parallel exposures taken during the Venus transit with the moon, so there was a bright background and a harsh, moving WCS applied to the headers; astrodrizzle probably can't align them (and I think I remember getting calls from the archive that some of the exposures from this program saturated). Anyways, that top image looks like a nice blurry image of the moon. My guess for the bottom image is that those are rings caused by mineral deposits on the UVIS detector window when it was contaminated on the ground... they are being illuminated by the bright background. Fun times. I would trash all three images; at 60 seconds I don't think you'll recover useful features from the individual images either.

acviana commented 10 years ago

:clap:

ktfhale commented 10 years ago

I never really did a vetting process for AstroDrizzle outputs, and just assumed whatever it spits out is correct. But this IR frame gave me some pause after it was drizzled:

screen shot 2014-08-05 at 9 45 49 am

I'm fairly confident those are stars, so it's not necessarily bad, but it seems strange.

ktfhale commented 10 years ago

Actually, I feel pretty okay about this behavior. It seems to happen when AstroDrizzle runs into a clearly saturated portion of the detector. The weight map shows that these pixels are being detected as useless, and are getting set to 0.

screen shot 2014-08-05 at 9 55 11 am
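
A quick way to quantify how harsh the rejection is would be to measure the zero-weight fraction in the _wht image (hypothetical helper; it assumes the weight array has already been read in, e.g. with astropy.io.fits.getdata):

```python
import numpy as np

def zero_weight_fraction(wht):
    """Fraction of pixels AstroDrizzle rejected (weight set to 0)
    in a _single_wht.fits array."""
    wht = np.asarray(wht)
    return float((wht == 0).sum()) / wht.size
```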

sosey commented 10 years ago

Those are really saturated stars whose centers are being tagged as bad pixels, either by the pipeline because they are really horribly saturated (if they saturate in the first read there's not much to do, though I think we fixed the pipeline to not use bad-pixel DQs there so they'd look prettier, hmm) or, more likely, by drizzle when it does the blot compare. That wht image is your clue that the drizzle is bad: there are too many objects showing up in it (most of those tags look like saturated star cores, actually - in the IR you shouldn't see very many cosmic rays tagged in the drizzle weight masks because they should be rejected in the calibration pipeline when the FLT image is made). The drizzle parameters should be futzed with if you want to use these images for science. It looks like a crowded galactic center field?

ktfhale commented 10 years ago

With all AstroDrizzle products created (except for those 83 files) I think this ticket can be closed.