bcalden / ClusterPyXT

The Galaxy Cluster ‘Pypeline’ for X-ray Temperature Maps
BSD 3-Clause "New" or "Revised" License

norm map & free parameter #26

Closed duyhoang-astro closed 1 year ago

duyhoang-astro commented 4 years ago

Hi.

  1. I see that the best-fit parameters are saved in the acb/*spectral_fits.csv files. Is there an existing way to export the norm obtained from the spectral fitting as a FITS image, perhaps the same way the temperature map is generated?

  2. In the code, the abundance and other parameters are kept fixed; only temperature and normalization are free. Is there any plan to allow spectral fitting with the abundance as a free parameter as well?

Thanks.
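In the meantime, the CSV can be parsed directly. A minimal sketch (the column names `region` and `norm` here are assumptions — check the actual header of your spectral_fits.csv before using it):

```python
import csv

def read_norms(csv_path):
    """Read best-fit normalizations from a spectral_fits.csv file.

    Assumes (hypothetically) columns named 'region' and 'norm';
    adjust the names to match the actual header in your file.
    """
    norms = {}
    with open(csv_path, newline='') as f:
        for row in csv.DictReader(f):
            norms[int(row['region'])] = float(row['norm'])
    return norms
```

Mapping these values back onto the region map (the same way the temperature map generator does) would then produce the normalization image.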

bcalden commented 4 years ago

To answer your first question, I should be committing a new version of the dev branch tomorrow that includes this feature. I just finished coding it and need to test it a bit before pushing to GitHub. The feature will be found after clicking the 'Make Final Products' button. It should function as requested (making normalization maps from the already-fit values). If not, please let me know and I will try to update it ASAP.

To answer your second question: in principle, any of the parameters can be fit. You are one of a number of people to request fitting abundance in order to make metallicity maps. This is my next priority after finishing your first request.

I will try to send you an email when I commit the code tomorrow with the normalization maps.

Any comments on the UI/ease of use? I love getting feedback (good or bad) and will try to incorporate any suggestions into future releases.

Thank you!

Brian

duyhoang-astro commented 4 years ago

Thanks for the quick reply.

On the UI/ease of use: I find the code very easy to use and quite robust, especially the first few times I used it. I like the guiding text after each step: very clear. After becoming familiar with the pipeline, I prefer running it from the command line rather than the GUI (so I add --continue when possible); it is more convenient for me. I understand this is really a personal preference, and other users might prefer the GUI. But I think many users would like the following additional features:

  1. the inputs for all steps are saved in a parset file by the user at the beginning of a run. This is very helpful when working on a large number of clusters.
  2. automatic identification and removal of compact sources, in addition to the compact sources defined by users (in case the automatic pipeline does not work for some sources).
  3. automatic generation of the region files (acisI_region_0.reg and master_crop-ciaowcs.reg).
  4. automatic search for nH using built-in databases.
  5. and perhaps notes on the important files generated by the pipeline and what corrections/processes have been applied to them.

Sorry for so many requests. They are not crucial, as I can still run the pipeline at its current stage, but with these new features much more could be done.

Thanks, Duy
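The parset-file request (item 1) could be sketched with Python's standard configparser; the section name, keys, and example values below are all hypothetical, not ClusterPyXT's actual format:

```python
from configparser import ConfigParser

# Hypothetical parameter file capturing all pipeline inputs up front,
# so a batch of clusters can be processed without interactive prompts.
def write_parset(path, cluster_name, obsids, nh, redshift):
    cfg = ConfigParser()
    cfg['cluster'] = {
        'name': cluster_name,
        'obsids': ','.join(obsids),
        'hydrogen_column_density': str(nh),
        'redshift': str(redshift),
    }
    with open(path, 'w') as f:
        cfg.write(f)

def read_parset(path):
    cfg = ConfigParser()
    cfg.read(path)
    c = cfg['cluster']
    return {
        'name': c['name'],
        'obsids': c['obsids'].split(','),
        'nh': float(c['hydrogen_column_density']),
        'redshift': float(c['redshift']),
    }
```

A driver script could then loop `read_parset` over a directory of such files, one per cluster.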

bcalden commented 4 years ago

Duy,

First off, thank you very much for the feedback! I just committed an update that includes the creation of normalization maps. If you have the Python library astroquery installed in your CIAO conda environment, it will try querying for the redshift if it can parse the name entered for the cluster. I am working on UI improvements to allow selecting which parameters are frozen/fit.

Automatic generation of the necessary region files is coming! (This is going to be piece by piece with source finding being the last).

Regarding batch processing, a lot of the functionality is already there; at least the first stage can be batched with shell scripting (not ideal). Creating a parameter file like you suggested should help with this.

I am doing a massive rewrite of the documentation, updating it for the GUI. The batch features and descriptions of the important files will be included.

Let me know if you have any issues with the new version.

Thanks again for the feedback!

Brian

duyhoang-astro commented 4 years ago

Thanks for the update! I tried to run the ClusterPyXT-dev-CIAO-4.12 branch and got the error below. Am I missing something for the new version to work?

$ python clusterpyxt.py
  File "clusterpyxt.py", line 130
    region_file_label = QtWidgets.QLabel(f"A region file (e.g. {self.observations[0].acisI_region_0_filename}) containing\n"
                                                                                                                           ^
SyntaxError: invalid syntax

bcalden commented 4 years ago

Apologies!

It had to do with the way that string was formatted in the code. I updated the syntax and committed to the dev branch. If it still gives you issues, let me know (along with which version of Python and which OS you're on).
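For context: f-strings were added in Python 3.6, so the Python 3.5 bundled with CIAO 4.12 raises a SyntaxError at parse time on any file containing one, even if that line never executes. The 3.5-compatible equivalent uses str.format (illustrative string below, not the exact one from clusterpyxt.py):

```python
filename = "acisI_region_0.reg"

# Python 3.6+ f-string; merely parsing this literal is what fails
# under Python 3.5 with "SyntaxError: invalid syntax":
label_new = f"A region file (e.g. {filename}) containing..."

# Python 3.5-compatible equivalent using str.format:
label_old = "A region file (e.g. {}) containing...".format(filename)

assert label_new == label_old
```

This is why the error points at the string literal itself rather than any runtime logic.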

Brian


duyhoang-astro commented 4 years ago

I am using Python 3.5.4, which comes with the CIAO 4.12 installation. The OS is Ubuntu 18.04. After pulling the new code, I still get the same error:

$ python clusterpyxt.py
  File "clusterpyxt.py", line 133
    While any size upto the full CCD may be used, a region larger than 60 arc seconds is generally not necessary.""")
                                                                                                                   ^
SyntaxError: invalid syntax

Cheers, Duy

bcalden commented 4 years ago

Alright, the problem is that most of the strings are formatted for Python 3.6 and up. I will rewrite them so the code is compatible with the Python 3.5 shipped with CIAO 4.12. I am at a conference at the beginning of the week but should be able to complete the rewrite by the end of the week.

If that is too long because you are under a time crunch, let me know. Alternatively, upgrading to Python 3.6+ in a different conda environment will also work (although that may not be desirable).

Brian

bcalden commented 4 years ago

The strings are updated and everything should now work with Python 3.5.

I've tested on a Mac with Python 3.7.5; the strings are formatted so they should also work under Python 3.5.

I had some issues with the test VM today, but I should have it tested on Ubuntu with Python 3.5 by Friday.

Brian

duyhoang-astro commented 4 years ago

Thanks Brian for the fix. I have reinstalled CIAO 4.12 with Python 3.7.7 (installed via miniconda3, https://cxc.cfa.harvard.edu/ciao/download/conda.html) and updated ClusterPyXT-dev-CIAO-4.12. The CIAO installation passed all smoke tests, and the problem with the strings seems to be fixed.

But there is a new error (pget() Parameter not found), which seems to be related to CIAO. I followed https://cxc.cfa.harvard.edu/ciao/faq/pget_error.html and removed the files in ~/cxcds_param4/*, but the error remains. I checked for the par file in the temporary directory (/tmp/tmp5i9h52qe.dmkeypar.par), but it is not there. It is not clear to me whether this temporary file was created and then deleted when the pipeline crashed, or never created at all. Any ideas on how to fix this error?

Found background at /home/duy/soft/miniconda3/envs/ciao-4.12/CALDB/data/chandra/acis/bkgrnd/acis1iD2000-12-01bkgrnd_ctiN0005.fits
pset: cannot convert parameter value : rval
/tmp/tmp5i9h52qe.dmkeypar.par: cannot convert parameter value : rval
Traceback (most recent call last):
  File "clusterpyxt.py", line 496, in run_stage_1
    ciao.run_stage_1(self._cluster_obj)
  File "/home/duy/soft/ClusterPyXT-dev-CIAO-4.12/ciao.py", line 1306, in run_stage_1
    merge_observations(cluster)
  File "/home/duy/soft/ClusterPyXT-dev-CIAO-4.12/ciao.py", line 478, in merge_observations
    ciao_back(cluster)
  File "/home/duy/soft/ClusterPyXT-dev-CIAO-4.12/ciao.py", line 184, in ciao_back
    echo=True)
  File "/home/duy/soft/miniconda3/envs/ciao-4.12/lib/python3.7/site-packages/ciao_contrib/runtool.py", line 1810, in __call__
    stackfiles = self._update_parfile(parfile)
  File "/home/duy/soft/miniconda3/envs/ciao-4.12/lib/python3.7/site-packages/ciao_contrib/runtool.py", line 1365, in _update_parfile
    self._update_parfile_verify(parfile, stackfiles)
  File "/home/duy/soft/miniconda3/envs/ciao-4.12/lib/python3.7/site-packages/ciao_contrib/runtool.py", line 1294, in _update_parfile_verify
    oval = _to_python(ptype, pio.pget(fp, oname))
ValueError: pget() Parameter not found
Aborted (core dumped)

duyhoang-astro commented 4 years ago

Update: if I run the command (on line 184 of ciao.py) in Python 3.7.7 separately (not via ClusterPyXT-dev-CIAO-4.12), no error occurs.

>>> from ciao_contrib import runtool as rt
>>> acis_file = 'acisf04961_repro_evt2.fits'
>>> rt.dmkeypar(infile=acis_file, keyword="GAINFILE", echo=True)
acisD2000-01-29gain_ctiN0008.fits

bcalden commented 4 years ago

Thank you for the results! Over the weekend I was able to test on Ubuntu with Python 3.5 and encountered the parameter-file error you mentioned above. When I reran the pypeline a couple of times, the error went away. I likely need a punlearn before that specific CIAO command runs, so I will try to hunt that down so it isn't a recurring issue.
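That punlearn-before-run pattern can be wrapped once so every tool call starts from default parameters. A sketch (not ClusterPyXT's actual code): `tool` stands for any `ciao_contrib.runtool` tool object, e.g. `rt.dmkeypar`, which exposes a `punlearn()` method as used above.

```python
def run_with_punlearn(tool, **params):
    """Reset a CIAO tool's parameter file to defaults, then run it.

    Stale entries under ~/cxcds_param4 can trigger
    'ValueError: pget() Parameter not found'; punlearn clears them
    before the tool reads its parameter file.
    """
    tool.punlearn()
    return tool(**params)
```

Usage would then be, e.g., `run_with_punlearn(rt.dmkeypar, infile=acis_file, keyword="TSTART", echo=True)` in place of a bare `rt.dmkeypar(...)` call.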

After encountering that bug, I also encountered the string bug. There are a couple more, so there will be a big commit today to correct those and hopefully the parameter issue as well (e.g. stage 5 also has a string bug when trying to print the red text in pypeline_io.py).

This update will be pushed later today.

Also, thank you very much for your help! It is highly appreciated and will help make ClusterPyXT better/easier to use (or in this case, just able to run through without crashing).

Brian

On Sun, Jun 7, 2020 at 1:58 PM Duy Hoang notifications@github.com wrote:

Brian, I ran the dev pipeline on macOS with CIAO 4.12 + Python 3.7.7. No errors with the strings or 'parameter not found' occur, but there is another error in stage 3.

Starting Stage 2: test3
Removing sources from observations in parallel.
removing sources from 899
removing sources from 7687
infile: /Users/hoang/nextcloud/chandra_clusters/test3/899/analysis/acisI.fits[exclude sky=region(/Users/hoang/nextcloud/chandra_clusters/test3/sources.reg)]
infile: /Users/hoang/nextcloud/chandra_clusters/test3/7687/analysis/acisI.fits[exclude sky=region(/Users/hoang/nextcloud/chandra_clusters/test3/sources.reg)]
outfile: /Users/hoang/nextcloud/chandra_clusters/test3/899/analysis/acis_nosrc_899.fits
outfile: /Users/hoang/nextcloud/chandra_clusters/test3/7687/analysis/acis_nosrc_7687.fits
infile: /Users/hoang/nextcloud/chandra_clusters/test3/7687/analysis/merged_back.fits[exclude sky=region(/Users/hoang/nextcloud/chandra_clusters/test3/sources.reg)]
outfile: /Users/hoang/nextcloud/chandra_clusters/test3/7687/analysis/back_nosrc_7687.fits
infile: /Users/hoang/nextcloud/chandra_clusters/test3/899/analysis/merged_back.fits[exclude sky=region(/Users/hoang/nextcloud/chandra_clusters/test3/sources.reg)]
outfile: /Users/hoang/nextcloud/chandra_clusters/test3/899/analysis/back_nosrc_899.fits
Generating light curves.
Creating a lightcurve from the high energy events list with dmextract
Running dmextract infile=/Users/hoang/nextcloud/chandra_clusters/test3/7687/analysis/acisI_hiE.fits[bin time=298127080.04016:298134082.32799:259.28] outfile=/Users/hoang/nextcloud/chandra_clusters/test3/7687/analysis/acisI_lcurve_hiE.lc opt=ltc1 clobber=True
cleaning the lightcurve for 7687, press enter to continue.
Filtering the event list using GTI info from high energy flares.
running: dmcopy infile=/Users/hoang/nextcloud/chandra_clusters/test3/7687/analysis/acis_nosrc_7687.fits[@/Users/hoang/nextcloud/chandra_clusters/test3/7687/analysis/acisI_gti_hiE.gti] outfile=/Users/hoang/nextcloud/chandra_clusters/test3/7687/analysis/acisI_nosrc_hiEfilter.fits clobber=True
Creating a lightcurve from the high energy events list with dmextract
Running dmextract infile=/Users/hoang/nextcloud/chandra_clusters/test3/899/analysis/acisI_hiE.fits[bin time=81006721.383985:81038272.422658:259.28] outfile=/Users/hoang/nextcloud/chandra_clusters/test3/899/analysis/acisI_lcurve_hiE.lc opt=ltc1 clobber=True
cleaning the lightcurve for 899, press enter to continue.
Filtering the event list using GTI info from high energy flares.
running: dmcopy infile=/Users/hoang/nextcloud/chandra_clusters/test3/899/analysis/acis_nosrc_899.fits[@/Users/hoang/nextcloud/chandra_clusters/test3/899/analysis/acisI_gti_hiE.gti] outfile=/Users/hoang/nextcloud/chandra_clusters/test3/899/analysis/acisI_nosrc_hiEfilter.fits clobber=True
Processing test3/7687
Creating the image with sources removed
Removing sources from event file to be used in lightcurve
Creating lightcurve from the events list with dmextract
Cleaning the lightcurve by removing flares with deflare. Press enter to continue.
filtering the event list using GTI info just obtained.
Don't forget to check the light curves!
Processing test3/899
Creating the image with sources removed
Removing sources from event file to be used in lightcurve
Creating lightcurve from the events list with dmextract
Cleaning the lightcurve by removing flares with deflare. Press enter to continue.
filtering the event list using GTI info just obtained.
Don't forget to check the light curves!
Cluster data written to /Users/hoang/nextcloud/chandra_clusters/test3/test3_pypeline_config.ini
Stage 2 complete - Point sources removed -> /Users/hoang/nextcloud/chandra_clusters/test3/main_output/test3_xray_surface_brightness_nosrc.fits.
High energy events filtered. Next is stage 3. This stage extracts the RMF and ARF files. Before continuing the pipeline on test3, you need to create a region file for each observation. Each observation will need its own region file named acisI_region_0.reg and saved in the respective analysis directory (e.g. /Users/hoang/nextcloud/chandra_clusters/test3/7687/analysis/acisI_region_0.reg).

To create this file, open the respective acisI_clean.fits file (e.g. /Users/hoang/nextcloud/chandra_clusters/test3/7687/analysis/acisI_clean.fits) and draw a small circle region containing some of each of the ACIS-I CCD's. This region does not need to contain ALL of the chips, just a piece of each. It can be ~20 pixels (bigger circle=longer runtime).

After the region files for each observation are created, continue running ClusterPyXT on test3

Traceback (most recent call last):
  File "clusterpyxt.py", line 519, in run_stage_3
    win = Stage3Window(self, self._cluster_obj)
  File "clusterpyxt.py", line 137, in __init__
    obs_string = self.get_obs_string()
  File "clusterpyxt.py", line 173, in get_obs_string
    obs_string_list.append("{observation.id}: {region_file}".format(obsid=observation.id, region_file=region_file))
KeyError: 'observation'
Abort trap: 6
(ciao-4.12) macpro:ClusterPyXT-dev-CIAO-4.12 hoang$


duyhoang-astro commented 4 years ago

On macOS, I can run the ClusterPyXT-dev-CIAO-4.12 version without any problem (CIAO 4.12 + Python 3.7.7 installed with miniconda). The only problem is that the spectral fitting takes a long time, so it's not an ideal solution.

With the same installation on my Ubuntu 18 machine (CIAO 4.12 + Python 3.7.7 + miniconda), I had the parameter error (ValueError: pget() Parameter not found) when running dmkeypar (as in my earlier post). I added a punlearn command, rt.dmkeypar.punlearn(), before the dmkeypar lines. This seems to fix the error for the dmkeypar lines that do not use keyword=TSTART or keyword=TSTOP. If I replace the keyword TSTART or TSTOP with other keywords found in the header (e.g. OBS_ID, TIMEZERO), dmkeypar works just fine.

If I run the command rt.dmkeypar(infile=data_hiE, keyword="TSTART", echo=True) outside of the pipeline (i.e. in Python 3.7.7 with CIAO 4.12 in the environment), no error appears.

I also see the same behavior for some other commands, e.g. dmextract and deflare. They work outside of ClusterPyXT-dev-CIAO-4.12 but not within the pipeline.

Do you have any idea why that is?

bcalden commented 4 years ago

I need to look at this issue further. I will admit, the majority of development/testing has so far been Mac-based. I have begun testing on Ubuntu as these platform-dependent issues become more apparent.

I have had issues with deflare specifically, as CIAO has been updated over the years even within the same major version number, so it does not surprise me that there may be issues using it in the pipeline. I have encountered the dmkeypar issues with TSTART/TSTOP before, and thought they were fixed. Apparently not.

Does this problem happen on every run on Ubuntu? (I.e., you cannot get past this step without manually running the commands.)

Let me try and recreate the error on Ubuntu and I will follow up.

Brian


duyhoang-astro commented 4 years ago

Yes, the problem happens on every run on Ubuntu. A quick trick I used was to create Python functions that execute the same tools but pass the arguments to the bash shell. To call one, simply replace, e.g., rt.dmextract with run_dmextract; the arguments stay the same. The pipeline then works fine for these steps (no "Parameter not found" error). The Python functions are below.

import os  # needed for os.system

def run_dmextract(infile, outfile, bkg=None, error=None, bkgerror=None,
                  bkgnorm=None, exp=None, bkgexp=None, sys_err=None, opt=None,
                  defaults=None, wmap=None, clobber=None, verbose=None):
    # Build the dmextract command line, appending only the options given.
    cmd = 'dmextract infile="%s" outfile="%s"' % (infile, outfile)
    if bkg:
        cmd += ' bkg=%s' % bkg
    if error:
        cmd += ' error=%s' % error
    if bkgerror:
        cmd += ' bkgerror=%s' % bkgerror
    if bkgnorm:
        cmd += ' bkgnorm=%s' % bkgnorm
    if exp:
        cmd += ' exp=%s' % exp
    if bkgexp:
        cmd += ' bkgexp=%s' % bkgexp
    if sys_err:
        cmd += ' sys_err=%s' % sys_err
    if opt:
        cmd += ' opt=%s' % opt
    if defaults:
        cmd += ' defaults=%s' % defaults
    if wmap:
        cmd += ' wmap=%s' % wmap
    if clobber:
        cmd += ' clobber=yes'
    if verbose:
        cmd += ' verbose=%s' % verbose
    print('Running %s' % cmd)
    os.system(cmd)

def run_deflare(infile, outfile, method, save):
    cmd = 'deflare infile="%s" outfile="%s" method=%s save=%s' % (
        infile, outfile, method, save)
    print('Running %s' % cmd)
    os.system(cmd)

def run_dmhedit(infile, filelist, operation, key, value, datatype=None,
                unit=None, comment=None, verbose=None):
    cmd = 'dmhedit infile="%s" filelist="%s" operation=%s key=%s value=%s' % (
        infile, filelist, operation, key, value)
    if datatype:
        cmd += ' datatype=%s' % datatype
    if unit:
        cmd += ' unit=%s' % unit
    if comment:
        cmd += ' comment=%s' % comment
    if verbose:
        cmd += ' verbose=%s' % verbose
    print('Running %s' % cmd)
    os.system(cmd)
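The three wrappers above share the same shape, so a single generic builder could cover them and any other CIAO tool invoked this way. A sketch (not part of ClusterPyXT; the `_dry_run` flag is added here purely for illustration):

```python
import os

def run_ciao_tool(tool, _dry_run=False, **params):
    """Build and run a CIAO command line from keyword arguments.

    Parameters left as None are omitted, mirroring the wrappers above.
    Every value is double-quoted so paths containing DM filter
    brackets survive the shell.
    """
    args = ' '.join('%s="%s"' % (k, v) for k, v in params.items()
                    if v is not None)
    cmd = '%s %s' % (tool, args)
    print('Running %s' % cmd)
    if not _dry_run:
        os.system(cmd)
    return cmd
```

For example, `run_ciao_tool('deflare', infile=lc, outfile=gti, method='sigma', save=None)` would build and run the deflare call while dropping the unset option.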

duyhoang-astro commented 4 years ago

With the fix above, the code runs through all steps up to Spectral Fitting. Here I encounter another error: the instrument model is not found in the PI file. The PI header does contain the keyword INSTRUME; is this the info the pipeline is reading? Any tips on how to fix this would be very helpful (sorry for bothering you so much about this)!

A copy of the PI header is here: https://www.dropbox.com/s/rtn7sl5f4fbgj3r/header.txt?dl=0 The on-screen error is copied below.

Duy

Processing region number: 207
207: Working on observation id: 4215
Processing region number: 210
210: Working on observation id: 4215
210: Met S/N threshold for 4215
210: Extracting PI files for 4215
207: Met S/N threshold for 4215
207: Extracting PI files for 4215
Running dmextract infile="/home/duy/chandra_clusters/test/cc/acb/acisI_clean_4215.fits[sky=region(/home/duy/chandra_clusters/test/cc/acb/temp_210_4215.reg)][bin pi]" outfile="/home/duy/chandra_clusters/test/cc/acb/4215_210.pi" clobber=yes
Running dmextract infile="/home/duy/chandra_clusters/test/cc/acb/acisI_clean_4215.fits[sky=region(/home/duy/chandra_clusters/test/cc/acb/temp_207_4215.reg)][bin pi]" outfile="/home/duy/chandra_clusters/test/cc/acb/4215_207.pi" clobber=yes
Running dmextract infile="/home/duy/chandra_clusters/test/cc/acb/backI_clean_4215.fits[sky=region(/home/duy/chandra_clusters/test/cc/acb/temp_207_4215.reg)][bin pi]" outfile="/home/duy/chandra_clusters/test/cc/acb/4215_back_207.pi" clobber=yes
Running dmextract infile="/home/duy/chandra_clusters/test/cc/acb/backI_clean_4215.fits[sky=region(/home/duy/chandra_clusters/test/cc/acb/temp_210_4215.reg)][bin pi]" outfile="/home/duy/chandra_clusters/test/cc/acb/4215_back_210.pi" clobber=yes
Running dmhedit infile="/home/duy/chandra_clusters/test/cc/acb/4215_207.pi" filelist="" operation=add key=EXPOSURE value=64220.940691801
Running dmhedit infile="/home/duy/chandra_clusters/test/cc/acb/4215_210.pi" filelist="" operation=add key=EXPOSURE value=64220.940691801
Running dmhedit infile="/home/duy/chandra_clusters/test/cc/acb/4215_207.pi" filelist="" operation=add key=RESPFILE value='/home/duy/chandra_clusters/test/cc/acb/cc_4215.rmf'
Running dmhedit infile="/home/duy/chandra_clusters/test/cc/acb/4215_210.pi" filelist="" operation=add key=RESPFILE value='/home/duy/chandra_clusters/test/cc/acb/cc_4215.rmf'
Running dmhedit infile="/home/duy/chandra_clusters/test/cc/acb/4215_207.pi" filelist="" operation=add key=ANCRFILE value='/home/duy/chandra_clusters/test/cc/acb/cc_4215.arf'
Running dmhedit infile="/home/duy/chandra_clusters/test/cc/acb/4215_210.pi" filelist="" operation=add key=ANCRFILE value='/home/duy/chandra_clusters/test/cc/acb/cc_4215.arf'
Running dmhedit infile="/home/duy/chandra_clusters/test/cc/acb/4215_207.pi" filelist="" operation=add key=BACKFILE value=/home/duy/chandra_clusters/test/cc/acb/4215_back_207.pi
Running dmhedit infile="/home/duy/chandra_clusters/test/cc/acb/4215_210.pi" filelist="" operation=add key=BACKFILE value=/home/duy/chandra_clusters/test/cc/acb/4215_back_210.pi
Running dmhedit infile="/home/duy/chandra_clusters/test/cc/acb/4215_back_207.pi" filelist="" operation=add key=EXPOSURE value=1717423.024726406
Running dmhedit infile="/home/duy/chandra_clusters/test/cc/acb/4215_back_210.pi" filelist="" operation=add key=EXPOSURE value=1717423.024726406
207: Loading data pulse invariant files (PI files)

gabriel-fontinele commented 1 year ago

I have the ValueError: pget() error. Did you manage to solve it?