DerKevinRiehl / transposon_annotation_reasonaTE

Transposon annotation tool "reasonaTE" (part of TransposonUltimate)
GNU General Public License v3.0

When running reasonaTE -mode pipeline, the job was killed, showing python ${PRLSPTH2}$@ #24


Djangodu commented 7 months ago

Dear developer,

Here is my problem: when running reasonaTE -mode pipeline, the job was killed, and the last output shown was:

python ${PRLSPTH2}$@

(screenshot attached)

This problem appears on my server, but the same reasonaTE program runs smoothly and produces the final result on the VMware workstation on my own computer when I use the example data you show on GitHub.

I don't know why this happens; if possible, please give me some suggestions.

Djangodu commented 7 months ago

Dear developer,

I also transferred the example data you show on GitHub to my server; it also ran smoothly and produced output without the process being killed. Running reasonaTE -mode pipeline on my own genome took at least two months and was then killed by this problem during the analysis. It is very disappointing.

So, if you have any advice, please help me.

DerKevinRiehl commented 7 months ago

Hello Djangodu, first of all thank you very much for your interest in our work.

1) It is a pity that your process was killed, but processing times of two months and longer are quite common. The process may have been killed by your cluster or IT admin; you might ask them (see the sketch after this list for one way to check the kernel log yourself).

2) Don't worry, intermediate results will likely have been stored. Could you tell me a little bit more about what is in the folders that reasonaTE produced? I am sure you will be able to recover most of the results.
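
For reference, one way to check whether the Linux out-of-memory (OOM) killer ended the process is to scan the kernel log for the messages it leaves behind. This is only a hypothetical diagnostic sketch, not part of reasonaTE; the find_oom_kills helper and the exact log wording are assumptions, and dmesg may need elevated permissions on some clusters (journalctl -k is an alternative).

```python
# Hypothetical diagnostic helper, not part of reasonaTE:
# scan the kernel log for OOM-killer messages such as
# "Out of memory: Killed process <pid> (python) ...".
import subprocess

def find_oom_kills():
    # 'dmesg' may require root on some systems; 'journalctl -k' is an alternative.
    log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    return [line for line in log.splitlines()
            if "Out of memory" in line or "Killed process" in line]

if __name__ == "__main__":
    hits = find_oom_kills()
    print("\n".join(hits) if hits else "No OOM-kill messages found in dmesg.")
```

If such messages appear around the time the job died, the machine simply ran out of memory; if not, a scheduler or admin action is more likely.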

Best, Kevin

Djangodu commented 7 months ago

Dear Kevin,

Thanks for your reply.

  1. I do not think the program was killed manually by someone, as the last file was produced at 3 a.m., when no one was in the building. (screenshot attached)

  2. From reading your pipeline code, the problem happened in doClusterBlasting, the step that goes from transposonCandC to transposonCandD. Everything before that ran very well, and everything stopped at this "python ${PRLSPTH2}$@" message. (screenshot attached)

  3. I don't know whether the bigger genome is the reason. Your GitHub example data runs fine on both my workstation and the server, but my 3.0 Gb genome hits this bug on the server. I also re-ran your example data to see whether the previous results would be recovered. When the pipeline is re-run in the same workspace folder, the result dates are updated to new ones rather than keeping the old ones, which means the results are regenerated and overwrite the old ones.

  4. Although this problem shows up in the job record and the program has stopped, I want to give it another try. I found that the number of threads can be changed in the doClusterBlasting.py script. I want to confirm with you whether it is possible to raise the original thread number when I have enough computing power, for example from 10 threads to 140 threads (see the sketch below). I don't know whether the "python ${PRLSPTH2}$@" problem will be fixed when the program runs faster with more power. What do you think?
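
To illustrate what raising that number means, here is a minimal sketch of how a fixed worker count typically bounds the number of BLAST jobs running in parallel in a Python script. This is a hypothetical illustration, not the actual doClusterBlasting.py; the function names, the N_WORKERS constant, and the blastn command line are assumptions.

```python
# Hypothetical sketch (not the actual doClusterBlasting.py) of how a fixed
# worker count bounds the number of BLAST jobs that run in parallel.
from multiprocessing import Pool
import subprocess

N_WORKERS = 10  # the constant one would raise, e.g. toward the number of free cores

def run_blast(query_file):
    # Placeholder command line; the real script's BLAST call may differ.
    result = subprocess.run(
        ["blastn", "-query", query_file, "-db", "cluster_db", "-outfmt", "6"],
        capture_output=True, text=True)
    return query_file, result.returncode

def blast_all(query_files):
    # Each worker handles one query file at a time; at most N_WORKERS run at once.
    with Pool(processes=N_WORKERS) as pool:
        return pool.map(run_blast, query_files)
```

Note that more parallel jobs also means more memory in use at the same time, so if the kill was caused by running out of memory, raising the worker count alone may not fix it.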

DerKevinRiehl commented 7 months ago

Hey Djangodu, I think you should definitely give it another shot with more threads. The more threads, the faster it should run.

By the way, it is possible that your network administrator has set a threshold, e.g. jobs that run longer than a certain time are cancelled.

Please let me know once you have new results, and I wish you lots of success. Best, Kevin