Open alanbchristie opened 1 year ago
I've re-run the two jobs in Squonk with some fixes from Tim to verify their behaviour and they both now run to completion.
This job still fails with the following exception: -
```
Traceback (most recent call last):
  File "/code/merger.py", line 248, in combine_fragments
    v.combine(long_name='combine-' + str(i + 1))
  File "/opt/conda/lib/python3.9/site-packages/fragmenstein/victor/_victor_combine.py", line 59, in combine
    self._safely_do(execute=self._calculate_combination, resolve=self._resolve, reject=self._reject)
  File "/opt/conda/lib/python3.9/site-packages/fragmenstein/victor/_victor_safety.py", line 27, in _safely_do
    execute()
  File "/opt/conda/lib/python3.9/site-packages/fragmenstein/victor/_victor_combine.py", line 89, in _calculate_combination
    self._calculate_combination_thermo()
  File "/opt/conda/lib/python3.9/site-packages/fragmenstein/victor/_victor_combine.py", line 137, in _calculate_combination_thermo
    self.unminimized_pdbblock = self._plonk_monster_in_structure()
  File "/opt/conda/lib/python3.9/site-packages/fragmenstein/victor/_victor_plonk.py", line 62, in _plonk_monster_in_structure
    return self._plonk_monster_in_structure_minimal()
  File "/opt/conda/lib/python3.9/site-packages/fragmenstein/victor/_victor_plonk.py", line 93, in _plonk_monster_in_structure_minimal
    raise ValueError(f'Residue {self.ligand_resi} already exists in structure')
ValueError: Residue 1B already exists in structure
```
Consequently the output (`merged.sdf`) is empty.
We're using fragmenstein == 0.10 (March 8th).
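For context, the failing check can be illustrated with a small sketch. This is a hypothetical reproduction, not Fragmenstein's actual code: the ligand is plonked into the template as residue `1B` (residue 1, chain B), and the insertion is refused when that residue identifier is already occupied.

```python
# Hypothetical sketch of the kind of check behind the error above;
# plonk_ligand and the residue set are illustrative, not Fragmenstein internals.
def plonk_ligand(structure_residues: set, ligand_resi: str) -> set:
    """Add the ligand residue, refusing to overwrite an existing one."""
    if ligand_resi in structure_residues:
        raise ValueError(f'Residue {ligand_resi} already exists in structure')
    return structure_residues | {ligand_resi}

template = {'1A', '2A', '1B'}  # the template structure already holds residue 1B
try:
    plonk_ligand(template, '1B')
except ValueError as error:
    print(error)  # Residue 1B already exists in structure
```

The fix under discussion is presumably either renaming the ligand residue or stripping the clashing residue from the template before plonking.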
Tim is in discussions with Matteo on a resolution.
This job no longer exhibits the divide-by-zero exception and appears to run to completion without error. It still does not create an output (`merged.sdf` is present but empty).
Can we expect this job to generate an output?
Three kinds of errors for @alanbchristie.
It has probably been addressed already. As was said, there are no test jobs in the frontend (or so I understood). Can the Fragmenstein pipeline, for now, be replaced with something simple along the lines of the following (with correct outputs):
```python
import enum
import ctypes
import warnings
import random


class ProgramError(Exception):
    pass


class ErrorTypes(enum.Enum):
    no_error = 1
    warned = 2
    bad_output = 3  # not the result we wanted
    codebase_error = 4  # the code is crap
    programmatic_error = 5  # the code did it on purpose
    segfault = 6

    def __call__(self):
        """
        Return zero if possible
        """
        cls = self.__class__
        if self is cls.no_error:
            return 0
        elif self is cls.warned:
            warnings.warn('Warning')
            return 0
        elif self is cls.bad_output:
            return 1
        elif self is cls.codebase_error:
            return 0 / 0  # ZeroDivisionError
        elif self is cls.programmatic_error:
            raise ProgramError
        elif self is cls.segfault:
            ctypes.string_at(0)  # dereference a null pointer

    @classmethod
    def random(cls):
        # enum values start at 1, so the range is 1..len(cls) inclusive
        choice: int = random.randrange(1, len(cls) + 1)
        return cls(choice)()
```
and made to call `ErrorTypes.random()` on each call? The whole discussion was about error handling, after all, so I am confused about not compartmentalising the task.
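A driver wrapping that random call might look like the following. This is a sketch using a cut-down two-member enum for brevity, not the pipeline code; `run_once` is a hypothetical name.

```python
import enum
import random


class ProgramError(Exception):
    pass


class ErrorTypes(enum.Enum):
    # cut-down version of the enum above, for illustration only
    no_error = 1
    programmatic_error = 2

    def __call__(self):
        if self is ErrorTypes.no_error:
            return 0
        raise ProgramError('deliberate failure')

    @classmethod
    def random(cls):
        return random.choice(list(cls))()


def run_once() -> int:
    """Map the outcome of one random call to a shell-style exit code."""
    try:
        return ErrorTypes.random()
    except ProgramError:
        return 1


# each call yields 0 (success) or 1 (deliberate failure)
assert run_once() in (0, 1)
```

The point of such a stub is that the job wrapper can be exercised against every failure mode (clean exit, warning, bad exit code, exception, segfault) without waiting on the real pipeline.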
(Obviously, I will address the pipeline file issue)
Having re-tested, the jobs all behave as expected (i.e. there are no unhandled exceptions using the current inputs).
Ruben's test failures with: -
These result in job execution failure and no uploaded compounds.