Closed: sjeknic closed this 2 years ago
`Operation.get_inputs_and_outputs()` now accounts for the value of `force_rerun`. It's still not ideal, and the inputs and outputs could be more consolidated, but I'm going to close the issue and reference this PR.

Okay, I am now at least 95% confident that I have addressed #76 in this PR. See the issue for more details.
This PR introduces the last set of fixes and changes before release v0.0.0. All future changes to `origin/main` will have to be packaged as their own release. This probably isn't a comprehensive list of changes, but let me try:

- `Operations` are renamed. The issue was that `Track` the file type and `Track` the `Operation` could not be imported at the same time. So instead, `Track`, `Segment`, etc. are now changed to new names. Probably won't keep the names, but for now, it's fine.
- `Operation.add_function_to_operation()`: the `save_name` kwarg is changed to `save_as`, and both `output_type` and `save_as` must be passed as named args. In fact, the only positional arg allowed is the name of the function; no positional arguments can be passed to that function (see the sketch below).
- `save=True` by default in `Operation`. Intermediates are now saved if `save_as` is provided, regardless of the value of `save`.
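As a rough sketch of the new calling convention, assuming a `Segmenter` operation and placeholder function/argument values (only `add_function_to_operation`, `output_type`, `save_as`, and `save` come from this PR; everything else is illustrative):

```python
# Sketch only: the operation setup, function name, and argument values are placeholders.
seg = Segmenter(save=True)  # save now defaults to True anyway

# The function name is the only positional argument; everything else,
# including output_type and save_as, must be passed by keyword.
seg.add_function_to_operation(
    'clean_labels',            # name of the function to add (positional)
    output_type='mask',        # must be a named arg
    save_as='cleaned_labels',  # formerly save_name; must also be a named arg
)
# Because save_as is provided, this intermediate is saved even if save=False.
```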
- `data_frame.hdf5` is no longer made.
- `ExperimentArray`s can now merge conditions. This is done using the required metric `position_id`, which can either be set manually using a dictionary passed to `Orchestrator`, set with `ConditionArray.set_position_id()`, or guessed by letting `Orchestrator` attempt it. Merged conditions are stacked and accessed via the condition name (see the sketch below).
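A minimal sketch of the three ways to set `position_id`, assuming placeholder names and an assumed kwarg for the `Orchestrator` dictionary (only `position_id`, `Orchestrator`, and `ConditionArray.set_position_id()` are from this PR):

```python
# Sketch only: the Orchestrator kwarg name and all string keys are assumptions.

# 1) Manually, with a dictionary passed to Orchestrator:
orch = Orchestrator(position_id_map={'well1_pos0': 0, 'well1_pos1': 1})

# 2) Directly on a ConditionArray:
cond_arr.set_position_id(0)

# 3) Or pass nothing and let Orchestrator attempt to guess position_id.

# After merging, the stacked conditions are accessed via the condition name:
merged = expt_arr['well1']
```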
- A `core` directory is added for a lot of the files that the user won't see.
- `Operations` no longer use `__slots__`. If the user sub-classed an `Operation` but did not include `__slots__` (which seems likely), then `__slots__` wouldn't have been used anyway. It's a little cleaner this way, and prevents `AttributeError`s in some cases (generic illustration below).
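A generic Python illustration of the point above (this is not CellTK code): a base class's `__slots__` is effectively defeated by a subclass that doesn't declare its own, while the slotted base class itself can still raise `AttributeError`.

```python
class SlottedOp:
    """Stand-in for an Operation that declares __slots__."""
    __slots__ = ('output',)

class UserOp(SlottedOp):
    """Typical user subclass that does not declare __slots__."""
    pass

UserOp().extra = 'fine'         # allowed: subclass instances get a __dict__ again,
                                # so the memory benefit of __slots__ is already lost
try:
    SlottedOp().extra = 'boom'  # AttributeError: 'SlottedOp' object has no attribute 'extra'
except AttributeError as err:
    print(err)
```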
- All `Operations` can now load all inputs (#61). They get passed to `super().__init__()` by default. The input kwargs are also renamed to remove the `input_` prefix (sketch below).
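As a sketch of what that looks like for a user-defined subclass (class and kwarg names here are placeholders; the PR only states that all inputs are accepted, forwarded to `super().__init__()`, and no longer carry the `input_` prefix):

```python
# Sketch only: 'images' and 'masks' are assumed kwarg names (formerly input_-prefixed).
class MySegmenter(Segmenter):
    def __init__(self, images=None, masks=None, **kwargs):
        # All inputs are accepted and handed straight to the parent Operation.
        super().__init__(images=images, masks=masks, **kwargs)
```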
- `SlurmController` gets several small updates. It now records more information (including the partition) and cleans up files after it is finished running.
- `_metrics` is moved to `extract.py`. This makes it more visible to the user. Additionally, `bbox` and `centroid` have much more understandable names.
- `ConditionArray` squeezes arrays before running, but ensures that the array is at least 2D. If there is only one frame, then `np.expand_dims(axis=-1)` is used before returning (generic numpy illustration below).
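A generic numpy illustration of that squeeze-then-pad pattern (not the actual `ConditionArray` code; the axis meanings are assumptions):

```python
import numpy as np

def squeeze_to_at_least_2d(values: np.ndarray) -> np.ndarray:
    """Squeeze out singleton axes, but keep a trailing axis for a single frame."""
    out = np.squeeze(values)
    if out.ndim == 1:                       # e.g. only one frame left after squeezing
        out = np.expand_dims(out, axis=-1)  # restore a trailing (frame) axis
    return out

print(squeeze_to_at_least_2d(np.zeros((1, 5, 1))).shape)  # (5, 1)
```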
- Derived metrics, such as ratios of intensities in different channels/regions, are added (#46). The user defines the functions, placeholders are added during extract, and then the metrics are calculated. For now, only `numpy` functions can be used; otherwise, the `Pipeline` cannot be saved/loaded as a yaml file (sketch below).
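The registration API isn't spelled out here, so the method and argument names below are hypothetical; the real constraint from this PR is that the function must be a `numpy` function so the definition survives the `Pipeline` yaml round-trip.

```python
# Hypothetical sketch: 'ext' stands for an Extract operation instance, and
# add_derived_metric and its arguments are illustrative, not the actual API.
ext.add_derived_metric(
    'fret_ratio',
    func='divide',  # resolved to np.divide so it can be written to and read from yaml
    keys=[('gfp', 'nuc', 'median_intensity'),   # numerator (channel, region, metric)
          ('rfp', 'nuc', 'median_intensity')],  # denominator
)
```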
- Filters are also added (#73), and are used similarly to the derived metrics above. Extract metrics and whatnot are calculated, then `ConditionArray.generate_mask()` and `ConditionArray.filter_cells()` are used to remove cells defined by the filter. Currently this only supports functions in `utils.filter_utils` (sketch below).
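A minimal sketch of that two-step flow; `generate_mask()` and `filter_cells()` are named in this PR, but their argument lists here are assumptions, with `'outside'` standing in for a function from `utils.filter_utils`:

```python
# Sketch only: argument names and values are placeholders, and whether
# filter_cells() works in place or returns a new array is an assumption.
mask = cond_arr.generate_mask('outside', 'gfp', 'nuc', 'median_intensity')
filtered = cond_arr.filter_cells(mask)  # remove the cells selected by the mask
```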
- `ExperimentArray` can be indexed with multiple keys. The result is not a list, but a custom object that can be indexed in the same way that a `ConditionArray` is indexed (sketch below).
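For example (all keys here are placeholders; the PR only specifies that indexing with multiple keys returns a custom, `ConditionArray`-style indexable object rather than a list):

```python
# Sketch only: condition names and metric keys are placeholders.
pair = expt_arr['control', 'treated']            # multiple keys, not a list back
values = pair['gfp', 'nuc', 'median_intensity']  # then indexed like a ConditionArray
```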
- `custom_function` decorator is deleted.
- New functions are added to the `Operations`. Important ones include getting `skimage.level_set` in `Segmenter` and wavelet-based background subtraction in `Processor`.
- `Operations` (#54): warnings from the `warnings` package are now also handled by any logger from `utils.log_utils` (standard-library illustration below).
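For reference, this is the standard-library mechanism for routing warnings into logging (a generic illustration, not the `utils.log_utils` code itself):

```python
import logging
import warnings

logging.basicConfig(level=logging.WARNING)
logging.captureWarnings(True)  # warnings.warn() now goes to the 'py.warnings' logger
warnings.warn('this ends up in the log output instead of raw stderr')
```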
- Fixed `force_rerun` not being honored in all cases and `Operations` not saving all important info in dictionaries.
- `unet-predict` is in both `Processor` and `Segmenter` now.