Open msrepo opened 1 year ago
pandas has a weird API
```python
print(df.nlargest(1, ["DSC"], "first")["subject-id"].values[0])
print(df.nsmallest(1, ["DSC"], "first")["subject-id"].values[0])
print(df[df.DSC == df.median(numeric_only=True)["DSC"]]["subject-id"].values[0])
```
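One caveat worth noting (an observation, not from the issue): the median lookup relies on exact equality, but `df.median()` interpolates between the two middle values when the row count is even, so it can return no match. A minimal alternative sketch, assuming the same `df` with `DSC` and `subject-id` columns loaded from `evaluation/metrics.csv`:

```python
import pandas as pd

# Assumed layout: one row per subject with "subject-id" and "DSC" columns.
df = pd.read_csv("evaluation/metrics.csv")

# Take the row whose DSC is closest to the median instead of requiring an
# exact match, which can fail when the median is interpolated.
median_subject = df.loc[(df["DSC"] - df["DSC"].median()).abs().idxmin(), "subject-id"]
print(median_subject)
```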
Median Results
Pick random samples instead of the median, worst, and best. Currently the results are stored as follows; keep the same configuration, but we need a way to identify the random samples for tiling later.

`def save_montage(ANATOMY, subject_type):` will also need to be modified: either track the `subject-id`, or track the random samples via a consistent naming convention.
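A minimal sketch of one way to do this, assuming the metrics live in `evaluation/metrics.csv` with a `subject-id` column; the helper names and the filename pattern are hypothetical, not taken from the code:

```python
import pandas as pd

def pick_random_subjects(metrics_csv: str, n: int = 3, seed: int = 0) -> list:
    """Pick n subject-ids reproducibly, so the same ids can be tiled later."""
    df = pd.read_csv(metrics_csv)
    return df.sample(n=n, random_state=seed)["subject-id"].tolist()

def montage_filename(anatomy: str, subject_type: str, subject_id: str) -> str:
    """One possible naming convention that keeps the subject-id recoverable."""
    return f"{anatomy}_{subject_type}_{subject_id}.png"

random_ids = pick_random_subjects("evaluation/metrics.csv")
paths = [montage_filename("vertebra", "random", sid) for sid in random_ids]
```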
Useful comparative visualization
Median Results
The prediction resolution resampling and metadata issue (see #1) has reared its ugly head here. I must have changed the parameters when evaluating `attentionunet` results for vertebra.
Without resampling but with metadata copied from groundtruth, and without resampling and without metadata copied from groundtruth: both of these are wrong. Either:

1) set `resample=True` when you copy metadata from groundtruth (the preview happened correctly even though the data and metadata did not match in this case, because we reslice when previewing), or
2) set `resample=False` and set the correct metadata manually.
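For illustration, a sketch of the two options assuming SimpleITK-style images; the actual I/O library in the pipeline may differ, and the file paths are hypothetical:

```python
import SimpleITK as sitk

pred = sitk.ReadImage("prediction.nii.gz")    # hypothetical paths
gt = sitk.ReadImage("groundtruth.nii.gz")

# Option 1: resample the prediction onto the groundtruth grid; the output then
# carries the groundtruth origin/spacing/direction, so no manual metadata copy
# is needed afterwards.
pred_on_gt_grid = sitk.Resample(pred, gt, sitk.Transform(), sitk.sitkNearestNeighbor, 0)

# Option 2: keep the prediction at its own resolution and set metadata that
# actually matches that grid. The spacing below assumes the prediction covers
# the same physical field of view as the groundtruth.
spacing = [
    gt_sp * gt_sz / pred_sz
    for gt_sp, gt_sz, pred_sz in zip(gt.GetSpacing(), gt.GetSize(), pred.GetSize())
]
pred.SetOrigin(gt.GetOrigin())
pred.SetDirection(gt.GetDirection())
pred.SetSpacing(spacing)
```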
Fixed visualization issues: see #28 (before and after).
- `fury` and `xvfb`: try this instead of `vedo`, because of aesthetics
- `filter_runs_from_wandb` to find the `run-id` for each anatomy and method under consideration
- `evaluation/metrics.csv`: find the best, median, and worst case `sample-id`
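A sketch of the last two items, assuming the wandb public API and the metrics layout used above; the project path and config keys are placeholders, and `filter_runs_from_wandb` here is only a guess at what that helper might look like, not its actual implementation:

```python
import pandas as pd
import wandb

def filter_runs_from_wandb(project_path: str, anatomy: str, method: str):
    """Return run-ids for one anatomy/method pair (config keys are assumed)."""
    api = wandb.Api()
    runs = api.runs(project_path, filters={"config.anatomy": anatomy,
                                           "config.model_name": method})
    return [run.id for run in runs]

def best_median_worst(metrics_csv: str):
    """Pick the best, median, and worst sample-id by DSC from metrics.csv."""
    df = pd.read_csv(metrics_csv).sort_values("DSC").reset_index(drop=True)
    return {
        "worst": df.loc[0, "sample-id"],
        "median": df.loc[len(df) // 2, "sample-id"],
        "best": df.loc[len(df) - 1, "sample-id"],
    }
```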