Closed: bioimageiobot closed this 5 months ago
Hi hi,
I think the file is complaining about the attachments:
attachments:
  files: [per_sample_scale_range.ijm, scale_linear.ijm, "pix2pix-tom20-11122023_training_report.pdf"]
Should they be relative paths? I added the pdf, but both macros were given like this directly by bioimageio.core's build_model. @Tomaz-Vieira @FynnBe
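Since the complaint seems to be about the attachment paths, a quick sanity check before uploading is to verify that every entry in `attachments: files` is a relative path that actually resolves next to the `rdf.yaml`. This is a hypothetical stdlib helper (not part of bioimageio.core), sketched only to illustrate the check:

```python
from pathlib import Path

def check_attachment_paths(files, rdf_dir="."):
    """Hypothetical helper: flag attachment entries that are absolute
    paths or that do not exist relative to the RDF directory."""
    problems = []
    for entry in files:
        p = Path(entry)
        if p.is_absolute():
            problems.append(f"{entry}: absolute path; should be relative to the rdf.yaml")
        elif not (Path(rdf_dir) / p).is_file():
            problems.append(f"{entry}: not found next to the rdf.yaml")
    return problems

# Example: run from the directory that contains the rdf.yaml
print(check_attachment_paths(
    ["per_sample_scale_range.ijm", "scale_linear.ijm"], rdf_dir="."
))
```

An empty list means all attachments are relative and present; anything else is a candidate for the validation error above.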
Hmmm. Looks like there is either an old core version in use that doesn't have the updates for the new zenodo API or some updates are missing. I'll try to take a look asap
Well, it's correctly using the latest versions:
bioimageio.core 0.5.11 pyhd8ed1ab_0 conda-forge
bioimageio.spec 0.4.9.post5 pyhd8ed1ab_0 conda-forge
I used bioimageio.core==0.5.11
to build the model. Then, what's the way to go?
Hello @oeway, this is a big issue. Any idea or roadmap for fixing it? We really need this solved for several publications.
I found (and hopefully also fixed) the issue: the static validation step used bioimageio.spec 0.4.9.post1 (instead of 0.4.9.post5) because it was installed without any version pinning. Now post5 is installed correctly.
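A mismatch like 0.4.9.post1 vs 0.4.9.post5 is easy to miss, so a CI step can guard against it by comparing version tuples. A minimal stdlib sketch (the version strings come from the conda output above; a robust implementation would use `packaging.version` instead of this ad-hoc parser):

```python
def version_tuple(v):
    # Ad-hoc parser for versions like "0.4.9.post5":
    # "0.4.9.post5" -> (0, 4, 9, 5), "0.4.9" -> (0, 4, 9)
    return tuple(int(part.replace("post", "")) for part in v.split("."))

installed = "0.4.9.post1"  # what the unpinned install pulled in
required = "0.4.9.post5"   # what the model was built against

if version_tuple(installed) < version_tuple(required):
    print(f"bioimageio.spec {installed} is older than {required}; pin it explicitly")
```

Pinning the exact post-release in the environment spec (e.g. `bioimageio.spec==0.4.9.post5`) avoids the silent downgrade entirely.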
Now there is a different, much more interesting error:
terminate called after throwing an instance of 'c10::Error'
what(): isTuple()INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1640811723911/work/aten/src/ATen/core/ivalue_inl.h":1400, please report a bug to PyTorch. Expected Tuple but got String
Exception raised from toTuple at /opt/conda/conda-bld/pytorch_1640811723911/work/aten/src/ATen/core/ivalue_inl.h:1400 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7fb57a839d62 in /home/runner/micromamba/envs/10366412/lib/python3.9/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x5b (0x7fb57a83668b in /home/runner/micromamba/envs/10366412/lib/python3.9/site-packages/torch/lib/libc10.so)
frame #2: c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::string const&) + 0x3e (0x7fb57a836bbe in /home/runner/micromamba/envs/10366412/lib/python3.9/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x38cb7d7 (0x7fb57e4cb7d7 in /home/runner/micromamba/envs/10366412/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0x38cc875 (0x7fb57e4cc875 in /home/runner/micromamba/envs/10366412/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #5: torch::jit::SourceRange::highlight(std::ostream&) const + 0x36 (0x7fb57bb5b166 in /home/runner/micromamba/envs/10366412/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #6: torch::jit::ErrorReport::what() const + 0x2c5 (0x7fb57bb3e075 in /home/runner/micromamba/envs/10366412/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0x2780be (0x7fb5862780be in /home/runner/micromamba/envs/10366412/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0x286a35 (0x7fb586286a35 in /home/runner/micromamba/envs/10366412/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
/home/runner/work/_temp/0031de1c-2434-43dc-a117-1b0d6e921473.sh: line 1: 2300 Aborted (core dumped)
Looks like a CPU/GPU difference or maybe a PyTorch version mismatch.
The complete mamba env is
@esgomezm if the environment is as expected I'd try to run the model in your local env on CPU and see if you can reproduce the issue.
Hi Fynn, I did the following in my Mac (without GPU) and it worked:
conda create -n bioimageio-core python==3.9
conda activate bioimageio-core
conda install -c pytorch -c conda-forge bioimageio.core pytorch torchvision cpuonly
pip install torch==2.0.1  # I pinned the same version as in the notebook, since 2.1.0 was being installed by default.
bioimageio test-model "/Users/esti/Downloads/bioimageio-model.zip"
Output:
/Users/esti/mambaforge/envs/bioimageio-core/lib/python3.9/site-packages/bioimageio/spec/shared/_resolve_source.py:440: CacheWarning: found cached /var/folders/2m/yqllfkcd3gs5_3tz_4m5h85m0000gp/T/esti/bioimageio_cache/https/raw.githubusercontent.com/bioimage-io/bioimage.io/main/site.config.json. Skipping download of https://raw.githubusercontent.com/bioimage-io/bioimage.io/main/site.config.json.
warnings.warn(f"found cached {local_path}. Skipping download of {uri}.", category=CacheWarning)
/Users/esti/mambaforge/envs/bioimageio-core/lib/python3.9/site-packages/bioimageio/spec/shared/_resolve_source.py:440: CacheWarning: found cached /var/folders/2m/yqllfkcd3gs5_3tz_4m5h85m0000gp/T/esti/bioimageio_cache/https/bioimage-io.github.io/collection-bioimage-io/collection.json. Skipping download of https://bioimage-io.github.io/collection-bioimage-io/collection.json.
warnings.warn(f"found cached {local_path}. Skipping download of {uri}.", category=CacheWarning)
bioimageio.spec 0.4.9post5
implementing:
collection RDF 0.2.3
general RDF 0.2.3
model RDF 0.4.9
bioimageio.core 0.5.11
computing dataset statistics: 0it [00:00, ?it/s]
testing model /Users/esti/Downloads/bioimageio-model.zip...
load resource description: passed
has expected resource type: passed
All URLs and paths available: passed
Test documentation completeness.: passed
reproduce test outputs from test inputs (bioimageio.core 0.5.11): passed
✔️ Model /Users/esti/Downloads/bioimageio-model.zip passed.
(bioimageio-core) esti@estimacbookair ~ %
I tried with torch==2.1.0 and it also worked.
Now a different issue: why is the output of bioimageio an 8-bit image? When I run the same torchscript model in Fiji everything works well, but with bioimageio something odd seems to be going on inside. I tried running the model with the preprocessing and without it (giving a properly normalised image), and the issue persists.
For an easy check, you can get the zip file in the zenodo repo
> Now a different issue is why the output of bioimageio is an 8-bit image. When I run the model in Fiji, the same torchscript model, everything works well, but with bioimageio it feels that there is something funny going on inside. I tried running the model with the preprocessing and without it (giving a properly normalised image), and the issue persists.
Sorted, my bad! It was the data range and data type in the output. I will update the model in a sec.
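For anyone hitting the same symptom: if the output tensor in the model RDF declares an integer data type, float predictions can end up clipped and truncated by the consumer, which makes the result look like a flat 8-bit image. A small plain-Python illustration of the effect (not the actual bioimageio code path; the function and its behaviour are a simplified assumption):

```python
def cast_like_rdf(values, data_type, data_range=(0, 255)):
    """Mimic what a consumer may do when the RDF declares an integer
    output: clip to the declared data range, then truncate to integers."""
    lo, hi = data_range
    clipped = [min(max(v, lo), hi) for v in values]
    if data_type == "uint8":
        return [int(v) for v in clipped]
    return clipped

preds = [0.12, 0.56, 0.99]              # typical normalised float model output
print(cast_like_rdf(preds, "uint8"))    # sub-integer detail collapses to 0
print(cast_like_rdf(preds, "float32"))  # declaring float32 keeps the values
```

Declaring the output as float32 with the matching data range in the RDF avoids the lossy cast.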
> conda install -c pytorch -c conda-forge bioimageio.core pytorch torchvision cpuonly
> pip install torch==2.0.1 # I included the same version in the notebook as it was installing the 2.1.0 by default.
I suppose you're still sorting things out, but I'm confused why you installed pytorch with conda and then again with pip. That should be avoided in the final conda env, preferably by using only a conda installation of pytorch.
Conda did not let me install torch 2.0.1, but in any case you can skip that step; I tested with the default version (2.1.0), without pip, and it also works. Yet this still fails in the CI: https://github.com/bioimage-io/collection-bioimage-io/pull/700
This is an automatic PR created by the @bioimageiobot regarding changes to the resource item 10.5281/zenodo.10366411. The following version(s) will be added:
Please review the changes and make sure the new item or version(s) pass the following check list:
Maintainers: @esgomezm
Note: If you updated or re-uploaded another version for the current item on Zenodo, this PR won't be changed automatically. To proceed, you can do the following:
… set the status field as accepted, but change the status under the current version to blocked.
Keep proposed version(s) (and this resource in general if it is new) as pending: close this PR without merging.
Then wait for the CI on the main branch to complete. It should detect the new version(s) and create another PR for the new version(s).
Previous PRs of this resource: none