TuragaLab / DECODE

This is the official implementation of our publication "Deep learning enables fast and dense single-molecule localization with high accuracy" (Nature Methods)
GNU General Public License v3.0

scipy version 1.9.1 ValueError while training #178

Closed JohannaRahm closed 1 year ago

JohannaRahm commented 2 years ago

Hi,

Training throws a ValueError inside the scipy library (version 1.9.1). The process runs through with scipy version 1.8.0, so there may be a breaking change in the newer scipy release.

Error message:

[screenshot decode_scipy_error: traceback of the ValueError raised inside scipy during training]

HypnosWei commented 2 years ago

I get the same error. I tried to uninstall scipy 1.9.0 and install scipy 1.8.0, but then the decode package just disappears...

JohannaRahm commented 2 years ago

A workaround is to create a new environment:

conda create -n decode_env2 -c turagalab -c pytorch -c conda-forge decode=0.10.0 cudatoolkit=11.0 tifffile=2022.4.8 scipy=1.8.0 jupyterlab ipykernel

The tifffile version is pinned because of issue #177.
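
Once the new environment is created, the pins can be sanity-checked from Python before starting a training run (an illustrative check only, not part of the workaround itself):

import scipy
import tifffile

# The reported versions should match the pins from the conda command above.
print(scipy.__version__)     # expected: 1.8.0
print(tifffile.__version__)  # expected: 2022.4.8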

Haydnspass commented 2 years ago

Thanks for pointing this out, I will pin it in a patch release.

Haydnspass commented 1 year ago

@HypnosWei @JohannaRahm if you want, you can install the patched version with:

conda create -n decode_v0_10_1rc1 -c turagalab/label/rc -c turagalab/label/dev -c pytorch -c conda-forge decode=0.10.1 cudatoolkit=11.3 jupyterlab ipykernel

For me everything works now; happy to get your feedback :)
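
A quick way to double-check that the release candidate is the build actually being imported (this assumes decode exposes a __version__ attribute, which I have not verified; otherwise conda list decode shows the installed version):

import decode

# Hypothetical check: should report 0.10.1 if the patched build from the
# turagalab rc/dev channels is active in this environment.
print(decode.__version__)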

JohannaRahm commented 1 year ago

Great! This version runs through without errors (the intro, train, fit, and evaluation notebooks as well as the training command via the CLI were tested).

A minor thing: compared to the earlier version, a lot of warnings appear while training through the CLI. They consume quite a lot of output space, and the progress bar gets a bit lost in them. I'll post the warnings here in case you want to take care of them in future updates (see the notes after the log).

(decode_v0_10_1rc1) jrahm@DeepL:~/DECODE$ python -m decode.neuralfitter.train.live_engine -p notebook_example_edit.yaml
Model instantiated.
Model initialised as specified in the constructor.
/home/jrahm/miniconda3/envs/decode_v0_10_1rc1/lib/python3.8/site-packages/decode/neuralfitter/models/unet_param.py:139: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  shape_diff = tuple((ish - csh) // 2
  [the warning above is printed 16 times in total]
Sampled dataset in 0.30s. 249926 emitters on 10001 frames.
Sampled dataset in 0.02s. 12891 emitters on 513 frames.
/home/jrahm/miniconda3/envs/decode_v0_10_1rc1/lib/python3.8/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /opt/conda/conda-bld/pytorch_1656352465323/work/aten/src/ATen/native/TensorShape.cpp:2894.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
  0%|                                                                                                  | 0/16 [00:00<?, ?it/s]/home/jrahm/miniconda3/envs/decode_v0_10_1rc1/lib/python3.8/site-packages/decode/neuralfitter/utils/dataloader_customs.py:25: UserWarning: An output with one or more elements was resized since it had shape [153600], which does not match the required output shape [32, 3, 40, 40]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  /opt/conda/conda-bld/pytorch_1656352465323/work/aten/src/ATen/native/Resize.cpp:17.)
  return torch.stack(batch, 0, out=out)
  [the warning above is printed 4 times in total]
/home/jrahm/miniconda3/envs/decode_v0_10_1rc1/lib/python3.8/site-packages/decode/neuralfitter/utils/dataloader_customs.py:25: UserWarning: An output with one or more elements was resized since it had shape [32000], which does not match the required output shape [32, 250, 4]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  /opt/conda/conda-bld/pytorch_1656352465323/work/aten/src/ATen/native/Resize.cpp:17.)
  return torch.stack(batch, 0, out=out)
  [the warning above is printed 4 times in total]
/home/jrahm/miniconda3/envs/decode_v0_10_1rc1/lib/python3.8/site-packages/decode/neuralfitter/utils/dataloader_customs.py:25: UserWarning: An output with one or more elements was resized since it had shape [8000], which does not match the required output shape [32, 250]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  /opt/conda/conda-bld/pytorch_1656352465323/work/aten/src/ATen/native/Resize.cpp:17.)
  return torch.stack(batch, 0, out=out)
  [the warning above is printed 4 times in total]
/home/jrahm/miniconda3/envs/decode_v0_10_1rc1/lib/python3.8/site-packages/decode/neuralfitter/utils/dataloader_customs.py:25: UserWarning: An output with one or more elements was resized since it had shape [51200], which does not match the required output shape [32, 40, 40]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  /opt/conda/conda-bld/pytorch_1656352465323/work/aten/src/ATen/native/Resize.cpp:17.)
  return torch.stack(batch, 0, out=out)
  [the warning above is printed 4 times in total]
(Test) E: 0 - T: 0.24:  38%|█████████████████████████▏                                         | 6/16 [00:00<00:00, 24.97it/s]/home/jrahm/miniconda3/envs/decode_v0_10_1rc1/lib/python3.8/site-packages/decode/neuralfitter/utils/dataloader_customs.py:25: UserWarning: An output with one or more elements was resized since it had shape [148800], which does not match the required output shape [31, 3, 40, 40]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  /opt/conda/conda-bld/pytorch_1656352465323/work/aten/src/ATen/native/Resize.cpp:17.)
  return torch.stack(batch, 0, out=out)
/home/jrahm/miniconda3/envs/decode_v0_10_1rc1/lib/python3.8/site-packages/decode/neuralfitter/utils/dataloader_customs.py:25: UserWarning: An output with one or more elements was resized since it had shape [31000], which does not match the required output shape [31, 250, 4]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  /opt/conda/conda-bld/pytorch_1656352465323/work/aten/src/ATen/native/Resize.cpp:17.)
  return torch.stack(batch, 0, out=out)
/home/jrahm/miniconda3/envs/decode_v0_10_1rc1/lib/python3.8/site-packages/decode/neuralfitter/utils/dataloader_customs.py:25: UserWarning: An output with one or more elements was resized since it had shape [7750], which does not match the required output shape [31, 250]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  /opt/conda/conda-bld/pytorch_1656352465323/work/aten/src/ATen/native/Resize.cpp:17.)
  return torch.stack(batch, 0, out=out)
/home/jrahm/miniconda3/envs/decode_v0_10_1rc1/lib/python3.8/site-packages/decode/neuralfitter/utils/dataloader_customs.py:25: UserWarning: An output with one or more elements was resized since it had shape [49600], which does not match the required output shape [31, 40, 40]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  /opt/conda/conda-bld/pytorch_1656352465323/work/aten/src/ATen/native/Resize.cpp:17.)
  return torch.stack(batch, 0, out=out)
(Test) E: 0 - T: 0.45: 100%|██████████████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 33.53it/s]
Saved model to file: 2022-10-25_13-31-39_DeepL/model_0.pt
Sampled dataset in 0.28s. 248686 emitters on 10001 frames.
Training finished after reaching maximum number of epochs.
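
For reference, the repeated warnings above are PyTorch deprecation notices rather than DECODE errors, and each message suggests its own replacement. A minimal sketch of what those replacements look like, with made-up shapes instead of the actual values used in unet_param.py:

import torch

# Illustrative shapes only; in decode/neuralfitter/models/unet_param.py the
# values come from intermediate network tensors.
ish = torch.tensor([48, 48])  # "input" spatial shape
csh = torch.tensor([40, 40])  # "crop" spatial shape

# Deprecated form quoted in the warning: tuple((ish - csh) // 2 ...)
# Replacement suggested by the warning itself; floor and trunc rounding only
# differ for negative values, and the shape difference here is non-negative.
shape_diff = torch.div(ish - csh, 2, rounding_mode='floor')
print(shape_diff)  # tensor([4, 4])

# The torch.meshgrid warning disappears once the indexing argument is passed
# explicitly (indexing='ij' reproduces the current default behaviour).
yy, xx = torch.meshgrid(torch.arange(4), torch.arange(4), indexing='ij')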
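
Until the warnings are addressed upstream, the clutter around the progress bar can be reduced by filtering UserWarnings. A minimal sketch for runs driven from a script or notebook; for the CLI call shown above, setting the PYTHONWARNINGS environment variable to ignore::UserWarning before launching has the same effect:

import warnings

# Hide the repeated PyTorch deprecation/resize UserWarnings for this process.
# This silences all UserWarnings, so use it for readability only, not debugging.
warnings.filterwarnings("ignore", category=UserWarning)
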
Haydnspass commented 1 year ago

It was fixed by release 0.10.1 from yesterday.