bozeklab / amap

GNU General Public License v3.0

Hello, Can you upload a trained model? #1

Closed wendy127green closed 8 months ago

wendy127green commented 8 months ago

Hello, I wonder if you can upload or send me the model (checkpoint) you used in the paper. Thanks!

platonic-realm commented 8 months ago

I would like to redirect you to AMAP-APP, a cross-platform desktop application based on this research. There you can find the model's checkpoint as well. Please note that AMAP-APP uses a different algorithm for instance segmentation, but the underlying model is the same.

wendy127green commented 7 months ago

> I would like to redirect you to AMAP-APP, a cross-platform desktop application based on this research. There you can find the model's checkpoint as well. Please note that AMAP-APP uses a different algorithm for instance segmentation, but the underlying model is the same.

Sorry, but when I run AMAP-APP, I get this error:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "E:\anaconda3\envs\amap\lib\threading.py", line 980, in _bootstrap_inner
    self.run()
  File "E:\anaconda3\envs\amap\lib\threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "E:\AI\lxj\amap-app-main\amap-app-main\src\ui\main_window.py", line 331, in start_project_segmentation
    self.engine.exec()
  File "E:\AI\lxj\amap-app-main\amap-app-main\src\engine.py", line 115, in exec
    self.inference_procedure()
  File "E:\AI\lxj\amap-app-main\amap-app-main\src\engine.py", line 180, in inference_procedure
    for batch_i, batch in enumerate(loader):
  File "E:\AI\lxj\amap-app-main\amap-app-main\venv\lib\site-packages\torch\utils\data\dataloader.py", line 630, in __next__
    data = self._next_data()
  File "E:\AI\lxj\amap-app-main\amap-app-main\venv\lib\site-packages\torch\utils\data\dataloader.py", line 674, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "E:\AI\lxj\amap-app-main\amap-app-main\venv\lib\site-packages\torch\utils\data\_utils\fetch.py", line 54, in fetch
    return self.collate_fn(data)
  File "E:\AI\lxj\amap-app-main\amap-app-main\venv\lib\site-packages\torch\utils\data\_utils\collate.py", line 265, in default_collate
    return collate(batch, collate_fn_map=default_collate_fn_map)
  File "E:\AI\lxj\amap-app-main\amap-app-main\venv\lib\site-packages\torch\utils\data\_utils\collate.py", line 127, in collate
    return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
  File "E:\AI\lxj\amap-app-main\amap-app-main\venv\lib\site-packages\torch\utils\data\_utils\collate.py", line 127, in <dictcomp>
    return elem_type({key: collate([d[key] for d in batch], collate_fn_map=collate_fn_map) for key in elem})
  File "E:\AI\lxj\amap-app-main\amap-app-main\venv\lib\site-packages\torch\utils\data\_utils\collate.py", line 119, in collate
    return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
  File "E:\AI\lxj\amap-app-main\amap-app-main\venv\lib\site-packages\torch\utils\data\_utils\collate.py", line 162, in collate_tensor_fn
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [1, 384, 384] at entry 0 and [1, 384, 309] at entry 4
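For context, a minimal reproduction, not taken from AMAP-APP's code: PyTorch's default collate function batches samples with torch.stack, which fails exactly like this when the patches in a batch differ in size. The dataset below is hypothetical; only the tensor sizes mirror the ones in the traceback.

```python
import torch
from torch.utils.data import DataLoader, Dataset

# Hypothetical dataset (not AMAP-APP code): each sample is a dict holding one
# image patch, which is what the default collate function tries to stack.
class PatchDataset(Dataset):
    def __init__(self, sizes):
        self.sizes = sizes

    def __len__(self):
        return len(self.sizes)

    def __getitem__(self, idx):
        h, w = self.sizes[idx]
        return {"image": torch.zeros(1, h, w)}

# Entry 4 is 384x309 instead of 384x384, mirroring the sizes in the traceback.
sizes = [(384, 384)] * 4 + [(384, 309)]
loader = DataLoader(PatchDataset(sizes), batch_size=5)

try:
    next(iter(loader))
except RuntimeError as err:
    # "stack expects each tensor to be equal size, but got [1, 384, 384] ..."
    print(err)
```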

platonic-realm commented 7 months ago

Thanks for reporting the error. Can you share the input samples so I can debug the app using them?

wendy127green commented 7 months ago

> Thanks for reporting the error. Can you share the input samples so I can debug the app using them?

23239_0004.zip

Thanks

platonic-realm commented 7 months ago

I checked the file, and the error arose because the resolution of the input image is not compatible with the patching algorithm. AMAP requires each image dimension, after subtracting 384, to be divisible by 128; a 1024x1024 image, for example, meets this criterion. I will add an error message to the app for such cases to explain the requirement.
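A minimal sketch of that size check, assuming 384 is the patch size and 128 the stride implied by the description above (the helper names are hypothetical, not AMAP-APP's API):

```python
# Hypothetical helpers illustrating the stated constraint: an image dimension
# is usable when (dim - 384) is non-negative and divisible by 128.
PATCH_SIZE = 384
STRIDE = 128

def is_valid_dim(dim: int) -> bool:
    return dim >= PATCH_SIZE and (dim - PATCH_SIZE) % STRIDE == 0

def next_valid_dim(dim: int) -> int:
    """Smallest dimension >= dim that satisfies the constraint (pad up to it)."""
    if dim <= PATCH_SIZE:
        return PATCH_SIZE
    overshoot = (dim - PATCH_SIZE) % STRIDE
    return dim if overshoot == 0 else dim + (STRIDE - overshoot)

print(is_valid_dim(1024))    # True: 1024 - 384 = 640 = 5 * 128
print(next_valid_dim(1000))  # 1024: the nearest valid size above 1000
```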

Additionally, I've realized that the images you are attempting to process are not the kind AMAP is designed for. Please refer to the paper below for more information.

https://www.kidney-international.org/article/S0085-2538(23)00180-1/fulltext

wendy127green commented 7 months ago

> I checked the file, and the error arose because the resolution of the input image is not compatible with the patching algorithm. AMAP requires each image dimension, after subtracting 384, to be divisible by 128; a 1024x1024 image, for example, meets this criterion. I will add an error message to the app for such cases to explain the requirement.
>
> Additionally, I've realized that the images you are attempting to process are not the kind AMAP is designed for. Please refer to the paper below for more information.
>
> https://www.kidney-international.org/article/S0085-2538(23)00180-1/fulltext

Sorry, but I still get the same error even after resizing the input image to 1024x1024.