Technically, this application did not have to be split into a client (a regular Slicer extension) and a server (which does the inference). I assume the split was done to remove the long GUI freezes that would result from waiting for image embeddings to be computed. Is this correct?
Thanks for asking. Actually, GUI freezes can be eliminated using the run_on_background function. There were several reasons behind our decision:
The module loads much faster with a smaller code base.
We are planning to add GPU support, and if your computer does not have a suitable GPU, you would still be able to run the server on a compatible workstation.
Slicer does not support virtual environments, and we have version clashes with TensorFlow, etc. We want to leave room to dockerize the TensorFlow-dependent part.
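To illustrate the architecture being discussed, here is a minimal sketch of such a client/server split: the extension (client) posts an image to a separate inference process over HTTP, keeping the heavy embedding computation out of the GUI process. The endpoint name, port, and payload format are purely illustrative assumptions, not the extension's actual API, and the embedding here is a trivial stand-in for the real model.

```python
# Hypothetical sketch of the client/server split. The server below stands in
# for the TensorFlow-dependent process (which could run in Docker or on a GPU
# workstation); the client function is what the Slicer extension would call,
# ideally from a background thread so the GUI stays responsive.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen


class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        # Stand-in for the real embedding computation (e.g. a TensorFlow model).
        embedding = [sum(row) for row in payload["image"]]
        body = json.dumps({"embedding": embedding}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass


def compute_embedding(image, url="http://127.0.0.1:8765/embed"):
    """Client side: send the image to the inference server and return the
    embedding. In a real extension this would run off the GUI thread."""
    req = Request(
        url,
        data=json.dumps({"image": image}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]


if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 8765), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(compute_embedding([[1, 2], [3, 4]]))  # → [3, 7]
    server.shutdown()
```

Because the client talks to the server only through HTTP, the server's dependency stack (TensorFlow and friends) never has to be importable inside Slicer's Python environment, which is the version-clash point made above.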