Closed Vathsan closed 1 year ago
Hello, have you tried running the code in the Conda environment? If not, you can create it with `conda env create -f environment.yml`. The YAML file is provided in the repo; it installs Python 3.8.5 along with all the necessary dependencies. Please let me know if you encounter any issues with the Conda environment.
Hey @gml16, thanks for the response. I am using a pip environment. I was able to train and evaluate the models after tweaking some of the library versions (protobuf and opencv-python).
My goal is to predict a set of landmarks in echocardiography images (ultrasound of the heart). I can see that you used 72 3D fetal head ultrasound images to train the model. Do you have any suggestions on the number of images I should be using? My primary goal is to identify only 3 landmarks. I would also appreciate any tips on how I can reduce the training time. Thanks!
Glad to hear the training is now working successfully. As to the number of images, the more the better. If the images follow a similar distribution you won't need as many as if, for example, the patients suffer from different heart conditions or different kinds of scanners captured the images. It's not an exact science. You can retrain with different subset sizes of your dataset and extrapolate the accuracy improvements from more data. Hope that helps :)
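To illustrate the subset-size idea above, here is a minimal sketch of a learning-curve experiment. The `train_and_evaluate` function is a hypothetical stand-in for your real training run; only the looping-over-fractions pattern is the point:

```python
import random

def train_and_evaluate(train_subset):
    # Placeholder for the real training + evaluation run. Here we just
    # simulate accuracy that improves with more data, with diminishing
    # returns, so the sketch is runnable on its own.
    n = len(train_subset)
    return 1.0 - 1.0 / (1.0 + 0.1 * n)

dataset = list(range(72))  # e.g. 72 training volumes, as in the repo
random.seed(0)
random.shuffle(dataset)

# Retrain on growing subsets and record (subset size, accuracy) pairs.
learning_curve = []
for fraction in (0.25, 0.5, 0.75, 1.0):
    subset = dataset[: int(len(dataset) * fraction)]
    learning_curve.append((len(subset), train_and_evaluate(subset)))

for size, acc in learning_curve:
    print(f"{size:3d} images -> accuracy {acc:.3f}")
```

If the curve is still rising steeply at your full dataset size, collecting more images is likely to pay off; if it has flattened, extra data will help less.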
As for reducing training time, you can use a faster CPU or GPU. If you need bigger improvements, you could implement a multithreaded version of stepping the environment to collect data faster. I believe this is the bottleneck at the moment, but you could confirm by profiling the code. Pull requests are more than welcome.
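A rough sketch of that idea, assuming a hypothetical `step_env` function standing in for one environment step (it is not the repo's actual API). A thread pool keeps the sketch simple; for CPU-bound Python code a process pool would be needed to get real speedups past the GIL:

```python
from multiprocessing.pool import ThreadPool

def step_env(env_id):
    # Stand-in for one environment step; the real step would apply the
    # agent's action inside the medical-image environment.
    total = 0
    for i in range(100_000):
        total += i * env_id
    return total

def collect_transitions(num_envs):
    # Step several environments concurrently instead of sequentially.
    with ThreadPool(processes=4) as pool:
        return pool.map(step_env, range(num_envs))

results = collect_transitions(8)
print(f"collected {len(results)} transitions")
```

Before parallelizing, it is worth confirming the bottleneck with the standard-library profiler, e.g. `python -m cProfile -s cumulative train.py`, and checking whether environment stepping really dominates the cumulative time.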
Thank you very much for the details. This really helps. I am looking forward to seeing how this works for the echo data.
I am trying to train a model. Due to a version mismatch, I ran into some import errors. I am using Python 3.8.2. Is there a specific Python version that I should be using?