-
It seems the code does not compute a validation loss, and the training loss and validation Dice score are not averaged over the entire epoch. Is that right?
Code:
# 5. Begin training
…
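A minimal sketch (hypothetical names, plain Python) of how per-batch loss and Dice values could be accumulated and reported as epoch averages rather than raw batch values:

```python
# Sketch with hypothetical names: step_fn runs one batch and returns
# (loss, dice) as floats; we accumulate and average once per epoch.
def run_epoch(batches, step_fn):
    total_loss, total_dice, n = 0.0, 0.0, 0
    for batch in batches:
        loss, dice = step_fn(batch)
        total_loss += loss
        total_dice += dice
        n += 1
    # Report epoch-level averages instead of the last batch's values.
    return total_loss / n, total_dice / n
```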
-
Thanks for making this available!
Training the pyro version seems considerably slower (maybe 10x) than training the original tf version from AMLab - any idea why this might be? I am running both o…
-
I want to use npy2ckpt.py to convert my own ResNet-50 pre-trained model.
The layer names in my pre-trained ResNet-50 model are:
bn4c_branch2c
bn5b_branch2b
res3d_branch2b
res2b_branch2b
…
y-kl8 updated
6 years ago
-
Hi Tim,
I just installed cryoCARE on our HPC following the installation procedure "For CUDA 10" and did not encounter any errors during the installation.
However, I got the following message when I t…
-
Cool project. I have been looking for this for months.
I tried to train (retrain.sh) on 6000 images with 20 labels. During training, I found that
1. the project is not using much of the CPU…
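If the bottleneck is data loading, one common remedy is to parallelize batch loading on the CPU. A sketch, assuming a PyTorch-style input pipeline (the dataset below is a toy placeholder, not the project's actual data):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy placeholder dataset; the relevant knob is num_workers, which sets
# how many CPU worker processes load and augment batches in parallel.
dataset = TensorDataset(torch.zeros(100, 3, 32, 32), torch.zeros(100))
loader = DataLoader(dataset, batch_size=10, num_workers=2, pin_memory=True)
```

Raising `num_workers` tends to help when the GPU sits idle waiting for batches; it does not help if the training step itself is the bottleneck.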
-
Training is slow, and CPU usage is very high (about 800%). Is there any way to fix this?
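Usage around 800% usually means the math library is spinning up roughly eight OpenMP threads. A sketch of one common mitigation, assuming a PyTorch CPU workload:

```python
import torch

# Cap PyTorch's intra-op thread pool so CPU-bound ops do not grab one
# OpenMP thread per core. Setting the OMP_NUM_THREADS / MKL_NUM_THREADS
# environment variables before launch has a similar effect.
torch.set_num_threads(2)
```

Fewer threads lowers the CPU usage reading; whether it also speeds up training depends on whether the threads were contending with the data-loading workers.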
eeric updated
3 years ago
-
Hello, I noticed that in the slide-level fine-tuning implementation, the `CrossEntropyLoss` loss function and the `Softmax` activation function are used for binary classification. This seems to be diff…
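For context, PyTorch's `nn.CrossEntropyLoss` expects raw logits and applies log-softmax internally, so placing an explicit `Softmax` before it effectively applies softmax twice and distorts the loss. A sketch contrasting the two, plus the usual single-logit binary alternative (values are arbitrary):

```python
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, -1.0]])  # arbitrary 2-class logits
target = torch.tensor([0])

ce = nn.CrossEntropyLoss()
loss_from_logits = ce(logits, target)                       # intended usage
loss_from_probs = ce(torch.softmax(logits, dim=1), target)  # softmax applied twice

# For binary classification, a single-logit head paired with
# BCEWithLogitsLoss is the common alternative formulation.
bce = nn.BCEWithLogitsLoss()
loss_binary = bce(logits[:, :1], torch.ones(1, 1))
```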
-
### 🚀 The feature, motivation and pitch
1. NotImplementedError: Could not run 'aten::_to_copy' with arguments from the 'NestedTensorXPU' backend
cases:
test_transformers.py::TestTransformersXPU::te…
-
### Description
Here is my use case:
I have 4 GPU nodes on AWS for training (including computing tensors).
I want to save the pre-computed tensors to Deep Lake (Dataset/database/vectorstore), aiming to …
-
@pangyupo Thank you for the open-source code. How can I get the training code? I want to optimize face detection to run in real time on the CPU.