SowjanyaKrishna opened this issue 1 year ago
You can crop the images of that dataset. To lose as little information as possible, it is best to cut 256 pixels from the top and 128 pixels from each side (left and right), which gives you the 256x256 input the network expects. I think that is the easiest way to train your model without much extra work.
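For reference, a minimal sketch of that crop in Python/NumPy, assuming the `.npy` files hold plain 512x512 arrays (the file name below is only illustrative, not from the repo):

```python
import numpy as np

def crop_to_256(img):
    """Crop a 512x512 array to 256x256: drop 256 rows from the top
    and 128 columns from each side, keeping the bottom-center region."""
    return img[256:, 128:-128]

# Illustrative usage with a file from npy_train_test_globales.tar.gz:
# data = np.load("scan_0001.npy")      # shape (512, 512), assumed
# cropped = crop_to_256(data)          # shape (256, 256)
```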
Hello, I'm currently researching ways to use lidar to detect legs. I am attempting to train the neural network by porting train_neural_network.py (written in TensorFlow) to PyTorch. I found that the UNET network defined in train_neural_network.py requires a 256x256 input image size, but the images in the dataset [npy_train_test_globales.tar.gz](http://robotica.unileon.es/~datasets/LegTracking/PeTra_training_dataset/npy_train_test_globales.tar.gz) are 512x512. What would be the best way to train your UNET model once it is rewritten in PyTorch?