mahfuj9346449 opened 7 years ago
As we are looking at features that are useful for fine-grained visual recognition, scaling the images is likely to improve classification results.
The resolution of the images in train.h5 is 488, while the placeholder input is 448. There seems to be an error in the create_h5 file.
@thkinglee It's not an error. I keep the resolution at 488 in create_h5 because at training time I randomly crop a 448x448 patch from the original 488x488 image. This is data augmentation: it improves generalisation and effectively expands the dataset.
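Roughly, the augmentation works like the sketch below. This is only illustrative: the numpy-based `random_crop` helper is an assumption for clarity, not the repository's actual cropping code.

```python
import numpy as np

def random_crop(image, crop_size=448):
    """Randomly crop a crop_size x crop_size patch from an HxWxC image."""
    h, w = image.shape[:2]
    top = np.random.randint(0, h - crop_size + 1)    # random vertical offset
    left = np.random.randint(0, w - crop_size + 1)   # random horizontal offset
    return image[top:top + crop_size, left:left + crop_size, :]

# Example: a 488x488 RGB image stored in the .h5 file yields a 448x448 training crop.
img = np.zeros((488, 488, 3), dtype=np.uint8)
crop = random_crop(img)   # shape (448, 448, 3), different location each call
```

Because the 40-pixel margin gives many possible crop positions, each epoch sees slightly shifted versions of the same image.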
Hello, I still have a problem. After running the second part (training the whole model), training finishes, but the final model does not seem to be saved anywhere in the code. Why is the trained model not saved? Can you give me some details?
In the first step, I set the stopping epoch at x and obtained last_layers_epoch_x.npz. That .npz file is then loaded in the second step to fine-tune the whole model, so the trained weights are not lost. I don't know whether this solves your problem.
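A minimal sketch of that save/load round trip, assuming the weights are held as a dict of numpy arrays; `save_weights` and `load_weights` are hypothetical helpers for illustration, not the repository's API, though the file name matches the comment above.

```python
import numpy as np

def save_weights(weights, path="last_layers_epoch_x.npz"):
    """Save a dict of layer-name -> numpy array to a .npz archive."""
    np.savez(path, **weights)

def load_weights(path="last_layers_epoch_x.npz"):
    """Load the archive back into a plain dict for fine-tuning."""
    with np.load(path) as data:
        return {name: data[name] for name in data.files}

# Example round trip: save after the first stage, reload before fine-tuning.
w = {"fc_W": np.random.randn(512, 200), "fc_b": np.zeros(200)}
save_weights(w)
restored = load_weights()   # same arrays, keyed by layer name
```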
In your code:
build_hdf5_image_dataset(new_train, image_shape=(488, 488), mode='file', output_path='new_train_488.h5', categorical_labels=True, normalize=False)
Can you please explain why 488, 488 and 448, 448?