Open — jinxsfe opened this issue 2 months ago
@esgomezm @paxcalpt
I kept the original structure of your notebook, but it raises the same error (the traceback below). When I change back to `if save_best_ckpt_only: ckpt_name = ckpt_dir + '/' + model_name + '.hdf5' else: ckpt_name = ckpt_dir + '/' + model_name + 'epoch{epoch:02d}val_loss{val_loss:.4f}.hdf5'` and turn those options off, it shows the error; if I change the extension to `.keras`, it also shows the same problem.

Sorry, I forgot to include the `.hdf5` extension earlier, but the error still occurs.
Hi, I adjusted the UNET structure to a depth of 4 instead of 3, and training now fails with an error about the weights file name:
```
ValueError                                Traceback (most recent call last)
in <cell line: 18>()
     16 start = time.time()
     17 # Start Training
---> 18 model.train(epochs=number_of_epochs,
     19             batch_size=batch_size,
     20             train_generator=train_generator,

1 frames
/usr/local/lib/python3.10/dist-packages/keras/src/callbacks/model_checkpoint.py in __init__(self, filepath, monitor, verbose, save_best_only, save_weights_only, mode, save_freq, initial_value_threshold)
    181         if save_weights_only:
    182             if not self.filepath.endswith(".weights.h5"):
--> 183                 raise ValueError(
    184                     "When using save_weights_only=True in ModelCheckpoint"
    185                     ", the filepath provided must end in .weights.h5 "

ValueError: When using save_weights_only=True in ModelCheckpoint, the filepath provided must end in .weights.h5 (Keras weights format). Received: filepath=/content/gdrive/MyDrive/cryo/cryo-data_processing_volume/model/gaussian4layer_320_50_0.00014/ckpt/gaussian4layer_320_50_0.00014.hdf5
```
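For context, this check runs in `ModelCheckpoint`'s constructor in Keras 3: the filepath extension must match the save mode. A minimal sketch (paths are made up for illustration; the `.keras` rule for full-model checkpoints is my understanding of the Keras 3 API, not something shown in the traceback above):

```python
from keras.callbacks import ModelCheckpoint

# Fails at construction time, exactly as in the traceback:
# save_weights_only=True demands a ".weights.h5" suffix.
# ModelCheckpoint("ckpt/model.hdf5", save_weights_only=True)  # ValueError

# Accepted: weights-only checkpoint with the required suffix.
cb_weights = ModelCheckpoint("ckpt/model.weights.h5", save_weights_only=True)

# Accepted: full-model checkpoint, where the suffix must be ".keras" instead.
cb_model = ModelCheckpoint("ckpt/model.keras", save_weights_only=False)
```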
I tried to change

```python
if save_best_ckpt_only:
    ckpt_name = ckpt_dir + '/' + model_name + '.hdf5'
else:
    ckpt_name = ckpt_dir + '/' + model_name + 'epoch{epoch:02d}_valloss{val_loss:.4f}.hdf5'
```

to

```python
if save_best_ckpt_only:
    ckpt_name = ckpt_dir + '/' + model_name + '.weights.h5'
else:
    ckpt_name = ckpt_dir + '/' + model_name + 'epoch{epoch:02d}_valloss{val_loss:.4f}.weighths.h5'
```

(and likewise to `.keras`) to match the `save_weights_only=True` and `save_best_only=True` requirement, but the error is still raised. I even went into the ModelCheckpoint call and set those two parameters to True and False, but that causes the same issue.
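If it helps, note that the second branch quoted above ends in `.weighths.h5` (an extra `h`), which would itself fail the `endswith(".weights.h5")` test, so that typo alone could explain why the error persisted in the non-best-only case. Here is a sketch of how the snippet could be adapted to satisfy the Keras 3 check; the variable values are illustrative stand-ins for what the notebook defines in earlier cells:

```python
import os
import keras

# Illustrative values; in the notebook these come from earlier cells.
ckpt_dir = '/content/ckpt'
model_name = 'gaussian4layer_320_50_0.00014'
save_best_ckpt_only = True

# Keras 3 requires the ".weights.h5" suffix when save_weights_only=True.
if save_best_ckpt_only:
    ckpt_name = os.path.join(ckpt_dir, model_name + '.weights.h5')
else:
    # The {epoch}/{val_loss} placeholders are filled in by ModelCheckpoint.
    ckpt_name = os.path.join(
        ckpt_dir,
        model_name + '_epoch{epoch:02d}_valloss{val_loss:.4f}.weights.h5',
    )

checkpoint = keras.callbacks.ModelCheckpoint(
    filepath=ckpt_name,
    save_best_only=save_best_ckpt_only,
    save_weights_only=True,
)
```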