# set up training & validation data sets:
import numpy as np
import wobble

data = wobble.Data(datafile, filepath='data/', orders=orders, min_snr=3) # to get N_epochs
validation_epochs = np.random.choice(data.N, data.N//8, replace=False) # 12.5% of epochs will be validation set
training_epochs = np.delete(np.arange(data.N), validation_epochs)
I have a dataset where a few epochs were poorly normalized, so I deleted them. As the code above is written, the deleted epochs could still end up in validation_epochs or training_epochs, which makes the regularization inefficient.
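One way around this is to build the split only from the epochs that survived the cut. The sketch below assumes a hypothetical `bad_epochs` array holding the indices of the deleted epochs and uses a plain integer `N` in place of `data.N`; substitute your actual values.

```python
import numpy as np

# Hypothetical: indices of the poorly normalized epochs that were deleted.
bad_epochs = np.array([3, 17])

N = 40  # stand-in for data.N
good_epochs = np.setdiff1d(np.arange(N), bad_epochs)

# Draw the validation set only from the surviving epochs (12.5% of them),
# so a deleted epoch can never reappear in either split:
rng = np.random.default_rng(0)
validation_epochs = rng.choice(good_epochs, len(good_epochs) // 8, replace=False)
training_epochs = np.setdiff1d(good_epochs, validation_epochs)
```

Because both sets are drawn from `good_epochs`, they are disjoint, exclude every deleted epoch, and together cover all remaining epochs.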