tensorflow / adanet

Fast and flexible AutoML with learning guarantees.
https://adanet.readthedocs.io
Apache License 2.0

`input_fn` called multiple times in `Estimator.train` #143

Open shendiaomo opened 4 years ago

shendiaomo commented 4 years ago

https://github.com/tensorflow/adanet/blob/712bc8efbcce4684cc81108ad916a735cddb4de2/adanet/core/estimator.py#L896-L900 This seems problematic because `adanet.Estimator.train` reloads the data from scratch at every AdaNet iteration. As https://github.com/tensorflow/tensorflow/issues/19062#issuecomment-400129963 points out, canned TF estimators call `input_fn` only once per `train` call.
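For illustration, here is a minimal TF2 eager sketch (not adanet's actual training loop; the toy dataset and loop are made up) of why re-calling `input_fn` matters: each call builds a fresh `tf.data` pipeline that starts over at the first element.

```python
import tensorflow as tf

def input_fn():
    # Every call builds a brand-new Dataset that starts again at element 0.
    return tf.data.Dataset.range(10).batch(2)

# A canned estimator calls input_fn once per `train` call. If a loop instead
# calls input_fn once per AdaNet iteration, every iteration re-reads the same
# leading batches.
for iteration in range(3):
    first_batch = next(iter(input_fn()))   # fresh pipeline each iteration
    print(iteration, first_batch.numpy())  # prints [0 1] every time
```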

shendiaomo commented 4 years ago

There seem to be two negative effects of this:

  1. A repeated Dataset (e.g. dataset.repeat(10)) can no longer stop training via OutOfRangeError or StopIteration; we have to set steps or max_steps instead, which is inconsistent with canned Estimators.
  2. If a user doesn't shuffle the dataset, AdaNet may use only the same first max_iteration_steps * batch_size samples in every iteration, thus fitting to a subset of the training data (see the sketch after this list).
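
A concrete sketch of point 2 (illustrative toy numbers, not adanet code): without shuffling, every iteration consumes only the same leading max_iteration_steps * batch_size examples.

```python
import tensorflow as tf

NUM_EXAMPLES = 8
BATCH_SIZE = 2
MAX_ITERATION_STEPS = 2  # each iteration consumes only 2 * 2 = 4 examples

def input_fn():
    # No shuffle, so elements always come out in the same order.
    return tf.data.Dataset.range(NUM_EXAMPLES).batch(BATCH_SIZE)

for iteration in range(3):
    seen = []
    for step, batch in enumerate(input_fn()):  # pipeline restarts every iteration
        if step == MAX_ITERATION_STEPS:
            break
        seen.extend(batch.numpy().tolist())
    print("iteration", iteration, "saw", seen)  # always [0, 1, 2, 3]; 4..7 unused
```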

Am I right? @cweill

cweill commented 4 years ago

@shendiaomo: You are correct on both counts. For this reason, we ask the user to configure max_iteration_steps to cover the desired number of repetitions, which unfortunately requires some extra math (max_iteration_steps = num_examples / batch_size * num_epochs_per_iteration).

Assuming each adanet iteration trains over several epochs, point 2 should be less of an issue in practice if your base learners are randomly initialized: they will tend to learn different biases and form a strong ensemble regardless.
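
A small worked example of that formula; the dataset size, batch size, and epoch count below are made up, and the commented `adanet.Estimator` / `train` calls are only a sketch of where the result would be plugged in:

```python
# Hypothetical numbers, purely for illustration.
num_examples = 60000          # training-set size
batch_size = 64
num_epochs_per_iteration = 5  # epochs each AdaNet iteration should see

steps_per_epoch = num_examples // batch_size                      # 937
max_iteration_steps = steps_per_epoch * num_epochs_per_iteration  # 4685

# Pass max_iteration_steps to adanet.Estimator, and a matching steps/max_steps
# value to train(), e.g. (illustrative names):
#   estimator = adanet.Estimator(..., max_iteration_steps=max_iteration_steps)
#   estimator.train(input_fn=train_input_fn,
#                   max_steps=max_iteration_steps * total_adanet_iterations)
print(max_iteration_steps)
```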

shendiaomo commented 4 years ago

> @shendiaomo: You are correct on both counts. For this reason, we ask the user to configure max_iteration_steps to cover the desired number of repetitions, which unfortunately requires some extra math (max_iteration_steps = num_examples / batch_size * num_epochs_per_iteration).
>
> Assuming each adanet iteration trains over several epochs, point 2 should be less of an issue in practice if your base learners are randomly initialized: they will tend to learn different biases and form a strong ensemble regardless.

Great! Thanks for the explanation. However, doing that math by hand isn't very convenient: imagine someone who wants to swap the DNNClassifier in her application for adanet.Estimator; that could be a lot of extra work. Do you have a plan to improve this? Or will the Keras version avoid this situation?

le-dawg commented 4 years ago

@cweill

> @shendiaomo: You are correct on both counts. For this reason, we ask the user to configure max_iteration_steps to cover the desired number of repetitions, which unfortunately requires some extra math (max_iteration_steps = num_examples / batch_size * num_epochs_per_iteration).
>
> Assuming each adanet iteration trains over several epochs, point 2 should be less of an issue in practice if your base learners are randomly initialized: they will tend to learn different biases and form a strong ensemble regardless.

From the tutorials:

`max_iteration_steps=TRAIN_STEPS // ADANET_ITERATIONS,`

If I want to train for 100 epochs over a single AdaNet iteration, with num_examples / batch_size steps per epoch, should I set max_iteration_steps to 100 * (num_examples / batch_size)?

I have a sample size of 5265 and a batch size of 50, so I have about 105 update steps per epoch. Should my max_iteration_steps be 10500?
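
Plugging those numbers into the formula above (a quick sanity check; whether the partial final batch rounds the step count up or down depends on how your input pipeline handles the remainder):

```python
num_examples = 5265
batch_size = 50
num_epochs = 100  # epochs wanted within a single AdaNet iteration

steps_per_epoch = num_examples // batch_size        # 105 (5265 / 50 = 105.3, rounded down)
max_iteration_steps = steps_per_epoch * num_epochs  # 10500
print(max_iteration_steps)
```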

le-dawg commented 4 years ago

Pinging