microsoft / CNTK

Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit
https://docs.microsoft.com/cognitive-toolkit/

ValidateSubNetwork: Times operation changed during final validation. #3080

Closed hibaahsan closed 6 years ago

hibaahsan commented 6 years ago

I'm trying to code a basic implementation of CDSSM and am getting the error below. I'm currently using a dense representation of the data. (CNTK 2.5, CPU-only)


import cntk as C

num_trigrams_per_word = 49292
num_words = 10
output_dimension = 128
input_dim = 492920
input_dim_model = (num_trigrams_per_word, 1, num_words)
num_negative_samples = 5
CONST_GAMMA = 10

labels = C.input_variable(1 + num_negative_samples)
qr = C.input_variable(input_dim_model)
dr = C.input_variable(input_dim_model)

def create_model(features):
    with C.layers.default_options(init=C.glorot_uniform()):
        h = features
        h = C.layers.Convolution2D(filter_shape=(1, 3), num_filters=300, strides=1,
                                   pad=True, name='first_conv', activation=C.tanh)(h)
        h = C.layers.MaxPooling((1, 10), strides=10, name='max_pooling')(h)
        r = C.layers.Dense(output_dimension, name='classify')(h)
    return r

def create_criterion_function(v1, v2, labels):
    c = C.cosine_distance_with_negative_samples(v1, v2, shift=1, num_negative_samples=num_negative_samples)
    loss = C.cross_entropy_with_softmax(C.times(CONST_GAMMA, c), labels)
    errs = C.classification_error(c, labels)
    return loss, errs

def train_model(reader_train, num_sweeps_to_train_with=10):

    # shared model applied to both the query and document inputs
    q_model = create_model(qr)
    qf = q_model(qr)
    df = q_model(dr)

    minibatch_size = 64
    num_samples_per_sweep = 60000
    num_minibatches_to_train = (num_samples_per_sweep * num_sweeps_to_train_with) / minibatch_size

    input_map = {
        qr: reader_train.streams.query,
        dr: reader_train.streams.doc,
        labels: reader_train.streams.labels
    }

    # Loss and error function
    loss, label_error = create_criterion_function(qf, df, labels)

    learning_rate = 0.2
    lr_schedule = C.learning_parameter_schedule(learning_rate, minibatch_size)
    learner = C.sgd(q_model.parameters, lr_schedule)
    trainer = C.Trainer(q_model, (loss, label_error), [learner])

    for i in range(100):
        # Read a minibatch from the training data file
        data = reader_train.next_minibatch(minibatch_size, input_map=input_map)
        trainer.train_minibatch(data)

RuntimeError                              Traceback (most recent call last)
<ipython-input-42-9bb1af8693f8> in <module>()
      3     # Read a mini batch from the training data file
      4     data=reader_train.next_minibatch(minibatch_size, input_map=input_map)
----> 5     trainer.train_minibatch(data)
      6     print_training_progress(trainer, i, training_progress_output_freq, verbose=1)

C:\Anaconda3\envs\cntk25-py36\lib\site-packages\cntk\train\trainer.py in train_minibatch(self, arguments, outputs, device, is_sweep_end)
    179             if contains_minibatch_data:
    180                 updated = super(Trainer, self).train_minibatch_overload_for_minibatchdata(
--> 181                     arguments, device)
    182             else:
    183                 updated = super(Trainer, self).train_minibatch(arguments, is_sweep_end,

C:\Anaconda3\envs\cntk25-py36\lib\site-packages\cntk\cntk_py.py in train_minibatch_overload_for_minibatchdata(self, *args)
   3022 
   3023     def train_minibatch_overload_for_minibatchdata(self, *args):
-> 3024         return _cntk_py.Trainer_train_minibatch_overload_for_minibatchdata(self, *args)
   3025 
   3026     def train_minibatch(self, *args):

RuntimeError: ValidateSubNetwork: Times1419 Times operation changed during final validation.

[CALL STACK]
    > CNTK::TrainingParameterSchedule<unsigned __int64>::  GetMinibatchSize
    - CNTK::TrainingParameterSchedule<unsigned __int64>::  GetMinibatchSize (x3)
    - CNTK::Internal::  UseSparseGradientAggregationInDataParallelSGD (x3)
    - CNTK::Function::  Forward
    - CNTK::  CreateTrainer
    - CNTK::Trainer::  TotalNumberOfUnitsSeen
    - CNTK::Trainer::  TrainMinibatch (x2)
    - PyInit__cntk_py (x2)
    - PyEval_EvalFrameDefault
    - Py_CheckFunctionResult
ke1337 commented 6 years ago

The bug is in loss = C.cross_entropy_with_softmax(C.times(CONST_GAMMA, c), labels). Note that the times op is a matrix multiplication; if you change it to loss = C.cross_entropy_with_softmax(CONST_GAMMA * c, labels), it will be fine.
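
For reference, a minimal sketch of the corrected criterion function from the code above, with the scaling applied elementwise (the operator form CONST_GAMMA * c should be equivalent to C.element_times(CONST_GAMMA, c)):

def create_criterion_function(v1, v2, labels):
    # cosine similarity of the query vector against the positive and
    # num_negative_samples negative document vectors
    c = C.cosine_distance_with_negative_samples(
        v1, v2, shift=1, num_negative_samples=num_negative_samples)
    # scale the similarities elementwise before the softmax; unlike
    # C.times, this keeps the per-sample (1 + num_negative_samples,) shape
    loss = C.cross_entropy_with_softmax(CONST_GAMMA * c, labels)
    errs = C.classification_error(c, labels)
    return loss, errs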

hibaahsan commented 6 years ago

Bug fix worked. Thanks!