csolorio opened this issue 6 years ago
Please check the entire list of *_loss.arguments and make sure they are all supplied as inputs. Your error means that some arguments were not provided with values. Note that clone creates new input nodes, which may not be what you intended. You may use logging.graph.plot to visualize the model and better understand the data-feeding path.
I have seen something weird that I suspect is the cause. When I check the argument list of the discriminator for 'fake' images (after checking the loss arguments), the only one I get is: Input('Input3', [#], [3 x 48 x 48]). The discriminator on real images, however, shows: Input('Input4', [#], [3 x 96 x 96])
I create the discriminator of fake images using:
D_fake = D_real.clone(method = 'share', substitutions = {target_scaled.output: model.output})
The shape of model.output is (3, 96, 96). Could that be the source of the error? Why does the discriminator on real images have that input shape when neither the original discriminator nor the model.output substitution has a different shape? If I invert the key/value order in the substitutions dictionary, would that work, or would it simply not substitute anything at all?
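For what it's worth, in Function.clone the substitutions dictionary maps the existing variable to be replaced (the key) to its replacement (the value), so inverting the order would simply find nothing to substitute. A toy sketch of that key-to-value direction, in plain Python with nested tuples standing in for a CNTK graph (none of this is CNTK API):

```python
# Toy illustration (plain Python, not CNTK) of how a clone-style
# substitutions dict works: keys are the nodes to be replaced,
# values are their replacements.
def clone_with_substitutions(graph, substitutions):
    """Recursively rebuild a nested-tuple 'graph', swapping any node
    found as a key in `substitutions` for its mapped value."""
    if graph in substitutions:
        return substitutions[graph]
    if isinstance(graph, tuple):
        return tuple(clone_with_substitutions(n, substitutions) for n in graph)
    return graph  # leaf node with no substitution

# 'target_scaled' feeds a small graph; we swap it for 'model_output'.
graph = ("loss", ("disc", "target_scaled"))
print(clone_with_substitutions(graph, {"target_scaled": "model_output"}))
# -> ('loss', ('disc', 'model_output'))

# With the key/value order inverted, nothing matches, so the graph
# comes back unchanged.
print(clone_with_substitutions(graph, {"model_output": "target_scaled"}))
# -> ('loss', ('disc', 'target_scaled'))
```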
Thank you in advance!
Could you use the following function and plot the graph to see how it shares the parameters?
def plot_graph(root, file_name):
    C.logging.graph.plot(root, file_name + ".pdf")
Please note that the model.arguments ordering is not guaranteed. I would suggest you play with the following trick:
input_var = input_variable((num_channels, patch_size, patch_size), np.float32, name="**input**")
target_var = input_variable((num_channels, patch_size * scale, patch_size * scale), np.float32, name="**target**")
generator_batch = {'input': low_res, 'target' : high_res}
discriminator_batch = {arg: generator_batch[arg.name] for arg in D_trainer.model.arguments}
discriminator_batch = {'input': low_res, 'target' : high_res}
discriminator_batch = {arg: discriminator_batch[arg.name] for arg in D_trainer.model.arguments}
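The point of the trick above is that keying the minibatch by argument name makes feeding independent of the (unguaranteed) ordering of model.arguments. A minimal runnable sketch, using stand-in argument objects instead of real CNTK variables (the trick only relies on the .name attribute):

```python
# Stand-in for a CNTK input variable: the name-based feeding trick
# only needs each argument to carry a .name attribute.
class Arg:
    def __init__(self, name):
        self.name = name

# model.arguments may come back in any order...
arguments = [Arg("target"), Arg("input")]

# ...but keying the data by name makes the feed order-proof.
named_batch = {"input": "low_res_data", "target": "high_res_data"}
feed = {arg: named_batch[arg.name] for arg in arguments}

for arg in arguments:
    print(arg.name, "->", feed[arg])
```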
I'll try to test that last trick. One question, though: is the fourth line correct? Should that new dictionary, built from the generator batch, be assigned to the discriminator batch? Are three assignments to the discriminator batch correct?
Anyway, I'll try. I'll also upload the plot if the trick doesn't work :)
Update: the trick with the names didn't work. I still get the following error: ValueError: Values for 1 required arguments 'Input('input', [#], [3 x 48 x 48])', that the requested output(s) 'Output('aggregateLoss', [], []), Output('Plus5397_Output_0', [#], [1])' depend on, have not been provided.
I tested both with the pairs of generator_batch and discriminator_batch lines and with the original code. I tested the names with and without the asterisks (I assumed they were just there to highlight the change, so I initially tested without them).
I'm currently testing ONLY the discriminator part to make the code even simpler, but the error keeps appearing.
I've also plotted the discriminator loss function, since plotting one model or the other doesn't show the whole thing. I attach the discriminator model, the generator model, and the discriminator loss graph for comparison (the PDF files were generated with the upper part cropped, so I saved them as PNGs).
First of all, you will also need to use the argument-name trick for clone (again, because the ordering of C.Function.arguments is not guaranteed). If len(VGG.arguments) > 1, you will need to replace the following with the named trick mentioned above:
VGG_real = VGG.clone(method = 'share', substitutions = {VGG.arguments[0]: pad_block(target_var)})
VGG_fake = VGG.clone(method = 'share', substitutions = {VGG.arguments[0]: pad_block(model.output)})
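One way to apply the named trick here is to look the argument up by name instead of using VGG.arguments[0]. A sketch with stand-in objects (find_arg is a hypothetical helper, not a CNTK function):

```python
# Stand-in for a CNTK input variable; only .name matters here.
class Arg:
    def __init__(self, name):
        self.name = name

def find_arg(arguments, name):
    """Return the unique argument with the given name, instead of
    relying on positional indexing like arguments[0]."""
    matches = [a for a in arguments if a.name == name]
    assert len(matches) == 1, "expected exactly one argument named %r" % name
    return matches[0]

# The argument ordering is arbitrary, so index 0 may not be 'input'.
vgg_arguments = [Arg("aux"), Arg("input")]

# e.g. VGG.clone('share', {find_arg(VGG.arguments, 'input'): pad_block(target_var)})
print(find_arg(vgg_arguments, "input").name)  # input
```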
One suggestion for locating the input-map problem is to evaluate each function individually with fake input data. For example, if you believe that your function F has only one input variable, x = C.input_variable((3, 48, 48)), you can pick batch_size = 1 or 2 and call F.eval({x: np.ones((batch_size, 3, 48, 48))}). If the function has a missing input, this call will raise the error, so you can incrementally locate which function in your big graph is missing an input.
Hi. I have previously asked for your help on a couple of points about using tutorial 302B (https://cntk.ai/pythondocs/CNTK_302B_Image_Super-resolution_Using_CNNs_and_GANs.html) with a different model (https://github.com/Microsoft/CNTK/issues/3078). The issues in that topic remain. I share with you a minimal working example so you can kindly help me once again:
The model I trained used the same variables (with the same name field).
This code produces the following error: File "test.py", line 133, in
D_trainer.train_minibatch(discriminator_batch)
File "C:\ProgramData\Anaconda2\lib\site-packages\cntk\train\trainer.py", line 184, in train_minibatch
device)
File "C:\ProgramData\Anaconda2\lib\site-packages\cntk\cntk_py.py", line 2856, in train_minibatch
return _cntk_py.Trainer_train_minibatch(self, *args)
RuntimeError: AddNodeToNet: Duplicated name for Constant2614 LearnableParameter operation.
[CALL STACK]
If I use the same dictionary to provide the input batches to both trainers, I get the same error. If I use the following:
I get: File "test.py", line 133, in
D_trainer.train_minibatch(discriminator_batch)
File "C:\ProgramData\Anaconda2\lib\site-packages\cntk\train\trainer.py", line 184, in train_minibatch
device)
File "C:\ProgramData\Anaconda2\lib\site-packages\cntk\cntk_py.py", line 2856, in train_minibatch
return _cntk_py.Trainer_train_minibatch(self, *args)
ValueError: Values for 1 required arguments 'Input('input', [#], [3 x 48 x 48])', that the requested output(s) 'Output('aggregateLoss', [], []), Output('Plus6910_Output_0', [#], [1])' depend on, have not been provided.
[CALL STACK]
Please Help! :(