Yes, you can do that. Assuming the ResNet is shared between the two streams, here is how you can do it:
import cntk as C

def func():
    # shared ResNet body; both streams will reuse the same parameters
    r = C.layers.Sequential([
        # your ResNet here
    ])
    return r

z = func()
z1 = z(input_var1)   # apply the shared ResNet to the first input
z2 = z(input_var2)   # apply it again to the second input (weights are shared)

# concatenate the two stream outputs, then run the post-concatenation layers
model = C.layers.Sequential([
    # your post-concatenation layers here
])(C.splice(z1, z2, axis=0))
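For example, with a tiny stand-in network in place of the real ResNet (just so the snippet runs end to end) and an assumed 3x224x224 input shape, you can see that both streams are applications of the same layer function and inspect the concatenated output:

import cntk as C

def func():
    # stand-in for the real ResNet body, only to make the sketch runnable
    return C.layers.Sequential([
        C.layers.Convolution2D((3, 3), 16, pad=True, activation=C.relu),
        C.layers.GlobalAveragePooling()
    ])

# hypothetical input shapes; use whatever your ResNet expects
input_var1 = C.input_variable((3, 224, 224))
input_var2 = C.input_variable((3, 224, 224))

z = func()              # build the shared ResNet once
z1 = z(input_var1)      # first stream
z2 = z(input_var2)      # second stream, same layer function, same parameters

joined = C.splice(z1, z2, axis=0)   # concatenate along the channel axis
print(joined.shape)                 # (32, 1, 1) for this stand-in network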
May I ask how to create a minibatch source that feeds the two-stream input? Thank you.
Have you checked this manual: https://cntk.ai/pythondocs/Manual_How_to_feed_data.html
Yes, I've read the manual, but I still have no idea how to feed two streams with different mini-batch image inputs (e.g., feeding different images to z1 and z2 in the sample code above). I know there is a way to create a composite reader, but I'm still not sure whether this will work:
z1_source = ImageDeserializer(map_file_z1, StreamDefs(
    features=StreamDef(field='image', transforms=transforms),
    labels=StreamDef(field='label', shape=num_classes)))
z2_source = ImageDeserializer(map_file_z2, StreamDefs(
    features=StreamDef(field='image', transforms=transforms),
    labels=StreamDef(field='label', shape=num_classes)))
return MinibatchSource([z1_source, z2_source], randomize=randomize)
Here is an example of composite readers: https://docs.microsoft.com/en-us/cognitive-toolkit/BrainScript-and-Python---Understanding-and-Extending-Readers
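For reference, here is a minimal sketch of how a composite reader for the two image streams might look. It assumes the two map files are line-aligned (row i of one corresponds to row i of the other), and stream names generally need to be unique across the deserializers of one MinibatchSource, so the image streams get distinct names here; the file paths, shapes, and names like features_z1/features_z2 are placeholders, not required identifiers:

import cntk as C
from cntk.io import MinibatchSource, ImageDeserializer, StreamDef, StreamDefs
import cntk.io.transforms as xforms

num_classes = 10                                  # placeholder
transforms = [xforms.scale(width=224, height=224, channels=3, interpolations='linear')]

# model inputs (assumed shapes, matching the two-stream model above)
input_var1 = C.input_variable((3, 224, 224))
input_var2 = C.input_variable((3, 224, 224))
label_var  = C.input_variable(num_classes)

def create_reader(map_file_z1, map_file_z2, randomize=True):
    # one ImageDeserializer per map file; stream names are unique across deserializers
    z1_source = ImageDeserializer(map_file_z1, StreamDefs(
        features_z1=StreamDef(field='image', transforms=transforms),
        labels=StreamDef(field='label', shape=num_classes)))
    z2_source = ImageDeserializer(map_file_z2, StreamDefs(
        features_z2=StreamDef(field='image', transforms=transforms)))
    # passing both deserializers in one list creates a composite reader
    return MinibatchSource([z1_source, z2_source], randomize=randomize)

reader = create_reader('map_z1.txt', 'map_z2.txt')   # hypothetical map files

# route each reader stream to the matching network input
input_map = {
    input_var1: reader.streams.features_z1,
    input_var2: reader.streams.features_z2,
    label_var:  reader.streams.labels,
}
data = reader.next_minibatch(64, input_map=input_map)   # then e.g. trainer.train_minibatch(data)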
I'd like to build a two-stream ResNet. Each stream first feeds forward through a ResNet, and then the outputs of both streams are concatenated.
Is there any way to build that model so that each stream takes its input separately?