Hi,

Apologies if I missed this in the code. I have a question about how to use TCNN to train multiple networks/encodings end-to-end with a single optimizer that applies different learning rates to different parameters. In my architecture, I have two hash encodings that produce feature vectors which are composed together (by addition or multiplication), and another network that takes this composed feature vector plus some extra inputs and outputs a 3D vector. In pseudocode, it looks like the following:
```python
enc_x = ingp_encoding_1(x)  # x is an N-dim input vector
enc_y = ingp_encoding_2(y)  # y likewise
feat_xy = mlp_1(enc_x * enc_y)  # compose the two feature vectors
rgb = mlp_2(composite_encoding(feat_xy, a, b, c, d))  # a, b, c, d are extra inputs
```
In PyTorch, one could train this end-to-end by passing each module's parameters to `torch.optim.Adam` as separate parameter groups with different learning rates. Is it possible to do something similar in TCNN by composing all of these components into a single `DifferentiableObject` and then creating a `Trainer` with a single optimizer?
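For reference, a minimal sketch of the PyTorch-side version I have in mind, using tiny-cuda-nn's `tinycudann` bindings. The config values, the multiplication standing in for the composition, and the concatenation standing in for `composite_encoding` are just illustrative assumptions:

```python
import torch
import tinycudann as tcnn

# Illustrative configs; the exact values are placeholders.
hash_config = {
    "otype": "HashGrid", "n_levels": 16, "n_features_per_level": 2,
    "log2_hashmap_size": 19, "base_resolution": 16, "per_level_scale": 2.0,
}
mlp_config = {
    "otype": "FullyFusedMLP", "activation": "ReLU",
    "output_activation": "None", "n_neurons": 64, "n_hidden_layers": 2,
}

n_dims = 3  # dimensionality of x and y (assumed)
enc_1 = tcnn.Encoding(n_dims, hash_config)
enc_2 = tcnn.Encoding(n_dims, hash_config)
mlp_1 = tcnn.Network(enc_1.n_output_dims, 16, mlp_config)
mlp_2 = tcnn.Network(16 + 4, 3, mlp_config)  # 4 extra scalars a, b, c, d

# One optimizer, per-module learning rates via parameter groups.
optimizer = torch.optim.Adam([
    {"params": list(enc_1.parameters()) + list(enc_2.parameters()), "lr": 1e-2},
    {"params": list(mlp_1.parameters()) + list(mlp_2.parameters()), "lr": 1e-3},
])

def forward(x, y, abcd):
    feat_xy = mlp_1(enc_1(x) * enc_2(y))  # compose by multiplication
    return mlp_2(torch.cat([feat_xy, abcd.to(feat_xy.dtype)], dim=-1))
```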