I believe the title says most of it. Is there a purpose to capturing the x and losses variables returned by the optimizer (the one called at the end of the script)? Neither variable seems to be used in the rest of the script.
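My understanding (and please correct me if this is wrong) is that torch/optim routines mutate the parameter tensor in place and return that same tensor plus a table of losses, so discarding the return values loses nothing. A minimal plain-Python stand-in for that calling convention, with a made-up quadratic feval:

```python
# Hypothetical mirror of the torch/optim convention: the step function mutates
# the parameter vector in place AND returns it with the per-call losses.
def sgd_step(feval, x, lr=0.1):
    loss, grad = feval(x)
    for i in range(len(x)):          # in-place update, like optim.adam mutating img
        x[i] -= lr * grad[i]
    return x, [loss]                 # the returned x is the same (mutated) object

def feval(x):
    # f(x) = sum(x_i^2), gradient 2*x
    return sum(v * v for v in x), [2.0 * v for v in x]

img = [1.0, -2.0]
ret_x, losses = sgd_step(feval, img)
assert ret_x is img                  # same object: capturing the return is optional
```

So `img` already holds the updated result whether or not you bind `x`; the return values are mainly useful if you want to log the loss.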
I'm trying to run two different optimizer steps (each with its own feval() function) in every iteration of the loop. But I'm not sure whether something like the lines below would make sense, given that I have two networks that output different gradients. While it's not a strict form of adversarial training, you could think of it as two networks computing different gradients for the same image; yet I'm not sure whether the two gradients end up added together or combined in some other way.
-- Run optimization.
if params.optimizer == 'lbfgs' then
print('Running optimization with L-BFGS')
local x, losses = optim.lbfgs(feval1, img, optim_state)
elseif params.optimizer == 'adam' then
print('Running optimization with ADAM')
for t = 1, params.num_iterations do
--local x, losses = optim.adam(feval, img, optim_state)
-- Instead do this:
local x, losses = optim.adam(feval1, img, optim_state)
local x, losses = optim.adam(feval2, img, optim_state)
end
end
Also: what effect would calling optim.adam twice per iteration, with different feval functions, have on the optimization?
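To make the question concrete, here is a plain-Python sketch (simple gradient descent standing in for Adam; the quadratic objectives and the names feval1/feval2 are made up) contrasting two alternating steps against one step on the summed gradient:

```python
# Two hypothetical objectives pulling the scalar parameter toward 3 and -1.
def feval1(x):
    return (x - 3.0) ** 2, 2.0 * (x - 3.0)

def feval2(x):
    return (x + 1.0) ** 2, 2.0 * (x + 1.0)

def step(feval, x, lr=0.1):
    _, g = feval(x)
    return x - lr * g

# (a) alternating, as in the question: feval2's gradient is evaluated
# AFTER feval1's update has already moved the parameter.
x = 0.0
x = step(feval1, x)
x = step(feval2, x)
x_alt = x

# (b) summed: one step on g1 + g2, both gradients evaluated at the same point.
x = 0.0
_, g1 = feval1(x)
_, g2 = feval2(x)
x_sum = x - 0.1 * (g1 + g2)

print(x_alt, x_sum)   # close, but not equal
```

So alternating two calls is not the same as one step on the combined gradient: the second call sees a parameter the first call already moved. With Adam there is an additional wrinkle I'd watch for: if both calls share the same optim_state, the first and second moment estimates get interleaved updates from two different gradient streams, which is a further departure from a single step on summed gradients.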