jcjohnson / neural-style

Torch implementation of neural style algorithm
MIT License

Non-issue: purpose of the x, losses optim outputs? and using multiple optims #393

Open ArturoDeza opened 7 years ago

ArturoDeza commented 7 years ago

I believe the question says most of it. Is there a purpose to pulling the x and losses variables out of the optimizer call (the one at the end of the script)? Neither variable seems to be used in the rest of the script.
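
As far as I can tell from the optim documentation, x is the updated parameter tensor and losses is a table of the loss values recorded during that call, so the outputs could at least be used for logging. A minimal sketch:

local x, losses = optim.lbfgs(feval, img, optim_state)
-- losses[1] is the loss before optimization; losses[#losses] is the final loss
print(string.format('Final loss after L-BFGS: %f', losses[#losses]))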

I'm trying to use two different optimizer steps (each with its own feval() function) in every iteration of the loop. But I'm not sure whether something like the lines below would make sense, given that I have two networks that output different gradients. While it is not a strict form of adversarial training, you could think of it as two networks computing different gradients for the same image; yet I am not sure if the two gradients are added together or combined in some other way.

-- Run optimization.
if params.optimizer == 'lbfgs' then
  print('Running optimization with L-BFGS')
  local x, losses = optim.lbfgs(feval1, img, optim_state)
elseif params.optimizer == 'adam' then
  print('Running optimization with ADAM')
  for t = 1, params.num_iterations do
    -- Instead of a single step with one feval:
    --   local x, losses = optim.adam(feval, img, optim_state)
    -- take one ADAM step per objective; each call updates img in place:
    local x1, losses1 = optim.adam(feval1, img, optim_state)
    local x2, losses2 = optim.adam(feval2, img, optim_state)
  end
end
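
One alternative I was considering, in case the gradients should actually be summed: a combined closure that adds the two losses and gradients, so a single ADAM step sees the total gradient. A minimal sketch, assuming feval1 and feval2 are my two closures and each returns loss, gradient like the stock feval:

local function feval_combined(x)
  local loss1, grad1 = feval1(x)
  local loss2, grad2 = feval2(x)
  -- grad1 may be an internal buffer reused by the network, so clone before adding
  return loss1 + loss2, grad1:clone():add(grad2)
end
local x, losses = optim.adam(feval_combined, img, optim_state)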

Also: what effect would running the two optim.adam calls with different feval functions have during optimization?
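
My current understanding is that each call applies its own independent update to img, so the gradients are applied sequentially rather than summed, and since ADAM keeps running moment estimates in optim_state, sharing one state table between the two objectives would mix those estimates. A minimal sketch of keeping the two updates independent (the two state table names are hypothetical):

-- one ADAM state per objective, so the moment estimates don't mix
local optim_state1 = {learningRate = params.learning_rate}
local optim_state2 = {learningRate = params.learning_rate}
for t = 1, params.num_iterations do
  optim.adam(feval1, img, optim_state1)
  optim.adam(feval2, img, optim_state2)
end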