brianlangseth-NOAA / Spatial-Workshop-SPASAM

Spatially stratified simulation-estimation framework incorporating multistage stock-recruit relationships and incorporating larval IBM outputs

TO DO: Figure out penalties #24

Closed brianlangseth-NOAA closed 2 years ago

brianlangseth-NOAA commented 2 years ago

Many penalties either aren't working or their purpose is unclear.

brianlangseth-NOAA commented 2 years ago

F_pen: F_pen currently does not work. In this line (https://github.com/brianlangseth-NOAA/Spatial-Workshop-SPASAM/blob/253c0d3df38e65efd13351157c4546b89d78c662/YFT_1area/Estimation_Model/YFT_1area.tpl#L4931), current_phase() returns the phase number while last_phase() is a true/false (1/0) flag, so current_phase() is NEVER less than last_phase() unless there is only 1 phase of estimation. We would need to change this to last_phase() == 0 to match the intent of the current code.

F_pen resets itself every phase. If I set the penalty to add a constant (F_pen_like += 1), then F_pen_like is 1 in the phase F is estimated. Once that phase is over, F_pen_like is reset to 0. Because F_pen isn't calculated in the last phase, F_pen_like is 0 and isn't technically added to the final objective function, even though it contributes to the overall likelihood in the phase when F is estimated. Thus I view it as a sort of pseudo-penalty applied only early.

Rave_pen: This penalizes log-mean recruitment (LMR) for deviating from a specified starting value (Rave_mean). The penalty is turned on with Rave_pen_switch and its weight is controlled by wt_Rave_pen. Note that this is different from the parameterization for R0, which is set up by Rave_start, lb_R_ave, and ub_R_ave. Effectively, Rave_pen keeps R0 near its user-assigned value (Rave_mean), which in practice we would not know.

init_abund_pen: This penalizes initial abundance relative to a single user-inputted numerical value. The penalty is applied across all ages, which I don't understand. Why would initial abundance be the same at each age? Regardless, the penalty is only incorporated when abund_pen_switch = 1 and ph_init is being estimated. Thus, keep it as is but keep the penalty off (switch set to 0).

Conclusion:

  1. Need to correct penalties that use current_phase() < last_phase() to instead use last_phase() == 0, or just remove them, depending on answers to the questions below. This applies to F_pen_like and multiple instances of M_pen_like.
  2. Add the Rave_pen value to the report out section. For our estimation, keep the penalty from being applied (set wt_Rave_pen to 0 or Rave_pen_switch to 0).
  3. I don't understand the purpose of init_abund_pen. It works to set initial abundance at age to a constant value across ages 2+. Keep the penalty off.

Remaining questions for @JDeroba and @AmySchueller-NOAA

  1. Should we report out the early penalty values used in the phase of estimation even though they become zero in the last phase? Alternatively, we can keep the penalties present even in the last phase (remove last_phase() == 0), which results in them being reported.
  2. More generally, do you think an early penalty should be applied? It seems tricky in that it doesn't contribute to the overall likelihood, yet it is being used during early phasing to limit the space the model explores.
  3. I don't see any likelihood component for Ndevs. Currently, I'm exponentially decaying from Rave to obtain the initial age distribution, which seems fine. Any concerns with not having Ndevs under such scenarios (I'm assuming no)?
JDeroba commented 2 years ago

@brianlangseth-NOAA Brian and @AmySchueller-NOAA As I recall, Dan coded the penalties to apply only in the early phases because we were generally working in a simulation world where we knew what the penalties should be; things ran and converged quicker when we had those penalties, but we then let the model be free for the final phase. We're in a pseudo-simulation world now and I think it is best to make decisions on penalties as needed. In short, leave them as is for now (i.e., if used, then applying only early), mostly because fewer penalties seems like it should be the default preference. If needed, then use the early penalties, or include them through the entire fitting process (during all phases) so that they contribute to the final/total likelihood, but only as we find they are necessary. I would say there is no need to report the penalty contributions during the early phases, but it doesn't really cost much if you want to make that happen.

I've been working with the model for the last two days. Mostly I turned off all devs, selectivities, etc., and I'm only estimating R-ave, init_abund, and q parameters. My theory is that by estimating just the parameters related to scale: 1) the fit time is much less, which I have found to be true, and 2) I can wrench on the scale parameters until I get close to a sensible scale, and then free up other params and devs. I'm still struggling with scale, especially as it pertains to R-ave and init_abund. I still have several things I want to try, and I will have more time this week to keep playing. At the moment, I can't even seem to fix init_abund at a lower scale, and I'm not sure if something is really wrong or if I'm missing a switch. Anyway, more later.

JJD
