Philipp-Neubauer / FirstAssessment

Analysing the time to first assessment of fish stocks

Manuscript text #5

Open James-Thorson opened 8 years ago

James-Thorson commented 8 years ago

I just added a few more paragraphs to the introduction (the Rnw file), but still can't compile the PDF (see the other issue for the next bug).

This intro is basically a standard 5-paragraph style intro giving:

  1. context for fisheries management
  2. defining stock assessment
  3. explaining rate of assessment
  4. justifying why studying rate-of-assessment would be useful
  5. outlining our paper's goals.

I'm happy to heavily modify it, but I think it's a decent place to start.

James-Thorson commented 8 years ago

@Philipp-Neubauer I see that you added a commit "draft dataset and results". Do you want me to start adding some text for the results section, or how could I be helpful?

Philipp-Neubauer commented 8 years ago

Yes; the results and figures should now be updated with the newest data, and with nearly all stocks in Mike's list included. Overall, the updated data grooming added 100+ stocks to the analysis.

There are still a few stocks that are dropped; these are stocks where catches have generally been really low (e.g., some rockfish). I use a rule to exclude species that never had at least 10 t of catch in any year, chosen mainly to exclude marginal species that only showed up sporadically in the landings.

I've added some methods text, but it's really rough and incomplete. I will aim to fix that sometime this week; I'm at a bycatch workshop, though, so can probably only put in a few hours here and there. I also haven't added the projection plot we discussed; same there, I will add it as I find time this week.

Happy for you to add some writing around the results and perhaps put some bullets about things to discuss. Let me know if you have any queries re results. The Weibull_model_output.rda file has the original data attached as year.table. You could use that if you need numbers from the data...

Also, you could start a bibliography by exporting citations as bibtex and putting them in a .bib file that can live in the github repo. I can set up a makefile to make sure the citations get processed.

James-Thorson commented 8 years ago

Cool, I'm starting to look it over. First response:

I get an error in the "results.Rnw" code block starting at line 219, which contains a ggplot call with a panel.spacing argument that throws an error:

Error in (function (el, elname)  : 
  "panel.spacing" is not a valid theme element name.

I've updated ggplot2 to version 2.1.0 and am using R 3.3.1. It runs if I comment out line 229, which I have done. Just FYI.

Philipp-Neubauer commented 8 years ago

I might have the dev version from hadley/ggplot2 installed. You can put panel.margin instead (that's the old name... I think).
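
For example, something like this should run on either version (a throwaway plot just to illustrate the two argument names, not the actual results.Rnw code):

library(ggplot2)
library(grid)

p <- ggplot(mtcars, aes(wt, mpg)) + geom_point() + facet_wrap(~cyl)

if (utils::packageVersion("ggplot2") > "2.1.0") {
  p + theme(panel.spacing = unit(1, "lines"))  # dev / newer ggplot2
} else {
  p + theme(panel.margin = unit(1, "lines"))   # ggplot2 <= 2.1.0
}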

James-Thorson commented 8 years ago

OK.

Next response: I've spent maybe 30 minutes trying to address a question I posed in a previous email. Basically, Fig. 2 doesn't make sense to me, because I would think that "landed stocks" should be the set of stocks that were ever previously landed. Given this definition, the number of landed stocks could only increase, but the green USNE line decreases around 1960, prior to any stocks being assessed.

So I tried exploring the code for Fig. 2 (lines 152-186). However, I basically can't read the code at all, because I don't use dplyr or ggplot, don't know how to work with a tibble, and might be misunderstanding the column headers for full.tab. Also, my attempt to use full.tab to replicate the left panel of Fig. 2 didn't seem to work out.

So I have two questions:

  1. How would you like us to proceed? If I'm going to work directly with the code, I might need some changes in Results.Rnw to at least eliminate the use of dplyr. Would that be easier, or would you prefer for us to direct coding questions to you? (And of course, I'm sorry for my ignorance! Some day I imagine I'll need to learn dplyr.)
  2. For Fig. 2, could you define "landed stock" in the caption? Do you agree that "landed stock" should be defined as the cumulative set of stocks landed in that year or any previous year (because it's this set of stocks that might potentially have an assessment)?

Philipp-Neubauer commented 8 years ago

Yes to question 2; I can fix the code to do that.

For question 1: It would be a mission to eliminate dplyr at this stage given its prevalence in pretty much all data transformation operations in the document. I think the more efficient option would be for me to make changes as they arise.

I'll aim to push the figure 2 change in a few minutes...
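
Just so we're sure we mean the same thing, the change will be along these lines (a rough sketch; the column names stock, region, year and catch are from memory and may not match full.tab exactly):

library(dplyr)

landed_cum <- full.tab %>%
  filter(catch > 0) %>%
  group_by(region, stock) %>%
  summarise(first_landed = min(year)) %>%   # first year each stock was landed
  group_by(region, first_landed) %>%
  summarise(n_new = n()) %>%                # stocks newly landed in that year
  arrange(region, first_landed) %>%
  group_by(region) %>%
  mutate(n_landed = cumsum(n_new))          # cumulative stocks ever landed

This way the landed-stock count can only increase within a region, which matches your definition.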

Philipp-Neubauer commented 8 years ago

OK, Fig. 2 should now correspond to the cumulative number of species landed, with the dotted line being the cumulative number of assessments and the solid lines being cumulative landed minus cumulative assessed per year.

James-Thorson commented 8 years ago

OK, thanks Phil! That makes more sense to me, and also lines up more closely with the proportions I was getting when I tried to replicate the plot (presumably I missed some small restrictions on which stocks to include; I was getting the same rank order and qualitative picture).

Another request:

What about expanding Fig. 8 to include the posterior distribution for each Class and Order? I think an interesting result is which taxa (e.g., Elasmobranchs, Clupeids) have significantly higher or lower assessment rates.

Philipp-Neubauer commented 8 years ago

Yes, totally. Just pushed that.

Philipp-Neubauer commented 8 years ago

Did class only, though; I could do the same as in Figure 4 for order instead of just having class. That might be more interesting.

James-Thorson commented 8 years ago

Hmm. I think we'll want to make statements about:

  1. Class differences from "average fish" (both mean and two-sided Bayesian p-value)
  2. Order differences from its class (mean and p-value)
  3. Order differences from "average fish"

I think we'll want to communicate these through some combination of both numeric values (to reference in the text), tables, and figures. But I don't have an immediate opinion about the best combination of figures/tables/in-text. Any thoughts?

jim

mcmelnychuk commented 8 years ago

One way to do it could be to list 1 & 3 above in tables, and show 2 visually. If we group the Orders in the Fig. 8 panel by Class (similar to Fig. 4, like Phil suggested, rotated 90 degrees), then we could overlay short vertical dashed lines for each Class, maybe color-coded. The location of these dashed lines would be the posterior means of the Classes, and their length would span only the Orders within each Class. That would lose out on error bars for the Classes, though they would still be present for all the Orders. That would visually satisfy #2 above.
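
Roughly what I have in mind, as a sketch (the data frames order_post and class_post and their columns are made up for illustration; the real objects in results.Rnw will differ):

library(ggplot2)

# order_post: one row per Order, with posterior mean and 95% interval
# class_post: one row per Class, with its posterior mean plus the numeric axis
#             positions of the first and last Order belonging to that Class
ggplot(order_post, aes(x = order, y = mean)) +
  geom_pointrange(aes(ymin = lwr, ymax = upr)) +
  geom_segment(data = class_post,
               aes(x = first_order, xend = last_order,
                   y = class_mean, yend = class_mean, colour = class),
               linetype = "dashed") +
  coord_flip()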

Mike

Philipp-Neubauer commented 8 years ago

Yes, agreed. Actually, if we can manage a figure that includes posterior means and intervals (say, as dashed and dotted lines on either side of the dashed line), then we could pretty much display 1 & 2 in a single figure. Otherwise it's probably a matter of deciding which is more important to visualise. Agreed that 3 should probably be a table.

Pretty busy getting stuff ready for the turtle workshop, but I will give the figure a go ASAP (as well as the projection figure Jim was talking about). We're starting to have a good number of figures, and might want to chuck some in an appendix (e.g., the model fit...).

Phil

Philipp-Neubauer commented 8 years ago

Hi y'all;

I have added the plot discussed above, as well as a draft of the projection plot. I've also added an appendix table with all effects and their estimates for the assessment rate, time-to-assessment, etc.

I've done some writing for the methods; I need to go through it to check that it all makes sense and to figure out what still needs writing there.

Let me know what you guys think about the new figures...happy to iterate on them.

Phil

James-Thorson commented 8 years ago

Phil,

I think it all looks really cool! Definitely plenty to write about here, and I think the projection plot is a good "headline" point with which to end the results. Plus, the price, landings, and rockfish-plus-dusky-sharks effects on faster assessment make sense, and are worthwhile mid-results-section points. And I got it compiling after installing the development version of ggplot2 via devtools::install_github("hadley/ggplot2") (rather than from CRAN) and installing appendix.sty from the MiKTeX package manager.

One more request though: Are you willing to convert the projection plot to a "finite-sample" projection? Obviously, in 2016 we know how many were assessed (well, it's approximate if I understand the model right, because the censoring year might be earlier than 2016). This could be done by removing stocks with an assessment from the set, setting the other stocks as currently unassessed in their censoring year, tracking their probability of having a prior assessment during the forecast period, and then recombining the model-based subset with the withheld subset (where the latter have a 100% probability of previous assessment).

Does this sound plausible and reasonable? The point is that the forecast intervals in the finite-sample version will be smaller, particularly in the earlier years, so it'll be easier to say something specific about the different regions.

Philipp-Neubauer commented 8 years ago

Yes, definitely, will give that a go on my plane-ride home over the next few hours...

Philipp-Neubauer commented 8 years ago

Hey Jim;

I spent some time on this and realised that your description actually sounds a lot like what I did... the uncertainty seems large because of the scale, I guess (and the 2016 start is later than the last year for most stocks, so we might need to start earlier). I've re-sized the y-axis on the plot to make that clear; happy to change the start date, too.

I thought there might be another way of doing the projections that would lead to smaller prediction intervals, but I'm not sure now.

In my code, line 471 calculates the probability of assessment for each stock from 2016 onwards for each MCMC sample; the quantiles are then taken below that. The key part here is:

sapply(seq(lmin,lmin+34),function(t) 1-exp(-l$MCMC*t^tau$MCMC)),

where 1-exp(-lambda*t^tau) is the probability of assessment up to t, i.e., the probability that the stock will have been assessed by time t.

Then, the predicted proportion assessed is p_assessed + (1-p_assessed)*mean(P_{r,s}), where the last factor is the mean over the assessment probabilities for all stocks s in region r.
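
As a self-contained toy version of the same calculation (made-up posterior draws and a single hypothetical unassessed stock; in the real code the last step averages over all unassessed stocks in a region before combining):

set.seed(1)
n_mcmc <- 1000
lambda <- rlnorm(n_mcmc, -4.0, 0.3)  # stand-ins for posterior draws of the scale
tau    <- rlnorm(n_mcmc,  0.6, 0.1)  # ... and shape parameters

p_assessed <- 0.4    # proportion of the region's stocks already assessed
t_ahead    <- 1:35   # years beyond the censoring year

# P(assessed by t) per MCMC draw for one unassessed stock (rows = draws)
p_by_t <- sapply(t_ahead, function(t) 1 - exp(-lambda * t^tau))

# finite-sample proportion assessed, then quantiles across draws
prop_assessed <- p_assessed + (1 - p_assessed) * p_by_t
proj <- apply(prop_assessed, 2, quantile, probs = c(0.025, 0.5, 0.975))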

Hope this makes sense - is this what you described? Definitely open to other ways of approaching this if you have something else in mind.

Phil

James-Thorson commented 8 years ago

Phil,

As always, sorry that I can't read the code in this project. A few questions pop up from skimming the model text, and struggling through the code:

  1. Fig. 1 references a Weibull shape parameter p (with slope 1.93), but Eq. 1 includes a function Weibull(tau,lambda). Wikipedia lists Weibull(lambda,k), with k defined as the shape parameter. Is the acceleration rate = p = k = tau?
  2. Do the linear predictors affect lambda or tau? It seems like lambda makes more sense, with the acceleration rate tau assumed constant across stocks.
  3. In code line 471 it appears that l$MCMC is lambda and tau$MCMC is tau, so this interpretation seems internally consistent. But still, maybe it's easiest to work through whether this projection is what I was thinking (or otherwise optimal) by writing up the process of calculating it in plain English in the text?

Anyway, I would have thought this forecast would have to be done for each stock individually, i.e., conditioning on the biological and economic characteristics of that stock, plus its first year of exploitation and its censoring year. Is that what the function on lines 467-473 is doing? To me, it looks more like lines 467-473 are just conditioning on the unconditional posterior predictive for lambda and tau, without conditioning on the characteristics of each stock...?

Philipp-Neubauer commented 8 years ago

Jim -

my apologies for the lack of consistency - I picked up on it in a few places as I was trying to tidy up a bit after myself, but evidently missed a few.

So for the Weibull, the JAGS parametrisation is not the same as the Wikipedia one. But you are right in that p = k = tau; we should change p to tau wherever it appears in the MS. The inconsistency comes from my own confusion (not remembering what the parameters were called last time I worked on the project), so I just put in a placeholder while writing and forgot to fix it.

The linear predictor is for the scale parameter lambda, hopefully that's consistent throughout.
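
For reference, here is the relationship as I understand it (worth double-checking against the JAGS manual):

% JAGS: T ~ dweib(tau, lambda)
f(t) = \tau \lambda t^{\tau - 1} e^{-\lambda t^{\tau}}, \qquad S(t) = e^{-\lambda t^{\tau}}
% Wikipedia: shape k, scale \sigma
f(t) = \frac{k}{\sigma} \left(\frac{t}{\sigma}\right)^{k-1} e^{-(t/\sigma)^{k}}
% hence k = \tau (= p in Fig. 1) and \sigma = \lambda^{-1/\tau}, so that
% P(assessed by t) = 1 - S(t) = 1 - e^{-\lambda t^{\tau}}, as used in the projections.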

For the code, I use a few patterns throughout, so perhaps this summary will make it easier to read:

I'll try to write the projection method up in plain English in the MS to clarify. Am I right to assume that you mean the projection in your point 3, rather than the whole project?

Hope this helps, sorry again for the confusion! Phil

James-Thorson commented 8 years ago

Cool, cool. So I think we're on the same page. And yes, point 3 was just about the projection; I think the rest of the model description is complete enough that I can understand it and start chipping in clarifications where appropriate.

jim

mcmelnychuk commented 8 years ago

A quick question that will surely reveal my ignorance: if I don't use LaTeX, would I be better off looking at the results.pdf outputs rather than trying to run snippets of code from the results.Rnw extracts? I've changed RStudio Global Options > Sweave to weave using knitr, but after that I don't see how running the Rnw files in RStudio will get me to seeing the outputs, etc.

A few unrelated questions/comments about some figures in the most recent results.pdf (which are looking good):

Mike

James-Thorson commented 8 years ago

@mcmelnychuk I'm reading through the definition of assessments, and see that it has no criterion that would exclude stock-reduction analyses (e.g., DCAC, DBSRA, CMSY or Catch-MSY, etc.). However, I don't think we included these. Is it fair to expand the definition to clarify that stock assessments needed to be fitted to biological data to estimate population scale (e.g., an index of abundance or compositional data that allows changes in biomass to be inferred)?

mcmelnychuk commented 8 years ago

hi Jim,

Yes, that's an improvement to the definition, and is consistent with what we did.

To double-check, I took a quick look at the comments, found a few references to stock reduction analyses, and looked into those. They were occasionally used in combination with other methods for the first assessment, but after a quick check of some archived assessments, I think our dataset as it stands is at least consistent.

Mike

Philipp-Neubauer commented 8 years ago

@mcmelnychuk

For the .Rnw, you can either click "Compile PDF" to compile the PDF yourself from the most recent changes, or you can run the R chunks one by one (e.g., by going to the chunk you want to see and clicking "Run all chunks above" (Ctrl-Alt-P for me) in the Run menu at the top right of your editor panel) to produce the plots in your R graphics device. If you want to tweak things, this is usually the better way, since you don't have to wait for the whole doc to compile every time you make a change. Once you're happy with your edit, you can re-compile.
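
If you only want to see the R output without building the PDF at all, another option (just a suggestion, not something wired into the repo) is to extract the chunks into a plain R script and run that:

knitr::purl("results.Rnw", output = "results_chunks.R")  # pull out the R chunks
source("results_chunks.R", echo = TRUE)                  # run them in one go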

I like the idea of the "assessed proportion of catch" panel for figure two. Will try and get that in tomorrow.

is the "bathy" group ok to leave in as a random effect level given there are few observations with that level?

That shouldn't cause trouble as long as the over-all variance for habitat is well informed, I'd say. Perhaps a column for n for each effect level (for all random effects) listed in the appendix table would be good though.

Fig. 9 shows 3 of the 12 classes that were shown in Fig. 4. Do the other classes not have multiple orders within them, thus they're not shown in Fig. 9?

Yep.

I'll deal with typos and colors tomorrow, thanks for pointing those out. And agreed that the colors are still sub-optimal. Will try to find a better color scheme to go with.

And yes, agreed that @James-Thorson's amendment of the assessment definition seems like the right one to go with.

Philipp-Neubauer commented 8 years ago

Pushed a third panel for Figure 2 in 68c70fcc7158ec584713eebb53361bf2debaa545; quite impressive how, in terms of catch, Alaska has almost 100% coverage!

James-Thorson commented 8 years ago

Phil,

If you're willing to do another finicky change, I think Fig. 8 would look better as a 4-panel figure with one region per panel (this could be done without color to save money, and also the current version is hard to read).

Also Phil, could you:

  1. add a new section on the title page with our prioritized list of target journals? (I forget what we'd said)
  2. add a section for the abstract, which we could start drafting as a way to decide on the "most important points"?

Jim

Philipp-Neubauer commented 8 years ago

Have added the above, as well as a very rough abstract. It's just a listing of what I thought stood out so far... but I may have missed a fair bit, as I haven't really taken a step back to look at it in detail yet.

James-Thorson commented 8 years ago

Hi all, just edited the abstract, although maybe the distinctions there are a bit too fine now. Also added some journal suggestions.

Phil -- do you want to keep adding Results text, or do you want me to? I'm happy to do any writing, just give me the word.

Philipp-Neubauer commented 8 years ago

Hi Jim;

perhaps you can draft some results if you have time; I will spend some time on results later today or tomorrow, too. Hopefully by next week we can have a discussion and with a bit of polish send it off?

Will have a look at the abstract and journal suggestions.

Phil

James-Thorson commented 8 years ago

I'll need to be pretty careful with internal review with this one, so it might take a while prior to submission after we have a rough draft :(. But yes, a rough draft next week would be a great goal!

Philipp-Neubauer commented 8 years ago

Righto; understood.

James-Thorson commented 8 years ago

Phil,

I'm adding some results text now. However, I see that we have a lot of figures, and suggest:

  1. Eliminating Fig. 5 (Fig. 8 is a more interesting summary of the survival function, and Fig. 6 has most of the same info)
  2. Changing the lower-right panel of Fig. 6 to taxonomic order, so that all info in Fig. 5 is now in Fig. 6 (the existing lower-right panel of Fig. 6 is also in Table 1).

Any objections or alternative ways of eliminating redundancies?

mcmelnychuk commented 8 years ago

hi Phil and Jim,

I'll go through the text/figures in detail today. Any objections if I cut & paste the text into Word and then provide comments with Track Changes? That may seem archaic, but that way you can see what the changes/comments are more easily and choose to make changes based on them or else ignore them. Or if it's best to just modify results.Rnw directly, just say so.

Comparing Figs. 5 and 8, I noticed that they don't align and wondered if this indicates something is up. In Fig. 5 (region panel), all regions show P(assessed) of around 0.1-0.15 in the final year. I would have expected projections to begin around these values, but instead they begin around 0.25-0.55 in Figure 8.

If we get rid of the lower right panel of Fig. 6 as Jim suggests, we could add the info about median & 95% intervals of the posterior distributions to an additional column in Table 1. On the other hand, leaving that panel as part of Fig 6 emphasizes that the price effect (and even the maximum landings effect) were stronger than any of the categorical effects.

We might consider moving Fig. 4 to the Appendix.

Mike

Philipp-Neubauer commented 8 years ago

Agreed;

Mike, happy to go through a word doc and port changes to the .Rnw.

Yes, Figure 4 should be in the appendix. Similarly, figure 5 could be put into the appendix instead of eliminated.

Mike - your question about Fig. 5 suggests that the caption needs to be clearer: these are marginal effects, i.e., at the mean of the other variables, so we wouldn't expect the actual proportion assessed to be the same as in these projections. Also, the projection is over an arbitrary 50-year span, not until the final year in the analysis.

Agreed with adding info about the posterior to the table if we get rid of the last panel in Fig. 6. But I also like the idea of having all the info about the main effects visualized in one figure. Plus, the taxonomic order effect is also emphasized in Fig. 7, so we might not need to have it in Fig. 6 (unless we move Fig. 7 as well?)...

Phil

mcmelnychuk commented 8 years ago

Sounds good, I'll send some Track Change comments/suggestions later today, working off Jim's Rnw version from this morning.

I think the caption is fine; I had misinterpreted it, thinking that time = 50 yr would line up with the present, but I realize my error now.

My two cents would be to move Figs 4 & 5 to the Appendix and keep Figs 6 and 7 (as they currently are) in the main text, but I'm happy to go with other possibilities.

Mike

James-Thorson commented 8 years ago

Hi guys,

I added and edited a bunch of the discussion, and IMO we would have a good rough draft if someone were willing to draft a paragraph discussing the taxonomic levels (groundsharks, flatfishes, Scorpaenids) that had relatively high assessment probability after controlling for price and landings (maybe re: the conservation importance of Carcharhinids and Scorpaenids).

  1. Could you each please look it over before I try adding any more text to see if you're OK with the direction, or if you want to add other major points?
  2. Could one of you volunteer to write that remaining paragraph with the necessary references? (I don't know good refs to suggest, which is presumably the harder part.)

Anyway, IMO we're close to a rough draft, yay!

Philipp-Neubauer commented 8 years ago

Hey Jim -

that's exciting! I'm happy to read through later today and comment/add. But I'm not sure that I'm best placed to comment on rockfish or groundshark conservation issues; I only know of them anecdotally. That said, if you guys are in the same boat, I'm happy to read into it to see what discussion points I can find.

Cheerio Phil

mcmelnychuk commented 8 years ago

I'm in the same boat, but also happy to dig into some papers to check (I should be doing that more anyway for these stocks!).

I'll check over the current Discussion in the next couple days.

I don't yet have an estimate of when Nicole will get through checking the list for other possible assessments - I'm hoping by the end of this week but it could be next week instead.

Mike

James-Thorson commented 8 years ago

OK, makes sense. I've gone ahead and added a sentence and references explaining Scorpaenids. I don't have any special knowledge of groundsharks or flatfishes, so those might require a bit more sleuthing if anyone is willing to take the lead?

mcmelnychuk commented 8 years ago

no problem, I can look into those in the next couple days.

Mike

Philipp-Neubauer commented 8 years ago

Hi there;

have just read through the discussion - lots of good points, Jim. I've also added the citations/bib to the manuscript. To make the citations work, in RStudio you'll need to go to the "Build" menu -> Configure Build Tools -> choose "Makefile" and select the project directory. From then on, when you want to build after adding writing etc., do "Build All" (Shift-Ctrl-B). Outside of RStudio, open a command window, cd to the FirstAssessment directory and type 'make'... hope this works.

I agree that we need a bit more about rockfish and groundsharks. I think the interesting angle here is conservation vs. economics. We could add something more general: we can only capture the conservation-status driver in the taxonomic component, since it would be hard to define some kind of surrogate for it for stocks without an assessment. This points to a problem in prioritizing assessments: conservation status only really factors in once we have some evidence that things are probably going badly for a stock. Thus, valuable stocks are potentially well managed early in their exploitation history, whereas small stocks probably only get that level of attention when there is an indication that things are heading for disaster. This could have unforeseen ecological consequences if the ecological importance of such species is high relative to the economic value from fishing (e.g., bycatch of benthic inverts in trawl fisheries).

Also, other bycatch species (turtles, mammals) are probably assessed/managed quantitatively, but won't figure in our DB, since they would probably have a risk assessment rather than a stock assessment...

Happy to add something about this if you guys feel it makes sense...

Phil

mcmelnychuk commented 8 years ago

hi Phil,

I'm getting a couple of errors after trying to compile the PDF... any ideas?

In the RStudio Build window:

'make' is not recognized as an internal or external command, operable program or batch file. Exited with status 1.

In the RStudio Compile PDF window:

output file: results.tex

[1] "results.tex" Running xelatex.exe on results.tex...failed

Issues: 1 warning

Mike


Philipp-Neubauer commented 8 years ago

OK - the RStudio compile says 'failed' for me as well, but the output should actually be mostly fine, except that the references won't be resolved in the text. If that's no biggie for you, you can just look at that version.

I forgot that make isn't native on Windows (it is on Mac and Unix systems). To install make and get the bibliography to resolve, you'll need to follow the instructions here:

http://stat545.com/automation02_windows.html

Sorry about the hassle!

Phil


James-Thorson commented 8 years ago

I probably won't try to compile with the bibliography included, Phil - perhaps you could just compile it once we're close to done?

FYI -- I also just started a Discussion paragraph on the potential use of our model for "propensity score matching" in assessment meta-analyses. Feel free to jump in with edits -- very rough!
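To give a flavour of the propensity-score idea (very rough; stock_data and the column names are placeholders, and MatchIt is just one possible package, not necessarily what we'd end up using):

```r
## Rough sketch: match assessed and never-assessed stocks on the same
## predictors our model uses, before comparing outcomes in a meta-analysis.
library(MatchIt)

m <- matchit(assessed ~ log_landings + log_price + taxon,
             data = stock_data, method = "nearest")
summary(m)                # balance diagnostics for the matched sample
matched <- match.data(m)  # matched subset, with weights attached
```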

mcmelnychuk commented 8 years ago

That paragraph is looking good, Jim. Just going over the Discussion now with edits, and will post today.

Before we go too far down the rabbit hole of explaining the high assessment probabilities for certain taxa like cephalopods, groundsharks, and rockfishes, I'd like to try creating a 2-d surface plot just to see what it looks like (x = maximum landings, y = price, z = assessment probability), with either symbols or text labels overlaid to indicate where some of the taxa fall on this surface. I'm wondering whether the strong taxonomic effects observed for those few individual taxa might simply be intercept offsets that balance out the stronger influences of landings and price across taxa, i.e., maybe those taxa have high predicted probabilities of assessment simply because they have fairly low max landings and/or low prices, which predict a lower-than-actual probability of assessment given the steep slopes for landings and price. The opposite may be true for bivalves, which had relatively low marginal effects on assessment probability, and which I'm guessing have relatively high landings and prices.

Phil, would it be easy to extract the predicted assessment probabilities for each stock? Basing them on the most recent year probably makes sense, and including both assessed and non-assessed stocks. A CSV file or anything like that would be fine; I can just make a quick-and-dirty plot in Excel.

Mike
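In case it helps, roughly what I have in mind is below (all object and column names are made up, and the prediction step assumes a glm-style fit purely for illustration - the actual extraction will depend on how the Weibull output is stored):

```r
## Sketch only: export per-stock predicted assessment probabilities and
## take a quick look at the landings x price surface.
library(ggplot2)

stock_covariates$pred_prob <- predict(fit, newdata = stock_covariates,
                                      type = "response")
write.csv(stock_covariates, "predicted_assessment_probs.csv",
          row.names = FALSE)

ggplot(stock_covariates,
       aes(x = log_max_landings, y = log_price, colour = pred_prob)) +
  geom_point() +
  geom_text(aes(label = taxon), size = 2, vjust = -1) +
  labs(colour = "P(assessed)")
```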


Philipp-Neubauer commented 8 years ago

Mike -

> I'm wondering whether the strong taxonomic effects observed for those few individual taxa might simply be intercept offsets that balance out the stronger influences of landings and price across taxa, i.e., maybe those taxa have high predicted probabilities of assessment simply because they have fairly low max landings and/or low prices, which predict a lower-than-actual probability of assessment given the steep slopes for landings and price.

Yes - that's exactly what's happening, since the only influential terms in the model are landings and price. The taxonomic effects balance the assessment probabilities for individual taxa against the overall trend set by the continuous covariates (and by higher taxonomic groupings). I think we can say that without an additional plot (since there's not really much else going on in the model)? I can make a CSV for you tomorrow if you still want to do that plot.

So the question is why these taxa defy the general trend - in the case of Rockfish and Groundsharks it's probably conservation concerns; not so sure about bivalves...

Please correct me if I misunderstood your question/request....


Phil

mcmelnychuk commented 8 years ago

What I was thinking was that, because we consider only additive effects and no interactions, a given taxonomic effect could be interpreted differently depending on whether the taxon's landings and price values are more or less spread over the ranges of landings and price, or are clustered at one of the extremes of max landings.

For example, if groundshark stocks have a variety of max landings values that spans much of the range of max landings across all stocks, then a high coefficient for groundsharks would apply (or be robust) across that range and one would conclude they have a higher probability of assessment after controlling for landings etc.

If instead landings of all groundshark stocks were very low, then the simple linear slope for max landings estimated across all stocks might behave less well at the extremes. You could still argue the same interpretation of the coefficient under assumptions of linearity and additive terms, but in cases of clustering at the extremes you could instead argue that the actual influence of max landings at the extremes is not necessarily fully captured by the linear and additive slope modelled.

I may be off track with this, or it might be getting too far away from the simple, clear patterns we're showing. Just something that crossed my mind, but I'm happy to abandon it!

Mike
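A toy example of the kind of thing I mean (plain logistic regression with invented numbers - not our actual model): when a taxon's stocks sit only at the low end of the landings range, its coefficient is estimated entirely within that narrow range, so carrying it across the full range leans on the additive, linear specification.

```r
## Toy illustration only - all numbers invented.
set.seed(1)
n_other <- 450; n_shark <- 50
taxon <- factor(c(rep("other", n_other), rep("groundshark", n_shark)),
                levels = c("other", "groundshark"))
# "other" stocks span the landings range; "groundshark" stocks sit low
log_landings <- c(runif(n_other, 0, 6), runif(n_shark, 0, 1))
# true process: probability rises with landings, plus a groundshark bump
eta <- -3 + 0.8 * log_landings + 2.5 * (taxon == "groundshark")
assessed <- rbinom(n_other + n_shark, 1, plogis(eta))

fit_toy <- glm(assessed ~ log_landings + taxon, family = binomial)
coef(summary(fit_toy))
# The taxon coefficient is informed only by stocks with log_landings < 1,
# so applying it at high landings rests on the linearity/additivity assumption.
```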


James-Thorson commented 8 years ago

I'm not sure I'm following the concern. I agree that a regression model is not very useful for making predictions outside the observed range of a predictor variable. So given the small range of landings available for groundsharks when training the model, I'm not sure it's useful to predict what would happen if we had a groundshark with large landings.

However, I still think we can interpret results as saying that, given the low landings and price of groundsharks, they have relatively high assessment probability relative to other stocks with similarly low landings and price.


mcmelnychuk commented 8 years ago

I think you guys are right... the taxonomic coefficients are probably robust and simple enough to interpret even if the assumed linear slope with landings were to (hypothetically) break down at the extremes. Sorry for the monkey wrench - consider it abandoned!

Mike


mcmelnychuk commented 8 years ago

Sorry, I forgot about this suggestion in your second paragraph when adding to the Discussion yesterday. I agree that those ideas would be good to add. We could mention in the discussion that perceived abundance/status may be a factor affecting when an assessment is first conducted, but that (obviously) abundance is not actually known before that assessment is done.

I can't remember what we decided, but was there any possibility of (or advantage to) allowing model parameters to vary pre- and post-1996? Would that let us indirectly get at questions of whether price matters more earlier or later in a stock's exploitation history?

Mike
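For example, something along these lines (just a sketch with made-up names - stock_year, assessed, log_landings, log_price, taxon - and a binomial GLM as a stand-in for the real time-to-assessment model):

```r
## Sketch only: let the landings and price slopes differ before/after 1996.
stock_year$era <- factor(ifelse(stock_year$year < 1996, "pre1996", "post1996"),
                         levels = c("pre1996", "post1996"))

fit_era <- glm(assessed ~ (log_landings + log_price) * era + taxon,
               family = binomial, data = stock_year)
summary(fit_era)  # the era interaction terms show how the slopes shift
```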
