cschwarz-stat-sfu-ca / BTSPAS

Bayesian Time Stratified Petersen Analysis System

u2copy to improve mixing could be negative. #18

Closed cschwarz-stat-sfu-ca closed 5 years ago

cschwarz-stat-sfu-ca commented 5 years ago

See email from Erik Suring (Erik.Suring@oregonstate.edu) dated 10 August.

When creating the u2copy variable to improve mixing, a spline was used to interpolate any missing values. This could produce negative interpolated values, which cause BTSPAS to crash.

The solution from SB is to interpolate on the log(u2+1) scale to avoid this problem.

Implemented in all routines where u2copy is used.
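
To make the fix concrete, here is a minimal sketch of the idea (illustrative only; it is not the exact change made inside BTSPAS, and the variable names and artificially missing values are mine). A spline fitted to the raw u2 counts can dip below zero near sharp changes, whereas interpolating log(u2+1) and back-transforming avoids those large negative imputations.

# Illustrative sketch only -- not the BTSPAS internals.
# u2 series from the data in the next comment, with two values
# artificially set to missing for the sake of the example.
u2   <- c(63, 54, 131, NA, 619, 1809, 2031, NA, 1189, 2067, 1615, 571, 511, 271)
week <- seq_along(u2)
miss <- is.na(u2)

# Naive approach: a spline on the raw scale can return negative imputations.
raw.imputed <- spline(week[!miss], u2[!miss], xout=week[miss])$y

# Fix: interpolate on the log(u2+1) scale, then back-transform.
log.imputed <- spline(week[!miss], log(u2[!miss] + 1), xout=week[miss])$y
u2.copy       <- u2
u2.copy[miss] <- exp(log.imputed) - 1   # back-transform; avoids the large negative values a raw-scale spline can produce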

cschwarz-stat-sfu-ca commented 5 years ago

Here is the raw data that caused the problem, along with the code.

# Error found by Erik in BTSPAS

library("BTSPAS")

indata.csv <- textConnection("Site , Species , SizeClass , TrapYear , w , u2 , n1 , m2 , sampfrac
1 , NF Nehalem Total , Coho , Smolt , 2012 , 10 , 63 , 23 , 2 , 0.428571429
2 , NF Nehalem Total , Coho , Smolt , 2012 , 12 , 54 , 26 , 4 , 0.428571429
3 , NF Nehalem Total , Coho , Smolt , 2012 , 13 , 131 , 74 , 12 , 0.428571429
4 , NF Nehalem Total , Coho , Smolt , 2012 , 14 , 43 , 25 , 5 , 0.571428571
5 , NF Nehalem Total , Coho , Smolt , 2012 , 15 , 619 , 167 , 55 , 1
6 , NF Nehalem Total , Coho , Smolt , 2012 , 16 , 1809 , 149 , 57 , 1
7 , NF Nehalem Total , Coho , Smolt , 2012 , 17 , 2031 , 125 , 42 , 0.857142857
8 , NF Nehalem Total , Coho , Smolt , 2012 , 18 , 1131 , 125 , 38 , 0.857142857
9 , NF Nehalem Total , Coho , Smolt , 2012 , 19 , 1189 , 175 , 71 , 1
10 , NF Nehalem Total , Coho , Smolt , 2012 , 20 , 2067 , 174 , 69 , 1
11 , NF Nehalem Total , Coho , Smolt , 2012 , 21 , 1615 , 175 , 81 , 1
12 , NF Nehalem Total , Coho , Smolt , 2012 , 22 , 571 , 175 , 76 , 1
13 , NF Nehalem Total , Coho , Smolt , 2012 , 23 , 511 , 175 , 95 , 1
14 , NF Nehalem Total , Coho , Smolt , 2012 , 24 , 271 , 175 , 93 , 1")

indata <- read.csv(indata.csv, header=TRUE, as.is=TRUE, strip.white=TRUE)
indata

# Create a directory to store the results
if(file.access("error-TSPDE")!=0){ dir.create("error-TSPDE", showWarnings=TRUE)}  # Test and then create the directory
setwd("error-TSPDE")

# Get the data. In many cases, this is stored in a *.csv file and read into the program
# using a read.csv() call. In this demo, the raw data is assigned directly as a vector.
#
# Indicator for the week.
demo.jweek <- indata$w

# Number of marked fish released in each week.
demo.n1 <- indata$n1

# Number of marked fish recaptured in the week of release. No marked fish
# are recaptured outside the week of release.
demo.m2 <- indata$m2

# Number of unmarked fish captured at the trap in each week.
demo.u2 <- indata$u2

# What fraction of the week was sampled?
demo.sampfrac <- indata$sampfrac

# After which weeks is the spline allowed to jump?
demo.jump.after <- NULL    # julian weeks after which jump occurs

# Which julian weeks have "bad" recapture values. These will be set to missing and estimated.
demo.bad.m2 <- c()    # list julian weeks with bad m2 values. This is used in the Trinity Example
demo.bad.u2 <- c()    # list julian weeks with bad u2 values. [This was arbitrary to demonstrate the feature.]
demo.bad.n1 <- c()    # list julian weeks with bad n1 values. [This was arbitrary to demonstrate the feature.]

# The prefix for the output files:
demo.prefix <- "demo-error-TSPDE"

# Title for the analysis
demo.title <- "NF Nehalem Coho Smolt "

cat("*** Starting ", demo.title, "\n\n")

# Make the call to fit the model and generate the output files
demo.error.tspde <- TimeStratPetersenDiagError_fit(
                      title=demo.title,
                      prefix=demo.prefix,
                      time=demo.jweek,
                      n1=demo.n1,
                      m2=demo.m2,
                      u2=demo.u2,
                      sampfrac=demo.sampfrac,
                      jump.after=demo.jump.after,
                      bad.n1=demo.bad.n1,
                      bad.m2=demo.bad.m2,
                      bad.u2=demo.bad.u2,
                      InitialSeed=890110,
                      debug=TRUE,    # this generates only 10,000 iterations of the MCMC chain for checking.
                      save.output.to.files=TRUE)

# Rename files that were created.
file.copy("data.txt",       paste(demo.prefix,".data.txt",sep=""),       overwrite=TRUE)
file.copy("CODAindex.txt",  paste(demo.prefix,".CODAindex.txt",sep=""),  overwrite=TRUE)
file.copy("CODAchain1.txt", paste(demo.prefix,".CODAchain1.txt",sep=""), overwrite=TRUE)
file.copy("CODAchain2.txt", paste(demo.prefix,".CODAchain2.txt",sep=""), overwrite=TRUE)
file.copy("CODAchain3.txt", paste(demo.prefix,".CODAchain3.txt",sep=""), overwrite=TRUE)
file.copy("inits1.txt",     paste(demo.prefix,".inits1.txt",sep=""),     overwrite=TRUE)
file.copy("inits2.txt",     paste(demo.prefix,".inits2.txt",sep=""),     overwrite=TRUE)
file.copy("inits3.txt",     paste(demo.prefix,".inits3.txt",sep=""),     overwrite=TRUE)

file.remove("data.txt")
file.remove("CODAindex.txt")
file.remove("CODAchain1.txt")
file.remove("CODAchain2.txt")
file.remove("CODAchain3.txt")
file.remove("inits1.txt")
file.remove("inits2.txt")
file.remove("inits3.txt")

# save the results in a data dump that can be read in later using the load() command
# (see the save/load sketch after this script).
# Contact Carl Schwarz (cschwarz@stat.sfu.ca) for details.

cat("\n\n\n ***** FILES and GRAPHS saved in \n ", getwd(), "\n\n\n")
print(dir())

# move up the directory
setwd("..")

cat("\n\n\n End of Demonstration \n\n\n")