nimble-dev / nimble

The base NIMBLE package for R
http://R-nimble.org
BSD 3-Clause "New" or "Revised" License
155 stars, 22 forks

MCEM issues #127

Closed (paciorek closed this 8 years ago)

paciorek commented 8 years ago

Two things:

1) optim doesn't like -Inf as an objective function value; probably just replace -Inf in the MCEM cCalc_E_llk$run() with -1e307.

2) For this example, the optimizer doesn't move away from the starting point. It works if Nelder-Mead is used instead of BFGS. Need to investigate.

```r
consts <- list(G = 2, N = 16,
               n = matrix(c(13, 12, 12, 11, 9, 10, 9, 9, 8, 11, 8, 10, 13, 10, 12, 9,
                            10, 9, 10, 5, 9, 9, 13, 7, 5, 10, 7, 6, 10, 10, 10, 7),
                          nrow = 2))
data <- list(r = matrix(c(13, 12, 12, 11, 9, 10, 9, 9, 8, 10, 8, 9, 12, 9, 11, 8,
                          9, 8, 9, 4, 8, 7, 11, 4, 4, 5, 5, 3, 7, 3, 7, 0),
                        nrow = 2))
inits <- list(a = c(2, 2), b = c(2, 2))

codeReparam <- nimbleCode({
  for (i in 1:G) {
    for (j in 1:N) {
      r[i, j] ~ dbin(p[i, j], n[i, j])
      p[i, j] ~ dbeta(mean = mu[i], sd = sigma[i])
    }
    mu[i] ~ dunif(0, 1)
    sigma[i] ~ dunif(0, 1)
  }
})
inits <- list(mu = rep(.5, 2), sigma = rep(.1, 2))
modelReparam <- nimbleModel(codeReparam, constants = consts, data = data, inits = inits)
box <- list(list('mu', c(0, 1)), list('sigma', c(0, .5)))

modelReparam$simulate('p')
mcemReparam <- buildMCEM(modelReparam, latentNodes = 'p', burnIn = 500, boxConstraints = box)
outReparam <- mcemReparam(maxit = 30)
outReparam
```
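The fix proposed in point 1 amounts to wrapping the objective so optim never sees a non-finite value. A minimal sketch (`safeObjective` and `logLik` are illustrative names, not NIMBLE functions):

```r
## Illustrative sketch: clamp NaN / -Inf objective values to a large
## finite negative number so optim's L-BFGS-B doesn't quit immediately.
## `safeObjective` is a hypothetical helper, not part of NIMBLE.
safeObjective <- function(f) {
  function(par) {
    val <- f(par)
    if (is.nan(val) || val == -Inf) val <- -1e307
    val
  }
}

## Example: a log-likelihood that is -Inf outside (0, 1)
logLik <- function(p) if (p > 0 && p < 1) log(p * (1 - p)) else -Inf
safeObjective(logLik)(1.5)  # -1e307 instead of -Inf
```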

pistacliffcho commented 8 years ago

Hi Chris,

I’ll chime in a little here, being the responsible party, but it might be a little late…

First of all, I’m not happy with R’s implementation of the L-BFGS algorithm. In particular, it quits immediately whenever the target function evaluates to -Inf, which seems very foolish. This is especially problematic for L-BFGS with box constraints, because the default behavior for some reason appears to evaluate the function at all corners of the box; I simply cannot fathom what went into that design decision. Since the target function is often -Inf on the boundary, standard BFGS (without box constraints) is often more likely to succeed than the box-constrained optimizer; if the target function is undefined at the boundary, the box-constrained algorithm will fail 100% of the time. Without box constraints, though, if you have a good initial value and the search never wanders outside the boundary, the unconstrained BFGS algorithm should work.

In general, R’s Nelder-Mead algorithm is much more stable; it tolerates evaluations of the target function that are -Inf during the search. It can be very problematic if the solution is actually on the boundary, though.

To deal with this issue in R’s L-BFGS implementation, I actually “scoot” the box constraints in by epsilon (I think I set the default to 10^-5 or so). That way, when optim tries to evaluate the function at the corners, it should get a finite value.
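The "scooting" described here can be sketched as shrinking each box inward by epsilon before handing it to optim, in the list(name, c(lower, upper)) format from the example above (`scootBox` is an illustrative name, not the actual buildMCEM internals):

```r
## Illustrative sketch of "scooting" box constraints inward by epsilon
## so the objective is finite at the corners optim evaluates; not the
## actual buildMCEM code.
scootBox <- function(box, eps = 1e-5) {
  lapply(box, function(b)
    list(b[[1]], c(b[[2]][1] + eps, b[[2]][2] - eps)))
}

box <- list(list('mu', c(0, 1)), list('sigma', c(0, .5)))
scootBox(box)[[1]][[2]]  # c(1e-05, 1 - 1e-05)
```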

That being said, I’m actually confused about what’s happening in this example. As I just mentioned, the box constraints should not be an issue for mu and sigma (and I even tried scooting them in further, to no avail). Also, when I tried debugging the R version, some of the functions were not working correctly (although perhaps those were just issues on the R side, since it did compile, which was surprising given the R version’s errors). I can dig a little deeper on that.

If I had more time, I’d have half a mind to improve R’s optim functions…

-Cliff


pistacliffcho commented 8 years ago

Oh I see what is wrong with this example. It’s a degenerate posterior distribution.

In particular, if X ~ beta(a, b), then the density at X = 1 diverges when b < 1. This is exactly what is happening in this model (although it’s much harder to see because of the reparameterization). Since r[1,3] / n[1,3] = 1, we can approach infinite density as p -> 1 by letting b -> 0.
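The divergence is easy to check numerically with base R's dbeta in the standard shape parameterization:

```r
## As x -> 1 with shape2 = b < 1, the beta density ~ (1 - x)^(b - 1)
## diverges, so the complete-data likelihood can be driven without
## bound by pushing p -> 1 and b -> 0.
dbeta(0.9,      shape1 = 2, shape2 = 0.5)  # finite
dbeta(0.999999, shape1 = 2, shape2 = 0.5)  # very large
dbeta(1,        shape1 = 2, shape2 = 0.5)  # Inf
```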


paciorek commented 8 years ago

Cliff, thanks, your comments about L-BFGS-B mirror what I was seeing. My thought for buildMCEM is that we set NaN and -Inf to -1e307 to deal with the -Inf issue. What do you think?

In terms of degeneracy, I don't think that's what is going on. I agree that the MLE for p[i] would have an issue if r[i]/n[i] = 1, but for the marginal likelihood we shouldn't get an (alpha, beta) or (mu, sigma) that forces all p[i] = 1, because some of the r[i]/n[i] < 1. When I maximize the marginal likelihood using the beta-binomial, I get mu = .89 and sd = .04 for the first group. So I'm still not sure what is going on with the MCEM...
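The beta-binomial check described here can be sketched directly; group-1 data below are transcribed from the example (row 1 of the r and n matrices), and `betaBinNegLogLik` is an illustrative helper, not code from the issue:

```r
## Illustrative sketch: maximize the marginal (beta-binomial) likelihood
## for group 1 on the log(a), log(b) scale to keep a, b > 0.
## Not code from the issue; the helper name is hypothetical.
r1 <- c(13, 12, 9, 9, 8, 8, 12, 11, 9, 9, 8, 11, 4, 5, 7, 7)
n1 <- c(13, 12, 9, 9, 8, 8, 13, 12, 10, 10, 9, 13, 5, 7, 10, 10)

betaBinNegLogLik <- function(logab) {
  a <- exp(logab[1]); b <- exp(logab[2])
  ## beta-binomial log pmf: log C(n, r) + log B(r + a, n - r + b) - log B(a, b)
  -sum(lchoose(n1, r1) + lbeta(r1 + a, n1 - r1 + b) - lbeta(a, b))
}

fit <- optim(c(0, 0), betaBinNegLogLik, method = "Nelder-Mead")
a <- exp(fit$par[1]); b <- exp(fit$par[2])
c(mu = a / (a + b),
  sd = sqrt(a * b / ((a + b)^2 * (a + b + 1))))
## reportedly around mu = .89, sd = .04 for this group
```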

On Fri, Apr 1, 2016 at 12:48 PM, Clifford Anderson-Bergman < notifications@github.com> wrote:

Oh I see what is wrong with this example. It’s a degenerate posterior distribution.

In particular, if X ~ beta(a, b), then the density at X = 1 is undefined if b < 1. This is exactly what is happening in this model (although it’s much harder to see due to the reparameterization). Since we have that r[1,3] / n[1,3] = 1, this means we can approach infinite density as p -> 1 by letting b -> 0.

On Apr 1, 2016, at 12:05 PM, Clifford Anderson-Bergman < pistacliffcho@gmail.com> wrote:

Hi Chris,

I’ll chime in a little here, being the responsible party, but it might be a little late…

First of all, I’m not happy with R’s implementation L-BFGS algorithm. In particular, it immediately quits whenever the target function evaluates to -Inf, which seems very foolish. This issue is especially problematic for the L-BFGS with box-constraints, as the default behavior for some reason appears to evaluate the function at all corners of the box. I simply cannot fathom what went into this decision for the optimizer. Because often the target function is -Inf on the boundary, this often means that the standard BFGS (without box-constraints) is more likely to be successful than the optimizer with box constraints; if the target function is undefined at the boundary, the box constraints algorithm will fail 100% of the time. However, without the box-constraints, if you have a good initial value and never wander outside the boundary, the unconstrained BFGS algorithm should work.

In general, R’s Nelder-Mead algorithm is much more stable; it allows for evaluations of the target function to be -Inf during the search. It can be very problematic if the solution is actually on the boundary though.

To deal with this issue of R’s L-BFGS implementation, I actually “scoot” the box constraints in by epsilon (I think I set the default to 10^-5 or something)? This way when optim tries to evaluate the function in the corners, it should get a finite value.

That being said, I’m actually confused what’s happening in this example. As I just mentioned, the box constraints should not be an issue for mu and sigma (and I even tried scooting it in even more to no avail). Also, when I tried debugging the R-version, some of the functions were not working correctly (although perhaps those were just issues with the R side, as it did compile, which was surprising given the R version’s errors). I can dig a little deeper on that.

If I had more time, I would have half a mind to improve R’s optim functions…

-Cliff

On Mar 30, 2016, at 10:04 AM, Christopher Paciorek < notifications@github.com mailto:notifications@github.com> wrote:

two things: 1) optim doesn't like -Inf as objective fun value; probably just replace -Inf in MCEM cCalc_E_llk$run() with -1e307.

2) for this example, it's not moving away from starting point. Works if use Nelder-Mead instead of BFGS. Need to investigate.

consts <- list(G = 2,N = 16, n = matrix(c(13, 12, 12, 11, 9, 10, 9, 9, 8, 11, 8, 10, 13, 10, 12, 9, 10, 9, 10, 5, 9, 9, 13, 7, 5, 10, 7, 6, 10, 10, 10, 7), nrow = 2)) data = list(r = matrix(c(13, 12, 12, 11, 9, 10, 9, 9, 8, 10, 8, 9, 12, 9, 11, 8, 9, 8, 9, 4, 8, 7, 11, 4, 4, 5, 5, 3, 7, 3, 7, 0), nrow = 2)) inits <- list( a = c(2, 2), b=c(2, 2) ) codeReparam <- nimbleCode({ for (i in 1:G) { for (j in 1:N) { r[i,j] ~ dbin(p[i,j], n[i,j]); p[i,j] ~ dbeta(mean = mu[i], sd = sigma[i]) } mu[i] ~ dunif(0, 1) sigma[i] ~ dunif(0, 1) } }) inits <- list( mu = rep(.5, 2), sigma = rep(.1, 2)) modelReparam <- nimbleModel(codeReparam, constants = consts, data = data, inits = inits) box = list( list('mu', c(0, 1)), list('sigma', c(0, .5)))

modelReparam$simulate('p') mcemReparam <- buildMCEM(modelReparam, latentNodes = 'p', burnIn = 500, boxConstraints = box) outReparam <- mcemReparam(maxit = 30) outReparam

— You are receiving this because you are subscribed to this thread. Reply to this email directly or view it on GitHub < https://github.com/nimble-dev/nimble/issues/127>

— You are receiving this because you authored the thread. Reply to this email directly or view it on GitHub https://github.com/nimble-dev/nimble/issues/127#issuecomment-204541111

paciorek commented 8 years ago

Also, Cliff, any reason I shouldn't include the option to use a different optimization method (e.g., Nelder-Mead) in buildMCEM()?


pistacliffcho commented 8 years ago

No good reason I can think of not to use Nelder-Mead. In fact, given that an analytic gradient is not available in the MCEM algorithm, it's that much more competitive an algorithm.


paciorek commented 8 years ago

OK, I've added the ability to change the optim method and to pass optim control arguments, as well as setting -Inf and NaN to -1e307 to avoid issues with -Inf in L-BFGS-B.