arpanbiswas52 closed 8 months ago
The piecewise function is just a demonstration to show how this works even with some incorrect information (and it still worked!). We can use a different mean function as well.
On the prior distribution: the normalized values are converted back to real values, so the uniform distribution is defined over the real (non-normalized) parameter space [0, 15]. Please see below.
```python
from typing import Dict

import jax.numpy as jnp
from jax import jit

@jit
def mean_func(x: jnp.ndarray, params: Dict[str, float]) -> jnp.ndarray:
    x_data = x[:, 0]
    # jax.debug.print("x_norm: {}", x_data)
    # Rescale normalized inputs back to the real parameter space [0, 15]
    lb = 0
    ub = 15
    x_data = x_data * (ub - lb) + lb
    # jax.debug.print("x: {}", x_data)
    return jnp.piecewise(
        x_data, [x_data < params["t"], x_data >= params["t"]],
        [lambda x_data: params["a2"] * jnp.sin(x_data * params["a1"]),
         lambda x_data: params["b2"] * jnp.sin(x_data * params["b1"])])
```
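To illustrate the idea, here is a minimal standalone sketch of the rescaling plus `jnp.piecewise` branch selection, using hypothetical parameter values (the `t`, `a1`, `a2`, `b1`, `b2` numbers below are made up for demonstration, not taken from the actual model):

```python
from typing import Dict

import jax.numpy as jnp

# Hypothetical parameters for illustration only: changepoint t splits [0, 15]
params: Dict[str, float] = {"t": 7.5, "a1": 1.0, "a2": 2.0, "b1": 0.5, "b2": 3.0}

# Normalized inputs in [0, 1]; rescaling maps them to the real space [0, 15]
x_norm = jnp.array([0.2, 0.8])
x_real = x_norm * (15.0 - 0.0) + 0.0

# jnp.piecewise applies the first lambda where x < t and the second where x >= t
y = jnp.piecewise(
    x_real, [x_real < params["t"], x_real >= params["t"]],
    [lambda x: params["a2"] * jnp.sin(x * params["a1"]),
     lambda x: params["b2"] * jnp.sin(x * params["b1"])])
```

Here `x_real` becomes `[3.0, 12.0]`, so the first point uses the `a`-branch and the second the `b`-branch of the mean function.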
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (b7c2577) 95.59% compared to head (e2c8189) 95.62%.
Yes, definitely. I am advising Shakti (Sergei's postdoc) to apply cBO to the nanoindentation problem. We can use that notebook as a real-world example.
Added a notebook in the examples folder for the 1D version. The 2D version is larger than 25 MB, so it can't be uploaded. Please review. Thank you.