Open goldingn opened 5 years ago
Base: 93.02% // Head: 93.29% // Increases project coverage by +0.27% :tada:

Coverage data is based on head (09525b6) compared to base (34c43c7). Patch coverage: 93.91% of modified lines in pull request are covered.

:umbrella: View full report at Codecov.
To do:
Looks very useful. Will check it out soon(ish).
Is there a way to pass greta variables as arguments to `matrix_function`? Ideally, I'd like to set this up with matrix elements as functions of parameters and predictors (e.g. survival = f(temperature), where f() is a function to be learned).
Yes, via the dots argument. The example in tests does something like this.
They need to be named and can't be lexically scoped in. There's also no checking yet to make sure you named them correctly and passed everything in, so you'll get an obscure tensorflow error if you do it wrong!
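A minimal sketch of what passing greta arrays via the dots might look like. This is untested and the argument names (`survival`, `fecundity`) and the exact `iterate_dynamic_matrix` signature are illustrative assumptions, not the released API:

```r
library(greta)
library(greta.dynamics)

# illustrative priors for the matrix elements
survival  <- beta(2, 2)
fecundity <- lognormal(0, 1)

# the matrix function receives the extra greta arrays as named arguments
make_matrix <- function(state, iter, survival, fecundity) {
  rbind(c(0,        fecundity),
        c(survival, 0))
}

# extra arguments must be passed by name through the dots --
# lexically scoping them into make_matrix won't work
states <- iterate_dynamic_matrix(make_matrix,
                                 initial_state = c(10, 0),
                                 niter = 50,
                                 survival = survival,
                                 fecundity = fecundity)
```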
Great. I will fiddle with this and see if I can break it.
Thanks! Shouldn't be too hard :)
Two more queries: does `iter` need to be passed through to `tf_matrix_function`? And is it used for the ODE solvers? Do those two queries answer each other? I thought `iter == niter`. But if it's just a counter, could I pass in a vector `x` and then explicitly call `x[iter]` in the `matrix_function`?
Yes exactly, iter is a counter, and it's there so you can do just that!
Though actually now that I think about it, that won't work 😬
I added a hack that maybe works?

```r
idx <- calculate(iter + 1)
x_to_use <- x[idx]
```

I'm pretty sure that hack will always pull out the same element of `x`.
We turn the R function into a TensorFlow function via some sorcery/code-torture that mocks up the inputs as fake greta arrays, then builds a TF graph for the function operations using those. Those dummy greta arrays are arbitrary data greta arrays full of 0s, so when you run `calculate`, it turns that into a scalar 0 and indexes using that.
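A plain base-R analogue of why the hack fails (no greta involved; names are illustrative): the mocked-up `iter` is a constant 0 at tracing time, so `calculate(iter + 1)` is evaluated once against that constant rather than once per iteration:

```r
# base-R analogue of the tracing step: `iter` is mocked up as a
# constant 0 when the R function is turned into a TF function
iter_dummy <- 0
idx <- iter_dummy + 1   # what calculate(iter + 1) sees: always 1

x <- c(10, 20, 30)
x[idx]                  # always x[1], whatever the real iteration number is
```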
Two possible solutions to this problem:

1. Change `iterate_dynamic_matrix()` so that, instead of the iteration number, the function is just passed a static greta array for these additional arguments.
2. `iterate_dynamic_matrix()` is passed a list of greta arrays, one for each (possible) iteration, and on the TF side we pull out the correct greta array at each iteration and pass it in.

1 will be quicker but a less nice interface. 2 is a better long-term solution, but will take more time.
If we implement 1 now, we will probably want to change it back when 2 is done. But that's not really a problem. I think I'm leaning towards option 1 so that I can implement some new models soon.
I've had a go at option 1, see https://github.com/jdyen/greta.dynamics/blob/iterable_matrix/R/iterate_dynamic_matrix.R.
This passes an argument called `iterables` to `iterate_dynamic_matrix`, which is passed as a static greta array and sliced inside the `body()` function of `tf$while_loop` (in `tf_iterate_dynamic_matrix`).

This works for a simple case (`iterables` is a numeric vector) but has not been tested in more complex cases. I will put some more work into it, but wanted to get your thoughts on a couple of things before I proceed:
- How should we handle cases where `iterables` is not wanted? I've currently added an `if (missing(iterables))` clause that sets `iterables = rep(1.0, niter)`, but this seems clunky. Is there a cleaner way to deal with this? Or should we make this version of `iterate_dynamic_matrix` a separate function, given it's a non-standard use case?
- Naming: I've used `iterables` for the full array of things to iterate over and `iterx` for the individual slices.
- Arguments: the matrix function now takes `state`, `iter`, and `iterx`, with `iterables` passed as the second argument to `iterate_dynamic_matrix`.
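If I've followed the design above correctly, usage would look something like this. A hedged sketch only: the priors, `ilogit` link, and matrix structure are illustrative, and the argument names come from this thread rather than a released API:

```r
library(greta)
library(greta.dynamics)

beta0 <- normal(0, 1)
beta1 <- normal(0, 1)
temperature <- as_data(rnorm(20))   # one predictor value per iteration

# the matrix function takes state, iter, and iterx (the slice of
# `iterables` for this iteration), so survival = f(temperature)
transition <- function(state, iter, iterx) {
  survival <- ilogit(beta0 + beta1 * iterx)
  rbind(c(0,        2),             # fixed fecundity, for illustration
        c(survival, 0))
}

states <- iterate_dynamic_matrix(transition,
                                 iterables = temperature,
                                 initial_state = c(10, 0),
                                 niter = 20)
```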
This might be of interest @jdyen: an alternative to `iterate_matrix` that lets you pass in a function to create the matrix at each iteration. The function is written in R, but the iterations are done in tensorflow, which should be much quicker than doing the matrix creation and matrix multiplication on the greta side. Not fully tested yet. I'd appreciate feedback!