How can I write the inner product between a tensor and locations on the grid?
I have in my simulation code a function $$U(x) = e^{- C \cdot x},$$ where $C \in \mathbb{R}^{k \times d}$ and $d$ is the dimensionality of the grid/state space.
So in my code I have the following:
C = math.random_normal(batch(neurons=3), channel(vector=['x','y']))

def U_f(xs):
    # here I want to rewrite this as a dot product:
    r = lambda xx: C.vector['x'] * xx.vector['x'] + C.vector['y'] * xx.vector['y']
    U = math.exp(-r(xs))
    return U

DOMAIN = dict(x=nx, y=ny, bounds=Box(x=(low_x, up_x), y=(low_y, up_y)))
Ut = CenteredGrid(U_f, extrapolation.ZERO, **DOMAIN)
However, I would like to express the dot product $C \cdot x$ in a dimension-independent way, so that I don't have to alter my code when I change the dimensionality of the grid. Do you have any clue how to do this?
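For reference, here is a minimal plain-NumPy sketch of the dimension-independent contraction I am after (shapes and the `einsum` subscripts are my own illustration, not taken from PhiFlow; the `phi.math` version would instead contract the named `vector` dimension):

```python
import numpy as np

# Hypothetical shapes: C is (k, d); xs holds grid locations with a
# trailing axis of size d, e.g. shape (nx, ny, d) for a 2-D grid.
rng = np.random.default_rng(0)
k, d = 3, 2
C = rng.normal(size=(k, d))

def U_f(xs):
    # Contract the trailing spatial axis of xs with the trailing axis
    # of C. einsum works for any d, so changing the grid dimensionality
    # requires no code change here.
    r = np.einsum('kd,...d->...k', C, xs)
    return np.exp(-r)

xs = rng.normal(size=(4, 5, d))  # e.g. a 4x5 grid of 2-D locations
U = U_f(xs)                      # shape (4, 5, 3): one value per grid
                                 # point per row of C
```

The same contraction could equally be written `xs @ C.T`; the point is that neither form names the individual `'x'`/`'y'` components.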