After some back-and-forth email with Phillip Verbancsics, I've learned that he encoded convolutional structure with HyperNEAT differently than I have. His coordinates are x, y, f, z:
x = x coordinate inside particular substrate
y = y coordinate inside particular substrate
f = feature: at a given layer (z value) the features are splayed out from left to right
z = actual layer/depth.
Also, in all dimensions, including f and z, he mapped from -1 to 1.
When connecting the neuron at (x1,y1,f1,z1) to the neuron at (x2,y2,f2,z2), the HyperNEAT network inputs include delta offsets, as follows: (x2, y2, f2, z2, x2 - x1, y2 - y1, f2 - f1, z2 - z1)
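The input ordering above can be sketched as a small helper. This is just an illustrative sketch of the scheme as described, not code from any existing implementation; the function name cppn_inputs is hypothetical.

```python
def cppn_inputs(src, tgt):
    """Build the CPPN input vector for a connection from src to tgt.

    src and tgt are (x, y, f, z) tuples, each coordinate in [-1, 1].
    Per the scheme described above, the inputs are the target's
    coordinates followed by the delta offsets (target minus source).
    """
    x1, y1, f1, z1 = src
    x2, y2, f2, z2 = tgt
    return (x2, y2, f2, z2, x2 - x1, y2 - y1, f2 - f1, z2 - z1)
```

A "convolutionalDeltas" parameter would then just switch the substrate wiring code over to querying the CPPN with this 8-tuple instead of the usual coordinate pairing.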
I should make a new parameter called "convolutionalDeltas" that sets up the connections in this way.