There needs to be a way to specify more interesting substrate network architectures. I'm thinking once again of this paper, https://arxiv.org/pdf/1312.5355.pdf, which recreates the LeNet architecture using HyperNEAT. That architecture has a different number of convolutional "substrates" (these correspond to the feature detectors in a CNN) at each layer, each of a different size. The paper specifies the architecture as a list of 3-tuples; something similar could be used here.
If I implemented such an approach, a class containing the 3-tuple list would need to be defined, and specifying it would override the use of HNProcessDepth and HNProcessWidth.
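A minimal sketch of what such a class might look like, assuming each 3-tuple is (substrate count, width, height) per layer. The names `SubstrateArchitecture` and `nodes_in_layer`, and the tuple ordering, are illustrative assumptions, not part of any existing API:

```python
class SubstrateArchitecture:
    """Hypothetical spec for a layered substrate as a list of
    (count, width, height) 3-tuples, one tuple per layer."""

    def __init__(self, layers):
        # Validate each layer spec up front.
        for layer in layers:
            count, width, height = layer
            if count < 1 or width < 1 or height < 1:
                raise ValueError(f"invalid layer spec: {layer}")
        self.layers = list(layers)

    @property
    def depth(self):
        # Would take the place of a single HNProcessDepth value.
        return len(self.layers)

    def nodes_in_layer(self, i):
        # Total nodes across all substrates in layer i.
        count, width, height = self.layers[i]
        return count * width * height


# A LeNet-like layout: one input plane, then 6 and 16 feature maps,
# then a 10-unit output layer (sizes are illustrative).
arch = SubstrateArchitecture([(1, 32, 32), (6, 14, 14), (16, 5, 5), (1, 10, 1)])
print(arch.depth)              # 4
print(arch.nodes_in_layer(1))  # 6 * 14 * 14 = 1176
```

With a structure like this, depth and per-layer width fall out of the list itself, which is why it would supersede the two scalar settings.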