I just briefly looked at your Architect code for Liquid State Machines... I don't see any mention of spiking neuron models anywhere in your code, which is the core basis of an LSM. Assuming you're using sigmoidal activations (or similar) with a "reservoir", what you've implemented is closer to an "Echo State Network". Both fall into the class of "Reservoir Computing"; the biggest distinction is that an LSM is a more biologically plausible version using spiking neurons. The other major thing to note is that there are no spatial conditions on the random connection initialization. In an LSM, the neurons sit in a 3D space and the probability of a connection depends on the distance between the neurons, which creates spatially local connections on average. This is another key component.
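For reference, here's a minimal sketch of what distance-dependent initialization looks like. All names and parameter values (`C`, `lam`, the grid layout) are illustrative, not taken from your code; the decaying-exponential form `P(i→j) = C · exp(−(d(i,j)/λ)²)` is the one commonly used in the LSM literature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from your code): C scales overall
# connectivity, lam controls how fast probability decays with distance.
n_neurons = 100
C, lam = 0.3, 2.0

# Place neurons on a 3D integer grid, as in Maass-style LSMs.
side = int(np.ceil(n_neurons ** (1 / 3)))
coords = np.array([(x, y, z)
                   for x in range(side)
                   for y in range(side)
                   for z in range(side)])[:n_neurons]

# Pairwise Euclidean distances between all neuron positions.
diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

# Connection probability decays with squared distance.
prob = C * np.exp(-(dist / lam) ** 2)
np.fill_diagonal(prob, 0.0)  # no self-connections

# Sample the random adjacency matrix; nearby neurons connect far
# more often than distant ones, giving spatially local structure.
adjacency = rng.random((n_neurons, n_neurons)) < prob
```

The net effect is that connectivity is still random, but the topology respects the 3D embedding instead of being uniformly random over all pairs.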
You should consider naming this something else entirely to reduce confusion. Maybe GatedReservoir?