rohan843 / dnncase

GNU General Public License v3.0

Handling of complex learning architectures. #50

Open rohan843 opened 1 year ago

rohan843 commented 1 year ago

Deep learning seems to contain every algorithm imaginable! ChatGPT mentions the following kinds of slightly less conventional "neural networks":

  1. Capsule Networks (CapsNets)
  2. Neural Turing Machines (NTMs)
  3. Neural Cellular Automata
  4. Neural Architecture Search (NAS)
  5. Graph Neural Networks (GNNs)
  6. Reservoir Computing
  7. Echo State Networks (ESNs)
  8. Transformers (admittedly not so unconventional :))
  9. Radial Basis Function Networks (RBFNs)
  10. Neural Differential Equations (ODE-Nets)
  11. Self-Organizing Maps (SOMs)
  12. Extreme Learning Machines (ELMs)
  13. Holographic Neural Networks
  14. Kohonen Networks (Self-Organizing Feature Maps)
  15. Evolving Neural Networks (ENN)
  16. Fuzzy Neural Networks
  17. Counterpropagation Networks (CPNs)
  18. Retrograde Neural Networks (Retros)
  19. Hierarchical Temporal Memory (HTM)
  20. Neural Programmer-Interpreters (NPIs)

And then there are GANs, Autoencoders, Siamese Networks and more.

It is clear, then, that we can't create components tailored to each network. Instead, we should construct a system that lets the user write any kind of program they want, while adhering to a layer-based paradigm (at least as long as Keras adheres to it). This is not a trivial task, as it will span the entire development of DNNCASE (in other words, it will continue into next semester as well), but we will have to ensure that most, if not all, of the networks mentioned above can be implemented in DNNCASE (even if the user has to define custom artefacts).
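As a rough illustration of the idea, here is a minimal, hypothetical sketch of a layer-based paradigm in which a user-defined "custom artefact" (an RBF layer, item 9 from the list above) plugs into the same composition machinery as a built-in layer. The `Layer`, `Dense`, `RBF`, and `Sequential` names are assumptions for this sketch, not DNNCASE's or Keras's actual API:

```python
import numpy as np

class Layer:
    """Base abstraction: every component, built-in or custom, exposes call()."""
    def call(self, x):
        raise NotImplementedError

class Dense(Layer):
    """A conventional built-in layer: affine transform."""
    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((in_dim, out_dim)) * 0.1
        self.b = np.zeros(out_dim)

    def call(self, x):
        return x @ self.w + self.b

class RBF(Layer):
    """A user-defined custom artefact: radial basis function layer."""
    def __init__(self, centers, gamma=1.0):
        self.centers = np.asarray(centers, dtype=float)
        self.gamma = gamma

    def call(self, x):
        # Squared distance from each input row to each center,
        # passed through a Gaussian kernel.
        d2 = ((x[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-self.gamma * d2)

class Sequential(Layer):
    """Composition: any mix of built-in and custom layers."""
    def __init__(self, layers):
        self.layers = layers

    def call(self, x):
        for layer in self.layers:
            x = layer.call(x)
        return x

# An RBF network expressed purely through the layer paradigm.
model = Sequential([
    RBF(centers=[[0.0, 0.0], [1.0, 1.0]]),
    Dense(2, 1),
])
out = model.call(np.array([[0.5, 0.5]]))
print(out.shape)  # (1, 1)
```

The point is not this particular implementation but the design constraint it suggests: as long as a custom artefact satisfies the same interface as a built-in component, the rest of the system (composition, record-keeping, visualization) need not know it exists.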

This can be done by coming up with a logical design of DNNCASE's functionalities, checking whether anything is not implementable under it, and then modifying our designs to incorporate that case as well.

Note: This is not a very complex task. All we need to do is keep a proper record of which component of the system does what, and how the components interact with each other. We will look into this when working with artefacts as well.