sirinath opened this issue 7 years ago
Sorry but I did not get your point. Could you paste some code or pseudocode?
```scala
object A {
  val a: Matrix = Matrix(3, 3)
  val b: Matrix = Matrix(3, 3)
  val c: Matrix = a * b
}

A.a.append(myVector)
```
The above is similar to https://github.com/lihaoyi/scala.rx, but applied to vectors that have a time dimension.
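For reference, here is a minimal sketch of the same dependency written directly against scala.rx (a plain `Vector` stands in for the `Matrix` type above, and the owner-context import follows scala.rx's quick-start, so it may differ between versions):

```scala
object ReactiveMatrixSketch extends App {
  import rx._
  import Ctx.Owner.Unsafe._

  val a = Var(Vector(1.0, 2.0, 3.0))
  val b = Var(Vector(4.0, 5.0, 6.0))

  // c is recomputed whenever a or b changes, like a spreadsheet cell.
  val c = Rx { (a() zip b()).map { case (x, y) => x * y } }

  // "Appending along the time dimension" becomes updating the Vars.
  a() = a.now :+ 7.0
  b() = b.now :+ 8.0
  println(c.now) // Vector(4.0, 10.0, 18.0, 56.0)
}
```

The point is that `c` stays current as `a` and `b` grow along the time dimension.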
```scala
object A {
  val a: Table = Table("Col A", "Col B")
  val b: Table = Table("Col A", "Col C")
  val c: Table = a * b
}

A.a.append(myRow)
A.a.latest("Col A")
```
The above is inspired by http://flix.github.io/.
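To make the intended semantics concrete, here is a hypothetical sketch, again on top of scala.rx; `Table`, `append` and `latest` are illustrative names I am proposing, not an existing API:

```scala
object ReactiveTableSketch extends App {
  import rx._
  import Ctx.Owner.Unsafe._

  // Hypothetical Table: rows accumulate along the time dimension.
  final class Table(val columns: String*) {
    val rows: Var[Vector[Map[String, Double]]] = Var(Vector.empty[Map[String, Double]])
    def append(row: Map[String, Double]): Unit = rows() = rows.now :+ row
    def latest(column: String): Option[Double] =
      rows.now.reverse.flatMap(_.get(column)).headOption
  }

  val a = new Table("Col A", "Col B")
  // A derived view that stays current as rows are appended to a.
  val sumOfColA = Rx { a.rows().flatMap(_.get("Col A")).sum }

  a.append(Map("Col A" -> 1.0, "Col B" -> 2.0))
  a.append(Map("Col A" -> 3.0))
  println(a.latest("Col A")) // Some(3.0)
  println(sumOfColA.now)     // 4.0
}
```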
I guess you mean you want to let the same DSL syntax work with different kernels. To achieve that, some implicit abstract factories are required to create the DSL's ASTs, as in the tagless-final approach (http://okmij.org/ftp/tagless-final/). The current DeepLearning.scala codebase does not use this approach.
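For illustration, here is a minimal tagless-final sketch (the names are illustrative, not the actual DeepLearning.scala API): the DSL program is written once against an abstract factory `Sym[F]`, and each kernel supplies its own interpretation of the operations:

```scala
object TaglessFinalSketch extends App {

  trait Sym[F[_]] {
    def lit(x: Double): F[Double]
    def mul(a: F[Double], b: F[Double]): F[Double]
  }

  // A program written against the abstract factory, independent of any kernel.
  def program[F[_]](implicit s: Sym[F]): F[Double] =
    s.mul(s.lit(3.0), s.lit(4.0))

  // Kernel 1: evaluate eagerly.
  type Id[A] = A
  implicit object Eval extends Sym[Id] {
    def lit(x: Double): Double = x
    def mul(a: Double, b: Double): Double = a * b
  }

  // Kernel 2: build a printable representation instead of evaluating.
  type Pretty[A] = String
  implicit object Print extends Sym[Pretty] {
    def lit(x: Double): String = x.toString
    def mul(a: String, b: String): String = s"($a * $b)"
  }

  println(program[Id])     // 12.0
  println(program[Pretty]) // (3.0 * 4.0)
}
```

With this style, swapping the implicit instance swaps the kernel without touching the program itself.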
With a bit more flexibility, what you are implementing for deep learning could perhaps also be used to code application logic in linear-algebra / dataflow / reactive paradigms. Is it possible to provide this flexibility?