Closed — DanielTakeshi closed this issue 8 years ago
hmm…
I think even if `a` is an SMat, having `b` be an FMat with compile-time type `Mat` will also trigger the error:

```scala
val a = sprand(10,10,0.1)
val b: Mat = rand(10,10)
a * b
```
The problem seems to come from https://github.com/BIDData/BIDMat/blob/master/src/main/scala/BIDMat/Operators.scala#L28:

```scala
def sop(a:SMat, b:FMat, c:Mat):SMat = {notImplemented(myname, a, b); a}
```
The Mop_Times object should override this function (which it doesn't right now) so that the call dispatches to the actual multiply.
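To make the dispatch issue concrete, here is a minimal, self-contained sketch of the pattern, using stand-in `SMat`/`FMat` stub classes rather than BIDMat's real types and a simplified `sop` signature (the real one also takes an output `Mat` and returns an `SMat`). It shows why a missing override in `Mop_Times` surfaces as a "not implemented" runtime error:

```scala
// Hypothetical, simplified sketch (NOT actual BIDMat code): the Mop trait's
// default sop throws, so any operator object that fails to override it
// raises "operator ... not implemented" at runtime.
object SopSketch {
  // Stand-ins for BIDMat's matrix types.
  class FMat
  class SMat

  trait Mop {
    def myname: String
    // Default in the base trait (cf. Operators.scala#L28): not implemented.
    def sop(a: SMat, b: FMat): Any =
      throw new RuntimeException(s"operator $myname not implemented for SMat and FMat")
  }

  object Mop_Times extends Mop {
    val myname = "*"
    // The suggested fix: override sop so SMat * FMat reaches a real multiply.
    override def sop(a: SMat, b: FMat): Any = "SMat * FMat result" // placeholder
  }

  object Mop_Div extends Mop { val myname = "/" } // no override: still throws

  def main(args: Array[String]): Unit = {
    println(Mop_Times.sop(new SMat, new FMat)) // prints "SMat * FMat result"
    try { Mop_Div.sop(new SMat, new FMat) }
    catch { case e: RuntimeException => println(e.getMessage) }
  }
}
```

With the override in place the `*` case dispatches normally, while the un-overridden operator still reproduces the reported error.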
Best,
Biye Jiang
Department of Computer Science
University of California, Berkeley
On May 5, 2015, at 8:23 PM, Daniel Seita notifications@github.com wrote:
Version: 9b1557e0770ada6ac49b1e31a833b9b3d418742f
I'm not sure whether this is a serious bug or a deliberate design choice. When we call multiplication with a generic matrix on the left-hand side, BIDMat dispatches to the multiplication operator in the Mat.scala class, and the binary methods there throw an "operator xxx not implemented for ..." error.
The key is that the generic matrix only needs a compile-time type of Mat. Even if its runtime type is FMat, SMat, etc., the multiplication still resolves through the Mat class.
In some code I'm writing, for instance, I have either GPU or CPU mode to consider, so my matrices a and b have type Mat to stay generic. Then I set them equal to something:

```scala
val a: Mat = null
val b: Mat = null
// Initialize a to be either a GSMat or an SMat, depending on GPU/CPU mode
// Initialize b to be either a GMat or an FMat, depending on GPU/CPU mode
a * b
```

The `a * b` line fails in CPU mode with the error given in the title, with XMat = SMat and YMat = FMat.
There are ways I can get this code to work by wrapping the operands, as in `SMat(a)*FMat(b)`, but I wanted to check in with you, since this is potentially confusing: we are encouraged to use Mats to stay generic, but then we must repeatedly check cases and cast to SMat(a), FMat(b), etc. What are your thoughts?
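The repeated case analysis can at least be confined to one helper. Below is a minimal sketch with plain Scala stand-in types (`Mat`, `FMat`, `SMat` here are hypothetical stubs, not BIDMat classes, and `smatTimesFmat` stands in for the concrete `SMat(a)*FMat(b)` call) showing the match-and-cast pattern:

```scala
// Hypothetical sketch of the workaround: match on the runtime types once,
// in one place, instead of scattering SMat(a)/FMat(b) casts everywhere.
object CastSketch {
  sealed trait Mat
  case class FMat(v: Double) extends Mat
  case class SMat(v: Double) extends Mat

  // Stand-in for a multiply that only exists on concrete types.
  def smatTimesFmat(a: SMat, b: FMat): Double = a.v * b.v

  // Helper that hides the repeated case analysis behind one function.
  def times(a: Mat, b: Mat): Double = (a, b) match {
    case (a: SMat, b: FMat) => smatTimesFmat(a, b)
    case (a: FMat, b: FMat) => a.v * b.v
    case _ => throw new RuntimeException("operator * not implemented for these types")
  }

  def main(args: Array[String]): Unit = {
    val a: Mat = SMat(2.0) // compile-time type Mat, runtime type SMat
    val b: Mat = FMat(3.0)
    println(times(a, b)) // prints 6.0
  }
}
```

This doesn't fix the dispatch in BIDMat itself, but it keeps GPU/CPU-generic call sites free of casts.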
Some command line examples:
```scala
scala> val a:Mat = sprand(1000,1000,0.1)
a: BIDMat.Mat =
(   5,   0)  0.52206
(   6,   0)  0.71377
(  13,   0)  0.62309
(  28,   0)  0.20338
(  54,   0)  0.24161
(  61,   0)  0.38814
(  63,   0)  0.67045
(  74,   0)  0.33437
   ...   ...   ...

scala> val b:Mat = sprand(1000,1000,0.1)
b: BIDMat.Mat =
(   2,   0)  0.32644
(  16,   0)  0.78167
(  18,   0)  0.35468
(  41,   0)  0.16965
(  53,   0)  0.60286
(  56,   0)  0.27231
(  64,   0)  0.70402
(  65,   0)  0.20928
   ...   ...   ...

scala> a*b
res2: BIDMat.Mat =
(   0,   0)   2.2325
(   1,   0)   2.6090
(   2,   0)   1.5920
(   3,   0)   2.0934
(   4,   0)   2.0458
(   5,   0)   3.6955
(   6,   0)   1.9645
(   7,   0)   1.3880
   ...   ...   ...
```
```scala
scala> a*rand(1000,1000)
java.lang.RuntimeException: operator * not implemented for SMat and FMat
  at BIDMat.Mop$class.notImplemented(Operators.scala:327)
  at BIDMat.Mop_Times$.notImplemented(Operators.scala:382)
  at BIDMat.Mop$class.sop(Operators.scala:28)
  at BIDMat.Mop_Times$.sop(Operators.scala:382)
  at BIDMat.Mop$class.op(Operators.scala:143)
  at BIDMat.Mop_Times$.op(Operators.scala:382)
  at BIDMat.SMat.$times(SMat.scala:531)
  ... 33 elided
```
```scala
scala> rand(1000,1000)*a
res4: BIDMat.Mat =
  31.505  24.170  20.940  18.018  27.939  22.453  25.225  24.584  22.139  24.005  25.443  28.746  20.686  23.867  19.050  29.900  20.043  24.953  27.049  25.659  26.288  28.829  26.924  26.171...
  29.118  26.772  21.919  21.094  26.713  19.681  24.325  23.950  24.116  22.451  22.003  26.480  22.981  22.656  18.127  29.811  19.218  23.364  22.904  23.538  23.502  24.597  27.557  25.794...
  28.578  25.883  20.647  20.381  24.435  17.622  22.413  23.946  23.355  21.007  24.580  28.317  20.682  24.526  18.536  26.255  17.030  22.624  26.466  24.642  25.480  27.498  24.700  26.631...
      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..      ..

scala> val c:Mat = rand(1000,1000)
c: BIDMat.Mat =
  0.91933  0.054223  0.0029891  0.97654  0.045690  0.60623  0.45060  0.91492  0.00043926  0.77734  0.033706  0.27043  0.54978  0.25489  0.47594  0.35749...
  0.54238  0.62099  0.89218  0.71610  0.86469  0.69615  0.28406  0.93660  0.15506  0.091589  0.48099  0.82811  0.79517  0.67379  0.15774  0.88002...
  0.82720  0.88533  0.27555  0.88446  0.71065  0.61293  0.14963  0.053130  0.36748  0.25745  0.21042  0.96924  0.70845  0.77234  0.042210  0.76511...
       ..        ..       ..       ..       ..       ..       ..       ..       ..        ..       ..       ..       ..       ..        ..       ..
```
```scala
scala> a*c
java.lang.RuntimeException: operator * not implemented for SMat and FMat
  at BIDMat.Mop$class.notImplemented(Operators.scala:327)
  at BIDMat.Mop_Times$.notImplemented(Operators.scala:382)
  at BIDMat.Mop$class.sop(Operators.scala:28)
  at BIDMat.Mop_Times$.sop(Operators.scala:382)
  at BIDMat.Mop$class.op(Operators.scala:143)
  at BIDMat.Mop_Times$.op(Operators.scala:382)
  at BIDMat.SMat.$times(SMat.scala:531)
  ... 33 elided
```
There's a complicated constraint on FMat-SMat multiply because of edge operators. By convention, all the Learners use Dense/Sparse multiply in that order, and these return a dense matrix. On the other hand, there are lots of uses of Sparse op Dense for edge operations, e.g. SMat / colv divides each sparse non-zero by the corresponding element of a column vector for normalization. Those return a sparse matrix. The solution, I suppose, would be to give all the edge operators their own symbols, but that would make the code uglier and less intuitive.
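To illustrate the sparse-result convention described above, here is a tiny self-contained sketch (not BIDMat code; `Sparse`/`Dense` and `sparseDivCol` are made-up stand-ins) of a Sparse op Dense edge operation on a single column:

```scala
// Simplified sketch of a Sparse op Dense edge operation: dividing each
// sparse non-zero by the column-vector entry for its row, keeping the
// sparse structure (the kind of normalization SMat / colv performs).
object EdgeOpSketch {
  // A sparse column as (rowIndex -> value) pairs; a dense column as an array.
  type Sparse = Map[Int, Double]
  type Dense  = Array[Double]

  // Only positions that are already non-zero in the sparse input are touched,
  // so the result stays sparse.
  def sparseDivCol(s: Sparse, colv: Dense): Sparse =
    s.map { case (i, v) => i -> v / colv(i) }

  def main(args: Array[String]): Unit = {
    val s: Sparse   = Map(0 -> 2.0, 3 -> 9.0)
    val colv: Dense = Array(2.0, 1.0, 1.0, 3.0)
    println(sparseDivCol(s, colv)) // prints Map(0 -> 1.0, 3 -> 3.0)
  }
}
```

A Dense/Sparse multiply, by contrast, touches every output row and so naturally returns a dense matrix, which is why the two conventions want different result types for the same operand pair.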