rjagerman / glint

Glint: High performance scala parameter server
MIT License

Not able to pull matrix slice with rows != cols #57

Closed: MLnick closed this issue 7 years ago

MLnick commented 7 years ago

It seems one can only pull (and push) pieces of a Matrix that have the same number of rows & columns:

e.g. this works:

val matrix = client.matrix[Double](10, 10)
val result = matrix.pull(Array(0L, 1L), Array(0, 1))

But this doesn't:

val matrix = client.matrix[Double](10, 10)
val result = matrix.pull(Array(0L, 1L), Array(0))

with

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1
    at scala.collection.mutable.WrappedArray$ofInt.apply$mcII$sp(WrappedArray.scala:155)
    at scala.collection.mutable.WrappedArray$ofInt.apply(WrappedArray.scala:155)
    at scala.collection.mutable.WrappedArray$ofInt.apply(WrappedArray.scala:152)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at glint.models.client.async.AsyncBigMatrix$$anonfun$6.apply(AsyncBigMatrix.scala:101)
    at glint.models.client.async.AsyncBigMatrix$$anonfun$6.apply(AsyncBigMatrix.scala:99)
    at glint.models.client.async.AsyncBigMatrix$$anonfun$glint$models$client$async$AsyncBigMatrix$$mapPartitions$2.apply(AsyncBigMatrix.scala:169)
    at glint.models.client.async.AsyncBigMatrix$$anonfun$glint$models$client$async$AsyncBigMatrix$$mapPartitions$2.apply(AsyncBigMatrix.scala:169)

Is this intended to be supported? I was actually trying to create a 1-column matrix to use in place of a vector, so that I could try to use GranularBigMatrix to see if I could scale the feature space. So I need to do matrix.push(keys, Array(0)), for example (i.e. only the first, and only, column of the matrix, but a slice of rows).

rjagerman commented 7 years ago

This is intended behaviour: the rows and columns represent pairs of indices. So in your first example you pull two values, one at (row 0, col 0) and one at (row 1, col 1):

val matrix = client.matrix[Double](10, 10)
val result = matrix.pull(Array(0L, 1L), Array(0, 1))

In your second example, you would pull (row 0, col 0) and (row 1, col undefined), which causes the error. The ArrayIndexOutOfBoundsException is probably not the most descriptive error for this, and we should probably change that.

In the case of a one-column matrix, you'd have to pull something like matrix.pull(Array(0L, 1L, 2L, 3L), Array(0, 0, 0, 0)) to pull the first 4 elements: the row index varies while the column index is always 0.
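For example, a minimal sketch of pulling an arbitrary set of rows from that one-column matrix (reusing matrix from the snippets above, and assuming, as in those snippets, that pull takes a rows array and a cols array of equal length):

// The rows and cols arrays are zipped into (row, col) pairs, so they must have equal length.
val rows = Array(0L, 1L, 2L, 3L)      // the row slice we want
val cols = Array.fill(rows.length)(0) // always column 0 of the 1-column matrix
val result = matrix.pull(rows, cols)  // one value per (row, col) pair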

I should really create a GranularBigVector though. It shouldn't be too hard to implement; most of the behavior is analogous to (or even a simplification of) the GranularBigMatrix behavior (see the sketch after this list):

  1. Split the input array into smaller arrays of a maximum size
  2. Request those from an underlying BigVector (in sequence)
  3. Merge the requests via the asynchronous callbacks and return the result
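A rough sketch of that plan, under some assumptions: granularPull, pullChunk and maximumMessageSize are hypothetical names for illustration, and the underlying BigVector is stood in for by a plain function returning a Future, so this is not the glint API:

import scala.concurrent.{ExecutionContext, Future}
import scala.reflect.ClassTag

// Split the requested keys into chunks, pull each chunk from an underlying
// source in sequence, and merge the asynchronous results in the original order.
def granularPull[V: ClassTag](pullChunk: Array[Long] => Future[Array[V]],
                              keys: Array[Long],
                              maximumMessageSize: Int)
                             (implicit ec: ExecutionContext): Future[Array[V]] = {
  val chunks = keys.grouped(maximumMessageSize).toSeq          // 1. smaller arrays of a maximum size
  chunks.foldLeft(Future.successful(Vector.empty[V])) { (acc, chunk) =>
    acc.flatMap(sofar => pullChunk(chunk).map(sofar ++ _))     // 2. request each chunk in sequence
  }.map(_.toArray)                                             // 3. merge callbacks into one result
}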

MLnick commented 7 years ago

Ah, doh! Yes, OK, that makes sense. It would be quite cool to have some kind of "slicing" syntax for pulling, say, a set of rows and cols, that would be expanded to the relevant indices. Similar to Breeze's slicing, I guess, or numpy's.

Like matrix.pull(rows, ::) to pull all cols for a set of rows. Or matrix.pull(rows, 0 to 1) to pull those rows but only columns 0 and 1.
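For illustration, a minimal sketch of how such a slice could be expanded client-side into the explicit index pairs that pull expects today (expandSlice is a hypothetical helper, not part of glint):

// Expand a row slice and a column slice into the two parallel index arrays
// that pull(rows, cols) currently expects (the cartesian product of the slices).
def expandSlice(rowSlice: Seq[Long], colSlice: Seq[Int]): (Array[Long], Array[Int]) = {
  val pairs = for (r <- rowSlice; c <- colSlice) yield (r, c)
  (pairs.map(_._1).toArray, pairs.map(_._2).toArray)
}

// e.g. rows 3 and 7, columns 0 and 1, for a matrix defined as above:
// val (rows, cols) = expandSlice(Seq(3L, 7L), 0 to 1)
// val result = matrix.pull(rows, cols)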

MLnick commented 7 years ago

I can take a look at a GranularBigVector, as it could be useful for large linear models.

rjagerman commented 7 years ago

The slice syntax would be very cool indeed; we could probably even make specialized messages for that to reduce network communication. Right now we have to send all requested indices across the network, which is quite expensive! Some parameter server implementations even do key caching, so you'd only need to send a hash of the indices. There are definitely a lot of options for improvement here.
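To make the key-caching idea concrete, here's a rough sketch; none of these names exist in glint, and it glosses over hash collisions and cache invalidation:

import java.util.Arrays
import scala.collection.mutable

// Both client and server remember index arrays they have already exchanged,
// keyed by a hash, so a repeated request only needs to carry the hash.
sealed trait IndexPayload
case class FullIndices(hash: Int, rows: Array[Long], cols: Array[Int]) extends IndexPayload
case class CachedIndices(hash: Int) extends IndexPayload

class KeyCache {
  private val seen = mutable.Set.empty[Int]
  def payloadFor(rows: Array[Long], cols: Array[Int]): IndexPayload = {
    val hash = 31 * Arrays.hashCode(rows) + Arrays.hashCode(cols)
    if (seen.contains(hash)) CachedIndices(hash)          // server already has these indices
    else { seen += hash; FullIndices(hash, rows, cols) }  // first time: send the full arrays
  }
}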

If you can somehow find the time to look at the GranularBigVector, that'd be great! These are the things I've been meaning to get to, but have unfortunately been too busy recently.

MLnick commented 7 years ago

Opened #58