hughperkins / jeigen

Java wrapper for Eigen C++ fast matrix library

FloatDenseMatrix #7

Closed: ahmetaa closed this 8 years ago

ahmetaa commented 9 years ago

For many applications, 32-bit floating point numbers are sufficient. Using them improves speed, both because of the lower memory traffic and because more 32-bit values fit into each SIMD operation. Can we have a dense matrix that uses 32-bit floats instead of doubles? I am also wondering whether the Eigen library supports 32-bit floats at all (on a 64-bit OS).

hughperkins commented 9 years ago

Hi ahmetaa, the native Eigen library does handle 32-bit float matrices. I'm not sure I have time to implement them in Jeigen at the moment, though. Is this something you could be interested in looking at? It seems to me that there are a couple of ways of doing this:
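
(For reference, single precision on the Eigen side just means using `MatrixXf` instead of `MatrixXd`. The snippet below is a minimal plain-Eigen C++ sketch of that, not Jeigen's actual API.)

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    // MatrixXf stores 32-bit floats; Jeigen's DenseMatrix currently maps to
    // double-precision MatrixXd on the C++ side.
    Eigen::MatrixXf a = Eigen::MatrixXf::Random(512, 512);
    Eigen::MatrixXf b = Eigen::MatrixXf::Random(512, 512);

    // Single-precision product: half the memory traffic of doubles, and
    // SSE/AVX can process twice as many values per instruction.
    Eigen::MatrixXf c = a * b;

    std::cout << c(0, 0) << std::endl;
    return 0;
}
```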

ahmetaa commented 9 years ago

Unfortunately I do not have enough time in the short term. However, once I am free, I would like to help. I looked at the code a bit and probably the first option would be the way to go for me.

hughperkins commented 9 years ago

Alright. By the way, would you mind saying what platform(s) you are targeting, and to what extent GPUs do or don't interest you? Just asking because I'm also heavily into the Torch project, which has a huge community, project devs at both Facebook and Google, and GPU support... I'm wondering whether a Java wrapper for Torch would be interesting? Or do your use cases for Jeigen (e.g. running on Windows, or wanting a fairly lightweight matrix layer) mean that Jeigen is a better match than a hypothetical JavaTorch?

ahmetaa commented 9 years ago

Sorry for the extremely late answer. What I needed was a lightweight, fast CPU matrix multiplication library. JBlas was problematic because of deployment issues (and also its lack of good Windows support). For the project I work on, using GPUs at run time is not feasible, but we do use them for training DNNs. In the end I chose a different path and implemented my own library, which does matrix multiplication with quantized numbers. It is speech-recognition specific, though: https://github.com/ahmetaa/fast-dnn. I still think a general-purpose, very lightweight, simple matrix library would be nice; most applications only need multiplications and additions anyway.
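
(The quantized-multiplication idea, very roughly: map the floats onto small integers with a per-matrix scale, multiply in integer arithmetic, then rescale the result. The C++ sketch below only illustrates that general scheme with an assumed symmetric int8 quantization; it is not taken from fast-dnn, and the helper name is made up.)

```cpp
#include <cstdint>
#include <cmath>
#include <vector>
#include <algorithm>
#include <iostream>

// Hypothetical helper: symmetric int8 quantization of a float vector.
// The scale maps the largest magnitude onto 127.
static float quantize(const std::vector<float>& x, std::vector<int8_t>& q) {
    float maxAbs = 0.0f;
    for (float v : x) maxAbs = std::max(maxAbs, std::fabs(v));
    float scale = (maxAbs > 0.0f) ? 127.0f / maxAbs : 1.0f;
    q.resize(x.size());
    for (size_t i = 0; i < x.size(); ++i)
        q[i] = static_cast<int8_t>(std::lround(x[i] * scale));
    return scale;
}

int main() {
    std::vector<float> a = {0.5f, -1.25f, 2.0f, 0.75f};
    std::vector<float> b = {1.0f,  0.5f, -0.25f, 2.0f};

    std::vector<int8_t> qa, qb;
    float sa = quantize(a, qa);
    float sb = quantize(b, qb);

    // Integer dot product accumulated in 32 bits, then rescaled back to float.
    int32_t acc = 0;
    for (size_t i = 0; i < a.size(); ++i)
        acc += static_cast<int32_t>(qa[i]) * static_cast<int32_t>(qb[i]);
    float dot = acc / (sa * sb);

    std::cout << "approx dot = " << dot << std::endl;  // close to the exact float dot product
    return 0;
}
```

A full matrix multiply is just this dot product repeated per row/column pair, with the integer inner loop being where SIMD pays off.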

hughperkins commented 8 years ago

Ah! Nice! :-)

hughperkins commented 8 years ago

This looks cool. I was actually pondering using ints instead of floats in DeepCL at one point, but it's one of many ideas I've never gotten around to trying :-)