arrayfire / arrayfire-haskell

Haskell bindings to ArrayFire
http://hackage.haskell.org/package/arrayfire
BSD 3-Clause "New" or "Revised" License
59 stars 5 forks

ArrayFire benchmarks #38

Open dmjio opened 4 years ago

dmjio commented 4 years ago

Would be very nice to have comparisons of ArrayFire vs. libraries like hmatrix, accelerate, etc. This might even warrant its own package due to the difficulty in procuring all the dependencies.

dmjio commented 4 years ago

@lehins would this interest you as well? :)

lehins commented 4 years ago

@dmjio Of course it would, but how did you know that? ;P

dmjio commented 4 years ago

@lehins saw your work on massiv :1st_place_medal:

dmjio commented 4 years ago

@lehins so would you be interested in potentially making a new repo w/ me that had massiv, hmatrix, arrayfire benchmarks (maybe accelerate, grenade too ?) Think that might be of interest to others as well.

dmjio commented 4 years ago

Adding @chessai

lehins commented 4 years ago

@dmjio That is definitely something I'd be willing to put some effort in. I even tried starting a project that would compare performance of array libraries https://github.com/lehins/massiv-benchmarks For me it is driven by my work on massiv, of course, and the desire to compare it to others. That attempt ended in a couple of repa benchmarks and then stalled. This is a bit too much of a side project for a single person, so I certainly welcome your suggestion of collaboration on this.

How do you wanna do this, any thoughts, plans, ideas, etc.?

The way I'd start this is by figuring out administrative questions first:

- Construct a plan
- Github account for the repo? A group?
- Means of communication. Using github issues isn't gonna work for live conversations, so something like a slack or gitter room should do.
- List of libraries to benchmark
- Initial set of functions and inputs to benchmark

The last two don't need to be solved immediately. The list of libraries can always be expanded, but I think it would be good if we could start with just 2 or 3 tops. The initial set of functions and inputs to benchmark we can discuss later.

dmjio commented 4 years ago

@lehins this all sounds great. Regarding your questions:

> How do you wanna do this, any thoughts, plans, ideas, etc.?

I'd say we try to contribute to the existing Data Haskell movement, and use their group to house this repo if one doesn't already exist, since I think it would be largely beneficial to the Haskell community. So maybe we could cc @ocramz @sdiehl @chessai @NickSeagull and discuss how we can contribute.

> Github account for the repo? A group?

Answered above, pending Data Haskell community response.

> Means of communication. Using github issues isn't gonna work for live conversations, so something like a slack or gitter room should do.

They seem to have a gitter.

Finally, I think I could really help by procuring all of the dependencies w/ nix into a mega-repo, and then also NixOps deployment scripts to AWS so we can run them there. AWS does support on-demand GPU instances. I can make a script that automatically creates an instance, runs the benchmarks in a systemd unit, uploads the results to an S3 bucket, and hosts them from there.

Regarding the actual benchmark suite, we could fix the hardware to whatever AWS instance type we pick, and start on Linux for now. I'd rather classify things by operation (successive matrix multiplies, convolutions, matrix decompositions), and make a histogram-like thing that shows the timings for things like massiv w/ LLVM, massiv w/ NCG, ArrayFire GPU, ArrayFire CPU. It'd be nice as well if everyone started from the same initial data set in memory. Anyways, those are my thoughts.
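For concreteness, the per-operation layout above might look something like this with criterion. This is only a sketch: criterion and hmatrix are real packages, but the group names, sizes, and the single hmatrix entry are illustrative placeholders for the suite being discussed, not an agreed-on design.

```haskell
-- Sketch of a per-operation benchmark group; criterion's defaultMain,
-- bgroup, bench, and nf are real API, the workload is illustrative.
module Main where

import Criterion.Main
import qualified Numeric.LinearAlgebra as H

main :: IO ()
main = defaultMain
  [ bgroup ("matmul/" ++ show n)
      [ bench "hmatrix" $ nf (\m -> m H.<> m) a
        -- further entries would cover massiv (LLVM and NCG builds),
        -- ArrayFire CPU, and ArrayFire GPU, all fed the same matrix `a`
        -- so everyone starts from the same initial data set in memory
      ]
  | n <- [128, 256, 512 :: Int]
  , let a = H.ident n :: H.Matrix Double
  ]
```

Classifying by operation like this means each `bgroup` becomes one bar cluster in the histogram-like report.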

@NickSeagull, @lehins how does this sound ?

NickSeagull commented 4 years ago

I've been disconnected from dataHaskell for a while, but I'm sure there'll be someone who will want to help with that. I'd ask in the Gitter channel 😄

ocramz commented 4 years ago

@dmjio happy to help! adding @Magalame to the thread

Magalame commented 4 years ago

Happy to help too! @dmjio to my knowledge DataHaskell has no up-to-date benchmark regarding array libraries; the most we have is a matrix library benchmark. Regarding the structure of the benchmarks, based on my past experience, I strongly suggest we include memory benchmarking with weigh along with time benchmarking with criterion.
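A sketch of how the memory side could pair up with the timing side: weigh is a real package for measuring allocations, but the hmatrix workloads and labels here are only illustrative.

```haskell
-- Sketch of an allocation benchmark with weigh; mainWith and func are
-- real weigh API, the hmatrix operations are placeholder workloads.
module Main where

import Weigh (mainWith, func)
import qualified Numeric.LinearAlgebra as H

main :: IO ()
main = mainWith $ do
  let a = H.ident 256 :: H.Matrix Double
  -- allocation counts for the same operations criterion would time
  func "hmatrix matmul 256"  (\m -> m H.<> m) a
  func "hmatrix inverse 256" H.inv a
```

Running both suites over the same inputs would give a time column from criterion and an allocation column from weigh per operation.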