darrenldl / ocaml-SeqBox

Implementation of SeqBox in OCaml
BSD 3-Clause "New" or "Revised" License

Switch to actor model #11

Closed darrenldl closed 6 years ago

darrenldl commented 6 years ago

Stream_file interface is no longer capable of managing the complexity

Switch to Lwt

Things to work on

darrenldl commented 6 years ago

Encode module is now using actor-based concurrency using Lwt, and is ~50% slower...

Maybe I should move to Rust, or wait a few years for multicore OCaml.

hannesm commented 6 years ago

@darrenldl did you use profiling to discover where the bottleneck is? OCaml can be reasonably fast if tweaked... (sorry, I haven't read through your seqbox code)

darrenldl commented 6 years ago

@hannesm All good, the code's really rough atm anyway (still looks much better than the old code tho, albeit slower).

I did some testing with different data structures, flipping between Lwt_mvar and Lwt_queue (Lwt_queue is just a ring buffer implemented with an array and a condition variable). But I didn't do proper profiling with tools; I only relied on the progress text printing, which reports the rate.
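To make the Lwt_queue description concrete, here is a minimal sketch of such a bounded queue: a fixed-size ring buffer over an array, with Lwt_condition used to block producers when full and consumers when empty. This is illustrative only (it requires the `lwt` package, and the real osbx code may differ):

```ocaml
open Lwt.Infix

type 'a t = {
  buf : 'a option array;
  mutable head : int;   (* next slot to read *)
  mutable tail : int;   (* next slot to write *)
  mutable len : int;
  not_full  : unit Lwt_condition.t;
  not_empty : unit Lwt_condition.t;
}

let create size = {
  buf = Array.make size None;
  head = 0; tail = 0; len = 0;
  not_full = Lwt_condition.create ();
  not_empty = Lwt_condition.create ();
}

let rec put q v =
  if q.len = Array.length q.buf then
    (* full: block until a consumer frees a slot, then retry *)
    Lwt_condition.wait q.not_full >>= fun () -> put q v
  else begin
    q.buf.(q.tail) <- Some v;
    q.tail <- (q.tail + 1) mod Array.length q.buf;
    q.len <- q.len + 1;
    Lwt_condition.signal q.not_empty ();
    Lwt.return_unit
  end

let rec take q =
  if q.len = 0 then
    (* empty: block until a producer puts something, then retry *)
    Lwt_condition.wait q.not_empty >>= fun () -> take q
  else begin
    let v = q.buf.(q.head) in
    q.buf.(q.head) <- None;
    q.head <- (q.head + 1) mod Array.length q.buf;
    q.len <- q.len - 1;
    Lwt_condition.signal q.not_full ();
    match v with Some v -> Lwt.return v | None -> assert false
  end
```

Since Lwt is cooperative, no mutex is needed around the mutable fields: a thread is only preempted at bind points.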

The current arrangement of the encoder internals

---------------               -----------               ---------------
| File reader | --Lwt_mvar--> | Encoder | --Lwt_mvar--> | File writer |
|             |     (1)       |         |     (2)       |             |
---------------               -----------               ---------------

The Lwt_queue variant is just the same arrangement with the Lwt_mvar connections swapped for Lwt_queue.
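The mailbox wiring in the diagram above can be sketched roughly as follows, with Lwt_mvar as both connections (1) and (2). This requires the `lwt` package, and all names here are illustrative (a `None` message standing in for end-of-input), not the actual osbx code:

```ocaml
open Lwt.Infix

(* File reader actor: pushes dummy 496-byte chunks, then a shutdown signal. *)
let gen_reader ~out () =
  let rec loop n =
    if n = 0 then Lwt_mvar.put out None
    else Lwt_mvar.put out (Some (Bytes.create 496)) >>= fun () -> loop (n - 1)
  in
  loop 5

(* Encoder actor: takes from connection (1), forwards to connection (2). *)
let gen_encoder ~in_ ~out () =
  let rec loop () =
    Lwt_mvar.take in_ >>= function
    | None       -> Lwt_mvar.put out None   (* propagate shutdown *)
    | Some chunk -> Lwt_mvar.put out (Some chunk) >>= loop
  in
  loop ()

(* File writer actor: drains connection (2), counting blocks. *)
let gen_writer ~in_ () =
  let rec loop count =
    Lwt_mvar.take in_ >>= function
    | None   -> Lwt.return count
    | Some _ -> loop (count + 1)
  in
  loop 0

let () =
  let m1 = Lwt_mvar.create_empty () in  (* connection (1) *)
  let m2 = Lwt_mvar.create_empty () in  (* connection (2) *)
  let count =
    Lwt_main.run
      (let r = gen_reader ~out:m1 () in
       let e = gen_encoder ~in_:m1 ~out:m2 () in
       let w = gen_writer ~in_:m2 () in
       r >>= fun () -> e >>= fun () -> w)
  in
  Printf.printf "blocks written: %d\n" count
```

Since an Lwt_mvar holds at most one value, every `put` blocks until the downstream actor has taken the previous value, which is exactly where a deeper queue (the Lwt_queue variant) could in principle smooth out bursts.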

Statistics (using defaults, which means data size = 496 bytes, hash = SHA256)

| Implementation | Choice for (1) | Choice for (2) | Avg. encode rate (blocks/sec) |
| --- | --- | --- | --- |
| Sequential | N/A | N/A | 105K |
| Lwt | Lwt_queue, size=100 | Lwt_queue, size=100 | 63K |
| Lwt | Lwt_mvar | Lwt_queue, size=100 | 63K |
| Lwt | Lwt_mvar | Lwt_queue, size=1000 | 67K |
| Lwt | Lwt_mvar | Lwt_mvar | 65K |

Progress reporting didn't seem to have any impact on performance, so I just placed a call next to where the block is created in the Lwt implementations.

Description

Sequential

The sequential implementation is just a single chain of function calls in tail position.
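The shape of that chain can be sketched like this (illustrative names, not the actual osbx code): each stage calls the next in tail position, so the whole encode loop compiles down to jumps with no per-block closure allocation.

```ocaml
(* Hypothetical sketch of the sequential encoder shape: a cycle of
   mutually recursive tail calls, read -> encode -> write -> read.
   `encoded` just counts blocks here, standing in for real work. *)
let rec read_block blocks_left encoded =
  if blocks_left = 0 then encoded
  else encode_block blocks_left encoded          (* tail call *)

and encode_block blocks_left encoded =
  (* hashing and data packing would happen here *)
  write_block blocks_left (encoded + 1)          (* tail call *)

and write_block blocks_left encoded =
  read_block (blocks_left - 1) encoded           (* tail call back to reader *)
```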

Lwt

For the Lwt implementation, each internal component is made from a generating function that returns a function of type unit -> something Lwt.t (e.g. gen_encoder in encode.ml, gen_file_reader and gen_file_writer in actors.ml).

The generated threads first block on a waiter thread; the waiter is then woken up to kick-start the encoding process.
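That kick-start pattern can be sketched with Lwt's waiter/wakener pair (illustrative only; requires the `lwt` package):

```ocaml
open Lwt.Infix

(* Each generated actor blocks on the shared waiter before doing anything. *)
let gen_worker ~waiter ~name () =
  waiter >>= fun () ->                  (* parked until kick-started *)
  Lwt_io.printlf "%s running" name

let () =
  let waiter, wakener = Lwt.wait () in
  let actors =
    [ gen_worker ~waiter ~name:"file reader" ();
      gen_worker ~waiter ~name:"encoder" ();
      gen_worker ~waiter ~name:"file writer" () ]
  in
  Lwt.wakeup wakener ();                (* release all actors at once *)
  Lwt_main.run (Lwt.join actors)
```

Creating all the actors before waking the waiter guarantees none of them starts pulling from a mailbox before its peers exist.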

darrenldl commented 6 years ago

My original intuition was that using Lwt would speed things up since the encoder doesn't have to wait for file IO, but that wasn't entirely right I suppose.

My guess is that the sequential implementation was better optimized since it's just a statically written chain of tail-position function calls, while a lot of anonymous functions are used in the Lwt implementation.

But I have almost zero background in compilers, languages, and runtimes, so I'm probably very wrong.

Currently I'm still very attached to OCaml rather than Rust, mainly due to Coq, and also because Rust doesn't support tail call optimization. If only Ada had a much larger FOSS community, hmm...

darrenldl commented 6 years ago

@hannesm I did more tests today with more useful results, namely by disabling parts of the encoder's code.

The setup is still the same, with connections (1) and (2) both being Lwt_mvar. I may switch back to Lwt_queue later for consistency, but it doesn't seem to matter.

Statistics

| Code disabled in encoder | Avg. encode rate (blocks/sec) | Slowdown vs. all disabled |
| --- | --- | --- |
| Hash & data packing | 257K | 0 |
| Hash | 76K | -181K (caused by data packing) |
| Data packing | 112K | -145K (caused by hash) |

I noticed the data packing function (which converts 496 bytes of data into a sbx block) can be optimized; I'll post the results after optimizing both the sequential and Lwt implementations.
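For readers unfamiliar with the format: a version-1 sbx block is 512 bytes, a 16-byte header followed by the 496 bytes of data. A hypothetical minimal packing step (not the osbx code, which must also assemble the header fields and CRC) is a pair of blits into one preallocated buffer, avoiding intermediate string concatenations:

```ocaml
(* Hypothetical sketch: pack 496 bytes of data plus a preassembled
   16-byte header into a 512-byte sbx block with two blits and no
   intermediate allocations. *)
let pack_block ~header ~data =
  assert (Bytes.length header = 16);
  assert (Bytes.length data = 496);
  let block = Bytes.create 512 in
  Bytes.blit header 0 block 0 16;     (* header into bytes 0..15 *)
  Bytes.blit data 0 block 16 496;     (* data into bytes 16..511 *)
  block
```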

The delay from hashing and data packing may still persist, since there's no way to run OCaml code in parallel yet. The hashing was previously done in a separate actor, but that had the same level of performance, probably for the above reason.

darrenldl commented 6 years ago

After a lot of optimization, the rate sits at around 90K blocks/sec.

I'm gonna leave Reed-Solomon support to a Rust rewrite of osbx, if I am ever going to do that

darrenldl commented 6 years ago

Development effort now directed towards rsbx.

Osbx is now under maintenance mode.

Closing all issues related to adding features.