cholcombe973 closed this 6 years ago
Related PR for ceph-rbd: https://github.com/cholcombe973/ceph-rbd/pull/5
valgrind still returns clean :)
Since this works, should we remove the disconnect_from_ceph and destroy_rados_ioctx functions?
Only if we're assuming that the user will never use the pointers directly; it may make more sense to look at splitting the FFI bits into a sub-crate like ceph-sys, and keeping only the higher-level abstractions in the primary crate.
Yeah, I'm for ceph-sys. I can break that out in another PR.
I'd love for some time to be spent identifying the Travis failure, but the only reference I can find for it is a Cargo bug from a couple of years ago regarding badly packaged crate archives.
This change adds automatic closing of Ceph connections where necessary. Unfortunately, it required reworking a huge amount of the library. The most important pieces are the impl Drop sections in src/ceph.rs.
The basics: I wrapped rados_t with Rados and rados_ioctx_t with IoCtx. I made the inner fields public because my ceph-rbd library needs access to those pointers. There's probably a better way to do that?
I checked that this works with a small rayon program that runs rbd stat in parallel. It would consistently crash until I got the Rbd, IoCtx, and Ceph cluster connection dropping in the right order.
It now runs fine with 10 threads and valgrind shows it's clean:
I may have messed up your ceph_client.rs and cmd.rs, @ChrisMacNaughton, so take a close look at those.