Closed: Grinkers closed this 5 months ago
https://github.com/Grinkers/edn-rs/commit/e2e8f5f9778948fa8160fedafcae0e06470dcf14
A quick proof of concept. Very much not ready. Tests are all sorts of messed up because of a lack of Eq/PartialEq. The edn_from_str and async examples work without changes at the very least, just to show that it's not that breaking in the 99% case.
@naomijub @evaporei bump. I'm going to start working on it as in the previous comment/link.
Error with no pub internals, a non_exhaustive pub ErrorCode (`Box<str>` for the catch-all, adding more error variants as time goes on), and an extremely minimal impl Error. I'm also going to finish the pass on a 1.0 API, as there are a few other things too.
I'll probably break it into two pieces. One with the above Error and ErrorCode using only the catch-all case. Then an optimization pass with a walker implementation and slowly increase/expand the Error impl. I can do it in one go too, but it'll end up being a lot to review at once.
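The error shape described above might look roughly like this (a hypothetical sketch, not the actual edn-rs code; the names, fields, and position tracking are illustrative):

```rust
use core::fmt;

// Callers can match on the code, but new variants can be added later
// without a breaking change thanks to #[non_exhaustive].
#[non_exhaustive]
#[derive(Debug)]
pub enum ErrorCode {
    /// Catch-all variant; more specific codes get added over time.
    Message(Box<str>),
}

// No pub fields: users can receive and display errors, but they can't
// construct or destructure one, so internals stay free to change.
#[derive(Debug)]
pub struct Error {
    code: ErrorCode,
    line: usize,
    column: usize,
}

impl Error {
    // Crate-internal constructor (hypothetical): only the parser builds these.
    pub(crate) fn new(msg: &str, line: usize, column: usize) -> Self {
        Error { code: ErrorCode::Message(msg.into()), line, column }
    }
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match &self.code {
            ErrorCode::Message(msg) => {
                write!(f, "{} at line {} column {}", msg, self.line, self.column)
            }
        }
    }
}

// The "extremely minimal impl Error": Display and Debug do all the work.
impl std::error::Error for Error {}

fn main() {
    let err = Error::new("unexpected token", 3, 7);
    println!("{err}");
}
```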
@evaporei
This proposal would be a breaking change to edn-derive
. One of the big things is not letting users construct their own EdnError like in
https://github.com/edn-rs/edn-derive/blob/main/src/deserialize.rs#L60
Thoughts?
@naomijub
A fully working implementation is over at https://github.com/Grinkers/edn-rs/pull/6.
I can't reasonably isolate this change into a single commit until we start to unravel https://github.com/edn-rs/edn-rs/pull/140
I'll be using it to start battle-testing it a bit more. Got some nice error messages with correct placement and a huge performance boost. I was hoping for a bit more, but I guess Fedora on x86_64 does a pretty good job optimizing memory allocations. Still, 4.6219 µs is over an order of magnitude from what is advertised on the README!
I just commented about a missing test there
This is problematic for a few reasons.
A couple of examples: https://github.com/naomijub/edn-rs/blob/d852751abfa90af4e4ed57ba9d807137d80802a6/src/deserialize/parse.rs#L1114 and https://github.com/naomijub/edn-rs/blob/d852751abfa90af4e4ed57ba9d807137d80802a6/src/deserialize/parse.rs#L648
Here are some references: serde_json https://docs.rs/serde_json/latest/src/serde_json/error.rs.html#17-22 and embedded https://github.com/rust-embedded-community/serde-json-core/blob/9deb2b91ea737df79f33f7536587b59ed5f141ee/src/de/mod.rs#L23
I quite like serde_json's approach of being very abstract: https://docs.rs/serde_json/latest/serde_json/error/struct.Error.html. Essentially, I think that 99% of the time the end use case is to handle it as "there was an error" and log it. In the 1% case, you have functions to describe the error for the end user. This abstract behavior will also make it easier to write tests, to make internal changes without API changes, and to support other environments (no-std or heapless).
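The 99%/1% split could be sketched like this (a minimal, self-contained illustration with a hypothetical `ParseError`; the `line()`/`column()` accessor names mirror serde_json's style but are not an actual edn-rs API):

```rust
use std::fmt;

// Opaque error: no public fields, so internals can change freely.
#[derive(Debug)]
pub struct ParseError {
    msg: Box<str>,
    line: usize,
    column: usize,
}

impl ParseError {
    // The 1% case: accessors to describe the error to an end user.
    pub fn line(&self) -> usize { self.line }
    pub fn column(&self) -> usize { self.column }
}

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} at {}:{}", self.msg, self.line, self.column)
    }
}

impl std::error::Error for ParseError {}

// Stand-in for a real parser: rejects empty input.
fn parse(input: &str) -> Result<(), ParseError> {
    if input.is_empty() {
        return Err(ParseError { msg: "empty input".into(), line: 1, column: 1 });
    }
    Ok(())
}

fn main() {
    // The 99% case: treat it as "there was an error" and log it via Display.
    if let Err(e) = parse("") {
        eprintln!("edn parse failed: {e}");
        // The 1% case: point the user at the exact location.
        eprintln!("  (line {}, column {})", e.line(), e.column());
    }
}
```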
I will be taking a pass on std performance and I consider this a prerequisite. As far as a 1.0.0 interface goes, I think this is the last breaking change needed. A `heapless` or `no-std` feature doesn't currently exist, so adding one can't be breaking. I don't think there's much point in supporting a `no-std` target, but I don't want to eliminate the possibility. I will probably take an attempt at `heapless` for constant-time performance and embedded, but I don't consider it needed for 1.0.0 (a reasonable 1.1.0).

I think this will end up being a pretty big rework and series of commits. The biggest performance issue we have now is all the `clone()`s: each one requires a new heap allocation and a `memcpy` just to get a count or position, and then we throw it all away. I will need to implement a walker of sorts, so a stable target would be of great help.

The more people the better for feedback on the API/structure.