Closed mindplay-dk closed 3 years ago
TBH I don't really understand why this was added. It's going to be a lot cheaper in most/all cases to just represent numbers as fixed lengths (1, 2, 4 or 8 bytes long). The size difference is utterly negligible and it makes the implementation a lot simpler for everyone. I don't think bandwidth between servers comes into it, we are talking about a difference of only a handful of bytes, can that really be a bottleneck?
I'm not sure how good an idea it is to implement a binary protocol at the client level. @mindplay-dk WDYT about supporting CSV serialization? Much easier than binary?
@lvca the CSV deserializer in PHP can only go about 1Mb/sec, for comparison that's 20x slower than Oriento. Possible that it can be optimised but it is fairly slow. That translates to roughly 12k relatively small documents / second.
@phpnode what about a binary serializer without varints?
@lvca in theory much faster because there's no need for a state machine and no need to look at every input byte.
This is what we discussed with @tglman last week: maybe managing varints brings no advantage at all, except on space.
@lvca I definitely vote for switching to fixed length numbers, this issue is the only reason I haven't tackled the new format in oriento yet - in JS we're also stuck with a single 53bit number type so it's a similar issue that @mindplay-dk describes where varint actually introduces quite a lot of overhead.
In my opinion, the advantage of varints is only space; I can't think of a realistic use case where a client needs to fetch billions of records synchronously. @mindplay-dk your proposal looks very interesting.
@tglman WDYT ?
@lvca sorry for the late response...
The problem with CSV serialization is, as @phpnode mentioned, the CPU overhead and the complexity of a state machine - but also memory overhead, from having to preload the serialized information into memory, CPU overhead on the server from encoding in CSV format, and bandwidth overhead.
So, overall, the CSV format isn't really a good choice in any scenario, which, I take it, is why you introduced the new binary serialization format? :-)
Is a binary protocol client in a scripting language like PHP or JS a good idea in the first place? Probably not, but there are other issues with REST currently (#3202 in particular) that made me give up on it and pursue the binary protocol instead. Apart from the mentioned issue, REST suffers from the same performance overhead from JSON encoding on the server and decoding on the client. (I benchmarked orientdb-php vs the same basic REST query (with curl) early on, and found to my surprise that a REST round-trip was many times slower than a round-trip even with the CSV protocol implemented by the binary PHP client.)
A REST client would obviously be much simpler and easier to implement in a language like PHP.
But if I hope to sway coworkers and managers from MySQL to OrientDB, performance has to be on par with that - with REST or CSV, it won't be. With binary serialization, granted, it'll be much harder to implement, but I feel it's worthwhile.
The alternative is a binary client as a PHP module, but this is terrible in terms of deployment - I really would like for OrientDB to be available to any PHP developer. JAVA runs anywhere, as does PHP - binary PHP modules need to be compiled and deployed and can easily result in a long-term maintenance nightmare. I would strongly prefer to stay in native PHP.
@mindplay-dk just to quibble with this point slightly:
Is a binary protocol client in a scripting language like PHP or JS a good idea in the first place? Probably not...
Scripting languages != scripting languages - modern javascript engines have ridiculous performance and it's possible to get within spitting distance of native performance in a lot of cases.
To illustrate this point, the "CSV" parser in oriento is twice as fast as the Java version.
What about supporting MessagePack?
@lvca talking with @tglman about this now, issue is that adopting msgpack only for record serialization is like fixing only about 5% of the pain involved in writing a client. If we could use msgpack or protobufs for the full binary protocol that would solve a lot more problems and make things a lot easier for client implementers.
@phpnode, you made my day with this memo; I'll sign off on every piece of it, including the protobuf proposal.
What about supporting MessagePack?
As far as PHP goes, it's another binary dependency - so not really any better or worse from that perspective. Of course it's (becoming) a standard, so that in itself is positive, but not really great until it becomes widely adopted, e.g. until it's a standard feature in PHP.
That said, a PHP fall-back implementation should be possible - I looked at a few different implementations, and this one looks like it's almost there, though it's very new and there's no official release yet, and it doesn't support PHP 5.3, and it would need some optimizations.
I guess it's better to build on an existing standard, and the standard does look pretty solid.
How far down the roadmap would this be though? Is it a lot of work on your end? Any chance of this making it into the 2.0 release? My personal goal was a client supporting 2.0 and forward.
The problem with msgpack is that it doesn't solve the incompatibility problem outlined in point 6 here. The OrientDB protocol provably changes rapidly over time, and using protobufs would guarantee that future changes do not break existing code, at least at the protocol layer.
Having a single canonical protocol definition which can be used by both the client and server also means that keeping clients up to date is as simple as running `protoc orientdb.proto`.
Protobufs also has a JSON representation, so it would be possible to use the same code to generate responses for the REST api too. (This is quite an important point - the differences between binary and REST responses make writing a client that can deal with both consistently almost impossible)
Mature tooling is available for basically every platform and it's possible to generate the documentation directly from the `.proto` definition, which means the docs and the protocol never get out of sync.
Artem looked into this 8 months ago and adopting protobufs only required changes to two files in OrientDB, it could be served on a different port so there's no need to drop the existing protocol immediately.
hi all,
The idea behind including msgpack was to decouple the current internal stuff from the network stuff in a relatively fast way. We already have an infrastructure for implementing different ways to serialize records on the net, and it should be relatively easy to plug in another one. I'm going to do a POC soon and see what can be done.
The target is to have a plugin for 2.0.
@phpnode I know that the binary protocol is not perfect, and it could be better to have something more standard, but rewriting everything from scratch is not as easy as it seems, and for sure we cannot provide anything working in ~2.0. Anyway, you did a really good job with the protobuf specs.
But before making a choice I need to do a POC and see the results; that will be done next week.
@tglman does it really make sense to rush the adoption of msgpack when we could just quickly fix the original varint issue now? I don't want to add msgpack support to oriento if it's going to possibly get replaced with something else in the near future. This is something that should be discussed with all of the driver authors before making a decision.
@phpnode fixing the varint issue is not as easy as it seems: varints are written on disk, and all 2.0-M* releases use them to write data, so it means creating a new record-format version and managing it.
It would be much more reliable to have a network document format (be aware it is not JSON; we have strict types like short, int, long, float) that is totally independent of the internal disk format. That would help us evolve the internals without running into your point 6.
We are open to discussion on what to use for that specific format; I actually opened a thread about it some time ago: https://groups.google.com/forum/#!topic/orient-database/WNi_4e1QFXY - we can continue the discussion there.
Last point: we are discussing replacing the whole network format, but that will not happen any time soon - we are not going to put effort into that before ~3.0. So I think it's better for now to put the effort into choosing a document format (it can be CSV, binary, msgpack, or any other) that stands for at least a couple of releases without any change, and that all the contributors are aware of and happy to support.
@tglman in that case oriento will probably stick with the CSV format for now because we can already parse more records per second than orient can send.
I think it's better for now to put the effort into choosing a document format (it can be CSV, binary, msgpack, or any other) that stands for at least a couple of releases without any change, and that all the contributors are aware of and happy to support
I second that! :+1:
I think the Orient team are probably best qualified to decide on a specific format - MessagePack looks simple and efficient, and has support on many platforms already - in my opinion, that's probably not a bad choice.
oriento will probably stick with the CSV format for now because we can already parse more records per second than orient can send.
Orient could probably send a lot more records if it didn't have to encode them as CSV.
hi all,
So thanks to @maggiolo00's Christmas we had a quick [POC on msgpack](https://github.com/maggiolo00/orientdb-msgpack/), and the results are not as good as we expected. We found an issue with typing: msgpack doesn't really transmit the type declared at the moment of packing, only the type used to write the data, which is a problem for us because in the document we need strict Java typing. It also doesn't support all the kinds of information we need to transmit, so to build a proper encoder/decoder we would need to add other structures on top of it, making everything less easy and less performant.
So far the plan will be to use our own binary serialization for now, cleaned up of contributor-unfriendly stuff like the current varints or the schemaful fields.
I'm not sure this effort is worth it, because clients could use CSV serialization. @phpnode demonstrated that a well written impl is fast enough ;-)
@phpnode demonstrated that a well written impl is fast enough ;-)
Ehm, no - JS may be suitable for this sort of thing, but PHP is not. This approach will never be fast in PHP.
If msgpack is not the way to go, please consider my previous proposal - using a greatly simplified integer encoding. I'm pretty sure the bandwidth overhead is hugely outweighed by the CPU benefit, and it's a fairly small change, which will simplify client implementations on all platforms.
It's a much smaller change anyhow, and may be enough to solve the problem at hand.
@mindplay-dk is using CSV presenting an actual performance problem for you or just a perceived one? This implementation does 1Mb/sec on my machine, which may not be stellar but that's still a lot of records. It's also a direct port from the JS, with no PHP specific optimisations, I'm sure it could go a lot faster (e.g. it currently allocates an array for every returned value, that could be avoided by using references instead).
Given Orient's connection model I'd bet 99.99% of apps will spend much more time establishing the connection to the database on each request than any time spent deserializing.
@phpnode granted, the problem is "perceived", but I know from past experience that state machine type parsers are something you should strictly avoid in PHP, with a few possible exceptions (like template engines) where the result can be cached.
Also, it's not only a matter of CPU overhead on the client, but also of bandwidth overhead between the client and server, and of encoding overhead on the server - turning everything into strings and then back into values is not efficient, in terms of either CPU usage or network/memory bandwidth.
Given Orient's connection model I'd bet 99.99% of apps will spend much more time establishing the connection to the database on each request than any time spent deserializing
Even if more time is spent establishing the connection, this is idle time on the client (e.g. non-blocking socket waiting time) which is not CPU time.
I would be very surprised if this contributes substantially to CPU time when profiled (e.g. xdebug and KCacheGrind) - I'd be surprised if most of the CPU time isn't spent in the deserializer.
Also note, this isn't only a (CPU/memory/bandwidth) performance issue - it's also a matter of complexity. Implementing a client for this (in PHP, or any language) is quite difficult - writing a parser for a structured serialization format is far more involved than simply reading sequential data.
@lvca any chance your team can prioritize a simple solution to this problem sooner, rather than a complete protocol revamp that we would have to wait much longer for? I am eager to use and promote OrientDB, but my day job is PHP, and I can't get behind the current (CSV) protocol as a durable solution.
If I'm going to get behind OrientDB at work, I need to know that the current solution is temporary and that this will be addressed - then we could proceed experimentally with the current PHP driver until a real binary driver becomes available for production use.
I would imagine that things like replication and clustering would benefit hugely from a fully binary protocol too? Memory, bandwidth and CPU use should get considerable improvements over CSV, which should make OrientDB even more competitive on performance overall, right? :-)
I think the issue is not as significant as you make out, and the solution you're proposing isn't as easy as it sounds because it requires that orient change their on disk format. I played with varints recently and they were not slow. There is already a maintained php driver that uses the binary protocol, why not use it?
Also, to be clear, there are no issues with the "durability" of the CSV format, you just don't like it.
I wasn't aware that the data is stored physically in this format - obviously this negates my concerns about encoding overhead - which now becomes a storage overhead concern instead, which is less of a concern for me, personally, but may actually be a greater concern in some scenarios.
My concerns about bandwidth overhead in transport, decoding overhead on the client, CPU and memory overhead from loading and parsing the full document, etc. are all still real, I think.
there are no issues with the "durability" of the CSV format, you just don't like it
Okay, the term "durability" was inappropriate - what I meant was, it's not a good long-term solution, since it's sub-optimal both in terms of transport and decoding.
As said, I know the problem is "perceived" to some degree, but I hope you don't think I'm just here to quibble about words? I would like to be a strong advocate for OrientDB and promote it at my workplace - I already am, but still don't feel like I can get behind it 100% as our new production database and total replacement for MySQL in new applications, until these issues are addressed.
I guess the main reason the CSV thing irks me so much is that this was supposed to be a binary protocol, and it isn't, really - it's a binary wrapper around a CSV format. CSV isn't binary, and since it is the protocol used for the actual data, in practice, the bulk of the overall message exchange payload in most scenarios is going to be CSV. That means we get all the complexity of having to deal with a binary format, and all the overhead of having to deal with a CSV format. I think it's fair to say, that's not ideal.
The variable-length encoding of integers in the binary portion of the overall protocol seems like a pointless micro-optimization, seen in perspective against the CSV portion of the protocol, which encodes the majority of numbers exchanged in message payloads, overall, as strings. If the majority of the overall message protocol is going to be strings anyway, I'd rather be using a fully string-based protocol, which would be much simpler on the client-side, e.g. JSON/REST, but at the moment #3202 is keeping me from pursuing that. Also, knowing now that the native storage is CSV, I guess JSON adds both decoding and encoding overhead for every record exchanged, so I'm not sure I would want to use it at all anymore.
As said though, I don't need a solution here and now - I just need an indication that this will be addressed eventually, so I can proceed with the existing driver. Is that silly?
@lvca oh, wait... so if the documents are natively stored in the CSV format on the server, does that mean the new binary protocol is going to incur an overhead for decoding that encoding in the binary serialization format every time a record is requested?? (if so, is there even going to be any net gain from supporting the binary protocol? or just more CPU overhead overall, plus some bandwidth savings?)
The binary format is the on disk format, it's much more efficient than CSV.
I hope you don't think I'm just here to quibble about words?
No, I know you're here with good intentions, I just want to clarify things:
If you're writing a new driver from scratch, and you care about performance, the binary serialization format on the binary protocol is going to beat the pants off any of the other options. But, in the time we've been discussing this, @ostico forked the PHP driver I started, simplified and removed a load of stuff I think you found distasteful (e.g. traits) and is actively maintaining it - https://github.com/Ostico/PhpOrient - have you looked at that driver? It would be better to collaborate on one single codebase rather than having a lot of competing implementations.
Thank you for those clarifications. Yeah, the varints are not a show stopper, but they did stop the show for me - it has taken me a couple of months to understand what it is and come up with a working implementation. For something as trivial as reading and writing integers, it took a lot of steam out of me - I could have been half way done with the actual driver in that time... :-/
I have looked closely at Ostico's driver - I'm still referencing it for certain things. It still uses traits though, and lots of statics, and a bunch of other patterns I don't agree with - obviously it would be better to focus our efforts on one implementation, but the way the codebase looks right now, I doubt we'd agree on most things. My approach is quite different.
@mindplay-dk if I remember correctly you wanted a lower-level driver, right? (I almost wrote "less opinionated" here, but, I think that'd be incorrect :smile:). No reason why you couldn't still collaborate - your lower level driver could be the foundation upon which a more batteries included option could be built, you could split phporient in two.
Hmm, it probably would make more sense to fork and contribute... I'll think about it :-)
@lvca FYI, msgpack is now well-supported (by a PECL extension) in PHP as well. I wonder how much complexity and overhead you could remove both from the native OrientDB drivers and clients and from third-party clients by switching to msgpack?
Hey Fynske,
How far did you get with your PHP driver?
Scott
@smolinari long-since abandoned - I don't have the time and it doesn't seem worth the effort. I've since moved from MySQL to PostgreSQL for new projects - it's not a graph DB of course, but it is quite capable as a document DB these days, and the drivers are solid, so I've settled for that.
That's too bad.
I am also pushing for a more robust driver standard (with the little influence I have). I am no expert, but I've been reading up on both gRPC and msgpack and both sound like a much better basis to work with than what is currently available, simply because they are already well supported and documented in many languages.
I too would like to know how much work would be involved to switch to one of these quasi-standards. It has to be what I would call an enabling step for ODB, because currently, all the driver support is still on wobbly legs. It must allow the actual drivers to be programmed much easier you would think. And for sure, it is the language driver support that makes or breaks things like a database.
Scott
I've come to the same conclusion after years of experience with OrientDB and the drivers. We need a more robust and standardized request/response protocol. I recommend http://www.grpc.io/ - it is available for tons of languages and it would give OrientDB a big boost in the right direction.
It would solve fundamental problems:
What are you going to do about all these problems with the OrientDB drivers? I would be very motivated if you could release some information about the future. @luigidellaquila @tglman thanks!
I would like to propose an option to enable an alternative, simplified encoding scheme for variable-length integers.
A bit of context: I'm working on a binary client for PHP, and it took me well over a week of after-hours tinkering to get a working implementation of functions to read/write variable-size integers. The implementation is rather slow, with no way to optimize, since PHP only has one, platform-dependent, signed 32-bit or 64-bit integer type. It's also brittle, since there's no practical way to support values larger than 2 billion on a 32-bit PHP build.
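For reference, here is roughly what that per-byte decoding loop looks like - a minimal sketch in JavaScript, assuming a protobuf-style unsigned varint (7 payload bits per byte, high bit as a continuation flag), which may differ in detail from OrientDB's actual encoding:

```javascript
// Sketch: decode a protobuf-style unsigned varint from a Buffer.
// Assumption: 7 payload bits per byte, high bit set means "more bytes follow".
// Note: JS bitwise operators truncate to 32 bits, so multiplication/addition
// must be used instead of shifts to stay correct up to 2^53 - 1.
function readVarint(buf, offset) {
  let value = 0;
  let shift = 1; // multiplier: 128^n, instead of << (7*n), to avoid 32-bit truncation
  let pos = offset;
  for (;;) {
    const byte = buf[pos++];
    value += (byte & 0x7f) * shift;
    if ((byte & 0x80) === 0) break; // continuation bit clear: done
    shift *= 128;
  }
  return { value, bytesRead: pos - offset };
}

// Encoding is the mirror image: emit 7 bits at a time, low groups first.
function writeVarint(value) {
  const bytes = [];
  do {
    let byte = value % 128;
    value = Math.floor(value / 128);
    if (value > 0) byte |= 0x80; // set continuation bit when more bytes follow
    bytes.push(byte);
  } while (value > 0);
  return Buffer.from(bytes);
}
```

Every byte costs a branch plus arithmetic, and past 31 bits the usual shift operators can't be used at all - this per-byte state handling is exactly what a fixed-length encoding avoids.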
Having a simplified, alternative encoding option for variable-length integers isn't just to simplify implementation in scripting languages though - a more important thing to consider is the trade-off between bandwidth overhead and CPU overhead.
For example, in a replication scenario, where two servers are physically far from each other, conserving bandwidth may be the most desirable option.
On the other hand, in a scenario where an application server is on the same local network as the database server, a bit more bandwidth overhead may be a better trade-off if it saves CPU overhead from encoding on the database server and from encoding/decoding on the application server (especially if the latter is running a client written in a scripting language like PHP or JS).
I would propose a very simple encoding scheme:
A single `type` byte followed by the `value`, where `type` is e.g. 1, 2, 3 or 17 (integer, short, long or byte) and `value` is a fixed number of bytes determined by the type of integer. For values < 127, that means an extra byte of overhead. For values 127-255, it's two bytes, same as now, and for higher values it's a byte more than or the same as the current scheme.
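A sketch of how this scheme could look, in JavaScript, using the type ids mentioned above (1 = integer, 2 = short, 3 = long, 17 = byte) - purely illustrative, not an official OrientDB wire format:

```javascript
// Sketch of the proposed type-prefixed fixed-length integer encoding:
// one type byte, then a fixed-width big-endian value. Type ids follow the
// examples above (1 = integer, 2 = short, 3 = long, 17 = byte) but are
// an assumption for illustration only.
function encodeInt(value) {
  if (value >= -128 && value <= 127) {
    const buf = Buffer.alloc(2);
    buf.writeUInt8(17, 0);           // type: byte
    buf.writeInt8(value, 1);
    return buf;
  }
  if (value >= -32768 && value <= 32767) {
    const buf = Buffer.alloc(3);
    buf.writeUInt8(2, 0);            // type: short
    buf.writeInt16BE(value, 1);
    return buf;
  }
  if (value >= -2147483648 && value <= 2147483647) {
    const buf = Buffer.alloc(5);
    buf.writeUInt8(1, 0);            // type: integer
    buf.writeInt32BE(value, 1);
    return buf;
  }
  const buf = Buffer.alloc(9);
  buf.writeUInt8(3, 0);              // type: long
  buf.writeBigInt64BE(BigInt(value), 1);
  return buf;
}

function decodeInt(buf, offset = 0) {
  // One branch on the type byte, then a single fixed-width read - no loop.
  switch (buf.readUInt8(offset)) {
    case 17: return { value: buf.readInt8(offset + 1), bytesRead: 2 };
    case 2:  return { value: buf.readInt16BE(offset + 1), bytesRead: 3 };
    case 1:  return { value: buf.readInt32BE(offset + 1), bytesRead: 5 };
    case 3:  return { value: Number(buf.readBigInt64BE(offset + 1)), bytesRead: 9 };
    default: throw new Error('unknown type tag');
  }
}
```

Decoding becomes one branch on the type byte followed by a single fixed-width read, with no per-byte loop - which is the computational simplification argued for above.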
Computationally though, this should be much simpler and faster to encode and decode - and much simpler to implement.
I'd be curious to see a bandwidth benchmark with this change in place in a real-world scenario - in practice, the difference in bandwidth may be almost negligible, and defaulting to a simplified encoding may actually be the best choice, slow networks being the only reasonable exception.