shuai132 / rpc_core

A tiny RPC core library, supporting C++11 and Rust.
MIT License

Serialization protocol documentation #3

Open david-drinn opened 8 months ago

david-drinn commented 8 months ago

When writing our connection layer, it's difficult to understand the built-in rpc_core serialization protocol. Is there any documentation you can provide on that?

shuai132 commented 8 months ago

The connection layer doesn't need to care about the serialization protocol.

The serialization interface is just two functions:

namespace rpc_core {

// convert a value into a byte string for sending
template <typename T>
inline std::string serialize(T&& t) {
}

// parse the received bytes back into t; return true on success
template <typename T>
inline bool deserialize(const detail::string_view& data, T& t) {
}

}  // namespace rpc_core

The built-in rpc_core serialization is just one implementation of this interface.

There is an option for custom serialization: -DRPC_CORE_SERIALIZE_USE_CUSTOM="custom_serialization.h"

You can see serialize_nlohmann_json.hpp for an example.
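For illustration, here is a minimal sketch of what such a custom serialization header could look like, modeled loosely on the nlohmann::json approach mentioned above. The exact hooks rpc_core expects, and the members available on detail::string_view, are assumptions here; serialize_nlohmann_json.hpp in the repository is the authoritative example.

// custom_serialization.h - hedged sketch, not the actual rpc_core adapter.
// Assumes T converts to/from nlohmann::json and that detail::string_view
// exposes data() and size() like std::string_view.
#pragma once

#include <string>
#include <utility>

#include <nlohmann/json.hpp>

namespace rpc_core {

template <typename T>
inline std::string serialize(T&& t) {
  // encode the value as a JSON text packet
  nlohmann::json j = std::forward<T>(t);
  return j.dump();
}

template <typename T>
inline bool deserialize(const detail::string_view& data, T& t) {
  // parse the received bytes; report failure instead of throwing
  auto j = nlohmann::json::parse(data.data(), data.data() + data.size(),
                                 /*cb=*/nullptr, /*allow_exceptions=*/false);
  if (j.is_discarded()) return false;
  t = j.get<T>();
  return true;
}

}  // namespace rpc_core

Built with -DRPC_CORE_SERIALIZE_USE_CUSTOM pointing at such a header, this would swap the compact built-in encoding for self-describing JSON packets.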

david-drinn commented 8 months ago

> The connection layer doesn't need to care about the serialization protocol.

In theory, yes. However, in practice, no. While troubleshooting with a logic analyzer when implementing a connection, you may not be sure whether you're getting the whole packet, just pieces of it, or just garbage data. Knowing some small details about the serialization protocol is very helpful in that case.

> The serialization interface is just two functions:
>
> namespace rpc_core {
>
> template <typename T>
> inline std::string serialize(T&& t) {
> }
>
> template <typename T>
> inline bool deserialize(const detail::string_view& data, T& t) {
> }
>
> }  // namespace rpc_core
>
> The built-in rpc_core serialization is just one implementation of this interface.

OK, this points me in the right direction, thank you.

> There is an option for custom serialization: -DRPC_CORE_SERIALIZE_USE_CUSTOM="custom_serialization.h"
>
> You can see serialize_nlohmann_json.hpp for an example.

Yes, I did see the other serialization options, and it's very helpful to be able to use them. However, I do like the size optimization, and presumably speed optimization, provided by the built-in serialization on embedded processors.

shuai132 commented 8 months ago

The connection layer should ensure that each packet is complete, and perform CRC checks, after receiving data over unstable transports like a serial port.

While understanding the serialization will be helpful, the raw data will still be very difficult to read. Ensuring a complete packet in code is the best approach.
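To make the "ensure the packet is complete" advice concrete, below is a generic sketch of a framing layer such a connection might use. It is not rpc_core's actual wire format; the 4-byte length prefix, the CRC-16 trailer, and all names are illustrative assumptions.

// Generic sketch of connection-layer framing over a serial port; this is
// NOT rpc_core's wire format. Assumed frame: 4-byte little-endian payload
// length, then the payload, then a CRC-16/CCITT of the payload.
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// CRC-16/CCITT over the payload bytes
inline uint16_t crc16(const uint8_t* data, size_t len) {
  uint16_t crc = 0xFFFF;
  for (size_t i = 0; i < len; ++i) {
    crc ^= static_cast<uint16_t>(data[i]) << 8;
    for (int b = 0; b < 8; ++b) {
      crc = (crc & 0x8000) ? static_cast<uint16_t>((crc << 1) ^ 0x1021)
                           : static_cast<uint16_t>(crc << 1);
    }
  }
  return crc;
}

class Framer {
 public:
  // Feed raw bytes as they arrive; returns only complete, CRC-valid payloads.
  std::vector<std::string> feed(const uint8_t* data, size_t len) {
    buf_.insert(buf_.end(), data, data + len);
    std::vector<std::string> packets;
    while (buf_.size() >= kHeader) {
      uint32_t n = buf_[0] | (buf_[1] << 8) | (buf_[2] << 16) |
                   (static_cast<uint32_t>(buf_[3]) << 24);
      if (buf_.size() < kHeader + n + kTrailer) break;  // wait for more bytes
      const uint8_t* payload = buf_.data() + kHeader;
      uint16_t rx_crc = buf_[kHeader + n] |
                        (static_cast<uint16_t>(buf_[kHeader + n + 1]) << 8);
      if (crc16(payload, n) == rx_crc) {
        // hand the verified payload to the rpc_core connection layer
        packets.emplace_back(reinterpret_cast<const char*>(payload), n);
      }  // on mismatch, drop the frame (a real implementation might resync)
      buf_.erase(buf_.begin(), buf_.begin() + kHeader + n + kTrailer);
    }
    return packets;
  }

 private:
  static constexpr size_t kHeader = 4;   // length prefix
  static constexpr size_t kTrailer = 2;  // CRC-16
  std::vector<uint8_t> buf_;
};

The point is simply that rpc_core should only ever see payloads that are already complete and checksum-verified.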

Moreover, built-in serialization should theoretically be much faster than JSON.

david-drinn commented 3 months ago

When watching a logic analyzer over long periods of time, it is helpful to have a clear idea of what data you're looking at.

Indeed, with a Saleae Logic device, we could create an analyzer plugin that decodes it on the fly and shows the data above each packet. I suppose this could be done by just using rpc-core in the plugin code, though I'm not sure how feasible that is.

shuai132 commented 3 months ago

When a data packet is received, the only thing we know is the binary data and its size. The key is how to analyze that binary data.

So, if you use the built-in serialization protocol, it's impossible to make a generic plugin, because the serialization erases the type info. But if you use a JSON-like serialization, it's easy to make one.
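To illustrate the contrast (with a hypothetical packet body; this is not rpc_core's actual output): a JSON-like payload is self-describing, so an analyzer plugin can render it without knowing any schema, whereas the built-in binary encoding is opaque bytes whose meaning lives only in the C++ types on both ends.

#include <iostream>
#include <string>

#include <nlohmann/json.hpp>

int main() {
  // hypothetical packet body as it might look with a JSON-like serialization
  std::string payload = R"({"cmd":"set_speed","value":42})";
  auto j = nlohmann::json::parse(payload, /*cb=*/nullptr, /*allow_exceptions=*/false);
  if (!j.is_discarded()) {
    // a generic analyzer can pretty-print every field without a schema
    std::cout << j.dump(2) << std::endl;
  }
  // the equivalent built-in-format packet would just be a handful of raw
  // bytes here, with no field names or types to display
}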

david-drinn commented 3 months ago

I think I understand. You have to have pre-existing knowledge about what the packet means in order to decode it correctly. Both sides of the RPC know this, so it's not a problem, but a generic analyzer plugin would not. It could only guess based on patterns, but patterns alone won't tell you definitively.

That said, some Saleae plugins let you supply configuration. So a simple implementation would let you enter the packet decoding for at least one type of packet; ideally, more than one packet decoding could be configured.