Closed · amomchilov closed this 2 years ago
Splitting apart the problem and the solution spaces here, what specifically are the problems? I'd likely prefer to solve the problems involved with encoding/decoding a type like `Data`, as opposed to solving the issue exclusively for `Data`.
@amomchilov I've committed a generalized solution for any type of encoding/decoding involving unkeyed containers of exclusively fixed-size elements. While this implementation does nothing special for `Data`, its encoding into an XPC type should now have no size overhead. Unsurprisingly, it does have some additional CPU and memory overhead in order to achieve this. Let me know if you consider this sufficient for your needs.

Regardless, this optimization should also help when passing large arrays of smaller fixed-size types.
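To make the trade-off concrete, here is a rough sketch of the two approaches using the C XPC API directly. This is not SecureXPC's actual implementation, and the function names are hypothetical; it just illustrates packing fixed-size elements into one `xpc_data` blob versus creating one XPC object per element:

```swift
import Foundation
import XPC

// Hypothetical sketch: pack an array of trivially-copyable, fixed-size
// elements into a single xpc_data blob instead of one XPC object per element.
func packFixedSizeElements<T>(_ elements: [T]) -> xpc_object_t {
    // One contiguous copy of the payload; this copy is the extra CPU/memory
    // cost mentioned above, traded for zero per-element size overhead.
    elements.withUnsafeBytes { rawBuffer in
        xpc_data_create(rawBuffer.baseAddress, rawBuffer.count)
    }
}

// The naive alternative: one xpc_uint64 per byte, i.e. 8 bytes of storage
// for every 1 byte of payload (the 8x overhead discussed below).
func encodeBytesAsArray(_ data: Data) -> xpc_object_t {
    let array = xpc_array_create(nil, 0)
    for byte in data {
        xpc_array_append_value(array, xpc_uint64_create(UInt64(byte)))
    }
    return array
}
```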
I haven’t gotten far enough in my implementation to have anything to profile, but I’m guessing this will be a perf bottleneck for me. I’ll update you once I have data (pun intended) on the matter.
@amomchilov Did you get to a point where you could determine the performance of your approach and whether SecureXPC was a hindrance?
I've punted on the feature that was going to be most sensitive to this, which is basically transmitting time-series data used to populate a chart. Each data point has an associated timestamp, which would have been a `Date`. Most likely, I would have used Unix epoch integers to avoid all the string allocations.
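For illustration, a minimal sketch of that shape (the `ChartPoint` name and fields are hypothetical, not from any actual code in this project):

```swift
import Foundation

// Hypothetical chart data point, with the timestamp stored as a Unix epoch
// value instead of a Date, so no date strings are allocated while
// encoding or decoding.
struct ChartPoint: Codable {
    var epochSeconds: Int64   // e.g. Int64(date.timeIntervalSince1970)
    var value: Double
}

// Convert back to a Date only when it's needed for display.
extension ChartPoint {
    var date: Date { Date(timeIntervalSince1970: TimeInterval(epochSeconds)) }
}
```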
> I think we should support the native `xpc_data` type

It's much worse than that. Storing it as a string (with, say, Base64) would have roughly a 1.33x overhead (6 bits of payload per 8-bit character). The current encoding has an 8x overhead: it encodes each `UInt8` of payload as a full `UInt64`, so every 8-bit byte occupies a 64-bit slot. Here's a section of one of my long comments:
_Originally posted by @amomchilov in https://github.com/trilemma-dev/SecureXPC/pull/6#discussion_r743985682_