apache/fury

A blazingly fast multi-language serialization framework powered by JIT and zero-copy.
https://fury.apache.org/
Apache License 2.0

Are there any plans to support serialization of pyflink's Java and python inter-process communication? #1919

Open kaori-seasons opened 3 weeks ago

kaori-seasons commented 3 weeks ago

Feature Request

We would like to improve PyFlink's performance by using Fury serialization.

In our Python benchmark, the average time per serialization for each framework is as follows:

[Benchmark chart: avg_serde_time_1m_objects]

Below is the code location where most of the time is spent when we use PyFlink; the overhead is mainly in pickle encoding and decoding.

[Profiling screenshots: error1, error2]

At present, our company holds about 13 million historical records, and each message is between 60 KB and 75 KB. After discussions with the PyFlink community, the recommendation was to use pemja for cross-language calls instead of Beam.

In that setup, Python's native pickle serialization is very slow.
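To make the cost concrete, here is a minimal stand-alone micro-benchmark of pickle round-trips on a synthetic message in the 60–75 KB range described above. The field names and sizes are invented for illustration; they are not the actual production schema.

```python
import pickle
import time

# Synthetic message roughly in the 60-75 KB range described above
# (field names and sizes are made up for illustration).
message = {
    "id": 12345,
    "tags": ["sensor"] * 100,
    "payload": "x" * 65_000,
    "values": list(range(1_000)),
}

N = 200
start = time.perf_counter()
for _ in range(N):
    blob = pickle.dumps(message, protocol=pickle.HIGHEST_PROTOCOL)
    restored = pickle.loads(blob)
elapsed = time.perf_counter() - start

assert restored == message
print(f"pickle round-trip: {elapsed / N * 1e6:.1f} us/message, blob size {len(blob)} bytes")
```

Running a loop like this against both pickle and a candidate serializer on real messages would give a directly comparable per-message figure.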

Test method:

[Screenshot: test1]

[Screenshot: error3]

Here output_type defines the type of each field of the transmitted data, as shown below: [Screenshot: test3]

a) Test with 3 operators

| Duration | With output_type (s) | Without output_type (s) | Improvement |
|----------|----------------------|-------------------------|-------------|
| 1 day    | 9.456                | 9.973                   | 5.18%       |
| 3 days   | 14.532               | 18.187                  | 20.10%      |
| 5 days   | 28.911               | 38.786                  | 25.46%      |
| 7 days   | 34.397               | 51.691                  | 33.46%      |

b) Test with 5 operators

| Duration | With output_type (s) | Without output_type (s) | Improvement |
|----------|----------------------|-------------------------|-------------|
| 1 day    | 9.971                | 10.401                  | 4.13%       |
| 3 days   | 20.308               | 25.744                  | 21.12%      |
| 5 days   | 30.166               | 40.305                  | 25.16%      |
| 7 days   | 40.340               | 54.405                  | 25.85%      |

c) Test with 10 operators

| Duration | With output_type (s) | Without output_type (s) | Improvement |
|----------|----------------------|-------------------------|-------------|
| 1 day    | 11.468               | 12.130                  | 5.45%       |
| 3 days   | 23.697               | 31.121                  | 23.85%      |
| 5 days   | 38.015               | 49.508                  | 23.21%      |
| 7 days   | 48.859               | 65.140                  | 24.99%      |

Judging from the test results, explicitly specifying the output_type parameter in the PyFlink DataStream API can significantly improve serialization performance. The improvement grows with the data volume and the number of operators. Specifying output_type reduces serialization/deserialization overhead and avoids type-inference work, thereby improving performance.
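The mechanism behind this can be sketched with the standard library alone: when the schema is known up front (which is what output_type provides), a fixed binary layout avoids the per-value type tags that a generic serializer like pickle must emit. This is only an analogy for the principle, not PyFlink's actual encoder:

```python
import pickle
import struct

# A row whose schema is known up front: (int64, float64, int64).
row = (42, 3.14, 123456789)

# Generic path: pickle must encode type information alongside every value.
pickled = pickle.dumps(row, protocol=pickle.HIGHEST_PROTOCOL)

# Typed path: with the schema known, a fixed binary layout suffices.
fmt = "<qdq"  # little-endian: int64, float64, int64
packed = struct.pack(fmt, *row)
unpacked = struct.unpack(fmt, packed)

assert unpacked == row
print(len(pickled), len(packed))  # typed encoding is smaller and cheaper to decode
```

The typed encoding skips both the per-value tags and the type-dispatch work on decode, which is the same class of saving that output_type gives PyFlink.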

Even so, we hope Fury can improve this situation further. Does @chaokunyang have any suggestions?

Is your feature request related to a problem? Please describe

No response

Describe the solution you'd like

No response

Describe alternatives you've considered

No response

Additional context

No response

chaokunyang commented 3 weeks ago

Fury has two formats for your scenario:

  • Xlang object graph format: this format is faster if you have shared values, and it is the only solution if you have circular references.
  • Binary row format: a zero-copy format that lets you read your fields without parsing the others.

If your message body size is 60 KB, do you need to read all those fields? If not, I would suggest using the binary row format.
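To illustrate what "reading a field without parsing the others" means, here is a toy fixed-width row built with the standard library. This is a conceptual sketch only; Fury's real row format additionally handles variable-length fields, null bitmaps, and nested types:

```python
import struct

# Toy fixed-width "row": int64 fields laid out back to back.
FIELD_FMT = "<q"
FIELD_SIZE = struct.calcsize(FIELD_FMT)

def encode_row(fields):
    """Pack a list of ints into one contiguous buffer."""
    return b"".join(struct.pack(FIELD_FMT, f) for f in fields)

def read_field(buf, index):
    # Zero-copy access: decode only the field at the computed offset,
    # without touching or parsing any other field in the row.
    offset = index * FIELD_SIZE
    return struct.unpack_from(FIELD_FMT, buf, offset)[0]

row = encode_row([10, 20, 30])
view = memoryview(row)  # no copy of the underlying bytes
assert read_field(view, 1) == 20  # only field 1 is decoded
```

Because access is offset-based, the cost of reading one field is independent of the row size, which is why the row format pays off when only a few of many fields are needed.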

kaori-seasons commented 3 weeks ago

> Fury has two formats for your scenario:
>
>   • Xlang object graph format: this format is faster if you have shared values, and it is the only solution if you have circular references.
>   • Binary row format: a zero-copy format that lets you read your fields without parsing the others.
>
> If your message body size is 60 KB, do you need to read all those fields? If not, I would suggest using the binary row format.

All fields need to be read

chaokunyang commented 3 weeks ago

In such cases, Xlang object graph format will be faster.
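The "shared values" point can be demonstrated with stdlib pickle, which also performs reference tracking within a single dump: an object-graph format serializes a shared sub-object once and preserves its identity after deserialization. This is a stand-in for the idea only, not Fury's xlang wire format:

```python
import pickle

# A sub-object referenced from two places in the graph.
shared = {"config": "value"}
graph = {"a": shared, "b": shared}

restored = pickle.loads(pickle.dumps(graph))

# Reference tracking serializes the shared object once and preserves
# identity on the other side: both keys point at the same object.
assert restored["a"] is restored["b"]
```

A format without reference tracking would encode the shared payload twice and break the identity: the restored values would merely be equal, and a mutation through one reference would no longer be visible through the other.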

kaori-seasons commented 2 weeks ago

@chaokunyang Due to work commitments, I may only be able to start on this next month. Since I am not familiar with the xlang object graph, there may be some obstacles in integrating Fury's xlang object graph into PyFlink. Would you mind taking the time to give me some help with Fury performance and type mapping?

chaokunyang commented 2 weeks ago

Yeah, I'd love to help. Feel free to reach out to me.