Attached: Apache Flink: A Deep-Dive into Flink's Network Stack (PDF)
Writing Records into Network Buffers and Reading them again:
a record
  ----> RecordWriter: serialize the Java object into byte[]
  ----> write the bytes into a fixed-size network buffer
  ----> Netty server: flush() the buffer to the network channel
        (node A has only one TCP channel to another node B, no matter how many operators are running on A)
  ====>>>> TCP  [one large record may span multiple network buffers]
  ----> Netty client: write the received byte[] into a buffer
  ----> RecordReader: deserialize from byte[] back into Java objects
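A minimal, self-contained sketch of this write/read path (it does not use Flink's actual RecordWriter/RecordReader classes; the class and constant names are illustrative only): a record is serialized to byte[], chopped into fixed-size "network buffers", then reassembled and deserialized on the other side, showing how one large record can span several buffers.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

/** Simplified illustration of the buffer pipeline, not Flink's real classes. */
public class BufferPipelineSketch {

    static final int BUFFER_SIZE = 8;  // tiny buffers so one record spans several

    /** "RecordWriter" side: serialize a record and split it into fixed-size buffers. */
    static List<byte[]> writeRecord(String record) {
        byte[] serialized = record.getBytes(StandardCharsets.UTF_8);
        List<byte[]> buffers = new ArrayList<>();
        for (int offset = 0; offset < serialized.length; offset += BUFFER_SIZE) {
            int len = Math.min(BUFFER_SIZE, serialized.length - offset);
            byte[] buffer = new byte[len];
            System.arraycopy(serialized, offset, buffer, 0, len);
            buffers.add(buffer);  // in Flink this buffer would be flushed to the TCP channel
        }
        return buffers;
    }

    /** "RecordReader" side: reassemble the buffers and deserialize back to the record. */
    static String readRecord(List<byte[]> buffers) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] buffer : buffers) {
            out.write(buffer, 0, buffer.length);
        }
        return new String(out.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        List<byte[]> buffers = writeRecord("a record larger than one buffer");
        System.out.println("buffers used: " + buffers.size());   // > 1: record spans buffers
        System.out.println("round trip:   " + readRecord(buffers));
    }
}
```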
TODO: modify the data payload by adding a FLINK_FLAG at the beginning of the byte[].
Put the information we want the P4 switch to know into the FLINK_FLAG part.
In the Netty server layer, deserialize the byte[] received by the Netty server, extract the <int, int> tuple, and put it into the network packet header. (See the sketch below.)
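A rough sketch of the FLINK_FLAG framing, assuming we simply prepend a magic value plus the <int, int> tuple to the serialized payload before it is handed to Netty and parse it back on the receiving side. The constant name, magic value, and layout are assumptions for illustration, not existing Flink fields.

```java
import java.nio.ByteBuffer;

/** Hypothetical FLINK_FLAG framing: flag + <int, int> tuple + payload. */
public class FlinkFlagCodec {

    static final int FLINK_FLAG = 0xF11A0001;   // hypothetical magic value
    static final int HEADER_LEN = 4 + 4 + 4;    // flag + two ints

    /** Sender side: wrap the serialized payload with the flag and the two integers. */
    static byte[] encode(int first, int second, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(HEADER_LEN + payload.length);
        buf.putInt(FLINK_FLAG).putInt(first).putInt(second).put(payload);
        return buf.array();
    }

    /** Receiver side: check the flag, extract the <int, int> tuple, return the payload. */
    static byte[] decode(byte[] framed, int[] tupleOut) {
        ByteBuffer buf = ByteBuffer.wrap(framed);
        if (buf.getInt() != FLINK_FLAG) {
            throw new IllegalArgumentException("missing FLINK_FLAG");
        }
        tupleOut[0] = buf.getInt();   // first int destined for the packet header
        tupleOut[1] = buf.getInt();   // second int destined for the packet header
        byte[] payload = new byte[buf.remaining()];
        buf.get(payload);
        return payload;
    }

    public static void main(String[] args) {
        byte[] framed = encode(7, 42, "serialized record bytes".getBytes());
        int[] tuple = new int[2];
        byte[] payload = decode(framed, tuple);
        System.out.println("tuple = <" + tuple[0] + ", " + tuple[1]
                + ">, payload bytes = " + payload.length);
    }
}
```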
First step: change the source of Query1 so that it generates tuples of integers, then try to capture the packets sent from the source to the map operator.
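A minimal sketch of such a source, assuming Query1's source can be swapped for a plain Flink SourceFunction that emits <int, int> tuples (the class name and the emitted values are illustrative):

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

/** Hypothetical replacement source for Query1: emits <int, int> tuples so the
 *  bytes on the wire between the source and the map operator are easy to spot. */
public class IntPairSource implements SourceFunction<Tuple2<Integer, Integer>> {

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<Tuple2<Integer, Integer>> ctx) throws Exception {
        int i = 0;
        while (running) {
            // Emit a recognizable pair; the second field could later carry the
            // value we want to expose to the P4 switch via FLINK_FLAG.
            ctx.collect(Tuple2.of(i, i * 2));
            i++;
            Thread.sleep(100);  // throttle so individual packets are easier to capture
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}
```

This could be wired in with env.addSource(new IntPairSource()) ahead of the existing map operator; the traffic between the two tasks could then be captured with a packet sniffer (e.g. tcpdump) on the TaskManager data port.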
Look at flink-runtime/.../netty.
If stuck, ask on dev@flink.apache.org.
Reference (Flink network stack; integrating a programmable switch with ML frameworks):
https://flink.apache.org/2019/06/05/flink-network-stack.html
https://flink.apache.org/2019/07/23/flink-network-stack-2.html
https://blog.csdn.net/u013411339/article/details/99262467
https://blog.csdn.net/huaishu/article/details/93723889
https://blog.csdn.net/yanghua_kobe/article/details/79308889