Open xwm1992 opened 8 months ago
Good idea; the connector can run in multiple modes for different scenarios.
Design details
Add InnerLocalServer as an embedded local server. Its primary role is to create an IInnerPubSubService and to obtain all dependent eventmesh-connectors. IInnerPubSubService is the equivalent of the collection of all processors in another service (such as the HTTP service), except that it really contains only two processors, because a connector requires only two kinds of handling: one for subscription and one for publication. The eventmesh-connectors also need some modification; for example, in eventmesh-connector-file the start class implements a new interface and a new file is added for the SPI. eventmesh-openconnect-java has also been adjusted, adding a service interface and a TCPClient adapter (with no actual network interaction).
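To make the shape of IInnerPubSubService concrete, here is a minimal hypothetical sketch. The names IInnerPubSubService, SubscribeProcessor, and PublishProcessor come from the proposal above; the method signatures, the String event type (the real service would presumably carry CloudEvents), and the internals are my own assumptions, not the actual implementation.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Hypothetical sketch: IInnerPubSubService bundles the only two kinds of
// handling a connector needs, one for subscription and one for publication.
interface IInnerPubSubService {
    void subscribe(String topic, Consumer<String> listener); // SubscribeProcessor role
    void publish(String topic, String event);                // PublishProcessor role
}

// A minimal in-memory implementation: no network hop, just method calls.
class InnerPubSubService implements IInnerPubSubService {
    private final Map<String, List<Consumer<String>>> listeners = new ConcurrentHashMap<>();

    @Override
    public void subscribe(String topic, Consumer<String> listener) {
        listeners.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(listener);
    }

    @Override
    public void publish(String topic, String event) {
        // Deliver the event directly to every in-process subscriber of the topic.
        listeners.getOrDefault(topic, List.of()).forEach(l -> l.accept(event));
    }
}
```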
So all the modification points are as follows:
- Each eventmesh-connector's start class implements a new interface, and each connector adds a file for the SPI
- eventmesh-openconnect-java adds two interfaces and one class, and Application.class adds static variables
- eventmesh-runtime changes are uncertain because the implementation boundaries have not been determined
Required further modification points:
- IInnerPubSubService: fill in the class body, including the logic of the internal SubscribeProcessor and PublishProcessor
- IntegrationCloudEventTCPClientAdapter: the concrete implementation that uses IInnerPubSubService
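The second point above, the "TCPClient adapter (no network interaction)", could be pictured as follows. This is a hedged sketch under my own assumptions: the class name LocalTcpClientAdapter and the BiConsumer stand-in for IInnerPubSubService.publish are illustrative only; the point is that the connector keeps calling a client-shaped API while publish() becomes a plain in-process method call rather than a socket write, so no serialization or protocol conversion happens.

```java
import java.util.function.BiConsumer;

// Hypothetical sketch of a TCP-client adapter with no network interaction:
// the connector still talks to a client-shaped API, but publish() delegates
// to the co-located runtime service via a direct method call.
public class LocalTcpClientAdapter {
    // Stand-in for IInnerPubSubService.publish(topic, event).
    private final BiConsumer<String, String> innerPublish;

    public LocalTcpClientAdapter(BiConsumer<String, String> innerPublish) {
        this.innerPublish = innerPublish;
    }

    public void publish(String topic, String event) {
        innerPublish.accept(topic, event); // direct in-process call, no socket
    }
}
```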
Usage after the modification: introduce the corresponding eventmesh-connectors through Gradle, and they are automatically recognized and started.
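The automatic recognition described above presumably relies on the standard Java SPI mechanism: each connector jar ships a provider file, and the runtime discovers whatever is on the classpath via ServiceLoader. This is a sketch under that assumption; the interface name ConnectorStarter and the bootstrap class are hypothetical, not the actual EventMesh API.

```java
import java.util.ServiceLoader;

// Hypothetical SPI interface that each connector's start class would implement.
// Each connector jar would then ship a file
//   META-INF/services/ConnectorStarter
// containing the fully qualified name of its implementation class.
interface ConnectorStarter {
    String name();
    void start();
}

public class ConnectorBootstrap {
    // Discover and start every connector found on the classpath,
    // returning how many were started.
    public static int startAll() {
        int started = 0;
        for (ConnectorStarter starter : ServiceLoader.load(ConnectorStarter.class)) {
            starter.start();
            started++;
        }
        return started;
    }
}
```

With this shape, adding a connector to the Gradle dependencies is enough: its provider file lands on the classpath and ServiceLoader picks it up without any further configuration.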
I will submit a PR later.
Don't rush to submit the PR. We need to discuss it thoroughly, define the interfaces and concepts, and design the architecture.
This PR is not meant to be merged; it is for discussion. Otherwise I'm afraid I won't be able to explain the design clearly.
Design direction for this issue at regular community meetings: https://docs.qq.com/doc/DQkdlV0ZhdWZGRXFB, 2024-03-14 Point 7
I'm sorry, there are some descriptions I didn't understand. Aren't there three existing netty servers: one implementing TCP, one implementing HTTP, and another implementing gRPC? I don't understand [the http server can be transformed into the corresponding connector, etc.]. As for [the connector is not limited to the process level and can be thread level], I think that design is very reasonable; my own design is not yet complete. Regarding [the connector currently passes through too many nodes, so consider optimizing memory-mode storage]: doesn't the existing connector go over its own TCP connection to the runtime, which then carries out the subsequent processing? My solution is to change the original network communication into method calls, that is, in-memory processing. As for switching to the Disruptor, wouldn't that require writing an eventmesh-storage-plugin? Or should the connector put events into the Disruptor and the runtime fetch them from it? I haven't studied [filter/transformer] yet. And about [connector (as a standalone function)]: isn't this issue about how to integrate the connector into the runtime? Why is it standalone again?
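The Disruptor handoff being asked about can be pictured with a stand-in sketch. A plain ArrayBlockingQueue replaces the LMAX Disruptor ring buffer here (the real library has a different API and much better throughput characteristics), but the shape of the proposal is the same: the connector puts events into a shared in-memory buffer and the runtime takes them out, with no TCP hop and no serialization. The class and method names are illustrative assumptions.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Stand-in sketch: a BlockingQueue in place of a Disruptor ring buffer,
// illustrating the connector -> runtime in-memory handoff.
public class InMemoryHandoff {
    private final BlockingQueue<String> buffer;

    public InMemoryHandoff(int capacity) {
        this.buffer = new ArrayBlockingQueue<>(capacity);
    }

    // Called from the connector side; blocks if the buffer is full.
    public void offerFromConnector(String event) throws InterruptedException {
        buffer.put(event);
    }

    // Called from the runtime side; blocks until an event is available.
    public String takeInRuntime() throws InterruptedException {
        return buffer.take();
    }
}
```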
It has been 90 days since the last activity on this issue. Apache EventMesh values the voices of the community. Please don't hesitate to share your latest insights on this matter at any time, as the community is more than willing to engage in discussions regarding the development and optimization directions of this feature.
If you feel that your issue has been resolved, please feel free to close it. Should you have any additional information to share, you are welcome to reopen this issue.
@Ish1yu
Connector Runtime has been partially merged into the main branch; you can refer to it. The Connector Runtime is deployed on the same machine as the Mesh Runtime, which saves network I/O overhead, and it communicates over a single protocol, which saves protocol-conversion overhead.
Search before asking
Feature Request
Currently, the connector and eventmesh-runtime run separately. The connector integrates eventmesh-sdk to communicate with eventmesh-runtime, and the data synchronization process goes through an eventmesh storage. The synchronized data passes through multiple nodes and undergoes multiple serialization and deserialization operations, which greatly reduces the efficiency of data synchronization. We hope to embed the connector into eventmesh-runtime and optimize the eventmesh-runtime memory storage mode, so that the source/sink connector can complete data flow and synchronization through memory storage in eventmesh-runtime, improving the efficiency of synchronizing data.
Are you willing to submit PR?
Code of Conduct