imixs / imixs-micro

A lightweight workflow service running on plain Java VMs
GNU General Public License v3.0

communication between micro processor and the workflow engine #3

Open rsoika opened 2 months ago

rsoika commented 2 months ago

Concept for how the communication between the micro processor and the workflow engine should work.

gmillinger commented 2 months ago

Addressing each point in separate comments.

-- Requirement: The workflow engine should accept commands to receive a process definition (BPMN XML), start the process, stop the process, and accept external data that advances the execution of a task.

-- Explanation: This explanation reflects how I have done this in the past; I would expect there are better ways to do it. It is important to keep the design constraints in mind when thinking this through: the solution must be lightweight, have minimal dependencies, run on computing resources such as a Raspberry Pi, and so on. This rules out some of the heavyweight enterprise application infrastructures/platforms. Simplicity is the key concept.

This breaks down into an architectural description. Each software component is a stand-alone running program, thought of as a service. For example, the workflow engine is a service that manages one or more running process definition instances and consumes and emits events using a publish-subscribe design pattern. The workflow engine is considered the hub, which exposes a WebSocket for the exchange of the event messages. Other clients or services can make a secure connection to the engine to receive and publish events. Examples of client-services are automation equipment and user interfaces such as a webpage. The event messages can be considered "commands" such as "load bpmn xml", "start process", "stop process", and so on. The event messages also publish state changes of the process and context data captured at the state change.
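As a minimal sketch of what such a hub could look like with the Jakarta WebSocket API (the endpoint path, command names, and the dispatch logic are assumptions for illustration, not part of any existing code base):

```java
import jakarta.websocket.OnMessage;
import jakarta.websocket.Session;
import jakarta.websocket.server.ServerEndpoint;

// Hypothetical hub endpoint of the workflow engine. Client-services
// (automation equipment, web UIs) connect here and exchange event messages,
// e.g. plain text or JSON commands such as "start process" or "stop process".
@ServerEndpoint("/engine/events")
public class EngineEventHub {

    @OnMessage
    public void onCommand(String command, Session session) throws Exception {
        // Dispatch the incoming command. In a real engine this would be a
        // structured (e.g. JSON) message carrying the model version, task id
        // and the context data captured at the state change.
        switch (command) {
            case "load bpmn xml" -> { /* parse and register the process definition */ }
            case "start process" -> { /* create a new process instance */ }
            case "stop process"  -> { /* terminate the running instance */ }
            default -> session.getBasicRemote().sendText("unknown command: " + command);
        }
    }
}
```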

It may make more sense to do this locally with a type of plugin design pattern, but there still exists the need to communicate more broadly with other workflow engines on the same network. This describes a decentralized approach with many workflow engines running on the same network on individual computers, such as a Raspberry Pi acting as the workstation. The same requirement exists within a centralized application architecture design, but that is probably better served by a different method of communication or an enterprise infrastructure.

The most important thing is meeting the requirement. Whatever technical solution meets requirements and constraints can be used.

gmillinger commented 2 months ago

-- Requirement: Use of peer-to-peer computing and mesh networking. Remove the Internet and/or WAN dependency within the walls of the facility.

-- Explanation: The hardware/computing resource architecture for our current platform has not changed much since 2005, yes, 20 years. It is a very traditional client/server setup, usually three server-grade computers.

The App Servers: running Windows Terminal Services with workstation thin-client terminals using RDP for the UI. There are 2 App Servers that are exact duplicates, with software that keeps each identical. Both computers have redundant NICs and a RAID 5 storage system. Replicated MS SQL Server databases run on each App Server for workflow definitions, persistence, and historical process execution data. As you can see, the system is almost bullet-proof except for the network infrastructure within the manufacturing plant. The network is very redundant as well but is the weakest link. The RDP session starts an instance of our workflow engine that is dedicated to the processes run at that workstation. A typical facility would have between 20 and 50 workstations, in some cases more.

The third server is a Database Server with RAID 5 storage. Its MS SQL Server is linked to the App Servers; historical data is flushed to the Database Server at scheduled intervals and purged from the App Server DB. All historical reporting and analytics is performed from the Database Server.

You can see how this has many moving parts and can get very expensive. Nowadays the App and Database servers are virtualized on-premise but there are times when uptime does not meet the requirements.

The challenge: create an infrastructure with the same reliability and reduce the cost.

The experiment:

The result:

rsoika commented 2 months ago

ok - I understand.

So we can think about an architecture like this:

  1. We have one Enterprise Workflow Engine, holding the meta models and controlling the micro workflow engines. This engine can run on a modern Jakarta App server.
  2. We have one or many Workstations (Raspberry Pi) running the micro workflow engine.

To set up a new Workstation, as a system engineer you just need to:

  1. Create a meta model with the BPMN Modelling tool defining a kind of human-centric meta process.
  2. Deploy the meta model on the Enterprise server. The meta model holds information about the workstation devices.
  3. Create a new Micro Workflow Model with the BPMN Modelling tool.
  4. The Micro Model holds information about the Endpoint to the Enterprise Workflow Engine and optional Endpoints to other workstations.
  5. Next you build the Raspberry Pi image (with Maven) and burn it onto an SD card.
  6. Put the new SD card into the workstation (Raspberry Pi) and reboot the device.

What happens now (in my vision):

  1. Imixs-Micro starts and automatically loads all local BPMN Micro Workflow Models from the SD card.
  2. According to the model definition, the micro engine automatically connects to the Enterprise Workflow Engine (via WebSockets) and says 'hello' (see the sketch below).
  3. Now the workstation is ready to accept commands (via WebSocket), typically something like 'start workflow 1.0.0'.
  4. Optionally, the workstation also says 'hello' to other workstations in the network.
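From the micro engine's side, the 'hello' handshake could look roughly like the following sketch with the Jakarta WebSocket client API; the endpoint URI, the workstation name, and the message format are assumptions for illustration:

```java
import java.net.URI;

import jakarta.websocket.ClientEndpoint;
import jakarta.websocket.ContainerProvider;
import jakarta.websocket.OnMessage;
import jakarta.websocket.OnOpen;
import jakarta.websocket.Session;
import jakarta.websocket.WebSocketContainer;

// Hypothetical client endpoint used by Imixs-Micro to register itself with
// the Enterprise Workflow Engine after booting from the SD card.
@ClientEndpoint
public class EnterpriseRegistrationClient {

    @OnOpen
    public void onOpen(Session session) throws Exception {
        // Announce the workstation and the locally loaded micro models.
        session.getBasicRemote().sendText("hello workstation-01 model=ticket-1.0.0");
    }

    @OnMessage
    public void onCommand(String command) {
        // Commands from the Enterprise Workflow Engine, e.g. "start workflow 1.0.0".
        System.out.println("received command: " + command);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // URI of the Enterprise Workflow Engine - an assumption for this sketch.
        container.connectToServer(EnterpriseRegistrationClient.class,
                URI.create("ws://enterprise-engine:8080/engine/events"));
        Thread.sleep(60_000); // keep the session open; a real engine would run permanently
    }
}
```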

grafik

The 'Create' event holds a SignalAdapter (this is what you expect in a Task) that calls the Micro Controller. The configuration is done in the BPMN Event Details. For example, this may look like this:

grafik
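Such an adapter could, as a rough sketch, look like the following. This assumes the SignalAdapter interface of the Imixs-Workflow core API; the item names and the (commented-out) transport helper are purely illustrative assumptions:

```java
import java.util.logging.Logger;

import org.imixs.workflow.ItemCollection;
import org.imixs.workflow.SignalAdapter;
import org.imixs.workflow.exceptions.AdapterException;
import org.imixs.workflow.exceptions.PluginException;

// Hypothetical adapter attached to the 'Create' event. It reads its
// configuration from the event (maintained in the BPMN Event Details) and
// forwards a command to the workstation's micro controller.
public class MicroControllerAdapter implements SignalAdapter {

    private static final Logger logger = Logger.getLogger(MicroControllerAdapter.class.getName());

    @Override
    public ItemCollection execute(ItemCollection document, ItemCollection event)
            throws AdapterException, PluginException {
        // Item names are assumptions for this sketch.
        String endpoint = event.getItemValueString("micro.endpoint");
        String command = event.getItemValueString("micro.command");

        logger.info("forwarding '" + command + "' to " + endpoint);
        // e.g. send the command over the WebSocket channel to the workstation:
        // new MicroControllerClient(endpoint).send(command);   // hypothetical helper

        document.setItemValue("micro.laststatus", "command sent: " + command);
        return document;
    }
}
```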

Each time the micro workflow completes, it triggers an event on the Enterprise Workflow Engine with optional meta data.
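That completion event could, for example, be a small JSON message published over the existing WebSocket session. A sketch using the Jakarta JSON API, where all field names are assumptions:

```java
import jakarta.json.Json;
import jakarta.json.JsonObject;

// Sketch of a completion event the micro engine could publish to the
// Enterprise Workflow Engine; the field names are assumptions.
public class CompletionEventExample {

    public static void main(String[] args) {
        JsonObject completionEvent = Json.createObjectBuilder()
                .add("type", "workflow.completed")
                .add("workstation", "workstation-01")
                .add("model", "ticket-1.0.0")
                .add("task", 1000)
                .add("meta", Json.createObjectBuilder()
                        .add("duration", "42s")
                        .add("result", "ok"))
                .build();

        // The JSON text would be sent via session.getBasicRemote().sendText(...).
        System.out.println(completionEvent);
    }
}
```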

rsoika commented 1 month ago

Hi @gmillinger, I think I have now found a working architecture. At its core, I think we need to distinguish between two different kinds of message flows.

I created a new architecture overview document here:

https://github.com/imixs/imixs-micro/blob/main/README.md

gmillinger commented 1 month ago

Hi @rsoika, the thought process looks really good. I am going to take some time to run the concept through my use cases to flesh out the details. I have a list of tasks for this project; I will add formalizing the use cases and add them to the project documents.

rsoika commented 1 month ago

I have now also added JUnit tests to test the WebSocket behavior. In addition, I will add a Docker Compose file to simulate the complete technical setup...
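A rough sketch of what such a test could look like, assuming a running engine instance and the Jakarta WebSocket client API; the endpoint URI and the expected behavior (any reply counts as success) are assumptions:

```java
import java.net.URI;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

import jakarta.websocket.ClientEndpoint;
import jakarta.websocket.ContainerProvider;
import jakarta.websocket.OnMessage;
import jakarta.websocket.Session;

// Sketch of an integration test for the WebSocket behavior; it assumes an
// engine instance is already running at the given URI.
public class WebSocketBehaviorTest {

    static final CountDownLatch replyReceived = new CountDownLatch(1);

    @ClientEndpoint
    public static class TestClient {
        @OnMessage
        public void onMessage(String message) {
            // Any reply from the engine counts as a successful round trip here.
            replyReceived.countDown();
        }
    }

    @Test
    public void helloCommandGetsAReply() throws Exception {
        Session session = ContainerProvider.getWebSocketContainer()
                .connectToServer(TestClient.class,
                        URI.create("ws://localhost:8080/engine/events"));
        session.getBasicRemote().sendText("hello workstation-01");
        assertTrue(replyReceived.await(5, TimeUnit.SECONDS),
                "expected a reply from the workflow engine within 5 seconds");
        session.close();
    }
}
```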