rsoika opened 2 months ago
Addressing each point in separate comments.
-- Requirement: The workflow engine should accept commands to receive a process definition (BPMN XML), start the process, stop the process, and accept external data that advances the execution of a task.
-- Explanation: This explanation reflects how I have done this in the past; I would expect there are better ways to do it. It is important to note the design constraints when thinking this through. The solution must be lightweight, have minimal dependencies, run on computing resources such as a Raspberry Pi, and so on. This rules out some of the heavyweight enterprise application infrastructures/platforms. Simplicity is the key concept.
This breaks down into an architectural description. Each software component is a stand-alone running program, thought of as a service. For example, the workflow engine is a service that manages one or more running process definition instances and consumes and emits events using a publish-subscribe design pattern. The workflow engine is considered the hub: it exposes a WebSocket for the exchange of event messages. Other clients or services can make a secure connection to the engine to receive and publish events. Examples of client-services are automation equipment and user interfaces such as a web page. The event messages can be considered "commands" such as "load bpmn xml", "start process", "stop process", and so on. The event messages also publish state changes of the process and the context data captured at each state change. A sketch of such a client-service is shown below.
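To make this concrete, here is a minimal sketch of a client-service connecting to the engine hub and publishing a command event, using the standard jakarta.websocket client API (with a provider such as Tyrus on the classpath). The endpoint URI, the command name, and the JSON layout of the event message are assumptions for illustration; the actual message format is still an open design decision.

```java
import jakarta.websocket.ClientEndpoint;
import jakarta.websocket.ContainerProvider;
import jakarta.websocket.OnMessage;
import jakarta.websocket.Session;
import java.net.URI;

// Minimal client-service: connects to the engine hub, publishes a command
// event, and listens for state-change events broadcast by the engine.
@ClientEndpoint
public class EngineClient {

    // The engine publishes state changes and context data as events.
    @OnMessage
    public void onEvent(String message) {
        System.out.println("event from engine: " + message);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical hub address on a Raspberry Pi workstation.
        Session session = ContainerProvider.getWebSocketContainer()
                .connectToServer(EngineClient.class,
                        URI.create("wss://workstation-01.local:8443/engine"));
        // A command event; name and payload are illustrative only.
        session.getBasicRemote().sendText(
                "{\"command\":\"start process\",\"process\":\"assembly-1.0.bpmn\"}");
    }
}
```

The same channel carries both directions: clients publish command events up to the hub, and the hub broadcasts state-change events down to all subscribers.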
It may make more sense to do this locally with a type of plugin design pattern, but there still exists the need to communicate more broadly with other workflow engines on the same network. This describes a decentralized approach, with many workflow engines running on the same network on individual computers, such as a Raspberry Pi acting as the workstation. The same requirement exists within a centralized application architecture design, but that is probably better served by a different method of communication or an enterprise infrastructure.
The most important thing is meeting the requirement. Whatever technical solution meets requirements and constraints can be used.
-- Requirement: Use of peer-to-peer computing and mesh networking; remove internet and/or WAN dependency within the walls of the facility.
-- Explanation: The hardware/computing resource architecture for our current platform has not changed much since 2005, yes, 20 years. It is a very traditional client/server setup, usually 3 server-grade computers.
The App Servers run Windows Terminal Services, with workstation thin-client terminals using RDP for the UI. There are 2 App Servers that are exact duplicates, with software that keeps each identical. Both computers have redundant NICs and a RAID 5 storage system. Replicated MS SQL Server databases run on each App Server for workflow definitions, persistence, and historical process execution data. As you can see, the system is almost bullet-proof except for the network infrastructure within the manufacturing plant. The network is very redundant as well, but it is the weakest link. The RDP session starts an instance of our workflow engine that is dedicated to the processes run at that workstation. A typical facility would have between 20 and 50 workstations, in some cases more.
The third server is a Database Server with RAID 5 storage. Its MS SQL Server is linked to the App Servers; historical data is flushed to the Database Server at scheduled intervals and purged from the App Server DB. All historical reporting and analytics is performed from the Database Server.
You can see how this has many moving parts and can get very expensive. Nowadays the App and Database Servers are virtualized on-premises, but there are times when uptime does not meet the requirements.
The challenge: create an infrastructure with the same reliability and reduce the cost.
The experiment:
The result:
ok - I understand.
So we can think about an architecture like this:
To set up a new Workstation as a System Engineer, you just need to do:
What happens now (in my vision):
The 'Create' event holds a SignalAdapter (this is what you would expect in a Task) that calls the Micro Controller. The configuration is done in the BPMN Event Details. For example, this may look like this:
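A hypothetical sketch of what such a configuration inside the BPMN Event Details might look like; the element and item names here are illustrative only, not the project's actual schema:

```xml
<!-- Hypothetical adapter configuration inside the BPMN event definition.
     Element and item names are illustrative, not the actual schema. -->
<signal-adapter class="org.imixs.micro.MicroControllerAdapter">
    <item name="endpoint">ws://microcontroller-01.local:8080/engine</item>
    <item name="command">start process</item>
    <item name="process">drill-station-1.0.bpmn</item>
</signal-adapter>
```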
Each time the micro workflow completes, it triggers an event on the Enterprise Workflow with optional metadata.
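That completion callback could itself be a plain event message. A hypothetical example of such a payload, with all field names illustrative only:

```json
{
  "event"   : "micro-process-completed",
  "process" : "drill-station-1.0.bpmn",
  "meta"    : { "duration-ms": 1800, "result": "ok" }
}
```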
Hi @gmillinger, now I think I found a working architecture. At its core, I think we need to distinguish between two different kinds of message flows:
I created a new architecture overview document here
Hi @rsoika, the thought process looks really good. I am going to take some time to run the concept through my use cases to flesh out the details. I have a list of tasks for this project; I will add formalizing the use cases and add them to the project documents.
I have also now added JUnit tests to test the WebSocket behavior. In addition, I will add a Docker Compose file to simulate the complete technical setup...
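For context, a test of this kind might look roughly like the following sketch, assuming a JUnit 5 setup with the engine listening on localhost; the endpoint URI and message contents are illustrative, not the actual tests from the repository:

```java
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertTrue;

import jakarta.websocket.ClientEndpoint;
import jakarta.websocket.ContainerProvider;
import jakarta.websocket.OnMessage;
import jakarta.websocket.Session;
import java.net.URI;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import org.junit.jupiter.api.Test;

// Sketch of a WebSocket behavior test: publish a command event and assert
// that the engine broadcasts a state-change event back to the client.
@ClientEndpoint
public class EngineWebSocketTest {

    private final BlockingQueue<String> received = new ArrayBlockingQueue<>(8);

    @OnMessage
    public void onMessage(String message) {
        received.offer(message);
    }

    @Test
    void engineBroadcastsStateChangeAfterStartCommand() throws Exception {
        Session session = ContainerProvider.getWebSocketContainer()
                .connectToServer(this, URI.create("ws://localhost:8080/engine"));
        session.getBasicRemote().sendText("{\"command\":\"start process\"}");
        // Block until the engine publishes an event, or fail after 5 seconds.
        String event = received.poll(5, TimeUnit.SECONDS);
        session.close();
        assertNotNull(event, "expected a state-change event from the engine");
        assertTrue(event.contains("state"), "event should carry the new state");
    }
}
```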
Concept of how the communication between the micro processor and the workflow engine should work