gmillinger opened 2 months ago
Yes, this is different in Imixs-Workflow. In your example the task is the place where something happens, and the workflow engine does nothing else than execute one task after another until the process terminates with an end event. This is what I call a task-oriented processing concept.
In Imixs-Workflow we have an event-oriented processing concept. The difference is that in this concept the task defines a persistent, unique state in the process flow. The workflow engine only reacts to events. If no event is fired, the process status will not change.
You can see this in the wording. The task describes a fact like 'We received a new invoice' or 'The invoice is ready for approval'. But the task does not define what will happen. Only if someone or something triggers the event 'Add Items' will the workflow engine update the workitem status according to the process flow. So the action is the event, like 'I add new items' or 'I approve the invoice'.
And from the coding perspective: in your example the code (like adding items) is executed in a task, so there is some code bound to the task that adds the items. In an event-oriented workflow engine this is different. If you want to add items, you first need to fetch the process instance (workitem) from the persistence layer (data store). Now you can add data like items to the workitem, and finally you call the event 'Add Items' to signal that everything is ready for the next step. The workflow engine coordinates this transition.
So from the coding perspective it looks like this:

```java
// fetch the workitem from the database
workitem = database.load(id);
// change or add the payload
workitem.addItems();
// process the workitem by firing event 100 ('Add Items')
workflowEngine.process(workitem.event(100));
```
The advantage is that you always have a clear definition of the state, because this state is only set by the workflow kernel. You cannot tweak the status!
And I always wonder how the status is defined in the task-oriented model. When "Get Invoice" is finished and "Add Items" is started, what happens if something goes wrong? In which status is the process instance? Is it still in the "Add Items" task? Or is it again in "Get Invoice"?
In Imixs-Workflow this is a transactional flow. If your process instance is in the status "New Invoice received", you are allowed to add items and fire the event "Add Items". If this is successful, and only then, the status will change to "Approval" and will be persisted. But if something goes wrong, the transaction is rolled back and the status is still "New Invoice received".
And this is the reason why we depend on Jakarta EE and an EJB container that supports transactions.
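The transactional semantics described here can be sketched in plain Java. This is a minimal illustration, not the real Imixs API: the class names, the hard-coded transition, and the catch-based "rollback" are invented for this example (in Imixs the EJB container handles the real transaction).

```java
import java.util.HashMap;
import java.util.Map;

// a process instance with a payload and a persistent status
class WorkItem {
    final Map<String, Object> data = new HashMap<>();
    String status = "New Invoice received";
}

class WorkflowEngine {
    // the status only changes if the event completes successfully;
    // on any failure the previous status is kept
    String process(WorkItem workitem, String event) {
        String oldStatus = workitem.status;
        try {
            if (oldStatus.equals("New Invoice received") && event.equals("Add Items")) {
                workitem.status = "Approval"; // transition defined by the model
            } else {
                throw new IllegalStateException(
                        "event '" + event + "' not allowed in status " + oldStatus);
            }
            // persisting the workitem would happen here, inside the same transaction
        } catch (RuntimeException e) {
            workitem.status = oldStatus; // "rollback": the previous state is kept
        }
        return workitem.status;
    }
}
```

So a valid event moves the workitem to "Approval", while an undefined event leaves it untouched in "New Invoice received" instead of stranding it in some half-finished task.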
I have another example that illustrates the idea of event-oriented processing.
One important rule of thumb in Imixs-Workflow is: do not place business logic into your micro-kernel (Plugin/Adapter classes). This means your micro-kernel code should not make a decision about the outcome of a processing step.
Imagine you have a micro-controller in a production machine that produces parts. One production order consists of 100 parts. Each time a new part is produced, a sensor on the micro-controller sends a signal to the workflow engine.
As long as fewer than 100 parts are produced (fewer than 100 signals received in the status 'Production'), the workflow kernel stays in the status "Production". After 100 signals, the workflow engine triggers another sensor on the micro-controller to eject the parts and stop the production order.
In this way you have the production logic in the BPMN model and not in your Java code. To make it clear: until the event 'eject parts' is triggered by the WorkflowKernel, no Java code is needed, because the micro-controller sends the event and the Imixs WorkflowKernel controls the business logic and the production flow.
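The counting behavior above can be simulated in a few lines. Again a hypothetical sketch (class and status names invented, and in the real setup this logic lives in the BPMN model, not in Java): the order stays in "Production" while fewer than 100 part signals have arrived, and only the 100th signal triggers the eject transition.

```java
// a production order with a persistent status and a part counter
class ProductionOrder {
    String status = "Production";
    int partsProduced = 0;
}

class ProductionKernel {
    // called each time the sensor on the micro-controller fires a signal
    void onPartSignal(ProductionOrder order) {
        if (!order.status.equals("Production")) {
            return; // signals are only valid in the 'Production' state
        }
        order.partsProduced++;
        if (order.partsProduced >= 100) {
            order.status = "Ejecting"; // the engine triggers the eject sensor
        }
        // otherwise the follow-up event loops back into 'Production'
    }
}
```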
Thank you for the examples. I am working through the concept but will have more questions.
In the past my designs have been task-oriented as you describe. The status/state was maintained and persisted by the workflow during the execution of tasks. The event you describe was part of the task execution. The state was maintained up to the last successfully executed task. If something happened, the workflow could be restarted and resume based on the last state and a snapshot of a global context. So the state data encompassed the entire process instance. A user with high enough security could roll back to a previous task and resume operation; this is a very common requirement due to the hybrid nature of manufacturing processes. Overall I know, based on today's thinking, that that design is really nasty. Your design is very elegant, with technologies that were only being experimented with when I did my first design in 1993.
I started doing proof-of-concepts almost two years ago and cycled through months of study with some of the platforms, such as Camunda. I also built a visual modeler with the BPMN.io libraries. Very little documentation and seriously abstract thinking in some areas. I figured it out in the end, but it took months. I also read a number of books about the BPMN standard and used them as sleep aids :-). So a lot of my confusion about how you have implemented the kernel is based on the other BPMN execution engines I have worked with. They have all been task-oriented, Camunda being an example of that.
Please do not take this as criticism. Of all the workflow platforms, I have found yours to be the best documented, and every detail looks to be very well thought out.
Yes, I know that Camunda plays a central role in this market. Camunda is a fork of Activiti (from years ago), and Activiti introduced this task-oriented concept, so Camunda of course has the same. The main difference is:
The concept of the plug-in life-cycle for events is very powerful, and I have never seen that in a workflow kernel before. Could the kernel be changed to allow execution at a task in the same way? For example, what if your plug-in life-cycle concept were also added to a task? This would open up both possibilities.
Hi Greg, that's interesting; I got a similar question today in the discussion forum. I do not understand why everyone wants to implement the execution of code in a task instead of an event. Maybe you can help me with this.
My understanding is the following: you have some piece of code (e.g. requesting data from an external data source), and this code needs to be executed during the processing life-cycle. Now, in Camunda you implement this in a task, and in Imixs-Workflow you implement the same code in an event. I can't see why it should be important to run the code in a task.
In the end you have, as an application developer, some business logic, your execution code. And you have to bind this code somehow to the BPMN engine.
The `JavaDelegate` interface from Camunda is nothing other than the `Plugin` or `SignalAdapter` interface from Imixs-Workflow. It's a different naming, and in the end the Imixs `Plugin` interface provides you with much more control over the processing life-cycle.
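To make the comparison concrete, here is a simplified side-by-side sketch. The type names follow the real APIs, but the signatures are stubbed and abridged so the example is self-contained (e.g. the real Imixs `init` takes a `WorkflowContext`, stubbed here as `Object`); consult the Camunda and Imixs documentation for the actual interfaces.

```java
// Camunda: code is bound to a (service) task
interface DelegateExecution { Object getVariable(String name); }
interface JavaDelegate {
    void execute(DelegateExecution execution) throws Exception;
}

// Imixs: code is bound to an event and gets a full processing life-cycle
class ItemCollection { /* the workitem payload (abridged) */ }
interface Plugin {
    void init(Object workflowContext) throws Exception;       // before processing starts
    ItemCollection run(ItemCollection workitem,
                       ItemCollection event) throws Exception; // called per event
    void close(boolean rollbackTransaction) throws Exception;  // after processing (or rollback)
}

// a trivial plugin implementation illustrating the life-cycle order
class LogPlugin implements Plugin {
    final StringBuilder log = new StringBuilder();
    public void init(Object ctx) { log.append("init;"); }
    public ItemCollection run(ItemCollection w, ItemCollection e) { log.append("run;"); return w; }
    public void close(boolean rollback) { log.append("close;"); }
}
```

The extra `init`/`close` hooks are what give the plugin control over the whole life-cycle rather than a single execution point.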
Some thoughts... "The JavaDelegate interface from Camunda is nothing other than the Plugin or SignalAdapter interface from Imixs-Workflow. It's a different naming, and in the end the Imixs Plugin interface provides you with much more control over the processing life-cycle."
Very much agree with this statement. Have a look at this comparison of a state diagram to a flowchart; I often look at it when my mind drifts on the subject. "State diagrams versus flowcharts" is toward the bottom of the page.
https://en.wikipedia.org/wiki/State_diagram
I have seen a shift in workflow technology and who uses it. To programmers the model you have makes sense because it appears to be a hybrid state machine, with the events triggering the transitions. The plug-in concept you have is very similar to before and after actions of a state transition.
In my past business, we delivered projects with our proprietary framework, but as things moved along the customers wanted to make changes themselves without coming back to us for minor things. In our case a customer might have 100 workflows that were operationally dynamic based on context data provided when the workflow was instantiated: a lot of workflows with many tasks, relying on many human and automated events. The customers were very technical but not professional programmers. Our API was event-oriented, but the customers had trouble thinking in that way. Most of the engineers were accustomed to flowcharts, and it was hard for them to make the mental shift. So we created another API that was task-oriented: it wrapped our original event-oriented API, hid the event part, moved the plug-in-type functionality to the task, and handled the events without people being aware. Life was good with our customers! And our entry-level programmers took to it faster too. We were technically correct in our thinking, but adoption was not so good. Hide the technically correct side and life was good. Figure that out?
I am seeing the same with IT people (my son being one): they have a limit to what they want to learn and don't have the time to do it. They are doing more and more of the workflow definition and management, and end-users want an easy graphical way to create workflows without much technical knowledge. I had a customer pay us to teach their secretary how to build workflows, very true. She did 80%, and a more technical person wired up the events/triggers.
Are you sorry you asked? :-)
Hi Greg, thanks for your thoughts and the Wikipedia link. I think I know these arguments well, and maybe the core question is: does someone need a processing engine to coordinate code execution (flowchart), or does someone need a processing engine to coordinate an enterprise business process (state machine)?
Imixs-Workflow is the latter. The main focus is to ensure that the status is clearly defined according to a business model and that no deviation from this business flow is possible. This is what Imixs-Workflow does. The state is the most important aspect, not the processing.
But I still wonder what a process model would look like in your scenario. And I ask myself how I would solve it with Imixs-Workflow (state diagram).
Q. Does someone need a processing engine to coordinate code execution (flowchart), or does someone need a processing engine to coordinate an enterprise business process (state machine)?
A. Both :-)
This is from my perspective and my interest in your project. Within the manufacturing context, at the production line/workstation level there needs to be "flowchart" execution, but at the plant/facility level there need to be business process state machines. And ideally they need to be on the same workflow software platform. The frustrating part is that when looking strictly at the functional requirements of a workstation process execution engine and at how a workflow kernel works, they are almost identical. The goal is to find a workflow platform that can work for both workstation process execution and business process execution. The first question is: can the kernel run on a Raspberry Pi? If the answer is yes, or maybe, then there is a start.
The thought process:
This is an attempt to use the same kernel and the same platform, but change the modeling method based on the user role and level of technical understanding. The only real change to the imixs-workflow platform is taking the core and separating it from the enterprise capabilities, so it runs independently within a workflow engine specifically designed to run in a JVM on a small device such as a Raspberry Pi. The purpose-specific workflow engine may have a different local persistence method, such as a file, to hold state and historical execution data. But the data will eventually make it to the enterprise imixs-workflow platform to take advantage of the capabilities of the platform.
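The file-based local persistence mentioned here could be sketched like this. This is purely hypothetical, not part of Imixs-Micro: a micro engine on the device keeps each workitem's status in a simple properties file, which a sync job could later push to the enterprise platform.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// hypothetical local state store: one properties file, workitem-id -> status
class FileStateStore {
    private final Path file;

    FileStateStore(Path file) { this.file = file; }

    // persist the current status of a workitem
    void save(String workitemId, String status) {
        Properties p = load();
        p.setProperty(workitemId, status);
        try (OutputStream out = Files.newOutputStream(file)) {
            p.store(out, "workitem states");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // read back the last persisted status (null if unknown)
    String statusOf(String workitemId) {
        return load().getProperty(workitemId);
    }

    private Properties load() {
        Properties p = new Properties();
        if (Files.exists(file)) {
            try (InputStream in = Files.newInputStream(file)) {
                p.load(in);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
        return p;
    }
}
```

A properties file is just one option; any append-only log or embedded store would serve the same purpose on a constrained device.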
Scope of work I would sign up for delivering:
Hi Greg, yes, now I can better follow your vision. And I think what we have so far is not too bad for the moment. Imixs-Micro is already able to run on a Raspberry Pi. I have no Raspberry Pi myself, but if you can implement a Java Hello-World in your environment, I can help you with integrating the Imixs-Micro project.
And next we should have some models: one for the workstation process and one for the enterprise business process. We can discuss the question about circles and squares in the diagram later on a concrete example. You will find two test models here to get started. Maybe you can model one to illustrate a more realistic production process, so that I can show you how to translate it into a model that executes code with the existing Imixs-Workflow kernel.
Very good. I will get started on everything you have listed and get caught up on the project charter by the end of the weekend. It has been busy around my house the past few days, and our boys are back to university! Nice having a quiet house.
Workflow example: (start) --> [new ticket] --> (submit) --> [open] --> (end)
With start, submit, and end being events. I am having trouble sorting out the events. Why is it required to have an event before a task?
This is why I have trouble with this: in the past I have seen cases where tasks can flow one to another as they are completed. For example: (start) --> [Get Invoice] --> [Add Items] --> [Approve] --> (end)
"Get Invoice" and "Add Items" are service tasks; "Approve" is a user task. Once "Get Invoice" retrieves the invoice data, the task is complete and the workflow transitions to the "Add Items" task. The invoice data is held in context and used as input for the "Add Items" service task, which is executed, and the workflow transitions to the "Approve" task. Finally, the "Approve" task shows up in a task list, a human looks at the updated invoice and approves/disapproves it, and the workflow ends.
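For comparison, the same invoice flow remodeled event-oriented could be sketched as below. This is purely illustrative, not the Imixs API; the state and event names are invented for this example. The tasks become persistent states, and transitions only happen when an event fires; an event that is not defined for the current state leaves the state unchanged.

```java
import java.util.Map;

class InvoiceFlow {
    // state + event -> next state; in Imixs this table is what the BPMN model defines
    static final Map<String, String> TRANSITIONS = Map.of(
        "New Invoice|get invoice",     "Invoice Received",
        "Invoice Received|add items",  "Ready for Approval",
        "Ready for Approval|approve",  "Completed");

    // fire an event against the current state
    static String fire(String status, String event) {
        // an event that is not defined for the current state changes nothing
        return TRANSITIONS.getOrDefault(status + "|" + event, status);
    }
}
```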