StephenOTT opened this issue 2 years ago
Consider the following as a sample of common event patterns:
Please fix the modeller... it is brutal to work with...
As we discussed in another forum, I think that the filtering of an event (security-based or by any other criteria) should not be performed in a listener of a catch event, but before the catch event node is invoked. That kind of filtering should be performed by a custom EventConsumer (or incorporated into the default one, CloudEventConsumer) based on node metadata, which is accessible by navigating through the process definition (it implements the NodeContainer interface).

Listeners such as the ones you are describing should be used to react to an internal event and perform additional actions (in fact, we use these listeners to notify the active addons that interact with external services), but not to interrupt or interfere in any way with the flow.
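For illustration, a minimal sketch (using the public KIE definition API, not the actual Kogito consumer code) of how such a custom consumer could collect the nodes that declare a given metadata key by walking the process definition through the NodeContainer interface; the metadata key itself is whatever convention you decide on:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.kie.api.definition.process.Node;
import org.kie.api.definition.process.NodeContainer;

/**
 * Sketch only: collect filtering rules declared as node metadata by
 * navigating the process definition (which is a NodeContainer).
 */
public final class NodeMetadataFilters {

    private NodeMetadataFilters() {
    }

    /** Recursively collects all nodes that declare the given metadata key. */
    public static List<Node> nodesWithMetadata(NodeContainer container, String metadataKey) {
        List<Node> result = new ArrayList<>();
        for (Node node : container.getNodes()) {
            Map<String, Object> metaData = node.getMetaData();
            if (metaData != null && metaData.containsKey(metadataKey)) {
                result.add(node);
            }
            // Sub-processes are themselves node containers, so descend into them.
            if (node instanceof NodeContainer) {
                result.addAll(nodesWithMetadata((NodeContainer) node, metadataKey));
            }
        }
        return result;
    }
}
```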
@elguardian FYI
@fjtirado in practice the listener is/would become a VERY popular place to inject logic for a process instance. The listener covers all of the typical lifecycle points where you would want to add customizations and provides ~easy access to everything. It is also a very similar design to other BPM engines. If the ProcessEventListener is essentially meant to be a non-blocking, non-throwing generator of non-process side effects, then that should be indicated VERY clearly.
Example: if I wanted to restrict who can start a specific process definition, the process listener looks like a perfect spot to add this functionality, and it is very easy to do. If it is acceptable to do this, then why would the same sort of logic become unacceptable for BPMN catch events? If this type of listener is not acceptable, then where would this logic have to be added? If the alternative is overly complex, people will just default to the listener.
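As a rough sketch of that example (using the KIE ProcessEventListener API; the process id and how the caller's roles reach the listener are made up for illustration), blocking a start could look like:

```java
import java.util.Set;

import org.kie.api.event.process.DefaultProcessEventListener;
import org.kie.api.event.process.ProcessStartedEvent;

/**
 * Sketch: reject starting a specific process definition unless the caller
 * has a given role. Throwing from the listener to block the start is exactly
 * the behaviour whose acceptability is being questioned in this issue.
 */
public class RestrictedStartListener extends DefaultProcessEventListener {

    private final Set<String> callerRoles; // hypothetical: injected from the security context

    public RestrictedStartListener(Set<String> callerRoles) {
        this.callerRoles = callerRoles;
    }

    @Override
    public void beforeProcessStarted(ProcessStartedEvent event) {
        String processId = event.getProcessInstance().getProcessId();
        // Hypothetical rule: only "approver" may start this definition.
        if ("approvals.sensitive_process".equals(processId) && !callerRoles.contains("approver")) {
            throw new SecurityException("Caller is not allowed to start " + processId);
        }
    }
}
```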
A similar consideration applies to persistence: a common listener to inject around BPMN persistence is a transaction listener: if the transaction is committed then do X, if the transaction is committing then do Y, etc.
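For reference, that transaction-listener pattern sketched with plain JTA (the package may be javax.transaction on older runtimes; the "do X / do Y" bodies are placeholders):

```java
import jakarta.transaction.Status;
import jakarta.transaction.Synchronization;

/**
 * Sketch of a transaction listener, registered via Transaction#registerSynchronization.
 */
public class AuditSynchronization implements Synchronization {

    @Override
    public void beforeCompletion() {
        // Transaction is committing: do Y (e.g. flush a pending audit record).
    }

    @Override
    public void afterCompletion(int status) {
        if (status == Status.STATUS_COMMITTED) {
            // Transaction committed: do X (e.g. publish a notification).
        }
    }
}
```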
Hi @StephenOTT
Catch-event filtering by payload is written down in the BPMN2 spec as the collaborations / correlation keys mechanism. In the case of Kogito/jBPM we already have that, and it is built into the engine, so this sort of filtering is already supported. (The editor does not yet support this construct AFAIK, but it will be done soon.) You can create any filter on the payload there; for example, you can always create a correlation key with role = "whatever role you want" and filter by it.
https://issues.redhat.com/browse/JBPM-9735
Regarding security it does not make sense... if the engine is receiving a message, it is because it can access that event queue (stream or JMS), so it should be capable of processing it. A whole different topic is targeting one specific process instance, which is covered by the JIRA above.
It feels very weird to say a process instance belongs to a user, as there is no such thing. This is not related to security but to BPMN modeling: BPMN does not have that sort of security concept (it defines potential owners, lanes, etc., but those are more related to the configuration of the user task) even if you want to put some logic around it. It looks to me that your process would rely on the idea of a security context, with some sort of virtual process depending on that context, and that would not be reflected in the visual diagram, which defeats the idea of BPMN2.
@elguardian thanks for the details.
In real-world usage of a BPMN, the BPMN definitely implements concepts of security: who is allowed to start an instance of this process definition. If that is not implemented through the BPMN config, it gets implemented at the app level and just becomes boilerplate code (Kogito being codegen-based, essentially generating the app from the BPMN, would then seem to imply that access needs to be controlled at the app level through the BPMN config). But if there is a different architectural design pattern going on, I am happy to hear it and understand its benefits.
A Message, as I mentioned, could be interpreted in an administrative way (as you suggest: if you have access to the topic then you can send whatever) or can be seen as an entry point into the process instance with process-instance-specific configuration. All of this, I believe, comes down to how you interpret the use of a BPMN: is the BPMN an internal function that is not exposed to the end-user/client (for example: are the generated endpoints designed to be accessible to the UI, or does the architecture expect a BFF server to front the APIs and add additional logic)?
Consider a model like this:
This type of scenario comes up many times in real-world modelling requirements: a process instance is created with some initial data passed into it. At any point during the process instance, the client wants to request that the data be updated: some sort of action that fetches data from a "DB" and refreshes the process variables. The client could trigger this at any moment while working on the UTs. In this scenario, there is a specific correlation going on: message the specific process instance, and that instance can only be messaged by the current assignee of the UT.
Some updates on this based on further discussion with @fjtirado:
There appears to be some confusion about the layer in Kogito where the code would actually execute. When I write "evaluate some logic at the BPMN activity/node instance level", my interpretation is being able to handle logic that is configured at the activity definition level and evaluated in the context of the active activity instance. It does not necessarily mean that the underlying handling of this code is done within the deeper levels of the jBPM execution.
After further review of how CloudEventConsumer / EventConsumer functions, it would seem my previous use cases ~could be covered within the consumer:
The EventConsumer locates an instance (missing future feature to be implemented for Messaging multiple instances?): https://github.com/kiegroup/kogito-runtimes/blob/main/jbpm/jbpm-flow/src/main/java/org/kie/kogito/event/impl/CloudEventConsumer.java#L66-L68
which then returns ProcessInstance
Using this along with the returned ProcessInstance, in theory we would be able to return a list of process instances that the message would correlate with, and for each instance, find the BPMN activity/node instances that the message could correlate with. Something like Map<ProcessInstanceID, List<NodeInstanceID>>.
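Something like the following hypothetical shape (this interface does not exist in Kogito; it only illustrates the lookup described above):

```java
import java.util.List;
import java.util.Map;

/**
 * Hypothetical: given an incoming message name and payload, return the
 * process instances it correlates with and, per instance, the node
 * instances (catch events) that could receive it.
 */
public interface MessageCorrelationResolver {

    /** Keys are process instance ids; values are the ids of matching catch-event node instances. */
    Map<String, List<String>> resolve(String messageName, Object payload);
}
```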
Thoughts?
Issue for supporting messaging a set of process instances: https://issues.redhat.com/browse/KOGITO-6468
Description
As we understand it, BPMN events (Messages and Signals) are caught and thrown. Sending messages is the equivalent of making an HTTP request: internal code is messaging something, and the BPMN Throw Event is just that mechanism.
Catch Events, though, seem to have some varying mechanics and are unclear on the expected usage and configuration options:
1. There is the Events addon https://docs.jboss.org/kogito/release/latest/html_single/#_kogito_events_add_on. This feature essentially provides the administrative mechanism for sending and receiving events through various mediums.
2. A Message or Signal Catch event has a Message/Signal name. This name can be used by one or more receiving BPMN event elements within one or more process definitions.
3. In practice, usage of these events covers the general "I want to administratively send events into my application" case, but there is also a common usage where events carry more business semantics. For example, a BPMN can have only one None Start Event but many Message Start Events, or a Message Catch boundary event on a User Task as another way to ~bypass the UT without doing a completion (or an alternative lifecycle event). So the events are ways to use the BPMN to model further application-like capabilities using business semantics.
4. There are ProcessEventListeners which have onSignal and onMessage, but these appear to fire only for Throw Events and not for Catch Events. It would be good to have onSignalReceived and onMessageReceived listeners (fired at the BPMN element level on receipt of the message/signal). Should there also be a higher-level, general "message received" method that is invoked when a message is received and its destinations are determined, but receipt has not yet been completed? This would allow a listener to evaluate whether a user is allowed to send a message to multiple receivers. Imagine a security rule that would prevent a message from activating more than 3 process instances. (A rough sketch of what such listener methods could look like is included below.)
5. In ProcessEventListener there is beforeNodeTriggered, which seems to be called before a specific message/signal is delivered to a specific BPMN element instance in a process instance. But it feels weird to hook into the system at this level to add capabilities? More below.
6. The types of use cases that arise given the context above (especially items 2 and 3) involve applying configurations to a Catch Event (such as through its metadata fields) that can be evaluated at runtime. For example:
6.1. Set a list of users/roles who are allowed to send messages to this specific Catch Event.
6.2. Evaluate the process variables within the specific process instance against the payload of the message and decide whether the message is acceptable / meets the criteria for receipt.
(A small illustrative guard for these two is sketched below.)
6.1 becomes more interesting given that a Message/Signal is received at (at least) two levels: (1) an event is received at the app level, and that message could be delivered to one or more receiving events; so you get into further security configurations such as "users/roles that are allowed to send messages that correlate with a single event or multiple events, and with which process definitions".
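As a purely hypothetical sketch of the proposal in item 4 (none of this exists in Kogito today; the names and parameters are invented to illustrate the two levels of receipt):

```java
import java.util.Collection;

/**
 * Hypothetical listener interface for catch events, as proposed above.
 */
public interface CatchEventListener {

    /** Fired once per incoming message, after destinations are resolved but before delivery. */
    void onMessageReceived(String messageName, Object payload, Collection<String> targetProcessInstanceIds);

    /** Fired at the receiving BPMN element, when a specific catch event consumes the signal/message. */
    void onSignalReceived(String signalName, Object payload, String processInstanceId, String nodeInstanceId);
}
```

And to make 6.1/6.2 concrete, an illustrative guard; the hook from which it would be called is exactly what this issue is asking about, and the "allowedRoles" metadata attribute and the payload criteria are hypothetical:

```java
import java.util.Collection;
import java.util.Map;

import org.kie.api.definition.process.Node;

/**
 * Sketch combining use cases 6.1 and 6.2: role-based access to a specific
 * catch event plus a payload-vs-process-variables check.
 */
public final class CatchEventGuard {

    private CatchEventGuard() {
    }

    public static boolean accepts(Node catchEventNode,
                                  Collection<String> senderRoles,
                                  Map<String, Object> processVariables,
                                  Map<String, Object> payload) {
        // 6.1: only configured roles may message this catch event ("allowedRoles" is hypothetical).
        Object allowedRoles = catchEventNode.getMetaData().get("allowedRoles");
        if (allowedRoles instanceof String) {
            String roles = (String) allowedRoles;
            if (senderRoles.stream().noneMatch(roles::contains)) {
                return false;
            }
        }
        // 6.2: the payload must refer to the same business key the instance holds (hypothetical criteria).
        return payload.getOrDefault("orderId", "").equals(processVariables.get("orderId"));
    }
}
```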
I don't think the above needs to be designed specifically to support the single use cases I mentioned. What I am looking for is: where can this type of logic be injected into the app/process?
I can add metadata to individual processes, but where can one add listeners, codegen logic, or whatever else to handle these types of messaging scenarios?