Closed by GoogleCodeExporter 8 years ago
Sorry, I accidentally pressed some button that submitted the issue before I
finished the description :).
Token set join behaviour is defined as follows (see
http://sarasvati.googlecode.com/svn/java/tags/v1.0.4/doc/reference/html/ch04s02.html):
A tokenSetAnd join will be satisfied when all active arc tokens in the set are
on incoming arcs to the same node and there are no active node tokens in the
token set. An exception will be raised if a non-token set token arrives.
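That definition can be modeled in a few lines (illustrative only; this is not Sarasvati's API, and the string encoding of tokens is invented here):

```java
import java.util.List;

// Minimal sketch of the documented tokenSetAnd condition (not Sarasvati code):
// the set is satisfied at a join node when every active arc token in the set
// targets that node and the set holds no active node tokens. Note that the
// definition counts TOKENS, not incoming arcs.
public class TokenSetAndModel {
    // "A:<node>" = active arc token headed to <node>; "N:<node>" = active node token.
    static boolean satisfiedAt(String joinNode, List<String> activeTokens) {
        for (String t : activeTokens) {
            if (t.startsWith("N:")) {
                return false;              // an active node token blocks the join
            }
            if (!t.equals("A:" + joinNode)) {
                return false;              // an arc token headed somewhere else
            }
        }
        return !activeTokens.isEmpty();
    }

    public static void main(String[] args) {
        // A single arc token headed at the join satisfies the definition...
        System.out.println(satisfiedAt("ts-join", List.of("A:ts-join")));   // true
        // ...but an active node token elsewhere does not.
        System.out.println(satisfiedAt("ts-join",
                List.of("A:ts-join", "N:node1")));                          // false
    }
}
```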
In the previous release of Sarasvati (1.0.4) the TS join worked fine (according
to the definition); however, in the latest release (2.0.1) the actual behavior
has changed, so that e.g. the following workflow will never complete:
  ts-split-node ---> node1 --A--> node2 --> ts-join-node
                       \---B---------------> ts-join-node
(Arcs A and B have different names, e.g. one is the default and the other is
named; only one arc token is created in node1.)
We use this pattern quite frequently in our workflows, e.g. to skip some parts
under certain conditions, to take an alternative route, etc.
With the new engine the token set join never happens - for any number of tokens
in the token set - because the process waits for TWO arc tokens per TS token
(one per incoming arc) but receives only one. As a result the process hangs
forever.
There is a workaround - change the process definition as follows, so that the
TS join node has only one incoming arc:
  ts-split-node ---> node1 --A--> node2 --> node3 ---> ts-join-node
                       \---B---------------> node3
I leave it to you to decide whether this is a bug or a new definition of the
token set join. However, the previous definition seemed more reasonable; the
new behavior also looks less reliable (processes are no longer guaranteed to
finish).
I attached a unit test that reproduces the problem - the first test finishes
OK, the second one fails.
Original comment by vaverche...@appsecinc.com
on 31 Jan 2013 at 3:30
Doing some digging into this. I had a reason for the change in behavior, but I
can't remember what it was now. When I figure it out, I'll comment the class :)
I know why I changed the and/or joins in the context of token set. But the
actual join change escapes me. I'll either figure it out soon and let you know
or add equivalent behavior to the old join.
Original comment by plor...@gmail.com
on 4 Feb 2013 at 4:08
Just to explain why I think the current TS join behavior does not correspond to
the description:
... when all active arc tokens in the set are on incoming arcs to the same node
...
This description does not require that arc tokens be present on every (i.e.
ALL) incoming arc; it just says that all active arc tokens should be on SOME of
the incoming arcs.
I.e., for the following workflow and a token set with 1 token, according to the
definition it is sufficient to have 1 arc token on any incoming arc for the
join to complete (though there may also be 2 or 3 arc tokens):
  ts-split-node --A--> node2 --D--> ts-join-node
        \---B----------------------> ts-join-node
        \---C----------------------> ts-join-node
But the current behavior requires 3 arc tokens to be present - one per incoming
arc, as in an ordinary AND join; 1 is not enough any more.
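The difference between the two readings can be sketched as follows (an illustrative comparison, not Sarasvati code; the method names are invented):

```java
import java.util.List;

// Contrast of the two readings discussed above (illustrative only): the
// documented reading asks whether the active arc tokens have all reached the
// join; the 2.0.1 behavior asks whether every incoming arc carries a token.
public class JoinReadings {
    // Documented reading: some active arc token(s) sit on incoming arcs of
    // the join - which particular arcs does not matter.
    static boolean documentedReading(List<String> arcsCarryingTokens) {
        return !arcsCarryingTokens.isEmpty();
    }

    // 2.0.1 reading: every incoming arc must carry a token, like a plain AND join.
    static boolean andStyleReading(List<String> incomingArcs,
                                   List<String> arcsCarryingTokens) {
        return arcsCarryingTokens.containsAll(incomingArcs);
    }

    public static void main(String[] args) {
        List<String> incoming = List.of("D", "B", "C"); // the three incoming arcs
        List<String> carrying = List.of("D");           // the lone token arrived via D
        System.out.println(documentedReading(carrying));         // true
        System.out.println(andStyleReading(incoming, carrying)); // false
    }
}
```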
All of that concerns the 'TokenSet' (former TokenSetAnd) join; I did not try
the TokenSetOr behavior.
Original comment by vaverche...@appsecinc.com
on 4 Feb 2013 at 4:29
So, I was reverting the change in behavior and fixing up the corresponding test
when the test failed and pointed out to me why the old behavior was broken.
If you have a workflow like this:
  A ---> B ---> D
   \---> C
and you declare D a tokenSet join, it will only work if C is completed before
B. If B is completed first, we evaluate the join condition on D. But when C is
completed we don't reevaluate the join condition on D, and so the workflow
will freeze.
The tokenSet behavior, as defined, is very useful. However the old
implementation is buggy. It might be possible to have an efficient
implementation that is also correct, but that would take some thinking and
redesign.
So, in the meantime, here is what I propose:
TokenSetFirst
The node will wait for at least one arc token from each token set level/index
to arrive before evaluating the guard. Any arc tokens which arrive afterwards
will be merged into the join.
OR
TokenSetOne
The node will wait for exactly one arc token from each token set level/index to
arrive before evaluating the guard.
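A rough sketch of how the two proposals differ (assumed semantics; these are not actual Sarasvati classes, and the readiness checks are invented for illustration):

```java
import java.util.Map;

// Illustrative sketch of the two proposed strategies: 'arrived' maps each
// token-set index to the number of arc tokens that have reached the join
// for that index; 'size' is the number of indices in the token set.
public class ProposedJoins {
    // TokenSetFirst: ready as soon as every index has at least one arc token;
    // tokens arriving afterwards would simply be merged into the join.
    static boolean tokenSetFirstReady(Map<Integer, Integer> arrived, int size) {
        for (int idx = 0; idx < size; idx++) {
            if (arrived.getOrDefault(idx, 0) < 1) return false;
        }
        return true;
    }

    // TokenSetOne: ready only when every index has exactly one arc token.
    static boolean tokenSetOneReady(Map<Integer, Integer> arrived, int size) {
        for (int idx = 0; idx < size; idx++) {
            if (arrived.getOrDefault(idx, 0) != 1) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> arrived = Map.of(0, 1, 1, 2); // index 1 got 2 tokens
        System.out.println(tokenSetFirstReady(arrived, 2)); // true
        System.out.println(tokenSetOneReady(arrived, 2));   // false
    }
}
```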
Does either of those cover your use case? If not, can you think of logic which
does, using only the knowledge of the tokens at the token set join node in
question?
I'll keep thinking and see if I can come up with a way to fulfill the promise
the old behavior made.
Original comment by plor...@gmail.com
on 6 Feb 2013 at 2:16
Hmm... I thought the problem is in the following code (from TokenSetJoinStrategy):
final AndJoinStrategy strategy = new AndJoinStrategy();
...
JoinResult result = strategy.performJoin(token, new TokenSetGroupFilter(tokenSet, idx));
if (result.getJoinAction() == JoinAction.Complete)
{
  resultTokens.addAll(result.getArcTokensCompletingJoin());
}
else
{
  return IncompleteJoinResult.INSTANCE;
}
The problem is that JoinAction.Complete will be returned only when an arc token
is present on all incoming arcs - according to the AndJoinStrategy - and that
never happens for the workflows I described. So it will always return
IncompleteJoinResult; that does not depend on node execution order.
Though there may be some other problems that I did not see.
As to your proposal - I think the TokenSetFirst will cover my needs, thanks.
Original comment by vaverche...@appsecinc.com
on 6 Feb 2013 at 8:00
Paul,
I re-read your description of the TokenSetFirst join, and now I'm not sure that
is the behavior we need; maybe I confused 'each token set level/index' with
'each token set member token'.
Here is exactly what we need: wait at the token set join node until (at least)
one arc token arrives for every active token set member token, i.e. for every
active 'thread' created at the token set split, independent of the number of
incoming arcs on the join node. When a member token is deactivated early,
before the join - e.g. discarded, or due to the absence of outgoing arcs - it
is not taken into account when evaluating the join condition.
As to the scenario that you described, i.e. when THE LAST member token is
removed/deactivated before the join, thus never 'pushing' the join node - I'm
not sure whether that is a new issue or it also existed in 1.0.4.
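The requested check might be sketched like this (assumed semantics; the member names and method are invented for illustration, this is not a real Sarasvati class):

```java
import java.util.Set;

// Sketch of the behavior requested above: the join is ready once every member
// token of the set that is still active has at least one arc token waiting at
// the join node. Members deactivated before the join (discarded, or dead-ended
// with no outgoing arcs) are excluded from the check entirely.
public class RequestedJoin {
    static boolean ready(Set<String> activeMembers, Set<String> membersAtJoin) {
        // Every active 'thread' of the token set must have reached the join,
        // regardless of which (or how many) incoming arcs it used.
        return membersAtJoin.containsAll(activeMembers);
    }

    public static void main(String[] args) {
        // Members m1..m3; m3 was discarded before the join, so only m1 and m2 count.
        Set<String> active = Set.of("m1", "m2");
        System.out.println(ready(active, Set.of("m1", "m2"))); // true
        System.out.println(ready(active, Set.of("m1")));       // false
    }
}
```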
Original comment by vaverche...@appsecinc.com
on 8 Feb 2013 at 4:03
I just committed an implementation of the old behavior that handles the
dead-end node issue. I've tested the memory implementation and coded the
hibernate side, but not tested it yet. So hopefully I'll have this wrapped up
soon.
Original comment by plor...@gmail.com
on 27 Feb 2013 at 3:48
Tested with hibernate, resolved some issues. This should now be fixed.
Original comment by plor...@gmail.com
on 3 Mar 2013 at 2:56
Original comment by plor...@gmail.com
on 3 Mar 2013 at 2:57
Released with v2.0.2
Original comment by plor...@gmail.com
on 3 Mar 2013 at 2:59
Original issue reported on code.google.com by
vaverche...@appsecinc.com
on 31 Jan 2013 at 3:06