Open danaivach opened 9 months ago
The issue has been updated to include the requirements identified from the motivating scenario.
In the last CG meeting, a question was raised about the level of generality or abstraction of the identified "observation" affordances:
Is the purpose to offer a generic observation affordance for monitoring the state of an action execution, or to offer a set of more granular and specific observation affordances for the individual possible states of an action execution?
For instance, should there be a single observation affordance for monitoring whether the TCP-setting execution is Running, Paused, Completed with success, or Completed with failure, or should there be a separate observation affordance for each state (e.g., see (3d), (3e))?
@scranefield, does the above reflect your question? Could you offer a more appropriate statement or more details about the intended topic?
@danaivach, my point is that you have modelled your action execution as a state machine, and if that representation is likely to be useful for many manageable affordances, then there would be value in defining more generic affordances for triggering events in, and observing the state of, a state machine. At present, you have scenario-specific affordances that, through their documentation, can be seen to be related to the state machine, but there is no directly modelled relationship. Why not fully embrace the state machine model and define and use "state machine affordances"? I appreciate that the state machine is an abstraction of the robot arm, and that the observation that a terminal state has been reached is not provided by the robot arm itself but rather by another service. However, that could be accommodated by defining affordances that represent different views of the state machine, including direct and precise observations as well as indirect and uncertain ones.
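To make this concrete, here is a minimal sketch of what a generic "state machine affordance" could look like as a W3C WoT Thing Description fragment. The `sm:` vocabulary, the state names, and the URIs are hypothetical and only illustrate modelling the relationship to the state machine explicitly; they are not taken from any existing ontology.

```json
{
  "@context": [
    "https://www.w3.org/2022/wot/td/v1.1",
    { "sm": "https://example.org/state-machine#" }
  ],
  "@type": "sm:StateMachine",
  "title": "TCPSettingExecution",
  "securityDefinitions": { "nosec_sc": { "scheme": "nosec" } },
  "security": "nosec_sc",
  "sm:hasState": ["Idle", "Running", "Paused", "CompletedWithSuccess", "CompletedWithFailure"],
  "properties": {
    "currentState": {
      "@type": "sm:StateObservation",
      "description": "Direct observation of the current state, reported by the robot arm itself.",
      "type": "string",
      "enum": ["Idle", "Running", "Paused", "CompletedWithSuccess", "CompletedWithFailure"],
      "observable": true,
      "forms": [
        { "href": "https://robotarm.example/tcp-setting/state",
          "op": ["readproperty", "observeproperty"] }
      ]
    }
  },
  "actions": {
    "pause": {
      "@type": "sm:Transition",
      "sm:fromState": "Running",
      "sm:toState": "Paused",
      "forms": [ { "href": "https://robotarm.example/tcp-setting/pause", "op": "invokeaction" } ]
    }
  }
}
```

Indirect or uncertain views (e.g., the recognition service mentioned in the scenario) could then be separate Things whose affordances are annotated with the same `sm:` terms, marking which state they provide evidence about.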
Title: Manufacturing Environments
Submitter(s):
Danai Vachtsevanou, Jérémy Lemée, Andrei Ciortea, Simon Mayer (University of St.Gallen)
Motivation:
We consider manufacturing environments where humans and artificial agents collaborate to achieve production goals. When executing their manufacturing tasks, agents should be able to discover dynamically how to interact with industrial equipment, services, and other agents. At the same time, agents should manage their interactions to ensure that they progress appropriately, yield desired outcomes, and adapt to the evolving context of the collaborative environment. To do so effectively, agents must discover how to monitor various aspects of their actions.
Monitoring the completion of an action execution
Consider an artificial agent situated in a manufacturing workspace whose task is to grasp an object at a target point. The agent knows that, to perform this task, it must first set the tool center point (TCP) of a robotic arm to the target point, and then close the gripper of the robotic arm. Additionally, safety regulations recommend that the gripper state does not change until the target TCP has been reached (i.e., while the robotic arm is moving). Once the agent initiates the action of setting the TCP, it can subsequently monitor the execution of the action. To this end, the agent should discover an affordance of the robotic arm for monitoring the progress of setting the TCP, so that the agent can perceive when the setting has been completed and proceed to execute the action of closing the gripper.
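As a minimal sketch (not part of the submission itself), the robotic arm could expose the TCP-setting action and an observable status property in a W3C WoT Thing Description along the following lines; all affordance names, URIs, and data schemas are assumptions made for illustration.

```json
{
  "@context": "https://www.w3.org/2022/wot/td/v1.1",
  "title": "RobotArm",
  "securityDefinitions": { "nosec_sc": { "scheme": "nosec" } },
  "security": "nosec_sc",
  "actions": {
    "setTCP": {
      "description": "Move the tool center point to the given target point.",
      "input": {
        "type": "object",
        "properties": {
          "x": { "type": "number" },
          "y": { "type": "number" },
          "z": { "type": "number" }
        }
      },
      "forms": [ { "href": "https://robotarm.example/tcp", "op": "invokeaction" } ]
    },
    "closeGripper": {
      "forms": [ { "href": "https://robotarm.example/gripper/close", "op": "invokeaction" } ]
    }
  },
  "properties": {
    "tcpSettingStatus": {
      "description": "Progress of the current TCP-setting execution.",
      "type": "string",
      "enum": ["running", "paused", "completed"],
      "observable": true,
      "forms": [
        { "href": "https://robotarm.example/tcp/status",
          "op": ["readproperty", "observeproperty"] }
      ]
    }
  }
}
```

Under these assumptions, the agent would invoke `setTCP`, observe `tcpSettingStatus` until it reads `completed`, and only then invoke `closeGripper`.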
Monitoring the outcome of an action execution
To ensure proper task progression, agents may also need to monitor the outcomes of their actions, for example, to determine whether the robotic arm was able to grab the object successfully (a grab may fail, e.g., due to object relocation or collisions). In this case, the affordances of the robotic arm would not suffice for evaluating the outcome of the action execution. Instead, the agent can use an activity and object recognition service to monitor the movements of the robotic arm with respect to the object's grabbing spots. To this end, the agent should be able to discover the affordances of the monitoring service, so that the agent can perceive that the TCP setting has been completed successfully with respect to the target TCP (which, here, matches the position of a grabbing spot) and proceed to execute the action of closing the gripper.
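Analogously, and again only as an illustrative assumption, the activity and object recognition service could expose the outcome-related observation as an observable property in its own Thing Description:

```json
{
  "@context": "https://www.w3.org/2022/wot/td/v1.1",
  "title": "ActivityAndObjectRecognitionService",
  "securityDefinitions": { "nosec_sc": { "scheme": "nosec" } },
  "security": "nosec_sc",
  "properties": {
    "tcpObjectAlignment": {
      "description": "Whether the arm's TCP currently matches a grabbing spot of the tracked object.",
      "type": "string",
      "enum": ["aligned", "notAligned", "unknown"],
      "observable": true,
      "forms": [
        { "href": "https://recognition.example/tcp-object-alignment",
          "op": ["readproperty", "observeproperty"] }
      ]
    }
  }
}
```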
Monitoring the context of an action execution
Given that the agents' activities take place within a collaborative environment, agents should ensure that their actions are aligned with the dynamic context of the manufacturing workspace. For example, safety regulations may recommend that the robotic arm (i.e., its TCP) must not move while a human is in close proximity. In this case, the agent should be able to discover an affordance for monitoring the proximity of humans to the robotic arm. Once the agent initiates the action of setting the TCP, it can subsequently monitor how close humans get to the robotic arm during the execution of the action, so that the agent can pause the action execution if needed and resume it when appropriate.
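A corresponding sketch for the proximity observation, with the same caveat that the property name, unit, and URI are hypothetical:

```json
{
  "@context": "https://www.w3.org/2022/wot/td/v1.1",
  "title": "ProximityService",
  "securityDefinitions": { "nosec_sc": { "scheme": "nosec" } },
  "security": "nosec_sc",
  "properties": {
    "humanRobotDistance": {
      "description": "Distance between the closest detected human and the robotic arm.",
      "type": "number",
      "unit": "m",
      "observable": true,
      "forms": [
        { "href": "https://proximity.example/distance",
          "op": ["readproperty", "observeproperty"] }
      ]
    }
  }
}
```

The agent could observe `humanRobotDistance` and, whenever it falls below a safety threshold, invoke a pause affordance on the robotic arm (such as the hypothetical `pauseTCPSetting` action sketched after the requirements list below), resuming the execution once the distance is safe again.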
Expected Participating Entities:
We consider a manufacturing workspace that contains different artifacts (devices and services) and agents (artificial and human).
Workflow:
The following diagram captures the workflow within the manufacturing workspace:
Related Use Cases (if any):
Manufacturing Use Case of the IntellIoT project (specification available in Use Case Specification & Open Call Definition (2.4); Section 2.3).
Existing solutions:
Provide links to relevant solutions to be considered if you know any.
Identified Requirements by the TF:
target entity(ies) of the motivating scenario: the execution of the action of setting the TCP of a robotic arm
life cycle of the target entity(ies): once started, the action execution is running; it can be paused and resumed, and it eventually terminates with success or failure
information conveyed about affordances that enable a) affecting the life cycle of the action execution, or b) observing which part of its life cycle the action execution is in (or other aspects of the application state). The latter may enable an agent to reason about how it should affect the life cycle of the action execution:
a. Information about how to exploit the affordance for starting the action execution (here, the affordance of setting the TCP offered by the robotic arm);
b. Information about which affordance is for pausing the action execution, and how to exploit it (here, the affordance of pausing offered by the robotic arm);
c. Information about which affordance is for resuming the action execution, and how to exploit it (here, the affordance of resuming offered by the robotic arm);
d. Information about which affordance is for observing the progress of the action execution, i.e. that it is running, and how to exploit it (here, the affordance of observing the TCP-setting status offered by the robotic arm);
e. Information about which affordance is for observing whether the action execution has terminated with success or failure, and how to exploit it (here, the affordance of observing the TCP-object alignment status offered by an activity and object recognition service);
f. Information about which affordance is for observing whether the action execution is contextually relevant and appropriate, i.e. whether some preconditions hold for the action execution to start or continue running, and how to exploit it (here, the affordance of observing human-robot distances offered by a proximity service).
The description of the affordance in (a) should link to the descriptions of the affordances in (b)-(f); a sketch of such linking is given after this requirements list.
how the life cycle is influenced (via affordances): the action execution is started, paused, and resumed by exploiting the affordances in (3a)-(3c)
communication protocols: HTTP is used as the application-layer protocol for exploiting all the affordances in (3)
representation formats: RDF serialization formats (e.g., text/turtle, application/ld+json) are used to represent the information in (3)
security and privacy considerations: The affordance in (3f) requires monitoring the behavior of people in the manufacturing environment, in order to manage the life cycle of the scenario's target entity with respect to safety requirements.
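The following is a minimal sketch of how the description of the affordance in (a) could link to the affordances in (b)-(f), using HTTP forms and application/ld+json content types as required above. The `ex:` link terms, the affordance names, and the URIs are hypothetical; how such links should actually be modelled is exactly what this requirement leaves open.

```json
{
  "@context": [
    "https://www.w3.org/2022/wot/td/v1.1",
    { "ex": "https://example.org/action-monitoring#" }
  ],
  "title": "RobotArm",
  "securityDefinitions": { "nosec_sc": { "scheme": "nosec" } },
  "security": "nosec_sc",
  "actions": {
    "setTCP": {
      "forms": [ { "href": "https://robotarm.example/tcp",
                   "op": "invokeaction",
                   "contentType": "application/ld+json" } ],
      "ex:pausedBy": "#/actions/pauseTCPSetting",
      "ex:resumedBy": "#/actions/resumeTCPSetting",
      "ex:progressObservedVia": "#/properties/tcpSettingStatus",
      "ex:outcomeObservedVia": "https://recognition.example/td#/properties/tcpObjectAlignment",
      "ex:contextObservedVia": "https://proximity.example/td#/properties/humanRobotDistance"
    },
    "pauseTCPSetting": {
      "forms": [ { "href": "https://robotarm.example/tcp/pause", "op": "invokeaction" } ]
    },
    "resumeTCPSetting": {
      "forms": [ { "href": "https://robotarm.example/tcp/resume", "op": "invokeaction" } ]
    }
  },
  "properties": {
    "tcpSettingStatus": {
      "type": "string",
      "observable": true,
      "forms": [ { "href": "https://robotarm.example/tcp/status",
                   "op": ["readproperty", "observeproperty"],
                   "contentType": "application/ld+json" } ]
    }
  }
}
```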
Comments: