o3de / ROSConDemo

A robotic fruit picking demo project for O3DE with ROS 2 Gem

Identify Orchestration Needs #9

Closed: forhalle closed this issue 2 years ago

forhalle commented 2 years ago

Identify the scripts necessary for the demo. For example, we may need a script to harvest the apples from the tree (if the robot uses suction to pick the apples, then make each apple disappear when it is harvested and reappear when it is dropped into the bin).

Acceptance Criteria:

forhalle commented 2 years ago

.

adamdbrw commented 2 years ago

To kick off this task, here is a rough description of the scripting. A "mission" script gathers apples from the specified rows until interrupted or done (no more apples); a rough skeleton of this loop is sketched after the list:

  1. Queue all the trees in the given rows (this can be implemented with a lazy / buffered approach).
  2. For each tree in our queue:
    1. Go to the tree (set a ROS 2 nav goal, but use the ground-truth position next to the tree). Each tree should have one or more gathering points (for total gathering coverage).
    2. Gather apples from the current tree until all are gathered. Calculate the manipulator's reach first (coverage of the x, y, z ranges, i.e. an axis-aligned box; this could be done once on startup). Then, for each gathering position of the current tree (we can start with a single position):
      1. Query the environment for the pickable apples (within manipulator reach).
      2. Queue the picking order (a simplified travelling-salesman heuristic, or just a "book reading" top-to-bottom, left-to-right sweep).
      3. For each apple in the queue:
        a. Calculate the desired pose of the manipulator.
        b. Position the manipulator in front of the apple (approach simultaneously in x and y, extend at the end).
        c. Pick the apple: it vanishes.
        d. Add the apple to the robot's storage.
        e. If the storage is full, unload (e.g. spawn a couple of full apple crates behind the robot and empty the storage).
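
As a starting point, here is a minimal sketch of that mission loop as an rclpy script. It assumes Nav2's NavigateToPose action for driving between gathering points and uses hypothetical ground-truth helpers (queue_trees, query_pickable_apples, pick_apple, unload_crates) as stand-ins; none of these names, topics or frames are the actual demo API.

```python
#!/usr/bin/env python3
# Mission-loop sketch. Assumptions: Nav2 provides 'navigate_to_pose',
# gathering points are known in the 'map' frame, and the stubs below
# stand in for the real simulation / ground-truth interfaces.
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import NavigateToPose

STORAGE_CAPACITY = 50  # assumed number of apples per on-board crate


def queue_trees(rows):
    """Hypothetical stub: yield trees (each with .gathering_points) for the given rows."""
    return []


def query_pickable_apples(gather_pose):
    """Hypothetical stub: ask the sim (ground truth) for apples within manipulator reach."""
    return []


def plan_picking_order(apples):
    """'Book reading' sweep: top-to-bottom (z), then left-to-right (y, assumed)."""
    return sorted(apples, key=lambda a: (-a.z, a.y))


def pick_apple(apple):
    """Hypothetical stub: plan the pose, move the manipulator (3a/3b), despawn the apple."""


def unload_crates():
    """Hypothetical stub: spawn full crates behind the robot, empty the on-board storage."""


class AppleMission(Node):
    def __init__(self):
        super().__init__('apple_mission')
        self._nav = ActionClient(self, NavigateToPose, 'navigate_to_pose')
        self._stored = 0

    def drive_to(self, x, y):
        """Send a Nav2 goal to a gathering point (ground-truth coordinates) and wait."""
        goal = NavigateToPose.Goal()
        goal.pose.header.frame_id = 'map'
        goal.pose.pose.position.x = float(x)
        goal.pose.pose.position.y = float(y)
        goal.pose.pose.orientation.w = 1.0
        self._nav.wait_for_server()
        send_future = self._nav.send_goal_async(goal)
        rclpy.spin_until_future_complete(self, send_future)
        result_future = send_future.result().get_result_async()
        rclpy.spin_until_future_complete(self, result_future)

    def run(self, rows):
        for tree in queue_trees(rows):                      # lazy tree queue
            for gather_pose in tree.gathering_points:       # gathering positions per tree
                self.drive_to(gather_pose.x, gather_pose.y)
                apples = query_pickable_apples(gather_pose)  # ground-truth query
                for apple in plan_picking_order(apples):     # picking order
                    pick_apple(apple)                        # manipulator motion + despawn
                    self._stored += 1
                    if self._stored >= STORAGE_CAPACITY:
                        unload_crates()
                        self._stored = 0


def main():
    rclpy.init()
    AppleMission().run(rows=[0, 1])
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

If parts of this later move into ROS 2 packages (as discussed below), the stubs are the natural seams to replace.
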
adamdbrw commented 2 years ago

If points 3a and 3b could be done through an integration with MoveIt 2, that would be great (we would like to demonstrate it).
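
For reference, a hedged sketch of what 3a/3b could look like with MoveIt 2's Python API (moveit_py), assuming the robot's MoveIt configuration is already loaded via the launch parameters; the planning-group name, end-effector link, frame and pose values below are placeholders, not the demo's actual setup.

```python
# Pose-goal planning sketch with moveit_py (MoveIt 2). Placeholder names:
# "arm" planning group, "gripper_link" end-effector link, "base_link" frame.
from geometry_msgs.msg import PoseStamped
from moveit.planning import MoveItPy


def move_to_apple(apple_xyz):
    robot = MoveItPy(node_name="apple_picker_moveit")   # requires MoveIt params from launch
    arm = robot.get_planning_component("arm")

    goal = PoseStamped()
    goal.header.frame_id = "base_link"                  # assumed planning frame
    goal.pose.position.x, goal.pose.position.y, goal.pose.position.z = apple_xyz
    goal.pose.orientation.w = 1.0                       # approach orientation kept trivial here

    arm.set_start_state_to_current_state()
    arm.set_goal_state(pose_stamped_msg=goal, pose_link="gripper_link")

    plan_result = arm.plan()                            # 3a: compute the desired motion
    if plan_result:
        robot.execute(plan_result.trajectory, controllers=[])  # 3b: move in front of the apple
```
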

adamdbrw commented 2 years ago

Giving it a bit of thought: do you think it would be good to move some of this into ROS 2 packages? This would reflect a real use case a bit more closely. One or more of the following could replace selected parts of the scripting (based on ground truth):

If we make only one such node, the apple detector would be the best choice. It would take the camera image as well as the manipulator pose(s) on topics and publish bounding boxes for the apples in the image. It could even be a stub that only republishes from a ground-truth topic (sim "cheating") to the detector topic, but it would still be an example of ROS 2 interaction.
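
A minimal sketch of such a stub, assuming vision_msgs/Detection2DArray for the bounding boxes and placeholder topic names (/camera/image, /ground_truth/apple_detections, /apple_detections); the real message types and topics would come from the demo setup.

```python
#!/usr/bin/env python3
# Stub apple "detector": republishes ground-truth detections as if they came
# from a vision pipeline. Topic names and message choices are assumptions.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2DArray


class AppleDetectorStub(Node):
    def __init__(self):
        super().__init__('apple_detector_stub')
        # The camera image is subscribed only so the node mirrors a real detector's inputs.
        self.create_subscription(Image, '/camera/image', self.on_image, 10)
        # Ground-truth boxes from the simulation (the "cheating" input).
        self.create_subscription(Detection2DArray, '/ground_truth/apple_detections',
                                 self.on_ground_truth, 10)
        self.pub = self.create_publisher(Detection2DArray, '/apple_detections', 10)
        self.last_image_stamp = None

    def on_image(self, msg: Image):
        # A real detector would run inference here; the stub only keeps the stamp.
        self.last_image_stamp = msg.header.stamp

    def on_ground_truth(self, msg: Detection2DArray):
        # Re-stamp with the latest image time (if any) and republish as "detections".
        if self.last_image_stamp is not None:
            msg.header.stamp = self.last_image_stamp
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(AppleDetectorStub())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Swapping this stub for a real detector later would only change the callbacks; downstream consumers of /apple_detections would be unaffected, which is exactly the ROS 2 interaction we would want to demonstrate.
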

Let me know if it makes sense to you.

adamdbrw commented 2 years ago

Orchestration needs have been identified, with the following subtasks: #42 #43 #44 #45 #46 #47. @forhalle this task can now be closed. Tasks #42-#46 might spawn smaller sub-tasks in the future, but together they are supposed to cover all the required automation / scripting for the live demo.

Note: we should add a task for user interaction within the AWS live demo. We should ensure that manual control works, that there is a certain gamification to it (a timer), and that there are good camera views for the task, etc.

forhalle commented 2 years ago

Hi @adamdbrw - I'll close the issue this time, but feel free to close issues in the future (I don't want to be a bottleneck). Also, per these notes, thanks for creating the user interaction tasks you mention above.