o3de / ROSConDemo

A robotic fruit picking demo project for O3DE with ROS 2 Gem

Document Robot Vehicle Design #4

Closed · forhalle closed 2 years ago

forhalle commented 2 years ago

Document the robot vehicle design, including the specification (size, number, etc.) of all parts (wheels, motor, container, robotic arm, sensors, etc.), as well as scale, mesh, joint, and rig requirements.

Acceptance Criteria

Linked Issues

forhalle commented 2 years ago

.

forhalle commented 2 years ago

Per today's meeting, @adamdbrw will drive the next iteration of the design, focusing on sensors.

adamdbrw commented 2 years ago

After giving it some thought, I have a design in mind that is loosely inspired by the Abundant Robotics demo video. I am sharing my initial thoughts so we can kickstart a discussion. I will be working on a description and some drawings (I am not good at drawing) and will also interact with https://github.com/aws-lumberyard/ROSConDemo/wiki/Demo-Walkthrough.

For the manipulation and picking: an XY sliding frame with a telescopic suction gripper and pipe. Width, height, and extension length are our parameters to play with. We need one frame on each side, but two on each side could be valid as well. A camera is mounted on top of the suction gripper. What we simulate:

Alternative: 4-8 arms (2-4 per side) on vertical sliders with 2 joints each (elbow, hand) and a 3-finger gripper. Bringing the apple to the storage could be a bit awkward, but we could have an extensible feeder half-pipe for each slider.

Sensors (models TBD):

Mobile base: a long rough-terrain chassis with four wheels. A picture for inspiration: [image]

Let me know if this high-level view is something you would like me to progress with.

forhalle commented 2 years ago

@adamdbrw - Thanks for this! The AWS team reviewed it and loves the suggestion. As a next step, @SLeibrick will create a sketch for review during tomorrow's meeting based on your suggestion above (and based on the suction design). We are hopeful @j-rivero's team will be able to create the model based on this sketch.

SLeibrick commented 2 years ago

[Sketch images: top, front, and side-rear views]

SLeibrick commented 2 years ago

The red areas are where the cameras or lidar sensors go. It is a pretty simple design based on Adam's comments: camera sensors on the front, the back, and the apple tube, plus a lidar sensor on the front. Blue arrows indicate motion for the picking array. The black box is where the apples go; they are then teleported to the back of the black box and come out into the container on the back of the vehicle.

adamdbrw commented 2 years ago

@SLeibrick I like it! :)

I think that adding the apple vacuum pipe would be a nice improvement, since my initial impression was "where do the apples go?". It does not need to be functional physics-wise, since we will teleport the apples, but in a reference use-case simulation it would be important to simulate the actual apple movement, especially to check possibilities such as apple bruising or clogging. We can at least reflect that visually by adding the pipe.

Note that the pipe should be elastic (but with limited bend angles so that the apples can always go through) and extensible (or long enough for the most extreme case of suction device position). It should feed into the middle box (we assume the magic of soft landing and sorting out the bad apples happens there).

Further details include cables for the sensors (power, data, perhaps sync) - these are completely optional; consider whether we would like to make it more realistic in further iterations. The other point I mentioned in the design is that another frame could be added on the other side. It depends heavily on the apple tree row spacing vs. the robot width plus the telescopic range of our manipulators (for it to make sense, both sides need to be fully reachable). I think it just looks cooler if we have another manipulator frame on the other side. This is a second-order improvement though, and we might postpone it for later. We could place some graphics / decoration on the side of the machine: our logos, a ROS 2 Inside logo (after checks), or a fancy name such as "Apple Kraken".

forhalle commented 2 years ago

Notes from today's meeting:

SLeibrick commented 2 years ago

The distance between the rows of apple trees should be 3m, so the robot dimensions should be 1.5m wide and 3m long, with a max height of 2m for the robot's pneumatic tube arm.

forhalle commented 2 years ago

@SLeibrick - Now that you have provided the robot size estimates, do you have any more work to do on this? If not, can we assign it to @j-rivero for feedback (per action item above)?

SLeibrick commented 2 years ago

No more feedback for now unless there are more questions about dimensions.

mabelzhang commented 2 years ago

Hi All,

@j-rivero has asked me to provide feedback from Open Robotics’ side.

Without a lot of technical context, this is a combination of probing questions and suggestions that will hopefully help the demo be the best it can be. Feel free to take or ignore whichever items are applicable.

Suction: we can simply teleport the apple to the storage bin.

Is this referring to the suction grasp, or the placing? I was curious if the physics for suction is real, or if it’s implemented as a translation / teleportation.

4-8 arms (2-4 each side) on vertical sliders with 2 joints each (elbow, hand), 3-finger gripper.

With so many joints and moving parts, an opportunity to showcase O3DE might be performance, in terms of time and accuracy. I guess accuracy comes more from PhysX than from O3DE. Rendering-wise, this might not be anything special.

Sensors (models TBD):

Sensors may be more relevant for showcasing performance, since O3DE is more about graphics. With so many sensors in the world, especially with both images and point clouds, it can be challenging for simulators to run in real time or faster than real time. The real-time factor might be something to stress test and showcase. Obviously, with powerful enough computers, anything can be real time; for this to be relevant to most users, it should probably be measured on typical hardware the target audience is expected to have.
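
As an illustration of what such a stress test could measure, here is a minimal rclpy sketch (not part of the demo; the node name and use of the standard /clock topic are assumptions) that estimates the real-time factor by comparing simulation time against wall time:

```python
# Hypothetical sketch: estimate the real-time factor (RTF) by comparing
# simulation time published on /clock with wall-clock time.
import time

import rclpy
from rclpy.node import Node
from rosgraph_msgs.msg import Clock


class RtfMonitor(Node):
    def __init__(self):
        super().__init__('rtf_monitor')
        self._first_sim = None   # first sim timestamp seen (seconds)
        self._first_wall = None  # wall time when that timestamp arrived
        self.create_subscription(Clock, '/clock', self._on_clock, 10)

    def _on_clock(self, msg: Clock):
        sim = msg.clock.sec + msg.clock.nanosec * 1e-9
        wall = time.monotonic()
        if self._first_sim is None:
            self._first_sim, self._first_wall = sim, wall
            return
        wall_elapsed = wall - self._first_wall
        if wall_elapsed > 0.0:
            self.get_logger().info(
                f'RTF ~ {(sim - self._first_sim) / wall_elapsed:.2f}')


def main():
    rclpy.init()
    rclpy.spin(RtfMonitor())


if __name__ == '__main__':
    main()
```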

Camera sensors on grippers, for visualization.

Are there advantages in camera image rendering that come with O3DE? How high can the camera frame rate go? Does camera frame rate matter much for agricultural applications - perhaps not as much as for dynamic manipulation, since the robot is using a suction cup. Maybe one relevant question is how fast the robot picks apples, whether it stops completely before picking or there might still be motion from the chassis, and whether a higher camera frame rate helps with more efficient picking.

General comments:

Please let me know if this type of feedback from us is adequate, as I'm essentially parachuting into this thread, and whether you have any questions or need clarification on anything I said. Thanks!

adamdbrw commented 2 years ago

@mabelzhang thank you for your feedback and for putting in the effort to think about it! Let me try to answer some of the questions. I might not have all the answers, but perhaps collectively we can arrive at a good understanding.

A preliminary clarifying question is, what is the primary goal the demo is trying to showcase? In my understanding, this is to showcase the combination of O3DE and ROS 2? O3DE is a graphics engine, and the physics is provided by PhysX?

O3DE is a game/simulation development engine, which includes, among other parts, a multi-threaded renderer and a physics engine (PhysX). We would like to showcase how O3DE can be used for robotic simulation with ROS 2. I believe the message is that O3DE with its ROS 2 Gem is very promising and already quite capable. Our goal is to invite the community to try it out and to contribute to its development.

What are the characteristics and advantages of O3DE that make it stand out from other engines that people may already be using? Does this demo, and specifically, this robot design, highlight these advantages? Why should people choose O3DE for the agricultural application showcased over competitor software? How do they know this isn’t just another reproduction of something that’s already working in existing engines, say, Unreal Engine or Gazebo, that makes O3DE more suitable for their applications?

O3DE is developing at a solid pace. While we certainly cannot make up for the years of development that some existing engines have already had with robotic simulation in such a short time, I believe that O3DE has, or will have, substantial advantages. Some of them are:

  1. It is open source with no fees and has an active and supportive community.
  2. The ROS 2 integration in O3DE is poised to be better than in some other engines in terms of developer power and overall experience, as well as performance (no bridging).
  3. We aim for it to be well-documented. Quite a bit of work towards this goal has already been completed.
  4. We aim for it to be scalable, which means both performant and well-integrated with scale-up solutions such as AWS RoboMaker. We have already deployed such a solution with the last demo.

For the imminent demo at ROSCon 2022, we would like to underline these items and show that O3DE could be a good choice for developing a robotic use-case. Our showcase example should be visually appealing, relevant to an actual use-case in robotic agriculture, and demonstrate the engine and the ROS 2 Gem successfully applied to a problem. It is also easy to show scaling up, considering the area and the multiple rows of apple trees.

Is this referring to the suction grasp, or the placing? I was curious if the physics for suction is real, or if it’s implemented as a translation / teleportation. How about the visual accuracy - when an apple is sucked, for example, does it shake around a bit like in the Abundant Robotics demo video? The only thing that probably requires some testing and tuning is the position of the on-hand / in-hand camera

Note that these items would be more relevant if we were simulating a real robot and providing a tool to validate it. Our approach is to show the operation as intended and look at it in a modular way: we are doing X based on ground truth, but one could replace this with an actual algorithm to validate.

  1. The suction gripper just teleports the apple - but it could just as well include a simulation of forces.
  2. The transmission belt within is not simulated - but it could be, if that is the part someone would want to validate in a similar robot.
  3. Apple bruising is not simulated - but since it is important for the use-case, one could add such a simulation based on the forces applied to the apple's rigid body.
  4. There is no apple detection in the demo (we are using ground truth) - but one could just as well run a ROS 2 detector package on sim camera data (see the sketch after this list).
  5. .... (other items include: replacing the ground-truth position with some EKF, simulating sensor distortion and noise, apple storage, battery life / charging of the robot, and many many more).
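
To illustrate item 4, here is a minimal sketch of what swapping ground truth for a detector could look like. The topic names and the detect_apples() helper are hypothetical placeholders, and a real pipeline would also need depth or stereo data to recover 3D apple positions:

```python
# Illustrative only: subscribe to simulated camera images and publish detected
# apple poses instead of using ground truth. detect_apples() is a placeholder.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseArray


def detect_apples(image: Image) -> PoseArray:
    # Placeholder: a real implementation would run a trained detector here
    # and fuse depth/stereo data to estimate 3D apple positions.
    return PoseArray(header=image.header)


class AppleDetector(Node):
    def __init__(self):
        super().__init__('apple_detector')
        self._pub = self.create_publisher(PoseArray, 'detected_apples', 10)
        self.create_subscription(Image, 'camera/image_raw', self._on_image, 10)

    def _on_image(self, msg: Image):
        self._pub.publish(detect_apples(msg))


def main():
    rclpy.init()
    rclpy.spin(AppleDetector())


if __name__ == '__main__':
    main()
```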

Note that we also use this demo to drive development of features around urdf import, vehicle dynamics and manipulation. I believe that a perfect next milestone for O3DE + ROS 2 Gem would be a simulation of a real use-case in cooperation with an end user.

What format is the robot description in? With ROS 2 in the picture, is it something like URDF?

Yes, we will create the URDF for this robot and use our Importer.
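
As a side note, here is a small sketch of the kind of sanity check one might run on the hand-written URDF before feeding it to the importer. The file name is illustrative; this uses the urdf_parser_py module from the urdfdom_py package:

```python
# Sketch: parse the robot's URDF and list its joints with their limits,
# as a quick check before importing it into O3DE.
from urdf_parser_py.urdf import URDF

robot = URDF.from_xml_file('apple_picker.urdf')  # illustrative file name
print(f'{robot.name}: {len(robot.links)} links, {len(robot.joints)} joints')
for joint in robot.joints:
    limit = joint.limit
    bounds = f'[{limit.lower}, {limit.upper}]' if limit else 'no limit'
    print(f'  {joint.name} ({joint.type}) {bounds}')
```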

(Points about sensors and performance) These points are good, and we certainly would like to have great performance. Initial work towards it has been done, but much more remains. I am not sure how much we can still do for the demo, but I have some ideas. Performance benchmarking and comparison is something I enjoy doing, so I would love it if we could find time for it, even if it is after ROSCon.

These are just my answers. If anyone has something to add or dispute - please join the discussion.

j-rivero commented 2 years ago

Quick notes for the navigation:

Front lidar (for obstacle detection, and navigation stack). Additional lidars on the frame, (optionally): front left, front right, back.

Given that the terrain is flat, I think we can assume that the front lidar is a horizontal lidar. We need to ensure that it is placed so it does not detect the vehicle's own structure.

Question: assuming that the movement is going to be managed by the navstack, and given the environment design in #12, is the goal of the demo to be able to move the robot to any place in the scene, or only to process the straight apple tree lines at the bottom of the scene?

Reading the scripting design, it seems to me that we are going to control the navstack goals but process all of the apple tree lines. If that is the case, I am not sure we can get away with a single front lidar when performing some of the turns with a 3m-long vehicle (especially U-turns between contiguous lines of trees) without crashing. For this, two ideas to make our life easier:

A side note is that we need to construct the map of the scene for the navstack; SLAM makes little sense to me in the context of the demo.
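
For a rough sense of how tight that U-turn is, here is a back-of-envelope sketch (not a navstack computation; the rear-overhang split is an assumption, and only the 3 m row spacing and 1.5 m x 3 m footprint come from this thread):

```python
# Back-of-envelope check: for a circular U-turn into the adjacent corridor,
# the path radius is half the row spacing, and the front outside corner of
# the vehicle sweeps a larger circle that must fit in the headland.
import math

row_spacing = 3.0    # m, distance between apple tree rows (from this thread)
length = 3.0         # m, vehicle length (from this thread)
width = 1.5          # m, vehicle width (from this thread)
rear_overhang = 0.5  # m, assumed distance from rear bumper to rear axle

path_radius = row_spacing / 2.0          # turn circle of the rear-axle midpoint
axle_to_front = length - rear_overhang   # rear axle to front bumper
outer_radius = math.hypot(path_radius + width / 2.0, axle_to_front)

print(f'Path radius: {path_radius:.2f} m')
print(f'Swept radius of the front outside corner: {outer_radius:.2f} m')
```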

j-rivero commented 2 years ago

Not related to the vehicle design, but as preparation for possible answers to be given at ROSCon:

Note that these items would be more relevant if we were simulating a real robot and providing a tool to validate it. Our approach is to show the operation as intended and look at it in a modular way: we are doing X based on ground truth, but one could replace this with an actual algorithm to validate.

While I understand perfectly the state of the current development and the scope of the demo, questions at ROSCon can be picky, so for example:

1. Suction gripper just teleports the apple - but it could just as well include simulation of forces.

Imaginary attendee asking questions: "Ah, great! Do you think that the simulator is fully capable of simulating the aerodynamics of this case? Do you have an example of that kind of simulation?" :wink:

2. Transmission belt within is not simulated - but it could be, if this is the part someone would want to validate in a similar robot.

The same goes for the non-rigid bodies.

mabelzhang commented 2 years ago

@adamdbrw thank you for those detailed answers! That gives me a better sense.

@j-rivero raises good points above.

I think the last points about the capability of the simulation are very valid, and they apply to this bullet too:

3. Apple bruising is not simulated - but since it is important for the use-case, one could add such a simulation based on the forces applied to the apple's rigid body.

While reading it, I was thinking that this is actually really difficult to do. At the state of the art, contact forces are very difficult for any simulator to do accurately. Questions from advanced users will probably be along these very technical lines, as pointed out above.

The ROS 2 integration in O3DE is poised to be better than in some other engines in terms of developer power and overall experience, as well as performance (no bridging).

This can be a double-edged sword, as some users view ROS as a large dependency. How much of ROS 2 needs to be installed for this integration to work - does it work with just the minimum installation of ROS 2?

adamdbrw commented 2 years ago

While reading it, I was thinking that this is actually really difficult to do. At the state of the art, contact forces are very difficult for any simulator to do accurately.

It could be enough for many cases to simply simulate whether an apple was bruised (not the size, placement, or other characteristics of the bruise). I think that the question of "what degree of realism is possible using a certain engine" is often not easy to answer without a lot of work (trying different models of a given physical phenomenon), and is often not as important as the question "what do I need to simulate to get most of the value". Having said this, proofs of capabilities are important, and it will be good to keep that in mind for stretch goals and further milestones.
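
A minimal sketch of that boolean-level bruising model (the threshold value and the callback hookup are assumptions; in the project this would hang off the engine's collision events):

```python
# Sketch: mark an apple as bruised when any contact impulse on its rigid body
# exceeds a threshold - no bruise size, placement, or deformation modelled.
from dataclasses import dataclass

BRUISE_IMPULSE_THRESHOLD = 0.5  # N*s, illustrative value only


@dataclass
class Apple:
    name: str
    bruised: bool = False


def on_contact(apple: Apple, contact_impulse: float) -> None:
    # Would be called from a (hypothetical) collision callback.
    if contact_impulse > BRUISE_IMPULSE_THRESHOLD:
        apple.bruised = True
```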

How much of ROS 2 needs to be installed for this integration to work - does it work with just the minimum installation of ROS 2?

The current version of the Gem needs the following (and their deps): rclcpp, builtin_interfaces, std_msgs, sensor_msgs, urdfdom, tf2_ros. On top of that, the project would use additional packages, such as the navigation stack (really up to the project's developers).

If we want to support a case with reduced ROS dependencies, standalone releases are possible as well - where all necessary libraries are actually included and no dependencies need to be installed.

j-rivero commented 2 years ago

The black box is where the apples go; they are then teleported to the back of the black box and come out into the container on the back of the vehicle.

Returning to this black box: if we need to show the teleportation of the apples, does the apple container need to be open, or is there another potential option that is not too complex? To simplify things, we could go with a simple fixed design that partially shows the capacity of the open basket, something like:

adamdbrw commented 2 years ago

I think we can progress in the following way:

  1. Make an opaque storage that holds infinitely many apples.
  2. Show visuals for a few distinct states, as you proposed.
     2a. (Stretch goal) Actually place the apples there (we could make them kinematic objects if needed and compute their placement relative to the storage frame; see the sketch after this list).
  3. Integrate with unload scripting (when full, spawn a couple of crates of apples).
  4. (Even more of a stretch goal) Actually lower the crates through some kind of mechanism (e.g. with a floor opening).
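
For 2a, a small sketch of how placement in the storage frame could be computed (bin dimensions and apple radius are assumptions, not from the design):

```python
# Sketch: grid layout of stored apples expressed in the storage-box frame,
# so they could be spawned as kinematic objects at those offsets.
def apple_offsets(count, bin_width=0.8, bin_length=1.0, apple_radius=0.04):
    """Return (x, y, z) offsets in the storage frame for `count` apples."""
    step = 2 * apple_radius
    per_row = max(1, int(bin_width / step))
    per_layer = per_row * max(1, int(bin_length / step))
    offsets = []
    for i in range(count):
        layer, rest = divmod(i, per_layer)
        row, col = divmod(rest, per_row)
        offsets.append((col * step + apple_radius,
                        row * step + apple_radius,
                        layer * step + apple_radius))
    return offsets


# Example: offsets of the first three stored apples.
print(apple_offsets(3))
```
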
forhalle commented 2 years ago

Notes from today's conversation: