ensenso / ros_driver

Official ROS driver for Ensenso stereo cameras.
http://wiki.ros.org/ensenso_driver
BSD 3-Clause "New" or "Revised" License

Implement configurable frames for virtual objects #109

Open erblinium opened 1 year ago

benthie commented 1 year ago

Hi @erblinium,

thank you very much for your effort.

Is there any chance you could show me the contents of an objectFile together with the corresponding linkTree (the same way you did it in your last PR)? I am trying to get my head around the involved transformations and their frames ;)

Kind Regards Benny

erblinium commented 1 year ago

Hi @benthie,

I added the NxTree as an attachment (I could not record while running my application, but I did record when I opened the camera and objects in NxView). Here is part of an object JSON file:

[
  {
    "Fixed": true,
    "Lighting": {
      "Ambient": 100.0,
      "Color": [
        0,
        0,
        0
      ],
      "Diffuse": 1.0,
      "MaterialBlur": 0,
      "Shininess": 100,
      "Specular": 0.0
    },
    "Link": {
      "Inverse": true,
      "Rotation": {
        "Angle": 0,
        "Axis": [
          1,
          0,
          0
        ]
      },
      "Target": "aruco",
      "Translation": [
        -250.0,
        250.0,
        0.0
      ]
    },
    "Mass": 1,
    "Type": "Cuboid",
    "Width": 100.0,
    "Height": 20.0,
    "Depth": 100.0
  }
]
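
For context, here is a sketch of how I read the Link entry above, assuming the Translation is in millimeters, the Rotation is axis-angle in radians, and "Inverse": true means the stored transform has to be inverted before use. This is my interpretation, not necessarily what the driver does, and the object frame name is a placeholder:

import numpy as np
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped
from tf.transformations import (quaternion_about_axis, quaternion_matrix,
                                quaternion_from_matrix)

def link_to_tf(link, child_frame):
    # Build a homogeneous matrix from the axis-angle rotation ...
    m = quaternion_matrix(
        quaternion_about_axis(link["Rotation"]["Angle"], link["Rotation"]["Axis"]))
    # ... and the translation, converted from mm to m (assumption).
    m[:3, 3] = [v / 1000.0 for v in link["Translation"]]
    if link.get("Inverse", False):
        m = np.linalg.inv(m)  # assumption: the file stores the inverse transform
    msg = TransformStamped()
    msg.header.stamp = rospy.Time(0)
    msg.header.frame_id = link["Target"]  # "aruco" in the file above
    msg.child_frame_id = child_frame      # placeholder object frame name
    (msg.transform.translation.x, msg.transform.translation.y,
     msg.transform.translation.z) = m[:3, 3]
    (msg.transform.rotation.x, msg.transform.rotation.y,
     msg.transform.rotation.z, msg.transform.rotation.w) = quaternion_from_matrix(m)
    return msg

# e.g. tf2_ros.StaticTransformBroadcaster().sendTransform(
#          link_to_tf(obj["Link"], "virtual_object_0"))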

nxTree_tmp.nxlog.txt

benthie commented 1 year ago

I was actually hoping you could provide me with the tf link tree as you did last time; I think you ran rosrun tf view_frames back then. You can post the resulting PDF file directly here.

The NxLog file is not that easy to read. Could you maybe post the complete tree in a more readable form?

If your objects file does not contain any information you do not want to be public, I would also be interested in the complete file.

I hope that is not too much to ask ;)

erblinium commented 1 year ago

Oh sorry. I remember now :)

aruco_objects.json.txt

benthie commented 10 months ago

Hi @erblinium,

I finally found some time to have a look at your recent changes on this branch and at the new features in the other two pull requests.

As I asked in one of the above conversations, could you write a test case for your feature? This would massively help us understand the feature and make reviewing it easier. For the test you could simply create a virtual camera in NxView containing some virtual objects, save it as a ZIP file and use it as the base for your test. Additionally, you could also save your objects file from NxView. The test case could look something like this:
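
(This is just a sketch of the idea, not an existing test from ensenso_camera_test; the frame names and the expected pose are placeholders you would have to adapt.)

#!/usr/bin/env python
import unittest

import rospy
import rostest
import tf2_ros

class TestVirtualObjectFrames(unittest.TestCase):
    def setUp(self):
        self.tf_buffer = tf2_ros.Buffer()
        self.tf_listener = tf2_ros.TransformListener(self.tf_buffer)

    def test_object_frame_is_published(self):
        # After the driver has loaded the virtual camera (from the ZIP file)
        # and the objects file, the virtual object's frame should be
        # resolvable against the camera frame.
        transform = self.tf_buffer.lookup_transform(
            "camera_frame",      # placeholder frame name
            "virtual_object_0",  # placeholder frame name
            rospy.Time(0),
            rospy.Duration(10))
        # Expected pose taken from the objects file (mm -> m); the exact
        # value depends on how the Link is interpreted.
        self.assertAlmostEqual(transform.transform.translation.x, -0.25, places=3)

if __name__ == "__main__":
    rospy.init_node("test_virtual_object_frames")
    rostest.rosrun("ensenso_camera_test", "virtual_object_frames",
                   TestVirtualObjectFrames)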

In the conversation above you said that you are adding other objects in the meantime. As far as I understand the feature at the moment, you have an objects file with the initial virtual object information, containing the original transformation of each object, which is later used to determine the new object pose after (I guess) the camera has been moved. What happens if we add a new object? Don't we have to remember its original transformation as well (see the sketch below)? And in case we also want to be able to remove an object, the corresponding original transformation has to be removed as well, right?
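
In pseudo-Python, the bookkeeping I am imagining looks roughly like this (an illustration of my question, not your actual code):

# Keep each object's original transformation so that it can be re-applied
# after the camera moves, and drop it again when the object is removed.
original_links = {}  # object name -> Link entry from the objects file

def add_object(name, link):
    # Remember the original transformation at the moment the object is added.
    original_links[name] = dict(link)

def remove_object(name):
    # Forget the original transformation together with the object itself.
    original_links.pop(name, None)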

Kind regards Benny