Closed abdotaarek closed 9 months ago
To make objects hidden you can do the following:

from omni.isaac.core.utils.prims import get_prim_at_path
prim = get_prim_at_path(prim_path)
prim.SetActive(False)
Thank you for the fast response, I really appreciate it. It worked for me.
Hello, can I make the objects transparent while keeping them physically active?
Yes, you can find transparent materials online and convert them to a USD-compatible format, or modify an existing material to be transparent, and assign that material to the object as in https://github.com/arnold-benchmark/arnold/blob/ee6c4840517ef46f159971b611bf3df998db261b/tasks/base_task.py#L272
I've tried to modify existing materials, but it did not work for me. Can you tell me how exactly I can apply a material to objects, for example /World_0/house/furniture?
Assuming the material is located at a URL or file location floor_material_url, you can create the material in USD using this:

omni.kit.commands.execute(
    "CreateMdlMaterialPrim",
    mtl_url=floor_material_url,
    mtl_name=floor_mtl_name,
    mtl_path=floor_material_prim_path,
    select_new_prim=False,
)

Then you need to bind the material to the object using this:

omni.kit.commands.execute(
    "BindMaterial",
    prim_path=floor_prim.GetPath(),
    material_path=floor_material_prim_path,
    strength=UsdShade.Tokens.strongerThanDescendants,
)
Thanks a lot, I was able to apply different materials. But another question: how do I access the train and test demonstration episodes and get the frames out of them? I want to run an experiment that involves encoding all frames in a CLIP video encoder space. Thanks in advance, I appreciate your help.
Hi, you can refer to training scripts. The details are here: https://arnold-docs.readthedocs.io/en/latest/tutorial/setup/index.html#quickstart
In dataset.py, we have similar operations for extracting frames and corresponding observations from the demonstration npz file. You can refer to that script. Specifically, each frame is a dict, and you can access the observations by indexing with 'images'. The value is a list, and each entry is a dict containing all information from one camera. The logic is like below:
import numpy as np
npz_file = np.load('demo.npz', allow_pickle=True)
gt_frames = npz_file['gt'] # this may require the package pxr, I am not sure
step = gt_frames[0]
obs_all = step['images']
obs_front = obs_all[0]
rgb_front = obs_front['rgb'] # shape: (H, W, 4)
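To see that access pattern end to end without a real demonstration file, here is a small runnable sketch that builds a mock npz with the same nesting ('gt', 'images', 'rgb' are the keys from the snippet above; the image shape is illustrative):

```python
import io
import numpy as np

# Mock one step of the demonstration structure described above:
# 'gt' -> array of steps; each step is a dict with an 'images' list;
# each list entry is a per-camera dict holding e.g. an RGBA image.
frame = {'images': [{'rgb': np.zeros((128, 128, 4), dtype=np.uint8)}]}

buf = io.BytesIO()
np.savez(buf, gt=np.array([frame], dtype=object))
buf.seek(0)

# Same access pattern as with a real demo npz file.
npz_file = np.load(buf, allow_pickle=True)
gt_frames = npz_file['gt']
step = gt_frames[0]
obs_front = step['images'][0]  # first camera
rgb_front = obs_front['rgb']
print(rgb_front.shape)  # (128, 128, 4)
```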
So, is it possible to extract the frames without being able to train, since my PC isn't powerful enough? Also, regarding the code you provided: should I add it to the dataset script or make a separate script with only this code, and can I point the file path at any npz file I want?
Yes, you can run it outside Isaac Sim. To extract the frames, you can simply run the code snippet in ipython, with whatever npz file.
I did this, but it says it needs pxr, and when I import pxr it doesn't make a difference.
What do you mean by "it doesn't make a difference"? Can you import pxr? If you cannot, you can try:

source ${Isaac_Sim_Root}/setup_conda_env.sh

and then try import pxr again. Alternatively, import SimulationApp first and then import pxr.
I tried all these solutions and still got this error:
Traceback (most recent call last):
File "/media/local/atarek/arnold/helptest.py", line 2, in
I ran the code you told me, which is this:
import numpy as np
import pxr
npz_file = np.load('/media/local/atarek/arnold/data_root/open_drawer/train/Steven-open_drawer-0-0-0.0-0.5-2-Mon_Jan_30_06:06:44_2023.npz', allow_pickle=True)
gt_frames = npz_file['gt']
step = gt_frames[0]
obs_all = step['images']
obs_front = obs_all[0]
rgb_front = obs_front['rgb']
The problem is that pxr does not exist until you start SimulationApp. A workaround is to pip install usd-core if you do not need Isaac Sim, and pip uninstall usd-core when you do need Isaac Sim.
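A minimal sketch for checking which situation you are in before running the extraction snippet:

```python
# Check whether pxr is importable in the current environment.
# Outside Isaac Sim, `pip install usd-core` provides it; inside Isaac Sim,
# pxr only becomes importable after SimulationApp has started.
try:
    import pxr  # noqa: F401
    have_pxr = True
except ImportError:
    have_pxr = False

print('pxr available:', have_pxr)
```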
Thank you so much, it worked
Hello, thank you all for this amazing work. I'm trying to make some customizations to the environment: I basically want to make some objects transparent or hidden during evaluation. Can you guide me on how to do it?