haosulab / ManiSkill

SAPIEN Manipulation Skill Framework, a GPU parallelized robotics simulator and benchmark
https://maniskill.ai/
Apache License 2.0

[Question/Bug] ValueError Capturing Video #399

Closed: CreativeNick closed this issue 3 months ago

CreativeNick commented 3 months ago

I'm facing an issue where I'm unable to capture video during training and evaluation, which forces me to add --no-capture-video to my commands to avoid this error.

For instance, when running python ppo.py I receive the following error message:

(ms_dev) creativenick@creativenick:~/Desktop/SimToReal/bimanual-sapien$ python ppo.py
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/tyro/_fields.py:343: UserWarning: The field wandb_entity is annotated with type <class 'str'>, but the default value None has type <class 'NoneType'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
  warnings.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/tyro/_fields.py:343: UserWarning: The field checkpoint is annotated with type <class 'str'>, but the default value None has type <class 'NoneType'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
  warnings.warn(
Running training
Saving eval videos to runs/PickCube-v1__ppo__1__1717470391/videos
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.max_episode_steps to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.max_episode_steps` for environment variables or `env.get_wrapper_attr('max_episode_steps')` that will search the reminding wrappers.
  logger.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.single_observation_space to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.single_observation_space` for environment variables or `env.get_wrapper_attr('single_observation_space')` that will search the reminding wrappers.
  logger.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.single_action_space to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.single_action_space` for environment variables or `env.get_wrapper_attr('single_action_space')` that will search the reminding wrappers.
  logger.warn(
####
args.num_iterations=390 args.num_envs=512 args.num_eval_envs=2
args.minibatch_size=800 args.batch_size=25600 args.update_epochs=4
####
Epoch: 1, global_step=0
Evaluating
Traceback (most recent call last):
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/ppo.py", line 332, in <module>
    eval_envs.step(agent.get_action(eval_obs, deterministic=True))
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/vector/wrappers/gymnasium.py", line 96, in step
    obs, rew, terminations, truncations, infos = self._env.step(actions)
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/wrappers/record.py", line 492, in step
    self.flush_video()
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/wrappers/record.py", line 698, in flush_video
    images_to_video(
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/visualization/misc.py", line 50, in images_to_video
    writer.append_data(im)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio/core/format.py", line 590, in append_data
    return self._append_data(im, total_meta)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio/plugins/ffmpeg.py", line 565, in _append_data
    h, w = im.shape[:2]
ValueError: not enough values to unpack (expected 2, got 0)

Is there something I need to install? I already have ffmpeg installed, as shown below:

(ms_dev) creativenick@creativenick:~/Desktop/SimToReal/bimanual-sapien$ pip install ffmpeg
Requirement already satisfied: ffmpeg in /home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages (1.4)
(ms_dev) creativenick@creativenick:~/Desktop/SimToReal/bimanual-sapien$ ffmpeg -version
ffmpeg version 6.1.1-3ubuntu5 Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 13 (Ubuntu 13.2.0-23ubuntu3)
configuration: --prefix=/usr --extra-version=3ubuntu5 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --disable-omx --enable-gnutls --enable-libaom --enable-libass --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libharfbuzz --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-openal --enable-opencl --enable-opengl --disable-sndio --enable-libvpl --disable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-ladspa --enable-libbluray --enable-libjack --enable-libpulse --enable-librabbitmq --enable-librist --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libx264 --enable-libzmq --enable-libzvbi --enable-lv2 --enable-sdl2 --enable-libplacebo --enable-librav1e --enable-pocketsphinx --enable-librsvg --enable-libjxl --enable-shared
libavutil      58. 29.100 / 58. 29.100
libavcodec     60. 31.102 / 60. 31.102
libavformat    60. 16.100 / 60. 16.100
libavdevice    60.  3.100 / 60.  3.100
libavfilter     9. 12.100 /  9. 12.100
libswscale      7.  5.100 /  7.  5.100
libswresample   4. 12.100 /  4. 12.100
libpostproc    57.  3.100 / 57.  3.100
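
To narrow it down, here's a minimal sketch (my own guess at the trigger, not confirmed) showing that the exact ValueError appears when a zero-dimensional array reaches the writer's `im.shape[:2]` line:

```python
import numpy as np

# The ffmpeg writer does `h, w = im.shape[:2]`; a 0-d array (shape == ())
# makes that unpack fail with the exact message in the traceback above.
im = np.array(0.0)
try:
    h, w = im.shape[:2]
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 0)
```

So an empty or scalar frame reaching the writer, rather than a broken ffmpeg install, may be what triggers this first error.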
StoneT2000 commented 3 months ago

This seems like a strange error. Could you try running this script first? It's a barebones demo script that tests video generation:

python -m mani_skill.examples.demo_random_action -e "PickCube-v1" \
  --render-mode="rgb_array" --record-dir="videos"
CreativeNick commented 3 months ago

I get a TypeError when running that command. It seems to run normally at first but crashes shortly after. I made sure imageio and ffmpeg were fully up to date.

(ms_dev) creativenick@creativenick:~/Desktop/SimToReal/bimanual-sapien/mani_skill$ python -m mani_skill.examples.demo_random_action -e "PickCube-v1"   --render-mode="rgb_array" --record-dir="videos"
opts: []
env_kwargs: {}
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.max_episode_steps to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.max_episode_steps` for environment variables or `env.get_wrapper_attr('max_episode_steps')` that will search the reminding wrappers.
  logger.warn(
2024-06-29 20:12:55,256 - mani_skill  - WARNING - mani_skill is not installed with git.
Observation space Dict()
Action space Box(-1.0, 1.0, (8,), float32)
Control mode pd_joint_delta_pos
Reward mode normalized_dense
reward 0.0654616728425026
terminated tensor([False])
truncated tensor([False])
info {'elapsed_steps': array([1], dtype=int32), 'success': array([False]), 'is_obj_placed': array([False]), 'is_robot_static': array([False]), 'is_grasped': array([False])}
reward 0.07041200250387192
terminated tensor([False])
truncated tensor([False])
info {'elapsed_steps': array([2], dtype=int32), 'success': array([False]), 'is_obj_placed': array([False]), 'is_robot_static': array([False]), 'is_grasped': array([False])}
[... identical per-step output for steps 3-49 trimmed ...]
reward 0.030697394162416458
terminated tensor([False])
truncated True
info {'elapsed_steps': array([50], dtype=int32), 'success': array([False]), 'is_obj_placed': array([False]), 'is_robot_static': array([False]), 'is_grasped': array([False])}
Traceback (most recent call last):
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/creativenick/Desktop/SimToReal/ManiSkill/mani_skill/examples/demo_random_action.py", line 98, in <module>
    main(parse_args())
  File "/home/creativenick/Desktop/SimToReal/ManiSkill/mani_skill/examples/demo_random_action.py", line 91, in main
    env.close()
  File "/home/creativenick/Desktop/SimToReal/ManiSkill/mani_skill/utils/wrappers/record.py", line 726, in close
    self.flush_video()
  File "/home/creativenick/Desktop/SimToReal/ManiSkill/mani_skill/utils/wrappers/record.py", line 698, in flush_video
    images_to_video(
  File "/home/creativenick/Desktop/SimToReal/ManiSkill/mani_skill/utils/visualization/misc.py", line 50, in images_to_video
    writer.append_data(im)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio/core/format.py", line 590, in append_data
    return self._append_data(im, total_meta)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio/plugins/ffmpeg.py", line 587, in _append_data
    self._initialize()
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio/plugins/ffmpeg.py", line 648, in _initialize
    self._write_gen.send(None)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_io.py", line 508, in write_frames
    codec = get_first_available_h264_encoder()
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_io.py", line 124, in get_first_available_h264_encoder
    compiled_encoders = get_compiled_h264_encoders()
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_io.py", line 58, in get_compiled_h264_encoders
    cmd = [get_ffmpeg_exe(), "-hide_banner", "-encoders"]
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_utils.py", line 28, in get_ffmpeg_exe
    exe = _get_ffmpeg_exe()
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_utils.py", line 44, in _get_ffmpeg_exe
    exe = os.path.join(_get_bin_dir(), FNAME_PER_PLATFORM.get(plat, ""))
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_utils.py", line 69, in _get_bin_dir
    ref = importlib.resources.files("imageio_ffmpeg.binaries") / "__init__.py"
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/importlib/resources.py", line 147, in files
    return _common.from_package(_get_package(package))
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/importlib/_common.py", line 14, in from_package
    return fallback_resources(package.__spec__)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/importlib/_common.py", line 18, in fallback_resources
    package_directory = pathlib.Path(spec.origin).parent
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/pathlib.py", line 1082, in __new__
    self = cls._from_parts(args, init=False)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/pathlib.py", line 707, in _from_parts
    drv, root, parts = self._parse_args(args)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/pathlib.py", line 691, in _parse_args
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
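
From the last few frames, `spec.origin` appears to be None for the `imageio_ffmpeg.binaries` package (my guess, not confirmed: the bundled ffmpeg binary wasn't installed correctly), and `pathlib.Path(None)` then raises exactly this kind of TypeError:

```python
import pathlib

# imageio_ffmpeg does pathlib.Path(spec.origin); when spec.origin is None
# this raises a TypeError (exact wording varies across Python versions).
try:
    pathlib.Path(None)
except TypeError as e:
    print(type(e).__name__, e)
```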
StoneT2000 commented 3 months ago

@CreativeNick can you share your full conda setup, the commands you ran to create it, and the output of pip list? If you did more than create the conda env and pip install mani-skill and torch, try a fresh conda environment with just those two pip installs.
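
One possible workaround in the meantime (a sketch, not something I've verified against your setup): imageio-ffmpeg honors an `IMAGEIO_FFMPEG_EXE` environment variable, so since your system ffmpeg works, you may be able to point imageio at it instead of the apparently missing bundled binary:

```shell
# Workaround sketch: tell imageio-ffmpeg to use the system ffmpeg binary
# instead of its bundled one (assumes ffmpeg is on your PATH).
export IMAGEIO_FFMPEG_EXE="$(command -v ffmpeg)"
echo "imageio-ffmpeg will use: $IMAGEIO_FFMPEG_EXE"
```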

CreativeNick commented 3 months ago

Here's how I set up the environment, along with its Python version:

creativenick@creativenick:~/Desktop/SimToReal/bimanual-sapien$ conda env list
# conda environments:
#
base                     /home/creativenick/anaconda3
ms_dev                   /home/creativenick/anaconda3/envs/ms_dev

creativenick@creativenick-ROG-Strix-G712LWS-G712LWS:~/Desktop/SimToReal/bimanual-sapien$ conda activate ms_dev
(ms_dev) creativenick@creativenick-ROG-Strix-G712LWS-G712LWS:~/Desktop/SimToReal/bimanual-sapien$ python --version
Python 3.9.18
(ms_dev) creativenick@creativenick-ROG-Strix-G712LWS-G712LWS:~/Desktop/SimToReal/bimanual-sapien$ which python
/home/creativenick/anaconda3/envs/ms_dev/bin/python

I've also pasted the environment.yml file for my current conda environment ms_dev below:

name: ms_dev
channels:
  - conda-forge
  - defaults
dependencies:
  - _libgcc_mutex=0.1=conda_forge
  - _openmp_mutex=4.5=2_gnu
  - abseil-cpp=20230802.0=h6a678d5_2
  - absl-py=2.1.0=pyhd8ed1ab_0
  - bzip2=1.0.8=hd590300_5
  - c-ares=1.28.1=hd590300_0
  - ca-certificates=2024.6.2=hbcca054_0
  - grpc-cpp=1.48.2=he1ff14a_4
  - grpcio=1.48.2=py39he1ff14a_4
  - gtest=1.14.0=hdb19cb5_1
  - importlib-metadata=7.1.0=pyha770c72_0
  - ld_impl_linux-64=2.38=h1181459_1
  - libblas=3.9.0=22_linux64_openblas
  - libcblas=3.9.0=22_linux64_openblas
  - libffi=3.4.2=h7f98852_5
  - libgcc-ng=13.2.0=h77fa898_7
  - libgfortran-ng=13.2.0=h69a702a_7
  - libgfortran5=13.2.0=hca663fb_7
  - libgomp=13.2.0=h77fa898_7
  - liblapack=3.9.0=22_linux64_openblas
  - libnsl=2.0.1=hd590300_0
  - libopenblas=0.3.27=pthreads_h413a1c8_0
  - libprotobuf=3.20.3=he621ea3_0
  - libsqlite=3.45.3=h2797004_0
  - libstdcxx-ng=11.2.0=h1234567_1
  - libuuid=2.38.1=h0b41bf4_0
  - libxcrypt=4.4.36=hd590300_1
  - libzlib=1.2.13=h4ab18f5_6
  - markdown=3.6=pyhd8ed1ab_0
  - markupsafe=2.1.5=py39hd1e30aa_0
  - ncurses=6.4=h6a678d5_0
  - openssl=3.3.0=h4ab18f5_3
  - pip=24.0=py39h06a4308_0
  - protobuf=3.20.3=py39h6a678d5_0
  - python=3.9.18=h0755675_1_cpython
  - python_abi=3.9=4_cp39
  - re2=2022.04.01=h27087fc_0
  - readline=8.2=h5eee18b_0
  - setuptools=69.5.1=py39h06a4308_0
  - six=1.16.0=pyh6c4a22f_0
  - sqlite=3.45.3=h5eee18b_0
  - tensorboard=2.16.2=pyhd8ed1ab_0
  - tensorboard-data-server=0.7.0=py39hd4f0224_1
  - tk=8.6.14=h39e8969_0
  - werkzeug=3.0.3=pyhd8ed1ab_0
  - wheel=0.43.0=py39h06a4308_0
  - xz=5.4.6=h5eee18b_1
  - zlib=1.2.13=h4ab18f5_6
  - pip:
      - asttokens==2.4.1
      - certifi==2024.2.2
      - cfgv==3.4.0
      - charset-normalizer==3.3.2
      - click==8.1.7
      - cloudpickle==3.0.0
      - contourpy==1.2.1
      - cycler==0.12.1
      - dacite==1.8.1
      - decorator==5.1.1
      - distlib==0.3.8
      - docker-pycreds==0.4.0
      - docstring-parser==0.16
      - eval-type-backport==0.2.0
      - exceptiongroup==1.2.1
      - executing==2.0.1
      - farama-notifications==0.0.4
      - fast-kinematics==0.2.2
      - ffmpeg==1.4
      - filelock==3.14.0
      - fonttools==4.53.0
      - fsspec==2024.5.0
      - gitdb==4.0.11
      - gitpython==3.1.43
      - gymnasium==0.29.1
      - h5py==3.11.0
      - huggingface-hub==0.23.2
      - identify==2.5.36
      - idna==3.7
      - imageio==2.34.2
      - imageio-ffmpeg==0.5.0
      - importlib-resources==6.4.0
      - ipython==8.18.1
      - jedi==0.19.1
      - jinja2==3.1.4
      - kiwisolver==1.4.5
      - lxml==5.2.2
      - mani-skill==3.0.0b4
      - markdown-it-py==3.0.0
      - matplotlib==3.9.0
      - matplotlib-inline==0.1.7
      - mdurl==0.1.2
      - mplib==0.1.1
      - mpmath==1.3.0
      - networkx==3.2.1
      - nodeenv==1.9.0
      - numpy==1.26.4
      - nvidia-cublas-cu12==12.1.3.1
      - nvidia-cuda-cupti-cu12==12.1.105
      - nvidia-cuda-nvrtc-cu12==12.1.105
      - nvidia-cuda-runtime-cu12==12.1.105
      - nvidia-cudnn-cu12==8.9.2.26
      - nvidia-cufft-cu12==11.0.2.54
      - nvidia-curand-cu12==10.3.2.106
      - nvidia-cusolver-cu12==11.4.5.107
      - nvidia-cusparse-cu12==12.1.0.106
      - nvidia-nccl-cu12==2.20.5
      - nvidia-nvjitlink-cu12==12.5.40
      - nvidia-nvtx-cu12==12.1.105
      - opencv-python==4.9.0.80
      - packaging==24.0
      - pandas==2.2.2
      - parso==0.8.4
      - pexpect==4.9.0
      - pillow==10.3.0
      - platformdirs==4.2.2
      - pre-commit==3.7.1
      - prompt-toolkit==3.0.45
      - psutil==5.9.8
      - ptyprocess==0.7.0
      - pure-eval==0.2.2
      - pygments==2.18.0
      - pyparsing==3.1.2
      - pyperclip==1.8.2
      - python-dateutil==2.9.0.post0
      - pytz==2024.1
      - pyyaml==6.0.1
      - requests==2.32.3
      - rich==13.7.1
      - rtree==1.2.0
      - sapien==3.0.0b1
      - scipy==1.13.1
      - sentry-sdk==2.3.1
      - setproctitle==1.3.3
      - shtab==1.7.1
      - smmap==5.0.1
      - stack-data==0.6.3
      - sympy==1.12.1
      - tabulate==0.9.0
      - toppra==0.6.0
      - torch==2.3.0
      - torch-tb-profiler==0.4.3
      - torchaudio==2.3.0
      - torchvision==0.18.0
      - tqdm==4.66.4
      - traitlets==5.14.3
      - transforms3d==0.4.1
      - trimesh==4.4.0
      - triton==2.3.0
      - typing-extensions==4.12.1
      - tyro==0.8.4
      - tzdata==2024.1
      - urllib3==2.2.1
      - virtualenv==20.26.2
      - wandb==0.17.0
      - wcwidth==0.2.13
      - zipp==3.19.1
prefix: /home/creativenick/anaconda3/envs/ms_dev

I created a new environment called ms_env and was able to run your script python -m mani_skill.examples.demo_random_action -e "PickCube-v1" --render-mode="rgb_array" --record-dir="videos". I also tried a different environment, python -m mani_skill.examples.demo_random_action -e "PickSingleYCB-v1" --render-mode="rgb_array" --record-dir="videos", which also ran successfully.

Here's the environment.yml file for ms_env:

name: ms_env
channels:
  - defaults
dependencies:
  - _libgcc_mutex=0.1=main
  - _openmp_mutex=5.1=1_gnu
  - ca-certificates=2024.3.11=h06a4308_0
  - ld_impl_linux-64=2.38=h1181459_1
  - libffi=3.3=he6710b0_2
  - libgcc-ng=11.2.0=h1234567_1
  - libgomp=11.2.0=h1234567_1
  - libstdcxx-ng=11.2.0=h1234567_1
  - ncurses=6.4=h6a678d5_0
  - openssl=1.1.1w=h7f8727e_0
  - pip=24.0=py39h06a4308_0
  - python=3.9.0=hdb3f193_2
  - readline=8.2=h5eee18b_0
  - setuptools=69.5.1=py39h06a4308_0
  - sqlite=3.45.3=h5eee18b_0
  - tk=8.6.14=h39e8969_0
  - tzdata=2024a=h04d1e81_0
  - wheel=0.43.0=py39h06a4308_0
  - xz=5.4.6=h5eee18b_1
  - zlib=1.2.13=h5eee18b_1
  - pip:
      - absl-py==2.1.0
      - asttokens==2.4.1
      - certifi==2024.6.2
      - charset-normalizer==3.3.2
      - cloudpickle==3.0.0
      - contourpy==1.2.1
      - cycler==0.12.1
      - dacite==1.8.1
      - decorator==5.1.1
      - docstring-parser==0.16
      - eval-type-backport==0.2.0
      - exceptiongroup==1.2.1
      - executing==2.0.1
      - farama-notifications==0.0.4
      - fast-kinematics==0.2.2
      - filelock==3.15.4
      - fonttools==4.53.0
      - fsspec==2024.6.1
      - gitdb==4.0.11
      - gitpython==3.1.43
      - grpcio==1.64.1
      - gymnasium==0.29.1
      - h5py==3.11.0
      - huggingface-hub==0.23.4
      - idna==3.7
      - imageio==2.34.2
      - imageio-ffmpeg==0.5.1
      - importlib-metadata==8.0.0
      - importlib-resources==6.4.0
      - ipython==8.18.1
      - jedi==0.19.1
      - jinja2==3.1.4
      - kiwisolver==1.4.5
      - lxml==5.2.2
      - mani-skill==3.0.0b4
      - markdown==3.6
      - markdown-it-py==3.0.0
      - markupsafe==2.1.5
      - matplotlib==3.9.0
      - matplotlib-inline==0.1.7
      - mdurl==0.1.2
      - mplib==0.1.1
      - mpmath==1.3.0
      - networkx==3.2.1
      - numpy==2.0.0
      - nvidia-cublas-cu12==12.1.3.1
      - nvidia-cuda-cupti-cu12==12.1.105
      - nvidia-cuda-nvrtc-cu12==12.1.105
      - nvidia-cuda-runtime-cu12==12.1.105
      - nvidia-cudnn-cu12==8.9.2.26
      - nvidia-cufft-cu12==11.0.2.54
      - nvidia-curand-cu12==10.3.2.106
      - nvidia-cusolver-cu12==11.4.5.107
      - nvidia-cusparse-cu12==12.1.0.106
      - nvidia-nccl-cu12==2.20.5
      - nvidia-nvjitlink-cu12==12.5.82
      - nvidia-nvtx-cu12==12.1.105
      - opencv-python==4.10.0.84
      - packaging==24.1
      - parso==0.8.4
      - pexpect==4.9.0
      - pillow==10.4.0
      - prompt-toolkit==3.0.47
      - protobuf==4.25.3
      - psutil==6.0.0
      - ptyprocess==0.7.0
      - pure-eval==0.2.2
      - pygments==2.18.0
      - pyparsing==3.1.2
      - pyperclip==1.9.0
      - python-dateutil==2.9.0.post0
      - pyyaml==6.0.1
      - requests==2.32.3
      - rich==13.7.1
      - rtree==1.2.0
      - sapien==3.0.0b1
      - scipy==1.13.1
      - shtab==1.7.1
      - six==1.16.0
      - smmap==5.0.1
      - stack-data==0.6.3
      - sympy==1.12.1
      - tabulate==0.9.0
      - tensorboard==2.17.0
      - tensorboard-data-server==0.7.2
      - toppra==0.6.0
      - torch==2.3.1
      - tqdm==4.66.4
      - traitlets==5.14.3
      - transforms3d==0.4.2
      - trimesh==4.4.1
      - triton==2.3.1
      - typing-extensions==4.12.2
      - tyro==0.8.5
      - urllib3==2.2.2
      - wcwidth==0.2.13
      - werkzeug==3.0.3
      - zipp==3.19.2
prefix: /home/creativenick/anaconda3/envs/ms_env

@StoneT2000 However, I am encountering this error when trying to capture a video for a custom environment. My custom environment has the env-id "Bimanual_Allegro_Cube", but I still receive the same ValueError. I'm not sure if this is because of the way I'm running ppo.py in the terminal or because of the environment itself.

Edit: It seems that no environment works. I tried PickSingleYCB-v1 and PickClutterYCB-v1, and both resulted in the same error, so I don't think it has to do with the environment.

Here's the error output:

(ms_env) creativenick@creativenick:~/Desktop/SimToReal/bimanual-sapien$ python ppo.py --env-id Bimanual_Allegro_Cube --capture-video
/home/creativenick/anaconda3/envs/ms_env/lib/python3.9/site-packages/tyro/_fields.py:330: UserWarning: The field wandb_entity is annotated with type <class 'str'>, but the default value None has type <class 'NoneType'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
  warnings.warn(
/home/creativenick/anaconda3/envs/ms_env/lib/python3.9/site-packages/tyro/_fields.py:330: UserWarning: The field checkpoint is annotated with type <class 'str'>, but the default value None has type <class 'NoneType'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
  warnings.warn(
Running training
Saving eval videos to runs/Bimanual_Allegro_Cube__ppo__1__1719896744/videos
/home/creativenick/anaconda3/envs/ms_env/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.max_episode_steps to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.max_episode_steps` for environment variables or `env.get_wrapper_attr('max_episode_steps')` that will search the reminding wrappers.
  logger.warn(
/home/creativenick/anaconda3/envs/ms_env/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.single_observation_space to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.single_observation_space` for environment variables or `env.get_wrapper_attr('single_observation_space')` that will search the reminding wrappers.
  logger.warn(
/home/creativenick/anaconda3/envs/ms_env/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.single_action_space to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.single_action_space` for environment variables or `env.get_wrapper_attr('single_action_space')` that will search the reminding wrappers.
  logger.warn(
####
args.num_iterations=97 args.num_envs=512 args.num_eval_envs=2
args.minibatch_size=3200 args.batch_size=102400 args.update_epochs=4
####
Epoch: 1, global_step=0
Evaluating
Traceback (most recent call last):
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/ppo.py", line 334, in <module>
    eval_envs.step(agent.get_action(eval_obs, deterministic=True))
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/vector/wrappers/gymnasium.py", line 96, in step
    obs, rew, terminations, truncations, infos = self._env.step(actions)
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/wrappers/record.py", line 492, in step
    self.flush_video()
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/wrappers/record.py", line 698, in flush_video
    images_to_video(
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/visualization/misc.py", line 50, in images_to_video
    writer.append_data(im)
  File "/home/creativenick/anaconda3/envs/ms_env/lib/python3.9/site-packages/imageio/core/format.py", line 590, in append_data
    return self._append_data(im, total_meta)
  File "/home/creativenick/anaconda3/envs/ms_env/lib/python3.9/site-packages/imageio/plugins/ffmpeg.py", line 565, in _append_data
    h, w = im.shape[:2]
ValueError: not enough values to unpack (expected 2, got 0)
StoneT2000 commented 3 months ago

Great, so your renderer works. Can you share your environment code here? Do the other PPO commands in the README work?

CreativeNick commented 3 months ago

The PickSingleYCB-v1 and PickClutterYCB-v1 environments are from the mani_skill folder.

My custom "Bimanual_Allegro_Cube" environment can be found in my SimToReal repo.

I tested the other PPO commands in the README, and they only work if I add --no-capture-video. (Just to make sure: when you say README, you're referring to the baseline PPO README.md, right?)

StoneT2000 commented 3 months ago

Your custom task will probably not generate videos because no sensors/cameras have been configured. See the PushCube task in this repo for an example of how to add cameras to render/record videos.

For the other issue, could you edit /home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/visualization/misc.py, add print(type(im), im) right before the line that says writer.append_data(im), and share the output?

Also it seems you are not using this repository's PPO code. Could you use https://github.com/haosulab/ManiSkill/blob/main/examples/baselines/ppo/ppo.py instead? (It might be the same code but just to check).

An example command to test is e.g.

```
python ppo.py --env_id="PushCube-v1"   --num_envs=2048 --update_epochs=8 --num_minibatches=32   --total_timesteps=2_000_000 --eval_freq=10 --num-steps=20
```

from the README you referenced.
CreativeNick commented 3 months ago

When I run the example command, after adding print(type(im), im) and using this repository's PPO code, I now get a TypeError from imageio_ffmpeg:

(ms_dev) creativenick@creativenick:~/Desktop/SimToReal/bimanual-sapien$ python ppo.py --env_id="PushCube-v1"   --num_envs=2048 --update_epochs=8 --num_minibatches=32   --total_timesteps=2_000_000 --eval_freq=10 --num-steps=20
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/tyro/_fields.py:343: UserWarning: The field wandb_entity is annotated with type <class 'str'>, but the default value None has type <class 'NoneType'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
  warnings.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/tyro/_fields.py:343: UserWarning: The field checkpoint is annotated with type <class 'str'>, but the default value None has type <class 'NoneType'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
  warnings.warn(
Running training
Saving eval videos to runs/PushCube-v1__ppo__1__1720038622/videos
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.max_episode_steps to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.max_episode_steps` for environment variables or `env.get_wrapper_attr('max_episode_steps')` that will search the reminding wrappers.
  logger.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.single_observation_space to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.single_observation_space` for environment variables or `env.get_wrapper_attr('single_observation_space')` that will search the reminding wrappers.
  logger.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.single_action_space to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.single_action_space` for environment variables or `env.get_wrapper_attr('single_action_space')` that will search the reminding wrappers.
  logger.warn(
####
args.num_iterations=48 args.num_envs=2048 args.num_eval_envs=8
args.minibatch_size=1280 args.batch_size=40960 args.update_epochs=8
####
Epoch: 1, global_step=0
Evaluating
<class 'numpy.ndarray'> [[[  0   0   0]
  [  0   0   0]
  [  0   0   0]
  ...
  [  0   0   0]
  [  0   0   0]
  [  0   0   0]]

 [[  0   0   0]
  [  0   0   0]
  [  0   0   0]
  ...
  [  0   0   0]
  [  0   0   0]
  [  0   0   0]]

 [[  0   0   0]
  [  0   0   0]
  [  0   0   0]
  ...
  [  0   0   0]
  [  0   0   0]
  [  0   0   0]]

 ...

 [[131 120 114]
  [128 118 111]
  [127 116 111]
  ...
  [145  86  53]
  [167  99  61]
  [149  88  56]]

 [[128 117 111]
  [131 119 114]
  [131 120 115]
  ...
  [143  85  55]
  [141  84  52]
  [173 101  64]]

 [[125 113 108]
  [130 119 113]
  [129 118 112]
  ...
  [196 113  67]
  [169  98  59]
  [140  83  52]]]
Traceback (most recent call last):
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/ppo.py", line 261, in <module>
    eval_obs, _, eval_terminations, eval_truncations, eval_infos = eval_envs.step(agent.get_action(eval_obs, deterministic=True))
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/vector/wrappers/gymnasium.py", line 96, in step
    obs, rew, terminations, truncations, infos = self._env.step(actions)
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/wrappers/record.py", line 492, in step
    self.flush_video()
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/wrappers/record.py", line 698, in flush_video
    images_to_video(
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/visualization/misc.py", line 51, in images_to_video
    writer.append_data(im)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio/core/format.py", line 590, in append_data
    return self._append_data(im, total_meta)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio/plugins/ffmpeg.py", line 587, in _append_data
    self._initialize()
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio/plugins/ffmpeg.py", line 648, in _initialize
    self._write_gen.send(None)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_io.py", line 508, in write_frames
    codec = get_first_available_h264_encoder()
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_io.py", line 124, in get_first_available_h264_encoder
    compiled_encoders = get_compiled_h264_encoders()
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_io.py", line 58, in get_compiled_h264_encoders
    cmd = [get_ffmpeg_exe(), "-hide_banner", "-encoders"]
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_utils.py", line 28, in get_ffmpeg_exe
    exe = _get_ffmpeg_exe()
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_utils.py", line 44, in _get_ffmpeg_exe
    exe = os.path.join(_get_bin_dir(), FNAME_PER_PLATFORM.get(plat, ""))
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_utils.py", line 69, in _get_bin_dir
    ref = importlib.resources.files("imageio_ffmpeg.binaries") / "__init__.py"
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/importlib/resources.py", line 147, in files
    return _common.from_package(_get_package(package))
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/importlib/_common.py", line 14, in from_package
    return fallback_resources(package.__spec__)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/importlib/_common.py", line 18, in fallback_resources
    package_directory = pathlib.Path(spec.origin).parent
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/pathlib.py", line 1082, in __new__
    self = cls._from_parts(args, init=False)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/pathlib.py", line 707, in _from_parts
    drv, root, parts = self._parse_args(args)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/pathlib.py", line 691, in _parse_args
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType

When I run the example command using my own PPO code, I get the same ValueError: not enough values to unpack (expected 2, got 0) as shown below, because in my PPO file I set render_mode="human" instead of render_mode="rgb_array" like the repo's PPO file does. Note: when I set render_mode="rgb_array" in my PPO code, I end up with exactly the same TypeError output as above.

(ms_dev) creativenick@creativenick:~/Desktop/SimToReal/bimanual-sapien$ python ppo.py --env_id="PushCube-v1"   --num_envs=2048 --update_epochs=8 --num_minibatches=32   --total_timesteps=2_000_000 --eval_freq=10 --num-steps=20
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/tyro/_fields.py:343: UserWarning: The field wandb_entity is annotated with type <class 'str'>, but the default value None has type <class 'NoneType'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
  warnings.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/tyro/_fields.py:343: UserWarning: The field checkpoint is annotated with type <class 'str'>, but the default value None has type <class 'NoneType'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
  warnings.warn(
Running training
Saving eval videos to runs/PushCube-v1__ppo__1__1720039209/videos
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.max_episode_steps to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.max_episode_steps` for environment variables or `env.get_wrapper_attr('max_episode_steps')` that will search the reminding wrappers.
  logger.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.single_observation_space to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.single_observation_space` for environment variables or `env.get_wrapper_attr('single_observation_space')` that will search the reminding wrappers.
  logger.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.single_action_space to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.single_action_space` for environment variables or `env.get_wrapper_attr('single_action_space')` that will search the reminding wrappers.
  logger.warn(
####
args.num_iterations=48 args.num_envs=2048 args.num_eval_envs=2
args.minibatch_size=1280 args.batch_size=40960 args.update_epochs=8
####
Epoch: 1, global_step=0
Evaluating
<class 'numpy.ndarray'> <sapien.utils.viewer.viewer.Viewer object at 0x719393fc7430>
Traceback (most recent call last):
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/ppo.py", line 334, in <module>
    eval_envs.step(agent.get_action(eval_obs, deterministic=True))
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/vector/wrappers/gymnasium.py", line 96, in step
    obs, rew, terminations, truncations, infos = self._env.step(actions)
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/wrappers/record.py", line 492, in step
    self.flush_video()
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/wrappers/record.py", line 698, in flush_video
    images_to_video(
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/visualization/misc.py", line 51, in images_to_video
    writer.append_data(im)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio/core/format.py", line 590, in append_data
    return self._append_data(im, total_meta)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio/plugins/ffmpeg.py", line 565, in _append_data
    h, w = im.shape[:2]
ValueError: not enough values to unpack (expected 2, got 0)
StoneT2000 commented 3 months ago

render_mode="human" just opens the GUI, so that shouldn't save videos. The RecordEpisode wrapper, which records videos, calls env.render(); that returns an image when render_mode="rgb_array" and something different when render_mode="human". So stick with render_mode="rgb_array" for now.
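A minimal sketch (my own illustration, not code from this thread; FakeViewer is a hypothetical stand-in for sapien's Viewer) of why "not enough values to unpack" shows up under render_mode="human": numpy wraps an arbitrary non-array object as a 0-dimensional array, whose empty shape tuple cannot be unpacked into h, w the way imageio's ffmpeg writer expects:

```python
import numpy as np

class FakeViewer:
    """Stand-in for the sapien Viewer object returned by env.render()."""
    pass

# images_to_video ends up wrapping the Viewer as a 0-d object array
im = np.asarray(FakeViewer())
print(im.shape)  # → ()

try:
    # this is what imageio's ffmpeg plugin does with each frame
    h, w = im.shape[:2]
except ValueError as e:
    print(e)  # → not enough values to unpack (expected 2, got 0)
```

This matches the print(type(im), im) output above: the type is numpy.ndarray, but the repr is the Viewer object, i.e. a 0-d array wrapping it.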

Can you try running sudo apt-get install ffmpeg instead? I'm not sure the pip-installed version is the correct one. Install via apt-get, then run the PPO code with the original render_mode="rgb_array" setup.

CreativeNick commented 3 months ago

Hmm, I still seem to be running into the same error even after installing ffmpeg via apt-get:

(ms_dev) creativenick@creativenick:~/Desktop/SimToReal/bimanual-sapien$ sudo apt-get install ffmpeg
[sudo] password for creativenick: 
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ffmpeg is already the newest version (7:6.1.1-3ubuntu5).
The following packages were automatically installed and are no longer required:
  gstreamer1.0-libcamera libcamera0.2 libei1 libfreerdp-server3-3 liblttng-ust-common1t64 liblttng-ust-ctl5t64 liblttng-ust1t64 libtss2-tcti-libtpms0t64 libtss2-tcti-spi-helper0t64
  libtss2-tctildr0t64
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 7 not upgraded.
(ms_dev) creativenick@creativenick:~/Desktop/SimToReal/bimanual-sapien$ python ppo.py --env_id="PushCube-v1"   --num_envs=2048 --update_epochs=8 --num_minibatches=32   --total_timesteps=2_000_000 --eval_freq=10 --num-steps=20
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/tyro/_fields.py:343: UserWarning: The field wandb_entity is annotated with type <class 'str'>, but the default value None has type <class 'NoneType'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
  warnings.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/tyro/_fields.py:343: UserWarning: The field checkpoint is annotated with type <class 'str'>, but the default value None has type <class 'NoneType'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
  warnings.warn(
Running training
Saving eval videos to runs/PushCube-v1__ppo__1__1720050010/videos
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.max_episode_steps to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.max_episode_steps` for environment variables or `env.get_wrapper_attr('max_episode_steps')` that will search the reminding wrappers.
  logger.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.single_observation_space to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.single_observation_space` for environment variables or `env.get_wrapper_attr('single_observation_space')` that will search the reminding wrappers.
  logger.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.single_action_space to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.single_action_space` for environment variables or `env.get_wrapper_attr('single_action_space')` that will search the reminding wrappers.
  logger.warn(
####
args.num_iterations=48 args.num_envs=2048 args.num_eval_envs=8
args.minibatch_size=1280 args.batch_size=40960 args.update_epochs=8
####
Epoch: 1, global_step=0
Evaluating
<class 'numpy.ndarray'> [[[  0   0   0]
  [  0   0   0]
  [  0   0   0]
  ...
  [  0   0   0]
  [  0   0   0]
  [  0   0   0]]

 [[  0   0   0]
  [  0   0   0]
  [  0   0   0]
  ...
  [  0   0   0]
  [  0   0   0]
  [  0   0   0]]

 [[  0   0   0]
  [  0   0   0]
  [  0   0   0]
  ...
  [  0   0   0]
  [  0   0   0]
  [  0   0   0]]

 ...

 [[131 120 114]
  [128 118 111]
  [127 116 111]
  ...
  [145  86  53]
  [167  99  61]
  [149  88  56]]

 [[128 117 111]
  [131 119 114]
  [131 120 115]
  ...
  [143  85  55]
  [141  84  52]
  [173 101  64]]

 [[125 113 108]
  [130 119 113]
  [129 118 112]
  ...
  [196 113  67]
  [169  98  59]
  [140  83  52]]]
Traceback (most recent call last):
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/ppo.py", line 261, in <module>
    eval_obs, _, eval_terminations, eval_truncations, eval_infos = eval_envs.step(agent.get_action(eval_obs, deterministic=True))
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/vector/wrappers/gymnasium.py", line 96, in step
    obs, rew, terminations, truncations, infos = self._env.step(actions)
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/wrappers/record.py", line 492, in step
    self.flush_video()
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/wrappers/record.py", line 698, in flush_video
    images_to_video(
  File "/home/creativenick/Desktop/SimToReal/bimanual-sapien/mani_skill/utils/visualization/misc.py", line 51, in images_to_video
    writer.append_data(im)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio/core/format.py", line 590, in append_data
    return self._append_data(im, total_meta)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio/plugins/ffmpeg.py", line 587, in _append_data
    self._initialize()
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio/plugins/ffmpeg.py", line 648, in _initialize
    self._write_gen.send(None)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_io.py", line 508, in write_frames
    codec = get_first_available_h264_encoder()
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_io.py", line 124, in get_first_available_h264_encoder
    compiled_encoders = get_compiled_h264_encoders()
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_io.py", line 58, in get_compiled_h264_encoders
    cmd = [get_ffmpeg_exe(), "-hide_banner", "-encoders"]
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_utils.py", line 28, in get_ffmpeg_exe
    exe = _get_ffmpeg_exe()
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_utils.py", line 44, in _get_ffmpeg_exe
    exe = os.path.join(_get_bin_dir(), FNAME_PER_PLATFORM.get(plat, ""))
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/_utils.py", line 69, in _get_bin_dir
    ref = importlib.resources.files("imageio_ffmpeg.binaries") / "__init__.py"
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/importlib/resources.py", line 147, in files
    return _common.from_package(_get_package(package))
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/importlib/_common.py", line 14, in from_package
    return fallback_resources(package.__spec__)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/importlib/_common.py", line 18, in fallback_resources
    package_directory = pathlib.Path(spec.origin).parent
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/pathlib.py", line 1082, in __new__
    self = cls._from_parts(args, init=False)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/pathlib.py", line 707, in _from_parts
    drv, root, parts = self._parse_args(args)
  File "/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/pathlib.py", line 691, in _parse_args
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
StoneT2000 commented 3 months ago

Can you try

pip uninstall "imageio-ffmpeg" "imageio"
pip install "imageio-ffmpeg"=="0.4.9" imageio=="2.34.0"

These are versions I am using that work for me.

This error has also cropped up in another package before, as seen here: https://github.com/Zulko/moviepy/issues/2189. It originates from being unable to locate the ffmpeg executable file. For me it is here: "/home/stao/mambaforge/envs/ms3-dev/lib/python3.11/site-packages/imageio_ffmpeg/binaries/ffmpeg-linux64-v4.2.2"
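A stdlib-only sketch (my illustration of the mechanism, not imageio-ffmpeg's actual code; the package name is made up) of why the traceback ends in that TypeError: a package directory with no __init__.py — such as an imageio_ffmpeg/binaries directory whose binaries were never installed — is treated as a namespace package, whose spec.origin is None, and pathlib.Path(None) raises exactly this error:

```python
import importlib.util
import os
import pathlib
import sys
import tempfile

# Create a directory on sys.path containing a package dir WITHOUT __init__.py
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "demo_ns_binaries"))
sys.path.insert(0, tmp)

spec = importlib.util.find_spec("demo_ns_binaries")
print(spec.origin)  # → None (namespace packages have no origin file)

try:
    # imageio_ffmpeg does pathlib.Path(spec.origin).parent to find its binaries
    pathlib.Path(spec.origin)
except TypeError as e:
    print(e)  # the "expected str, bytes or os.PathLike object, not NoneType" error
```

So the pinned versions work because they ship the binaries (and the package metadata) in a layout where spec.origin resolves to a real file.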

After installing, run it again. If it does not work, can you run

ls /home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/imageio_ffmpeg/binaries

and share the output

CreativeNick commented 3 months ago

Can you try

pip uninstall "imageio-ffmpeg" "imageio"
pip install "imageio-ffmpeg"=="0.4.9" imageio=="2.34.0"

This fixed the issue! I was able to evaluate it as well. Thank you!

(ms_dev) creativenick@creativenick:~/Desktop/SimToReal/bimanual-sapien$ python ppo.py --env_id="PushCube-v1" --evaluate --checkpoint=./runs/PushCube-v1__ppo
__1__1720065605/final_ckpt.pt --num_eval_envs=1 --num-eval-steps=1000
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/tyro/_fields.py:343: UserWarning: The field wandb_entity is annotated with type <class 'str'>, but the default value None has type <class 'NoneType'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
  warnings.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/tyro/_fields.py:343: UserWarning: The field checkpoint is annotated with type <class 'str'>, but the default value None has type <class 'NoneType'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
  warnings.warn(
Running evaluation
Saving eval videos to ./runs/PushCube-v1__ppo__1__1720065605/test_videos
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.max_episode_steps to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.max_episode_steps` for environment variables or `env.get_wrapper_attr('max_episode_steps')` that will search the reminding wrappers.
  logger.warn(
2024-07-03 22:00:00,879 - mani_skill  - WARNING - mani_skill is not installed with git.
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.single_observation_space to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.single_observation_space` for environment variables or `env.get_wrapper_attr('single_observation_space')` that will search the reminding wrappers.
  logger.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.single_action_space to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.single_action_space` for environment variables or `env.get_wrapper_attr('single_action_space')` that will search the reminding wrappers.
  logger.warn(
/home/creativenick/anaconda3/envs/ms_dev/lib/python3.9/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.max_episode_steps to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.max_episode_steps` for environment variables or `env.get_wrapper_attr('max_episode_steps')` that will search the reminding wrappers.
  logger.warn(
####
args.num_iterations=390 args.num_envs=512 args.num_eval_envs=1
args.minibatch_size=800 args.batch_size=25600 args.update_epochs=4
####
Epoch: 1, global_step=0
Evaluating
Evaluated 1000 steps resulting in 77 episodes
eval_success_rate=1.0
eval_episodic_return=4.722284317016602