Large-Trajectory-Model / ATM

Official codebase for "Any-point Trajectory Modeling for Policy Learning"
https://xingyu-lin.github.io/atm/
MIT License

problem about training the track transformer #4

Closed KaiTang98 closed 4 months ago

KaiTang98 commented 5 months ago

Hello, thank you for presenting this interesting work. I am attempting to reproduce the training based on the Libero dataset.

However, I've noticed that when training the track transformer, the `train_dataset_list` and `val1_dataset_list` obtained through:

```python
train_dataset_list = glob(os.path.join(root_dir, f"{suite_name}/*/train/*"))
val1_dataset_list = glob(os.path.join(root_dir, f"{suite_name}/*/val/*"))
```

are empty. I'm wondering whether the data preprocessing stage requires additional processing beyond what `preprocess_libero.py` produces? Many thanks!
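For context, `glob` returns an empty list rather than raising an error when a pattern matches nothing, so a missing `train/` or `val/` split fails silently. A minimal check (the paths here are illustrative, adjust to your setup):

```python
import os
from glob import glob

root_dir = "data/atm_libero"   # illustrative root, adjust to your setup
suite_name = "libero_goal"

# glob returns [] instead of raising when nothing matches the pattern,
# so an absent train/ split produces an empty dataset list with no error.
train_dataset_list = glob(os.path.join(root_dir, f"{suite_name}/*/train/*"))
if not train_dataset_list:
    print(f"No train splits found under {root_dir}/{suite_name} -- "
          "run the dataset split step first.")
```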


KaiTang98 commented 5 months ago

I checked the dataset inside `libero_goal` as an example. The dataset structure is as follows:

```
libero_goal/
└── open_the_middle_drawer..../
    ├── images/
    ├── videos/
    ├── demo_0.hdf5
    ├── demo_1.hdf5
    ├── ....hdf5
    └── env_meta.json
```

KaiTang98 commented 5 months ago

I realized that I just need to create these two folders by myself... Now I can train the model... 😂😂

caichuang0415 commented 5 months ago

> I realized that I just need to create these two folders by myself... Now I can train the model... 😂😂

Thank you very much! I have just run into this problem as well 😂😂

yolo01826 commented 4 months ago

@KaiTang98 How did the training turn out? Can the model complete the tasks?

ahadjawaid commented 4 months ago

I found the script you are supposed to run; here is the command:

```
python3 scripts/split_libero_dataset.py --folder data/atm_libero --train_ratio 0.9
```

AlvinWen428 commented 4 months ago

Thank you very much for pointing this out. The "train" and "val" folders are generated by `scripts/split_libero_dataset.py`. Sorry for missing this important step in the README; I have updated it.