liruiw / HPT

Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner.
https://liruiw.github.io/hpt
MIT License

Issue about running HPT on the Simpler Benchmark #15

KaiLiu18 closed this issue 6 days ago

KaiLiu18 commented 1 week ago

Thank you for your commendable work! Recently, I have been attempting to finetune HPT on the RT-1-X supervised datasets as described in your paper. However, the finetuned model didn't perform as well as expected. Could you please provide more detailed information about finetuning HPT on the RT-1-X supervised datasets, in particular how to correctly generate and prepare the training data?

liruiw commented 6 days ago

Thanks for asking this. I only tried HPT on the Google Robot tasks in the Simpler Benchmark. The generated data includes proprioception inputs (see this PR).
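For reference, a minimal sketch of what a training sample with proprioception inputs might look like. The field names (`"image"`, `"state"`, `"action"`) and the shapes are illustrative assumptions, not HPT's actual data schema; check the linked PR for the real format.

```python
import numpy as np

def make_sample():
    """Build one hypothetical training sample: camera image plus
    proprioceptive state as observations, and a robot action label.
    Shapes and key names are assumptions for illustration only."""
    return {
        "observation": {
            "image": np.zeros((224, 224, 3), dtype=np.uint8),   # RGB camera frame
            "state": np.zeros(7, dtype=np.float32),             # proprioception, e.g. joint angles
        },
        "action": np.zeros(7, dtype=np.float32),                # supervised action target
    }

sample = make_sample()
print(sample["observation"]["state"].shape)
```

The key point is that the proprioceptive `"state"` vector is stored alongside the image in each observation, so the policy's tokenizer can consume both modalities.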