Closed xavihart closed 8 months ago
Hi @xavihart, I'm not quite sure I understand your question. The native observation space for the kitchen task is 60-dimensional (though many of these dimensions are empty), and the dataset is directly extracted from *.mjl files published in the original relay-policy-learning repo inside KitchenMjlLowdimDataset.
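For reference, here is a minimal sketch of how a single 60-dim kitchen observation is commonly decomposed in the relay-policy-learning / Franka kitchen convention (9 robot joint positions, 21 object joint positions, and a 30-dim goal state). This split is an assumption for illustration, not taken from this repo's code:

```python
import numpy as np

# Hypothetical N-step observation sequence of 60-dim state vectors.
N = 5
obs = np.zeros((N, 60))

# Assumed layout (relay-policy-learning convention, not verified against this repo):
#   [0:9]   robot joint positions (7 arm joints + 2 gripper fingers)
#   [9:30]  object joint positions (microwave, kettle, burners, ...)
#   [30:60] 30-dim goal state
robot_qpos = obs[:, :9]
object_qpos = obs[:, 9:30]
goal = obs[:, 30:60]

print(robot_qpos.shape, object_qpos.shape, goal.shape)
```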
I got it, so it is not vision-based?
Yes, we only have a state-based version of the kitchen task in this repo.
In the dataset, the observation sequence is sized N*60; are there any code snippets to generate the low-dim embeddings?