Closed: ezhang7423 closed this issue 2 years ago
If this does sound useful, could I get some advice on the cleanest/simplest way to implement this in the existing framework? My current code is a bit hacky...
Not sure if this is what you mean, but we provide an option to evaluate single tasks without resetting the robot to a neutral position. https://github.com/mees/calvin#multi-task-language-control-mtlc
I'm talking about training on a single task.
Here you have an example of using task indicators to learn single task rl policies https://github.com/mees/calvin#reinforcement-learning-with-calvin
Sorry, let me be more clear. I specifically mean offline imitation learning using a subset of the provided dataset filtered down to a single task. In essence, breaking up the existing datasets (such as Task D->D) into the individual tasks they consist of.
You could try to proceed as follows:

1. Copy the `lang_ann.yaml` hydra config to two new files in this folder which contain only the task that you are interested in.
2. This will create a new `auto_lang_ann.npy` in the lang folder that you specified in the config.
3. However, since you are not interested in language for a single task, you can create a new `ep_start_end_ids.npy` which you can use to train on a single task:

```python
import numpy as np

auto_lang_ann = np.load("auto_lang_ann.npy", allow_pickle=True).item()
ep_start_end_ids = auto_lang_ann["info"]["indx"]
```

4. Finally, train with the vision-only datasets by setting `datamodule/datasets=vision_only`.
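The steps above could be sketched roughly as follows. Note this is an untested sketch, not code from the CALVIN repo: the dictionary layout (`"language"`/`"task"` and `"info"`/`"indx"` keys) is inferred from the snippet above, and the task name, file paths, and the mock annotation file are placeholders for illustration.

```python
import numpy as np

# Build a tiny mock annotation file with the assumed structure of
# auto_lang_ann.npy, so the sketch is self-contained. In practice you
# would load the file produced for your dataset instead.
mock_ann = {
    "language": {"task": ["lift_red_block", "open_drawer", "lift_red_block"]},
    "info": {"indx": [(0, 63), (64, 127), (128, 191)]},
}
np.save("auto_lang_ann.npy", mock_ann)

auto_lang_ann = np.load("auto_lang_ann.npy", allow_pickle=True).item()

# Keep only episodes whose annotation matches the task of interest
# ("lift_red_block" is a placeholder task name).
task_name = "lift_red_block"
keep = [
    i for i, task in enumerate(auto_lang_ann["language"]["task"])
    if task == task_name
]
ep_start_end_ids = np.array([auto_lang_ann["info"]["indx"][i] for i in keep])

# Save the (start, end) frame indices for vision-only training.
np.save("ep_start_end_ids.npy", ep_start_end_ids)
```

The resulting `ep_start_end_ids.npy` would then stand in for the original episode boundaries when training with `datamodule/datasets=vision_only`.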
I haven't tried this approach, so there could be additional steps to make it work :smile:
@ezhang7423 did you get further with what you wanted to do?
Hi Luka! Thank you so much for your detailed approach. I was able to get further, and will hopefully be able to submit a pull request soon.
Hi there! I think it would be really nice if there was a script and dataset for a selection of individual tasks in CALVIN, so that one could test their method on just a single task. I've started working on this already, does it sound like a useful feature?