abdulhaim / LMRL-Gym

BC train maze dataset #7

Closed PioneerAlexander closed 10 months ago

PioneerAlexander commented 10 months ago

Hello,

While trying to train a BC model on the FO Maze task with the command `python -m llm_rl_scripts.maze.bc.fully_observed_bc HF gpt2 data_path --outputs_path=output_path`, I noticed that the eval_frac parameter defaults to 0.1. The data is split using this code:

    train_items = all_items[:int(len(all_items)*eval_frac)]  # first eval_frac (10%) of the items
    eval_items = all_items[int(len(all_items)*eval_frac):]   # remaining 90% of the items

Is it correct that you use only 10% of the data to actually train and the other 90% just to evaluate during training? train_items has length 124, which is less than the batch size you use, train_bsize = 128; that is why this seems counterintuitive to me. Please clarify this part.
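
For comparison, here is a minimal sketch of what I would have expected if eval_frac denotes the evaluation fraction (split_items is a hypothetical helper of mine, not code from the repo; the shuffle and seed are my own additions):

    import random

    def split_items(all_items, eval_frac=0.1, seed=0):
        # Shuffle first so the held-out fraction is not biased by item order.
        items = list(all_items)
        random.Random(seed).shuffle(items)
        n_eval = int(len(items) * eval_frac)
        # Hold out eval_frac of the data for evaluation; train on the rest.
        eval_items = items[:n_eval]
        train_items = items[n_eval:]
        return train_items, eval_items

With eval_frac=0.1 this trains on 90% and evaluates on 10%, the opposite of what the snippet above appears to do.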

Additionally, I had an issue when training for multiple epochs: it seems that a Seq2SeqDataset needs to be created instead of a Seq2SeqIterableDataset, because Seq2SeqIterableDataset is iterable and has no length, so after the line `steps_per_epoch = len(dataset) // bsize if isinstance(dataset, Dataset) else None`, every epoch has no steps to train.
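
To illustrate the failure mode (a minimal, self-contained sketch; the Dataset and IterableDataset classes here are stand-ins mirroring the repo's behavior, not imports from it):

    # Why an iterable dataset yields steps_per_epoch = None.
    class Dataset:
        def __init__(self, items):
            self.items = items
        def __len__(self):
            return len(self.items)

    class IterableDataset:
        def __init__(self, generator_fn):
            self.generator_fn = generator_fn
        def __iter__(self):
            return self.generator_fn()
        # No __len__: the length of a streaming dataset is unknown.

    bsize = 128
    dataset = IterableDataset(lambda: iter(range(1000)))

    # Mirrors the quoted line: only a sized Dataset gets a step count.
    steps_per_epoch = len(dataset) // bsize if isinstance(dataset, Dataset) else None
    print(steps_per_epoch)  # None -> the training loop sees no steps per epoch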

Finally, I could not run the 'easier BC code' you added in one of the commits because of module import errors (jax_agent, jax_bc, and some others are missing).

I look forward to your response.

PioneerAlexander commented 10 months ago

Hello, are there any updates on this issue? I am still having trouble training the BC model on the FO task.

icwhite commented 10 months ago

Hello! I made a new pull request which should resolve this issue. :) Let me know if not!

icwhite commented 10 months ago

Hello! Closing as the issue is resolved. Please open another issue if there is another problem.