
Video-Pre-Training

Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos

:page_facing_up: Read Paper \ :mega: Blog Post \ :space_invader: MineRL Environment (note version 1.0+ required) \ :checkered_flag: MineRL BASALT Competition

Running agent models

Install the prerequisites for MineRL. Then install the requirements with:

```
pip install git+https://github.com/minerllabs/minerl
pip install -r requirements.txt
```

⚠️ Note: For reproducibility reasons, the PyTorch version is pinned to torch==1.9.0, which is incompatible with Python 3.10 and newer. If you are using Python 3.10 or newer, install a more recent version of PyTorch (usually, pip install torch). However, note that this may subtly change model behaviour (e.g., the agent still acts mostly as expected, but does not reach the reported performance).

To run the code, call

```
python run_agent.py --model [path to .model file] --weights [path to .weight file]
```

After loading up, you should see a window of the agent playing Minecraft.
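
If you want to embed the agent in your own script, the gist of run_agent.py is a standard Gym-style rollout loop. Below is a minimal sketch, assuming the MineRLAgent wrapper from this repo's agent.py and an example MineRL environment ID; see run_agent.py for the exact loading logic.

```python
# Minimal rollout sketch, assuming the MineRLAgent wrapper in agent.py.
import pickle

import gym
import minerl  # noqa: F401  (registers the MineRL environments)

from agent import MineRLAgent

MODEL = "foundation-model-1x.model"      # pickled architecture kwargs
WEIGHTS = "foundation-model-1x.weights"  # trained parameters

# The .model file stores the kwargs needed to rebuild the network.
agent_parameters = pickle.load(open(MODEL, "rb"))
policy_kwargs = agent_parameters["model"]["args"]["net"]["args"]
pi_head_kwargs = agent_parameters["model"]["args"]["pi_head_opts"]

env = gym.make("MineRLBasaltFindCave-v0")  # example environment ID
agent = MineRLAgent(env, policy_kwargs=policy_kwargs, pi_head_kwargs=pi_head_kwargs)
agent.load_weights(WEIGHTS)

obs = env.reset()
done = False
while not done:
    action = agent.get_action(obs)   # maps pixels to a MineRL action dict
    obs, _, done, _ = env.step(action)
    env.render()
```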

Agent Model Zoo

Below are the model files and weight files for various pre-trained Minecraft models. The 1x, 2x and 3x labels refer to the width of the respective models.

Demonstration Only - Behavioral Cloning

These models are trained on video demonstrations of humans playing Minecraft using behavioral cloning (BC) and are more general than the later models, which use reinforcement learning (RL) to further optimize the policy. The foundational models are trained across all videos in a single training run, while the house and early-game models further refine the foundational model of their respective size using either the house-building contractor data or the early-game video subset. See the paper linked above for more details.

Foundational Model :chart_with_upwards_trend:

Fine-Tuned from House :chart_with_upwards_trend:

Fine-Tuned from Early Game :chart_with_upwards_trend:

Models With Environment Interactions

These models further refine the above demonstration-based models with a reward function targeted at obtaining diamond pickaxes. While less general than the behavioral cloning models, these models have the benefit of interacting with the environment using a reward function and excel at progressing through the tech tree quickly. See the paper for more information on how they were trained and the exact reward schedule.

RL from Foundation :chart_with_upwards_trend:

RL from House :chart_with_upwards_trend:

RL from Early Game :chart_with_upwards_trend:

Running Inverse Dynamics Model (IDM)

The IDM aims to predict what actions the player is taking in a video recording.

Setup: download the IDM model file (4x_idm.model) and weights (4x_idm.weights), along with a demonstration video (.mp4) and its matching action file (.jsonl) from the contractor demonstrations described below.

To run the model with the above files placed in the root directory of this code:

```
python run_inverse_dynamics_model.py --weights 4x_idm.weights --model 4x_idm.model --video-path cheeky-cornflower-setter-02e496ce4abb-20220421-092639.mp4 --jsonl-path cheeky-cornflower-setter-02e496ce4abb-20220421-092639.jsonl
```

A window should pop up that plays the video frame by frame, showing the predicted and the true (recorded) actions side by side on the left.

Note that run_inverse_dynamics_model.py is designed to be a demo of the IDM, not code to put it into practice.
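
If you nevertheless want to label your own videos, the core of the demo reads a batch of frames and asks the model for the corresponding actions. A rough sketch, assuming the IDMAgent wrapper from inverse_dynamics_model.py and the loading pattern used in run_inverse_dynamics_model.py:

```python
# Rough sketch of labelling video frames with the IDM; the wrapper name
# and kwargs follow run_inverse_dynamics_model.py.
import pickle

import cv2
import numpy as np

from inverse_dynamics_model import IDMAgent

agent_parameters = pickle.load(open("4x_idm.model", "rb"))
agent = IDMAgent(
    idm_net_kwargs=agent_parameters["model"]["args"]["net"]["args"],
    pi_head_kwargs=agent_parameters["model"]["args"]["pi_head_opts"],
)
agent.load_weights("4x_idm.weights")

# Read a fixed-size chunk of frames (the demo processes the video in batches).
cap = cv2.VideoCapture("cheeky-cornflower-setter-02e496ce4abb-20220421-092639.mp4")
frames = []
for _ in range(128):
    ret, frame = cap.read()
    if not ret:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # BGR -> RGB

# Returns the predicted action for every frame in the batch.
predicted_actions = agent.predict_actions(np.stack(frames))
```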

Using behavioural cloning to fine-tune the models

Disclaimer: This code is a rough demonstration only and not an exact recreation of what the original VPT paper did (but it contains some preprocessing steps you should be aware of)! As such, do not expect to replicate the original experiments with this code. This code has been designed to be runnable on consumer hardware (e.g., 8 GB of VRAM).

Setup:

If you downloaded the "1x Width" models and placed some data under the data directory, you can perform fine-tuning with

```
python behavioural_cloning.py --data-dir data --in-model foundation-model-1x.model --in-weights foundation-model-1x.weights --out-weights finetuned-1x.weights
```

You can then use finetuned-1x.weights when running the agent. You can change the training settings at the top of behavioural_cloning.py.
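
The script trains on paired video/action files, so each .mp4 under the data directory should have a matching .jsonl (see the data format below). A small hypothetical helper (not part of this repo) to sanity-check your data directory before training:

```python
# Hypothetical helper: verify that every video in the data directory
# has a matching action file before fine-tuning.
import glob
import os

for video in sorted(glob.glob("data/*.mp4")):
    jsonl = os.path.splitext(video)[0] + ".jsonl"
    if not os.path.exists(jsonl):
        print("Missing action file for", video)
```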

Major limitations:

Contractor Demonstrations

Versions

Over the course of the project we requested various demonstrations from contractors, which we release as index files below. In general, a new major recorder version corresponds to a new prompt or recording feature, while bug fixes were represented as minor version changes. However, for some recorder versions we asked contractors to change their username when recording particular modalities. Also, since contractors ask questions internally, a clarification given to one contractor may have resulted in a behavioral change in another contractor. It is intractable to share every contractor's view for each version, but we've shared the prompts and major clarifications for each recorder version where the task changed significantly.

Initial Prompt: We are collecting data for training AI models in Minecraft. You'll need to install Java, download the modified version of Minecraft (that collects and uploads your play data), and play Minecraft survival mode! Paid per hour of gameplay. Prior experience in Minecraft not necessary. We do not collect any data that is unrelated to Minecraft from your computer.

The following is a list of the available versions:

Sometimes we asked the contractors to signify particular tasks by means other than a version change (e.g., via a different username). This primarily occurred in versions 6 and 7, as versions 8, 9 and 10 are all task-specific.

Prompt to contractors: Another request about additional time - please use some of it to chop trees. Specifically, please start the recorder by adding the --username treechop argument to the script (i.e. use play --username treechop on Windows, ./play.sh --username treechop on OSX/Linux), and spend some time chopping trees! Getting wooden or stone tools is ok, but please spend the majority of the time with username treechop specifically chopping. I did it myself for about 15 minutes, and it does get boring pretty quickly, so I don't expect you to do it all the time, but please do at least a little bit of chopping. Feel free to play normally the rest of the time (but please restart without the --username treechop argument when you are not chopping). However, it is preferable that you start a new world, and use only the tools that are easily obtainable in that world. I'll see what I can do about getting the player an iron axe - that sounds reasonable, and should not be hard, but will require a code update.

Environment

We restrict the contractors to playing Minecraft in windowed mode at 720p, which we sample at 20 Hz and downscale to 360p to minimize space. We also disabled the options screen to prevent contractors from changing settings such as brightness or rendering options. We ask contractors not to press keys such as F3, which shows a debug overlay, though some contractors may still do this.

Data format

Demonstrations are broken up into segments of up to 5 minutes, each consisting of a series of compressed screen observations, actions, environment statistics, and a checkpoint save file from the start of the segment. Each relative path in the index has all the files for that segment; however, if a file was dropped while uploading, the corresponding relative path is not included in the index, so there may be missing chunks in otherwise continuous demonstrations.

Index files are provided for each version as a json file:

```json
{
  "basedir": "https://openaipublic.blob.core.windows.net/data/",
  "relpaths": [
    "8.0/cheeky-cornflower-setter-74ae6c2eae2e-20220315-122354",
    ...
  ]
}
```
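
The full URL for a segment's files is basedir + relpath + file suffix. A minimal download sketch, assuming a locally saved index file (the filename here is hypothetical) and the .mp4/.jsonl suffixes used elsewhere in this README:

```python
# Minimal sketch: resolve index entries to URLs and download them.
# "index.json" is a hypothetical name; use whichever index you downloaded.
import json
import urllib.request

with open("index.json") as f:
    index = json.load(f)

for relpath in index["relpaths"][:5]:          # first few segments only
    for suffix in (".mp4", ".jsonl"):
        url = index["basedir"] + relpath + suffix
        filename = relpath.replace("/", "_") + suffix
        urllib.request.urlretrieve(url, filename)
```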

Relative paths have the following format:

Note that due to network errors, some segments may be missing from otherwise continuous demonstrations.

Your data loader can then find the files for each segment: the video (relpath.mp4), the action file (relpath.jsonl), and the corresponding checkpoint save file mentioned above.

The action file is not one valid JSON object: each line in the action file is a separate JSON action dictionary (i.e., the file is in JSONL format); see the parsing sketch after the example below.

For v7.x, the actions have the form

```json
{
  "mouse": {
    "x": 274.0,
    "y": 338.0,
    "dx": 0.0,
    "dy": 0.0,
    "scaledX": -366.0,
    "scaledY": -22.0,
    "dwheel": 0.0,
    "buttons": [],
    "newButtons": []
  },
  "keyboard": {
    "keys": [
      "key.keyboard.a",
      "key.keyboard.s"
    ],
    "newKeys": [],
    "chars": ""
  },
  "isGuiOpen": false,
  "isGuiInventory": false,
  "hotbar": 4,
  "yaw": -112.35006,
  "pitch": 8.099996,
  "xpos": 841.364694513396,
  "ypos": 63.0,
  "zpos": 24.956354839537802,
  "tick": 0,
  "milli": 1649575088006,
  "inventory": [
    {
      "type": "oak_door",
      "quantity": 3
    },
    {
      "type": "oak_planks",
      "quantity": 59
    },
    {
      "type": "stone_pickaxe",
      "quantity": 1
    },
    {
      "type": "oak_planks",
      "quantity": 64
    }
  ],
  "serverTick": 6001,
  "serverTickDurationMs": 36.3466,
  "stats": {
    "minecraft.custom:minecraft.jump": 4,
    "minecraft.custom:minecraft.time_since_rest": 5999,
    "minecraft.custom:minecraft.play_one_minute": 5999,
    "minecraft.custom:minecraft.time_since_death": 5999,
    "minecraft.custom:minecraft.walk_one_cm": 7554,
    "minecraft.use_item:minecraft.oak_planks": 5,
    "minecraft.custom:minecraft.fall_one_cm": 269,
    "minecraft.use_item:minecraft.glass_pane": 3
  }
}
```
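
Because each line is a self-contained JSON dictionary, a loader can parse the file one line at a time. A minimal sketch using only fields shown in the example above:

```python
# Minimal sketch: parse a v7.x action file (JSONL) line by line.
import json

actions = []
with open("cheeky-cornflower-setter-02e496ce4abb-20220421-092639.jsonl") as f:
    for line in f:
        actions.append(json.loads(line))

# e.g. keys held and mouse deltas on the first tick
first = actions[0]
print(first["tick"], first["keyboard"]["keys"], first["mouse"]["dx"], first["mouse"]["dy"])
```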

BASALT 2022 dataset

We also collected a dataset of demonstrations for the MineRL BASALT 2022 competition, with around 150GB of data per task.

Note: To avoid confusion with the competition rules, the action files (.jsonl) have been stripped of information that is not allowed in the competition. We will upload the unmodified dataset after the competition ends.

Contribution

This was a large effort by a dedicated team at OpenAI: Bowen Baker, Ilge Akkaya, Peter Zhokhov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, and Jeff Clune. The code here represents a minimal version of our model code, prepared by Anssi Kanervisto and others so that these models could be used as part of the MineRL BASALT competition.