VirtualHome 2.3 is out! Here are the latest updates:
Activities in VirtualHome are represented through two components: programs, which represent the sequence of actions that compose an activity, and graphs, which represent a definition of the environment where the activity takes place. Given a program and a graph, the simulator executes the program, generating a video of the activity or a sequence of graphs representing how the environment evolves as the activity takes place. To this end, VirtualHome includes two simulators: the Unity Simulator and Evolving Graph. You can find more complete documentation, with examples and the different executables, at http://virtual-home.org/documentation.
This simulator is built in Unity and allows generating videos of activities. To use this simulator, you will need to download the appropriate executable and run it with the Python API. You can check a demo of the simulator in demo/unity_demo.ipynb.
This simulator runs fully in Python and generates a sequence of graphs when a program is executed. You can run it in simulation/evolving_graph. Note that some of the objects and actions in this simulator are not yet supported in the Unity Simulator.
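As a rough sketch of how the Evolving Graph simulator can be used (the paths are illustrative and the exact function signatures may differ; see simulation/evolving_graph and dataset_utils for the actual API):

import json
from simulation.evolving_graph import utils
from simulation.evolving_graph.scripts import read_script
from simulation.evolving_graph.environment import EnvironmentGraph
from simulation.evolving_graph.execution import ScriptExecutor

# Load an environment graph and a program (both paths are illustrative).
with open('example_graphs/TrimmedTestScene7_graph.json') as f:
    graph_dict = json.load(f)
graph = EnvironmentGraph(graph_dict)
script = read_script('dataset/example_program.txt')

# Execute the program on the graph, evolving the environment state step by step.
name_equivalence = utils.load_name_equivalence()
executor = ScriptExecutor(graph, name_equivalence)
final_state = executor.execute(script)  # check execution.py for the exact return value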
$ pip install virtualhome
We also provide a Jupyter notebook with a demo and starting code. If you want to run the demo, install Jupyter and run it on your host. If you are new to Jupyter, see Running the Jupyter Notebook for a walkthrough of how to use this tool.
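For instance, if Jupyter is not installed yet:

pip install jupyter
jupyter notebook demo/unity_demo.ipynb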
Download the VirtualHome UnitySimulator executable and move it under simulation/unity_simulator.
To test the simulator on a local machine, double-click the executable or run it from a terminal. When running it from the terminal, we recommend setting windowed mode (so that the simulator does not take the full screen), as follows:
./path_to_exec -screen-fullscreen 0 -screen-quality 4
Once the simulator has started, run the demo in demo/unity_demo.ipynb.
If you do not have a monitor or want to test the simulator remotely, you can either use Docker or use an X server (find the installation instructions in this Medium post). When running the executable with an X server, use -batchmode. For Linux, you would do:
First, run the X server in a terminal. You will have to specify which display you want to use and on which GPUs; by default, it will use all available GPUs:
sudo python helper_scripts/startx.py $display_num
In a separate terminal, launch the executable:
DISPLAY=:display_num ./{path_sim}/{exec_file}.x86_64 -batchmode
For Linux, you can also launch the UnityCommunication specifying an executable file. This will directly open the executable on the right screen. You can do it as follows:
After running the X server, run:
from simulation.unity_simulator import comm_unity
comm = comm_unity.UnityCommunication(file_name=file_name, port={your_port}, x_display={your_display})
This will open an executable and create a communication object to render scripts or simulate activities. You can open multiple executables at the same time to train models or generate data using multiple processes.
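For example, a minimal sketch of opening two simulator instances in parallel (the executable path, ports, and display below are placeholders):

from simulation.unity_simulator import comm_unity

file_name = 'path/to/exec_file.x86_64'  # placeholder executable path
# Each communication object launches its own simulator instance on its own port.
comm_a = comm_unity.UnityCommunication(file_name=file_name, port='8080', x_display='0')
comm_b = comm_unity.UnityCommunication(file_name=file_name, port='8081', x_display='0')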
You can also run Unity Simulator using Docker. You can find how to set it up here.
VirtualHome Unity Simulator allows generating videos corresponding to household activities. In addition, it is possible to use the Evolving Graph simulator to obtain the environment at each execution step and UnitySimulator to generate snapshots of the environment at each step.
Open the simulator and run:
cd demo/
python generate_video.py
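Under the hood, the video is rendered by sending a script to the simulator. A rough, self-contained sketch (the script, character name, and render_script arguments are illustrative and may differ from the exact API in comm_unity.py):

from simulation.unity_simulator import comm_unity

comm = comm_unity.UnityCommunication()
comm.reset(0)                              # load scene 0
comm.add_character('Chars/Male1')          # character name is illustrative
script = ['<char0> [Walk] <fridge> (1)',   # a two-step example program
          '<char0> [Open] <fridge> (1)']
comm.render_script(script, recording=True,
                   output_folder='Output/', file_name_prefix='demo')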
Open the simulator and run:
cd demo/
python generate_snapshots.py
A grid of snapshots for the given script will be generated and saved in demo/snapshot_test.png.
VirtualHome can be used as an environment for Reinforcement Learning. We provide a base class UnityEnvironment
in simulation/environment/unity_environment.py. You can test how the class works by running:
cd demo
python test_unity_environment.py
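A minimal sketch of the intended usage follows; the constructor arguments and the step interface below are assumptions, so check unity_environment.py for the actual signatures:

from simulation.environment.unity_environment import UnityEnvironment

# NOTE: argument names and return values here are assumptions, not the exact API.
env = UnityEnvironment(num_agents=1)
obs = env.reset()
for _ in range(5):
    actions = {0: '<char0> [walktowards] <fridge> (1)'}  # one action per agent
    obs, rewards, done, info = env.step(actions)
    if done:
        obs = env.reset()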
The provided environment can be combined with Ray to run multiple environments in parallel, allowing you to scale your Reinforcement Learning algorithms. You can test parallel environments by running:
cd demo
python test_unity_environment_mp.py
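For illustration, here is a sketch of how the environment could be wrapped in Ray actors; the UnityEnvironment constructor arguments are assumptions, and test_unity_environment_mp.py shows the actual setup:

import ray
from simulation.environment.unity_environment import UnityEnvironment

ray.init()

@ray.remote
class EnvWorker:
    # Each worker owns one simulator instance; port_id is an assumed argument.
    def __init__(self, worker_id):
        self.env = UnityEnvironment(num_agents=1, port_id=worker_id)

    def reset(self):
        return self.env.reset()

workers = [EnvWorker.remote(i) for i in range(4)]
observations = ray.get([w.reset.remote() for w in workers])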
We collected a dataset of programs and augmented them with graphs using the Evolving Graph simulator. You can download them here.
Once downloaded and unzipped, move the programs into the dataset folder. You can do all this by executing the script:
./helper_scripts/download_dataset.sh
The dataset follows this structure:
dataset
└── programs_processed_precond_nograb_morepreconds
    ├── initstate
    ├── withoutconds
    ├── executable_programs
    │   ├── TrimmedTestScene7_graph
    │   └── ...
    └── state_list
        ├── TrimmedTestScene7_graph
        └── ...
The folders withoutconds and initstate contain the original programs and pre-conditions.
When a script is executed in an environment, it is modified to align the original objects with instances in that environment. You can view the resulting script in executable_programs/{environment}/{script_name}.txt.
To view the graph of the environment, and how it changes throughout the execution of a program, check state_list/{environment}/{script_name}.json.
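For example, a small sketch to inspect one of these files (the file name is illustrative):

import json

path = ('dataset/programs_processed_precond_nograb_morepreconds/'
        'state_list/TrimmedTestScene7_graph/example_program.json')
with open(path) as f:
    states = json.load(f)

# The file describes the environment graph (nodes = objects and their states,
# edges = relations between objects) as it evolves over the program's steps.
print(type(states))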
You can find more details of the programs and environment graphs in dataset/README.md.
In Synthesizing Environment-Aware Activities via Activity Sketches, we augment the scripts with two knowledge bases: KB-RealEnv and KB-ExceptionHandler.
You can download the augmented scripts in KB-RealEnv and KB-ExceptionHandler.
Here, we provide the code to augment the scripts:
KB-RealEnv
cd dataset_utils
python augment_dataset_locations.py
KB-ExceptionHandler
cd dataset_utils
python augment_dataset_exceptions.py
We originally collected a set of programs to predict from language descriptions, and generated a larger set of programs via a scripted language. These programs are referred to as VirtualHome Activity (collected programs) and ActivityPrograms (scripted programs). You can download them here:
To do the above generation and augmentation, some valuable resource files are used to set the properties of objects, set the affordance of objects, etc. Check resources/README.md for more details.
To learn more about VirtualHome, please check out VirtualHome Docs.
If you would like to contribute to VirtualHome or modify the simulator for your research needs, check out the repository with the Unity Source Code. You will need to download the Unity Editor and build your own executable after making your updates.
VirtualHome has been used in:
VirtualHome: Simulating Household Activities via Programs. PDF
X. Puig, K. Ra, M. Boben*, J. Li, T. Wang, S. Fidler, A. Torralba.
CVPR2018.
Synthesizing Environment-Aware Activities via Activity Sketches.
A. Liao, X. Puig, M. Boben, A. Torralba, S. Fidler.
CVPR2019.
Watch-and-Help: A Challenge for Social Perception and Human-AI Collaboration.
X. Puig, T. Shu, S. Li, Z. Wang, J. Tenenbaum, S. Fidler, A. Torralba.
ICLR2021, spotlight.
NeurIPS Cooperative AI Workshop 2020, Best Paper Award.
Pre-Trained Language Models for Interactive Decision-Making. Project | PDF
S. Li, X. Puig, C. Paxton, Y. Du, C. Wang, L. Fan, T. Chen, D. Huang, E. Akyürek, A. Anandkumar, J. Andreas, I. Mordatch, A. Torralba, Y. Zhu.
NeurIPS 2022, Oral.
If you plan to use the simulator, please cite both of the following papers (the first introduced v1.0 and the second introduced v2.0, aka VirtualHome-Social):
@inproceedings{puig2018virtualhome,
title={Virtualhome: Simulating household activities via programs},
author={Puig, Xavier and Ra, Kevin and Boben, Marko and Li, Jiaman and Wang, Tingwu and Fidler, Sanja and Torralba, Antonio},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={8494--8502},
year={2018}
}
@misc{puig2020watchandhelp,
title={Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration},
author={Xavier Puig and Tianmin Shu and Shuang Li and Zilin Wang and Joshua B. Tenenbaum and Sanja Fidler and Antonio Torralba},
year={2020},
eprint={2010.09890},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
The VirtualHome API and code have been developed by the following people.