# **Newborn Embodied Turing Test**

**Benchmarking Virtual Agents in Controlled-Rearing Conditions**

A testbed for comparing the learning abilities of newborn animals and autonomous artificial agents. MIT License.

![PyPI - Version](https://img.shields.io/pypi/v/nett-benchmarks) ![Python Version from PEP 621 TOML](https://img.shields.io/python/required-version-toml?tomlFilePath=https%3A%2F%2Fraw.githubusercontent.com%2Fbuildingamind%2FNewbornEmbodiedTuringTest%2Fmain%2Fpyproject.toml) ![GitHub License](https://img.shields.io/github/license/buildingamind/NewbornEmbodiedTuringTest) ![GitHub Issues or Pull Requests](https://img.shields.io/github/issues/buildingamind/NewbornEmbodiedTuringTest)

[Getting Started](#getting-started) • [Documentation](https://buildingamind.github.io/NewbornEmbodiedTuringTest/) • [Lab Website](http://buildingamind.com/)

The Newborn Embodied Turing Test (NETT) is a cutting-edge toolkit designed to simulate virtual agents in controlled-rearing conditions. This innovative platform enables researchers to create, simulate, and analyze virtual agents, facilitating direct comparisons with real chicks as documented by the Building a Mind Lab. Our comprehensive suite includes all necessary components for the simulation and analysis of embodied models, closely replicating laboratory conditions.

Below is a visual representation of our experimental setup, showcasing the infrastructure for the three primary experiments discussed in this documentation.

*(Figure: Digital Twin of the experimental setup.)*

## How to Use this Repository

The NETT toolkit comprises three key components:

  1. Virtual Environment: A dynamic environment that serves as the habitat for virtual agents.
  2. Experimental Simulation Programs: Tools to initiate and conduct experiments within the virtual world.
  3. Data Visualization Programs: Utilities for analyzing and visualizing experiment outcomes.

### Directory Structure

The directory structure of the code is as follows:

├── docs                          # Documentation and guides
├── examples
│   ├── notebooks                 # Jupyter Notebooks for examples
│   │   └── Getting Started.ipynb # Introduction and setup notebook
│   └── run                       # Terminal script example
├── src/nett
│   ├── analysis                  # Analysis scripts
│   ├── body                      # Agent body configurations
│   ├── brain                     # Neural network models and learning algorithms
│   ├── environment               # Simulation environments
│   ├── utils                     # Utility functions
│   ├── nett.py                   # Main library script
│   └── __init__.py               # Package initialization
├── tests                         # Unit tests
├── mkdocs.yml                    # MkDocs configuration
├── pyproject.toml                # Project metadata
└── README.md                     # This README file

## Getting Started

To begin benchmarking your first embodied agent with NETT, please be aware:

**Important:** The mlagents==1.0.0 dependency is incompatible with Apple Silicon (M1, M2, etc.) chips. Please use a different machine to run this codebase.

### Installation

  1. Virtual Environment Setup (Highly Recommended)

    Create and activate a virtual environment to avoid dependency conflicts.

    conda create -y -n nett_env python=3.10.12
    conda activate nett_env

    See here for detailed instructions.
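
    Before installing anything, you can confirm the new environment is active (the expected output follows from the Python version pinned above):

    python --version   # expect: Python 3.10.12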

  2. Install Prerequisites

    Install the needed versions of setuptools, pip, and wheel:

    pip install setuptools==65.5.0 pip==21 wheel==0.38.4

    NOTE: This is a result of incompatibilities with the sub-dependency gym==0.21. More information about this issue can be found here.

  3. Toolkit Installation

    Install the toolkit using pip.

    pip install nett-benchmarks

    NOTE: Installation outside a virtual environment may fail due to conflicting dependencies. Ensure compatibility, especially with gym==0.21 and numpy<=1.21.2.
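
    To confirm the toolkit installed and to see which version was resolved, you can query pip:

    pip show nett-benchmarks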

## Running a NETT

  1. Download or Create the Unity Executable

    Obtain a pre-made Unity executable from here. The executable is required to run the virtual environment.

  2. Import NETT Components

    Start by importing the NETT framework components - Brain, Body, and Environment, alongside the main NETT class.

    from nett import Brain, Body, Environment
    from nett import NETT
  3. Component Configuration:
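
    Configure each component before combining them into a NETT instance. The sketch below is only illustrative: the keyword arguments shown (policy, algorithm, type, config, executable_path) are assumptions for demonstration, not the definitive API; consult the documentation for the options your experiment needs.

    # (Brain, Body, and Environment were imported in step 2.)
    # Illustrative configuration; argument names and values are assumptions.
    brain = Brain(policy="CnnPolicy", algorithm="PPO")    # learning algorithm and policy network
    body = Body(type="basic")                             # how the agent is embodied
    environment = Environment(
        config="identityandview",                         # which controlled-rearing experiment to run
        executable_path="path/to/executable",             # the Unity executable from step 1
    )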

  4. Run the Benchmarking

    Integrate all components into a NETT instance to facilitate experiment execution.

    benchmarks = NETT(brain=brain, body=body, environment=environment)

    The NETT instance has a .run() method that initiates the benchmarking process. The method accepts parameters such as the number of brains, training/testing episodes, and the output directory.

    job_sheet = benchmarks.run(output_dir="path/to/run/output/directory/", num_brains=5, trains_eps=10, test_eps=5)

    The run function is asynchronous: it returns the list of jobs, which may or may not yet be complete. If you wish to display the Unity environments while they run, set the batch_mode parameter to False.
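
    For example, a small debugging run that shows the Unity windows might look like this (parameter values are illustrative):

    job_sheet = benchmarks.run(output_dir="path/to/run/output/directory/", num_brains=1, trains_eps=10, test_eps=5, batch_mode=False)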

  5. Check Status

    To see the status of the benchmark processes, use the .status() method:

    benchmarks.status(job_sheet)

## Running Standard Analysis

After running the experiments, the pipeline will generate a collection of datafiles in the defined output directory.

  1. Install R and dependencies

    To run the analyses performed in previous experiments, this toolkit provides a set of analysis scripts. Before running them, you will need R and the packages tidyverse, argparse, and scales. To install these packages, run the following command in R:

    install.packages(c("tidyverse", "argparse", "scales"))

    Alternatively, if you are having difficulty installing R on your system, you can install both R and the packages using conda:

    conda install -y r r-tidyverse r-argparse r-scales
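
    Either way, you can check that the packages load before running the analysis scripts:

    Rscript -e 'library(tidyverse); library(argparse); library(scales)'
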
  2. Run the Analysis

    To run the analysis, use the analyze method of the NETT class. This method will generate a set of plots and tables based on the datafiles in the output directory.

    benchmarks.analyze(run_dir="path/to/run/output/directory/", output_dir="path/to/analysis/output/directory/")
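
Putting the pieces together, a complete session (configure, run, poll, analyze) might look like the sketch below. Paths are placeholders, and the Brain/Body/Environment arguments are illustrative assumptions, as noted above.

    from nett import Brain, Body, Environment, NETT

    # Component arguments are illustrative; see "Running a NETT" above.
    brain = Brain(policy="CnnPolicy", algorithm="PPO")
    body = Body(type="basic")
    environment = Environment(config="identityandview", executable_path="path/to/executable")

    benchmarks = NETT(brain=brain, body=body, environment=environment)
    job_sheet = benchmarks.run(output_dir="path/to/run/output/directory/",
                               num_brains=5, trains_eps=10, test_eps=5)
    benchmarks.status(job_sheet)  # poll until all jobs report complete

    # Once the runs finish, generate the standard plots and tables.
    benchmarks.analyze(run_dir="path/to/run/output/directory/",
                       output_dir="path/to/analysis/output/directory/")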

## Documentation

For the full documentation, please visit the [documentation site](https://buildingamind.github.io/NewbornEmbodiedTuringTest/).

## Experiment Configuration

More information on configuring the individual experiments can be found on the following pages.

🔼 Back to top