Understanding visually grounded spoken language via multi-tasking
Clone this repo and cd into it:
git clone https://github.com/spokenlanguage/platalea.git
cd platalea
To install it in a conda environment (assuming conda is already installed), run the following to create the environment and install the dependencies:
conda create -n platalea python==3.8 pytorch -c conda-forge -c pytorch
conda activate platalea
pip install torchvision
Then install platalea with:
pip install .
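To check that the base installation succeeded, a quick sanity check along these lines can be run from a Python shell (this snippet is only an illustration, not part of the repository):
import torch
import platalea
# confirm the package imports and report whether a GPU is visible to PyTorch
print('platalea imported OK; CUDA available:', torch.cuda.is_available())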
Different experiments may have different additional dependencies.
The basic experiment needs the following:
pip install scikit-learn python-Levenshtein
The repository has been developed to work with the Flickr8K dataset. The code can be made to work with other datasets, but this will require some adaptations.
To use Flickr8K, you need to download the Flickr8K image dataset [1], the Flickr Audio Caption Corpus [2], and the associated metadata files.
Create a folder to store the dataset (we will assume here that the folder is ~/corpora/flickr8k), move all the files you downloaded there, and then extract the content of the archives. You can now set up the environment and start preprocessing the data.
We use ConfigArgParse for setting necessary input variables, including the location of the dataset. This means you can use a configuration file (config.ini or config.yml), environment variables, or command line arguments to specify the necessary configuration parameters.
To specify the location of the dataset, one option is to create a configuration file under your home directory (~/.config/platalea/config.yml) with the following content:
flickr8k_root /home/<user>/corpora/flickr8k
The same result can be achieved with an environment variable:
export FLICKR8K_ROOT=/home/<user>/corpora/flickr8k
You can also specify this option directly on the command line when running an experiment (the corresponding option is --flickr8k_root=...).
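As an illustration of how these three sources interact, here is a minimal ConfigArgParse sketch (not the actual platalea code; the option name simply mirrors the one described above):
import configargparse

# read defaults from the config file, allow an environment variable override,
# and let the command line take highest priority
parser = configargparse.ArgumentParser(
    default_config_files=['~/.config/platalea/config.yml'])
parser.add_argument('--flickr8k_root', env_var='FLICKR8K_ROOT',
                    help='location of the Flickr8K dataset')
args, _ = parser.parse_known_args()
print(args.flickr8k_root)
Command-line values override environment variables, which in turn override values from the configuration file.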
Run the preprocessing script to extract input features:
python platalea/utils/preprocessing.py flickr8k
This repository has support for a subset of the HowTo100M dataset. The subset contains all videos with a Creative Commons license whose metadata indicates they are in English.
The code includes functionality for extracting audio features from the videos. The videos need to be in the data folder before preprocessing the dataset. Preprocessing creates an index file with references to the video feature files. The video (S3D) features need to be acquired elsewhere, and the videos themselves have to be downloaded from YouTube using the metadata distributed with the HowTo100M dataset.
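Before running the preprocessing step below, it can help to verify that the feature files are actually in place. The following sketch only illustrates the idea; the folder path and the .npy extension are assumptions for illustration, not requirements of platalea:
from pathlib import Path

howto100m_root = Path('/corpora/howto100m')
# the .npy extension here is an assumption for illustration only
feature_files = sorted(howto100m_root.glob('**/*.npy'))
print(f'Found {len(feature_files)} candidate feature files under {howto100m_root}')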
To start preprocessing, run the following:
python -m platalea.utils.preprocessing howto100m-encc --howto100m_root /corpora/howto100m/
Running experiments on the HowTo100M dataset is not yet implemented.
You can now train a model using one of the examples provided under platalea/experiments, e.g.:
cd platalea/experiments/flickr8k
mkdir -p runs/test
cd runs/test
python -m platalea.experiments.flickr8k.basic
After the model is trained, results are available in results.json.
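The exact contents of results.json depend on the experiment that produced it. A minimal sketch for loading and inspecting the file, assuming it holds either a single JSON document or one JSON object per line, could look like this:
import json

with open('results.json') as f:
    text = f.read()
try:
    results = json.loads(text)          # single JSON document
except json.JSONDecodeError:
    results = [json.loads(line)         # or one JSON object per line
               for line in text.splitlines() if line.strip()]
print(results)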
Some experiments support the use of wandb for cloud logging of results. In the examples we provide under platalea/experiments, this option is disabled by default. To force-enable it, the call to experiment() should be changed from experiment(..., wandb_mode='disabled') to experiment(..., wandb_mode='online'). To revert to wandb's normal behavior (where the mode can be set through the command line or an environment variable), use wandb_mode=None.
If you want to contribute to the development of platalea, have a look at the contribution guidelines.
We keep track of what is added, changed and removed in releases in the changelog.
[1] Hodosh, M., Young, P., & Hockenmaier, J. (2013). Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics. Journal of Artificial Intelligence Research, 47, 853–899. https://doi.org/10.1613/jair.3994.
[2] Harwath, D., & Glass, J. (2015). Deep multimodal semantic embeddings for speech and images. 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 237–244. https://doi.org/10.1109/ASRU.2015.7404800.