anticipatory-action

Validation and ad-hoc evaluations

The validation-nbs folder contains the notebooks used to validate this Python pipeline against reference results produced in R. In this folder you'll find several analyses for each stage of the workflow, showing the reliability of this new implementation, together with comments explaining the sources of any discrepancies.

The template-eval-nbs folder hosts notebook templates for re-evaluating system performance in terms of ROC scores, coverage, and Hit Rate / Failure Rate after running the AA scripts with different parameters or different datasets.

Finally, the ad-hoc-evaluations folder stores the results of specific evaluations obtained with the notebooks from the template-eval-nbs folder. For instance, results on the performance of the AA system using blended CHIRPS or another type of forecast will be stored in this folder.

Running

$ conda activate aa-env

How to run jupytext files

In the various folders of this repository, you will find notebooks with the extension .py. These are Jupytext files that can be run as notebooks. They facilitate version control and have a much smaller file size than .ipynb files. They are simple to use once the environment is activated:

  1. Open JupyterLab by executing the command jupyter lab in the terminal.
  2. Find the file you need to work on in the file explorer.
  3. Right-click on it and select Open with -> Notebook. You can now execute your notebook cell by cell.

An .ipynb file will immediately be created, storing the cell outputs. As these two files are paired, any changes you make to one will immediately be reflected in the other, and vice versa. You can also work directly on the .ipynb file when you return to your workspace. Be careful, however, not to modify both files at the same time, as this may create conflicts and errors when you save or reopen them.
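
If you prefer working from the terminal, the Jupytext command-line tool can also create or re-synchronise the paired files: --to notebook creates the paired .ipynb, and --sync re-synchronises the pair after editing either file. A minimal sketch, assuming the jupytext package is available in aa-env and using a hypothetical notebook name:

$ jupytext --to notebook my-analysis.py
$ jupytext --sync my-analysis.ipynb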

Full workflow through the script

You can now run the workflow for a given country.

Analytical script

$ python analytical.py <ISO> <SPI/DRYSPELL>
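
For example, to run the analytical stage for Mozambique using the SPI index (a hypothetical invocation; MOZ is assumed here to be one of the configured ISO codes):

$ python analytical.py MOZ SPI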

Triggers script

$ python triggers.py <ISO> <SPI/DRYSPELL>
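
For example, assuming MOZ is a configured ISO code, you would typically run the script once per index:

$ python triggers.py MOZ SPI
$ python triggers.py MOZ DRYSPELL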

After running this script for both SPI and DRYSPELL, and for both General and Non-Regret triggers, you can use the merge-spi-dryspell-gt-nrt-triggers.py notebook to filter the triggers for each district according to the selected vulnerability level and to merge the SPI and DRYSPELL results. This notebook produces the final output.

Operational script

$ python operational.py <ISO> <issue-month> <SPI/DRYSPELL>
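
For example, a hypothetical operational run for MOZ with SPI (the issue month shown here is only illustrative; the expected format of <issue-month> is an assumption and may differ in practice):

$ python operational.py MOZ 05 SPI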

Check outputs

In data/outputs/FbF_Pilot_MockUp/, the final outputs that will serve as input for the Tableau dashboard are stored in CSV format. You can open these files in Excel to inspect the results.
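
To take a quick look at the generated files directly from the terminal (the file name below is a placeholder; actual names depend on the country and run):

$ ls data/outputs/FbF_Pilot_MockUp/
$ head -n 5 data/outputs/FbF_Pilot_MockUp/<output-file>.csv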

Set-up

These steps only need to be done once, the first time you set up the project.

You can do the set-up in JupyterHub and/or locally on your machine.

  1. Set up SSH for your GitHub account

    • Follow the setup steps here
  2. Configure Git with your name and email

    • Copy and paste the text below in the terminal

      git config --global user.name "Your GitHub user name"

    • Copy and paste the text below in the terminal

      git config --global user.email YourGitHubEmail@example.com

    And now you are good to go with GitHub!

  3. Clone the repository into your folder system

To start using the AA pipeline, you will have to clone the repository from GitHub (luckily, you have just set up SSH access to GitHub from this remote server!).

Go back to the terminal and run the commands below:
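
A minimal sketch of the clone step, assuming the repository's standard SSH URL on GitHub:

$ git clone git@github.com:WFP-VAM/anticipatory-action.git
$ cd anticipatory-action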

Once it's done, you should see the anticipatory-action folder and all its files in your file system.

  4. Create the conda environment specific to the workflows

Before using anticipatory-action, you have to create the conda environment specific to the workflow, which contains all the Python libraries needed. To do so, run the commands below:
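
A minimal sketch, assuming the environment is defined in an environment.yml file at the root of the repository:

$ conda env create -f environment.yml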

You can now activate your environment: conda activate aa-env

Make sure it is active before running any workflow.

  5. Add your HDC credentials

You will need to set up credentials for the WFP HDC STAC API.

To obtain the HDC STAC API token, go to HDC token to generate a key. Then copy the key and return to your file system to create a hdc_stac.tk file in your home folder:
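
A minimal sketch of creating the file from the terminal, assuming the token is stored as plain text (<your-token> stands for the key you just generated):

$ echo "<your-token>" > ~/hdc_stac.tk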

You are now done with setting up credentials for the HDC data.

You are now good to run the AA workflow.