alan-turing-institute / simulate-middleware

Simulate middleware service.
http://simulate.readthedocs.io

Simulate Middleware

The simulate middleware stores the current status of all cases and jobs. It does minimal processing, and serves primarily as a persistent store of state (but not of data).

Configuration

To run the system you must set up the config.json file with the correct URLs for the database and job manager. An example configuration is:

{
    "database_url": "postgres://sg:sg@postgres/sg",
    "job_manager_url": "localhost:9000"
}

Note that database_url is an SQLAlchemy database connection string.
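As a rough sketch, the middleware (or a deployment check) could load and validate this file as follows. The loading code itself is an assumption for illustration; only the two keys shown in the example above come from this README.

```python
import json

# The two keys from the example config.json above; assumed to be required.
REQUIRED_KEYS = {"database_url", "job_manager_url"}

def load_config(path="config.json"):
    """Load config.json and check that the required keys are present."""
    with open(path) as f:
        config = json.load(f)
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise KeyError(f"config.json is missing keys: {sorted(missing)}")
    return config
```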

Running The System

  1. Ensure that you have installed Docker.

  2. Start the Docker daemon if it is not already running.

  3. If this is the first time you are running the system, run the Postgres server on its own so that it can initialise itself.

    docker-compose run postgres
  4. Shut down the Postgres server.

    docker-compose down
  5. Bring up the full system.

    docker-compose up
  6. If you need to add demo data to the system, send a POST request to http://localhost:5000/test. This will return null.

  7. Connect to the running server at http://localhost:5000.

  8. To bring the system down (saving the database state), run:

    docker-compose down
  9. If you ever need to work with the database directly, run (assuming you have already brought the system down):

    docker-compose run -d postgres
    docker ps

    This will show a CONTAINER ID for the container that you just created. Connect to this container with:

    docker exec -it <container id> /bin/bash

    Once inside the shell you can connect to the database with:

    psql -U sg -W sg 

    entering the password sg when prompted. When you are finished, quit from both psql and the shell and run:

    docker-compose down
  10. If you have Postgres installed on your local machine, you can connect to the Docker Postgres instance directly by running:

    docker-compose run -d -p "8082:5432" postgres
    psql -U sg -W -p 8082 -h localhost sg

    When you are finished run:

    docker-compose down

Helpful SQL Commands

As a reminder, here are some psql commands that may be helpful:

    \l           list all databases
    \dt          list tables in the current database
    \d <table>   describe a table
    \x           toggle expanded output
    \q           quit psql

Using The Middleware

The Flask app creates a server at localhost:5000.

The following is a list of endpoints and their functionality.

/case

This endpoint is responsible for managing the list of cases.

GET

A GET on this endpoint allows for paginated listing of all the cases in the system.

Arguments

It supports the following query args:

No information about whether a next page exists is returned. This is because I am expecting this to be used as part of an infinite scrolling system, where there is no explicit display asking the user for more (or, if there is a button to support error situations, returning no extra data is a valid response).

Return

Returns a list of case metadata for the selected cases (the list may be empty). The format is:

[
    {
        "name": "Case name",
        "id": "Case id",
        "links": {
            "self": "Link to more details about this case"
        }
    },
    ...
]
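Since an empty page signals the end of the data, a client-side paging loop can simply stop when a request returns an empty list. A sketch of that loop follows; the page-numbering scheme and the fetch_page callable are assumptions for illustration (the actual query argument names are not shown in this README).

```python
def iter_cases(fetch_page):
    """Yield case metadata dicts page by page until an empty page appears.

    fetch_page(page_number) should return the JSON list for that page,
    e.g. the decoded body of a GET on /case with the appropriate query
    args (argument names assumed).
    """
    page = 0
    while True:
        batch = fetch_page(page)
        if not batch:  # an empty list means there is no more data
            return
        yield from batch
        page += 1
```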

/cases/<id>

This endpoint is responsible for managing the details of a specific case.

GET

Gets the full details for a single case. This retrieves the entire case and serialises it for the user.

Arguments

No arguments are accepted.

Return

Returns the full details of a specific case with the following structure:

{
    "name": "Case Name",
    "id": "Case Id",
    "fields": [
        {
            "name": "Case Field Name",
            "specs": [],
            "child_fields": [
                {
                    "name": "Case Field Name",
                    "specs": [
                        {
                            "name": "Parameter name",
                            "value": "Parameter Value",
                            "id": "Parameter id"
                        },
                        ...
                    ],
                    "child_fields": []
                },
                ...
            ]
        }
    ]
}
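The fields form a tree via child_fields, with specs attached at any level. As an illustration (not part of the middleware itself), a client could flatten every parameter spec out of a case like this:

```python
def collect_specs(fields):
    """Recursively gather every spec from a list of case fields.

    Each field is a dict with "name", "specs" and "child_fields" keys,
    matching the case structure shown above.
    """
    specs = []
    for field in fields:
        specs.extend(field.get("specs", []))
        specs.extend(collect_specs(field.get("child_fields", [])))
    return specs
```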

/job/

This endpoint manages the list of all jobs.

GET

Paginate through a list of all existing jobs.

Arguments

It supports the following query args:

No information about whether a next page exists is returned. This is because I am expecting this to be used as part of an infinite scrolling system, where there is no explicit display asking the user for more (or, if there is a button to support error situations, returning no extra data is a valid response).

Return

Returns a list of jobs with the following format:

[
    {
        "name": "Job name",
        "user": "Job creation user",
        "id": "Job id",
        "links": {
            "self": "Link to full job details",
            "case": "Link to full details of generating case"
        }
    },
    ...
]

POST

Create a new job.

Arguments

Takes the following JSON structure in the body:

{
    "user": "Creating user",
    "name": "Job name",
    "case_id": "Parent case"
}
Return

If not enough fields are provided the following structure will be returned:

{ 
    "messages": { 
        "author": [ "Missing data for required field." ],
        "name": [ "Missing data for required field." ],
        "case_id": [ "Missing data for required field." ]
    }
}

If the job name and details are not accepted (i.e. the pair of job name and author already exists), the following will be returned:

{ 
    "message": "Sorry, these parameters have already been used. You have requested this URI [/job] but did you mean /job or /job/ ?"
}

If the job is successfully created, the following will be returned:

{ 
    "job_id": new_job_id 
}
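A client therefore has to distinguish the three response shapes above. One hypothetical way to do so (the function is made up for illustration; the keys it checks are the ones shown in this README, including the "author" key used in the validation response):

```python
def parse_create_job_response(body):
    """Classify the JSON body returned by POST /job.

    Returns ("created", job_id) on success, ("invalid", messages) when
    required fields are missing, or ("rejected", message) when the
    name/author pair already exists.
    """
    if "job_id" in body:
        return ("created", body["job_id"])
    if "messages" in body:
        return ("invalid", body["messages"])
    return ("rejected", body.get("message", ""))
```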

/job/<id>

This endpoint is responsible for dealing with the details of a specific job.

GET

Get the details of the current job.

Arguments

This endpoint accepts no arguments.

Return

The following structure is returned:
{
    "user": "username",
    "id": job_id,
    "name": "Job_name",
    "values": [
        {
            "value": "parameter value",
            "parent_template": "source template or null",
            "id": value_id,
            "name": "parameter name"
        },
        ...
    ],
    "parent_case": { case object as from /case/id }
}
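For convenience, a client might index the values list by parameter name. A small illustration (not part of the API):

```python
def values_by_name(job):
    """Map each parameter name in a job's "values" list to its value.

    job is the dict returned by GET /job/<id>, as described above.
    """
    return {v["name"]: v["value"] for v in job["values"]}
```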

PATCH

Update details of the current job. Note that the author and source case of a job cannot be changed after creation.

Arguments

Takes a JSON object of what to replace. Fields that are not included will not be changed.

The largest possible structure that can be replaced is:

{
    "name": "New name of the job",
    "values": [
        {
            "name": "parameter name",
            "value": "parameter value"
        },
        ...
    ]
}

Note that the values list is replaced wholesale, so if the client sends back a shorter list, the omitted values will be deleted.
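Because of this wholesale replacement, a client that wants to change a single value should echo the full list back. A hypothetical sketch (the helper is made up; the body it builds is the PATCH structure above):

```python
def build_patch_body(job, name, new_value):
    """Build a PATCH body that changes one parameter but keeps the rest.

    job is the dict returned by GET /job/<id>; its entire "values" list
    is sent back so the wholesale replacement deletes nothing.
    """
    values = [
        {"name": v["name"],
         "value": new_value if v["name"] == name else v["value"]}
        for v in job["values"]
    ]
    return {"values": values}
```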

Return

The following structure is returned.

{ 
    "status": "success" | "failed", 
    "changed": [ "name", "values" ],
    "errors": [ "error message" ]
}

Note that the changed field is a list of the fields that were successfully changed. If the request failed then the errors will explain why. A request will not succeed if there are any errors.

POST

Starts the current job. Does not take any arguments.

Return

The following structure is returned.

{ 
    "status": "success" | "failed", 
    "errors": [ "error message if failed" ]
}

/test/<id>/status

Set the status for a given job to a specific value.

PUT

Set the status of the given job to the requested value.

Arguments

The argument must be the JSON object:

{
    "status": "<new status>"
}

where <new status> must be a status from the following list (case insensitive):

Return

The following structure is returned.

{ 
    "status": "success" | "failed", 
    "errors": [ "error message if failed" ]
}

/test

This endpoint is used purely for testing the system.

POST

This populates the database with some fake data.

Arguments

No arguments are supported

Return

Returns null.