VUIIS / dax

Distributed Automation for XNAT
MIT License

Multiple assessors derived from multiple assessors #157

Closed: mmodat closed this issue 6 years ago

mmodat commented 7 years ago

We would like to add a new feature, and now that Benj is travelling the world it would be great if we could get a bit of guidance from you.

Here is a typical example. Let's say we have a scan "1" and an assessor that runs on this scan, "proj-x-sub-x-sess-x-1-x-proc1". We now have another assessor that takes one output from the first assessor as an input. It leads to "proj-x-sub-x-sess-x-proc2". Here we start having a discrepancy, as one assessor is scan-based and the other is session-based.

The real problem for us starts when we have more than one input scan going through the first process. If I have two scans, "1" and "2", they each have their own assessor, which is the behavior we want. The second process now grabs data from both proc1 assessors and combines them into a single assessor. Ideally, we would like to have:

scans:
- 1
- 2
assessors:
- proj-x-sub-x-sess-x-1-x-proc1
- proj-x-sub-x-sess-x-2-x-proc1
- proj-x-sub-x-sess-x-1-x-proc2 or  proj-x-sub-x-sess-x-1-x-proc1-x-proc2
- proj-x-sub-x-sess-x-2-x-proc2 or  proj-x-sub-x-sess-x-2-x-proc1-x-proc2

Does it make sense to you? Is there already something in place to replicate this behavior that we missed? If not, could you give us a few pointers on where to start, especially for @bariskanber, who is going to have a look at this one.

In case it is of any use (and in case we should have written it differently), below is the yaml processor associated with the second assessor:

---
inputs:
  default:
    spider_path: /home/dax/Xnat-management/ucl_processing/pipelines/BrainTivFromGIF/v1.0.0/Spider_BrainTivFromGIF_v1_0_0.py
    working_dir: /scratch0/dax/
    nipype_exe: perform_brain_tiv_from_gif.py
    env_source: /share/apps/cmic/NiftyPipe/v2.0/setup_v2.0.sh
    omp: 1
  xnat:
    assessors:
      - assessor1:
        proctypes: GIF_Parcellation_v3
        needs_qc: False
        resources:
          - resource: SEG
            varname: seg
command: python {spider_path} --exe {nipype_exe} --seg {seg}
attrs:
  suffix:
  xsitype: proc:genProcData
  walltime: 01:00:00
  memory: 4096
  ppn: 1
  type: session
justinblaber commented 7 years ago

Sorry for the late response!

I think you would either have to make proc2 a scan processor or add a suffix with the session processor. Right now multiple session assessors are only possible through a suffix (to my knowledge).

Let me know what you guys decide to do.

Thanks,

-Justin

BennettLandman commented 7 years ago

P.s. We think this is a great idea. We would love to setup a Skype/Google meeting with your team to talk about engineering / release / bug fixes, etc.

Best wishes, Bennett

bariskanber commented 7 years ago

Hi Justin, thank you for your reply.

I would have thought we cannot make proc2 a scan processor, as it operates on an assessor rather than a scan?

Regarding your second suggestion: I am new to DAX, so I have no idea how to add a suffix to a session processor, but looking at the init function of the AssessorHandler class in dax/dax/XnatUtils.py, it seems like DAX currently supports only two types of assessor labelling:

ProjectID-x-Subject_label-x-SessionLabel-x-ScanId-x-proctype or ProjectID-x-Subject_label-x-SessionLabel-x-proctype

How would we be able to append a suffix if this is the case? Even if we could, I don't think it would solve the problem of being able to differentiate exactly which assessor was used as input.
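For reference, a minimal sketch of how those two label formats could be parsed, assuming only that '-x-' is the separator (the helper name is mine, not part of DAX):

```python
def parse_assessor_label(label):
    # Split a DAX assessor label on '-x-' into named components.
    # Handles the two formats described above:
    #   ProjectID-x-Subject_label-x-SessionLabel-x-ScanId-x-proctype
    #   ProjectID-x-Subject_label-x-SessionLabel-x-proctype
    parts = label.split('-x-')
    if len(parts) == 5:
        keys = ('project', 'subject', 'session', 'scan', 'proctype')
    elif len(parts) == 4:
        keys = ('project', 'subject', 'session', 'proctype')
    else:
        raise ValueError('unexpected assessor label: %s' % label)
    return dict(zip(keys, parts))

parse_assessor_label('proj-x-sub-x-sess-x-1-x-proc1')
# → {'project': 'proj', 'subject': 'sub', 'session': 'sess',
#    'scan': '1', 'proctype': 'proc1'}
```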

Many thanks & Kind regards, Baris.

mmodat commented 7 years ago

Hi @jucestain, @BennettLandman et al.,

Sorry for the late reply here. A Skype call or similar sounds great; however, we still haven't been able to recruit anyone to actively work on DAX yet. We are re-advertising at the moment. As a result, we are using dax as users rather than developers. Hopefully that might change soon.

@jucestain would you be able to elaborate on how we could make proc2 a scan assessor, since it only uses the output from another assessor and thus does not use any scan? Based on your comment, I wonder if my initial explanation was not clear. Everything I described is in the same session, e.g. scan 1 is a T1w and scan 2 is a repeat T1w.

Marc

baxpr commented 7 years ago

@jucestain We do have an example of a scan assessor (fmri_conngraph_v2) that operates only on outputs of another scan assessor (fmri_connpre_v1). If I'm reading it right, it looks like it's done via some custom logic in the processor.py. The conngraph assessor is actually "attached" to the original fmri scan, and the associated connpre assessor name is determined at run time from the scan label. Not sure if there's any clear way to carry that over to the YAML way.
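A minimal sketch of that runtime trick, assuming the standard '-x-' label scheme (illustrative only; the actual logic lives in the custom processor.py):

```python
def connpre_label_for(conngraph_label):
    # Derive the label of the connpre assessor attached to the same scan
    # by swapping the proctype component of the conngraph assessor label.
    parts = conngraph_label.split('-x-')
    parts[-1] = 'fmri_connpre_v1'  # replace proctype (last component)
    return '-x-'.join(parts)

connpre_label_for('proj-x-sub-x-sess-x-1-x-fmri_conngraph_v2')
# → 'proj-x-sub-x-sess-x-1-x-fmri_connpre_v1'
```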

justinblaber commented 7 years ago

@mmodat I'm not sure how Ben has things set up over there, but for here if we have a "scan assessor", for example Multi_Atlas, then this will run once per scan (based on input scan types). If we want something run off the outputs of Multi_Atlas (e.g. MaCRUISE), then we also set this as a "scan assessor", and we run it on the same scan types, but there should still be one MaCRUISE per scan. Inside the logic of the MaCRUISE processor we just download the outputs from Multi_Atlas. Hopefully this makes sense. For us, we only allow for "session assessors", which run once per session, and "scan assessors" which will run once per scan matching an input scantype.

mmodat commented 7 years ago

Thanks @jucestain, it makes sense. I will give it a shot and will keep you posted.

Marc

mmodat commented 7 years ago

Actually ... I failed! Would you have a yaml example? How do I specify that I want to use an assessor that is assigned to a scan? If I use the scan and assessor independently, it creates the second assessor just fine but then complains that I have too many first assessors (which makes sense).

justinblaber commented 7 years ago

@mmodat

Here's an example: https://github.com/byvernault/ucl_processing/blob/master/yaml_processors/Processor_bamos.yaml

Specifically this part:

    assessors:
     - assessor1:
       proctypes: GIF_Parcellation_v3
       resources:
         - resource: TIV
           varname: tiv
         - resource: PRIOR
           varname: prior
         - resource: SEG
           varname: seg
         - resource: LABELS
mmodat commented 7 years ago

When I use a similar setup it complains (rightfully) that I have more than one GIF.

justinblaber commented 7 years ago

And type: scan is set? Can you post your yaml processor?

mmodat commented 7 years ago

I have done "a few" versions while trying. I guess what you have in mind is along the lines of the following:

---
inputs:
  default:
    spider_path: /home/dax/Xnat-management/ucl_processing/pipelines/BrainTivFromGIF/v1.0.0/Spider_BrainTivFromGIF_v1_0_0.py
    working_dir: /scratch0/dax/
    nipype_exe: perform_brain_tiv_from_gif.py
    env_source: /share/apps/cmic/NiftyPipe/v2.0/setup_v2.0.sh
    omp: 1
  xnat:
    scans:
      - scan1:
        types: T1W,T1w,MPRAGE 
        resources:
          - resource: NIFTI
    assessors:
      - assessor1:
        proctypes: GIF_Parcellation_v3
        needs_qc: False
        resources:
          - resource: SEG
            varname: seg
command: python {spider_path} --exe {nipype_exe} --seg {seg}
attrs:
  suffix:
  xsitype: proc:genProcData
  walltime: 01:00:00
  memory: 4096
  ppn: 1
  type: scan
  scan_nb: scan1

When I run dax test with dax test --file yaml_processors/Processor_BrainTivFromGIF.yaml --project ADNI_N --session 002_S_0413_20070601 --host ${DPUK} on a session that has two T1-weighted scans, I obtain the following output:

======================================================================
DAX TEST
----------------------------------------------------------------------
Platform  : Linux
Python v. : 2.7.13
Dax v.    : 0.7.1
XNAT host : https://dpuk-ucl.cs.ucl.ac.uk
Username  : user in dax netrc file.
======================================================================
Running test for dax files generated by user ...
----------------------------------------------------------------------

======================================================================
Test -- BrainTivFromGIF_v1 ...

----------------------------------------------------------------------
 + Testing method test_has_inputs 

Processor.has_inputs(cobj) running on ADNI_N - 002_S_0413_20070601 - 2 ...
2017-10-24 20:05:13,729 - DEBUG - processors - BrainTivFromGIF_v1: Too many GIF_Parcellation_v3 assessors found.
Outputs: state = NEED_INPUTS and qcstatus = Too many GIF_Parcellation_v3 found
Processor.has_inputs(cobj) running on ADNI_N - 002_S_0413_20070601 - 1 ...
2017-10-24 20:05:13,731 - DEBUG - processors - BrainTivFromGIF_v1: Too many GIF_Parcellation_v3 assessors found.
Outputs: state = NEED_INPUTS and qcstatus = Too many GIF_Parcellation_v3 found

has_inputs SUCCEEDED

----------------------------------------------------------------------
 + Testing method test_dax_build 

dax_build on ADNI_N - 002_S_0413_20070601 ...
2017-10-24 20:05:13,731 - INFO - launcher - -------------- Build --------------

2017-10-24 20:05:13,731 - INFO - launcher - launcher_type = xnatq-combined
2017-10-24 20:05:13,731 - INFO - launcher - mod delta = None
2017-10-24 20:05:13,732 - INFO - launcher - Connecting to XNAT at https://dpuk-ucl.cs.ucl.ac.uk
2017-10-24 20:05:14,051 - INFO - launcher - ===== PROJECT: ADNI_N =====
2017-10-24 20:05:14,051 - INFO - launcher -   * Modules Prerun
2017-10-24 20:05:14,051 - DEBUG - launcher - 

2017-10-24 20:05:17,747 - INFO - launcher -   + Session 002_S_0413_20070601: building...
2017-10-24 20:05:17,983 - DEBUG - launcher - == Build modules (count:0) ==
2017-10-24 20:05:18,145 - DEBUG - launcher - == Build scan processors ==
2017-10-24 20:05:18,146 - DEBUG - launcher - +SCAN: 2
2017-10-24 20:05:18,147 - DEBUG - launcher - +SCAN: 1
2017-10-24 20:05:18,149 - DEBUG - launcher - == Build session processors ==
2017-10-24 20:05:18,318 - DEBUG - launcher - setting last_updated for: 002_S_0413_20070601 to 2017-10-24 20:06:18
Assessor ADNI_N-x-002_S_0413-x-002_S_0413_20070601-x-2-x-BrainTivFromGIF_v1: 
 - proctype: BrainTivFromGIF_v1
 - procstatus: NEED_TO_RUN
 - qcstatus: Job Pending
 - date: 2017-10-24
Assessor ADNI_N-x-002_S_0413-x-002_S_0413_20070601-x-1-x-BrainTivFromGIF_v1: 
 - proctype: BrainTivFromGIF_v1
 - procstatus: NEED_TO_RUN
 - qcstatus: Job Pending
 - date: 2017-10-24

build SUCCEEDED

----------------------------------------------------------------------
 + Testing method test_dax_launch 

Launching tasks for ADNI_N - 002_S_0413_20070601 with writeonly ...
2017-10-24 20:05:20,445 - INFO - launcher - ===== PROJECT:ADNI_N =====
2017-10-24 20:05:23,105 - WARNING - launcher - no matching processor found: ADNI_N-x-002_S_0413-x-002_S_0413_20070601-x-1-x-GIF_Parcellation_v3
2017-10-24 20:05:23,280 - WARNING - launcher - no matching processor found: ADNI_N-x-002_S_0413-x-002_S_0413_20070601-x-2-x-GIF_Parcellation_v3
2017-10-24 20:05:23,605 - INFO - task -    filepath: /home/dax/.dax_test/ADNI_N-x-002_S_0413-x-002_S_0413_20070601-x-1-x-BrainTivFromGIF_v1.pbs
2017-10-24 20:05:23,919 - INFO - task -    filepath: /home/dax/.dax_test/ADNI_N-x-002_S_0413-x-002_S_0413_20070601-x-2-x-BrainTivFromGIF_v1.pbs
PBS Example:

#!/bin/bash
#$ -S /bin/sh
#$ -M None
#$ -m bae
#$ -l h_rt=01:00:00
#$ -l tmem=4096M
#$ -l h_vmem=4096M
#$ -l tscratch=20G
#$ -o /cluster/project0/DAX/RESULTS_XNAT_SPIDER/OUTLOG/ADNI_N-x-002_S_0413-x-002_S_0413_20070601-x-1-x-BrainTivFromGIF_v1.output
#$ -pe smp 1
#$ -j y
#$ -cwd
#$ -V
uname -a # outputs node info (name, date&time, type, OS, etc)
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=1 #set the variable to use only good amount of ppn
export OMP_NUM_THREADS=1
export XNAT_HOST=https://dpuk-ucl.cs.ucl.ac.uk
SCREEN=$$$$
echo 'Screen display number for xvfb-run' $SCREEN
echo 'Setting HOST for XNAT to: ' $XNAT_HOST
xvfb-run --wait=5 -a -e /tmp/xvfb_$SCREEN.err -f /tmp/xvfb_$SCREEN.auth --server-num=$SCREEN --server-args="-screen 0 1920x1200x24 -ac +extension GLX" python /home/dax/Xnat-management/ucl_processing/pipelines/BrainTivFromGIF/v1.0.0/Spider_BrainTivFromGIF_v1_0_0.py --exe perform_brain_tiv_from_gif.py --seg xnat:/project/ADNI_N/subject/002_S_0413/experiment/002_S_0413_20070601/assessor/ADNI_N-x-002_S_0413-x-002_S_0413_20070601-x-2-x-GIF_Parcellation_v3/resource/SEG,xnat:/project/ADNI_N/subject/002_S_0413/experiment/002_S_0413_20070601/assessor/ADNI_N-x-002_S_0413-x-002_S_0413_20070601-x-1-x-GIF_Parcellation_v3/resource/SEG --working_dir /scratch0/dax/ --omp 1 --env_source /share/apps/cmic/NiftyPipe/v2.0/setup_v2.0.sh -a ADNI_N-x-002_S_0413-x-002_S_0413_20070601-x-1-x-BrainTivFromGIF_v1 -d /cluster/project0/DAX/jobsdir/ADNI_N-x-002_S_0413-x-002_S_0413_20070601-x-1-x-BrainTivFromGIF_v1

launch SUCCEEDED

----------------------------------------------------------------------
ran 4 test(s) in 11.409s

OK

Note how the two GIF outputs (seg) both end up in the call of this single pipeline, instead of one per assessor.

Thanks for your help! Greatly appreciated!

bariskanber commented 7 years ago

Hi Marc, my understanding is that we can only do this by having some custom logic in processor.py; it's not possible via yaml because it cannot know which GIF to take.

mmodat commented 7 years ago

Thanks @bariskanber. Let's see what can and can't be done. My understanding is that we are likely to move towards more yaml, so it might be a feature we want to expand, as it will be required for most pipelines in our studies.

mmodat commented 7 years ago

Up! Sorry to be pushy. Just wondering whether there is already a solution in place that we are missing. E.g. if it is not feasible through the yaml processor, would you have a non-yaml example we could use for "inspiration"?

BennettLandman commented 6 years ago

So, I think the gist is that this is a new feature that is "not hard," but it would take a bit of engineering to handle all the edge cases. We are happy to work with your team to get it integrated as things progress.

mmodat commented 6 years ago

Sounds good. Maybe we can organise a TC in the coming days?

Apart from this one, we might want to integrate a few other things, and I'd be keen to discuss them so that we all agree on how we should do it. Off the top of my head, I have:

BennettLandman commented 6 years ago

Next week is Thanksgiving. 11/28 and 11/30 are rather open for a video chat. Let's e-mail to pick a time.

mmodat commented 6 years ago

Hi all,

Reviving this thread. @atbenmurray joined our team recently and will be looking at this issue. We have been discussing how the auto spider could evolve, and we will come back to you with a concrete plan to move things forward.

atbenmurray commented 6 years ago

Hi All, My apologies for the two long posts that are about to happen; this is my first pass proposal for how to handle the issues raised here by Marc, and I'd be grateful for feedback / corrections. Ben (M)

atbenmurray commented 6 years ago

TL;DR

I propose the following changes, for discussion:

  1. ScanSpider / SessionSpider / proposed AssessorSpider are merged into Spider
    1. ScanSpider functionality is provided by 'select' keyword/namespace that describes how multiple scans / assessors are handled
    2. one / some(n) / all / foreach
      1. 'foreach' provides scan-level functionality replacement
      2. can mix and match for different inputs
      3. outstanding question as to foreach and multiple inputs - don't want a combinatorial explosion of scan-level assessors

Long Version

This is a proposal that addresses the multiple-assessors-of-multiple-assessors use-case detailed by Marc, in terms of changes to spiders.

Merging ScanSpider and SessionSpider

Starting with an example, minimal xnat database:


- project_a
  - subject_x
    - session_1
      - scan_1
      - scan_2
    - session_2
      - scan_3

A scan-level spider generates three assessors: one each for scan_1, scan_2 and scan_3.

A session-level spider generates two assessors: one for session_1 and one for session_2.

For the scan-level spider, session_1's scan_1 and scan_2 are treated as separate, unrelated scans; it generates an assessor for each scan in that session. For the session-level spider, session_1's scans are treated as a collection of scans. We can mimic the scan-level behaviour in a session-level spider by providing a directive to the spider that tells it how we want multiple scans / assessors of the same type handled. I'm calling this 'select' for now but I'm sure that there is a better name.

Consider the following yaml fragment for a session spider, with the proposed keyword


xnat:
  scans:
    - scan1:
      types: T1W
      select: foreach
      resources:
        - resource: NIFTI
          varname: t1

For session_1, this results in an assessor for scan_1 and an assessor for scan_2. For session_2, this results in an assessor for scan_3.

Enumerating the namespace for 'select' keyword:


one #select a single matching entity in this session
some(n) #select n matching entities in this session
all #select all matching entities in this session (default behaviour)
foreach #create one assessor instance for each matching entity

This preserves the existing behaviour of session-level spiders, whilst allowing scan-level semantics and mixed semantics.
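A sketch of how the four 'select' values could resolve against the matching scans/assessors in a session (my reading of the proposal, not existing DAX code; it returns one input group per assessor instance to create):

```python
def resolve_select(matching, select):
    # Resolve the proposed 'select' directive against the entities in a
    # session that match a given type. Returns a list of input groups,
    # one group per assessor instance to create.
    if select == 'one':
        if len(matching) != 1:
            raise ValueError('expected exactly one match, got %d' % len(matching))
        return [list(matching)]
    if select.startswith('some(') and select.endswith(')'):
        n = int(select[len('some('):-1])
        if len(matching) < n:
            raise ValueError('expected at least %d matches' % n)
        return [list(matching[:n])]
    if select == 'all':                    # default behaviour
        return [list(matching)]
    if select == 'foreach':                # one assessor instance per match
        return [[m] for m in matching]
    raise ValueError('unknown select: %s' % select)

resolve_select(['scan_1', 'scan_2'], 'foreach')  # → [['scan_1'], ['scan_2']]
resolve_select(['scan_1', 'scan_2'], 'all')      # → [['scan_1', 'scan_2']]
```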

Extension to multiple inputs

I haven't done enough work on this yet / I don't have enough understanding of this yet, but let's consider the following example:


- project_a
  - subject_x
    - session_1
      - t1w_1
      - t1w_2
      - flair_1
      - flair_2

and the following spider snippet:


xnat:
  scans:
    - scan1:
      types: T1W
      select: foreach
      resources:
        - resource: NIFTI
          varname: t1
    - scan2:
      types: FLAIR
      select: foreach
      resources:
        - resource: NIFTI
          varname: flair

Semantically, this could feasibly generate one of two results:

  1. the cartesian product of {t1w_1, t1w_2} and {flair_1, flair_2}. This would probably not be desirable
  2. a 'zip-like' operation on the two inputs. This is deliberately vague at present as I don't know the proper answer

Of course, I don't know how realistic this kind of scenario is, as I am still pretty unfamiliar with the underlying research use-cases.
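The two candidate semantics can be illustrated directly with the scans from the example above (a sketch of the behaviour, not proposed syntax):

```python
from itertools import product

t1s = ['t1w_1', 't1w_2']
flairs = ['flair_1', 'flair_2']

# Option 1: cartesian product -- four assessors, probably undesirable
cartesian = list(product(t1s, flairs))
# → [('t1w_1', 'flair_1'), ('t1w_1', 'flair_2'),
#    ('t1w_2', 'flair_1'), ('t1w_2', 'flair_2')]

# Option 2: 'zip-like' pairing by position -- two assessors
paired = list(zip(t1s, flairs))
# → [('t1w_1', 'flair_1'), ('t1w_2', 'flair_2')]
```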

Please comment.

atbenmurray commented 6 years ago

TL;DR

  1. id modifications to cope with assessor of assessor spiders
    1. One of:
      1. Replace 'source' assessor name component with 'target' assessor name component
        • i.e. proj-x-sub-x-sess-x-1-x-assr_a ->
          proj-x-sub-x-sess-x-1-x-assr_b
      2. Allow ids to grow to unbounded length to provide part of an audit trail
        • i.e.
          proj-x-...-x-assr_a-x-assr_b-x-assr_c
      3. Replace id with guid and add 'audit graph' showing how artefacts have been generated

Long Version

Handling ids for assessors of assessors

With respect to ids for assessors of assessors, any of the following seem feasible, from less drastic to more drastic:

  1. replace the assessor component at the end with the new assessor component name:
    • from: proj-x-sub-x-sess-x-1-x-proc1
    • to: proj-x-sub-x-sess-x-1-x-proc2
  2. append onto the end of the id, with the assumption that assessors of assessors don't get too deep, chain-wise:
    • from: proj-x-sub-x-sess-x-1-x-proc1
    • to: proj-x-sub-x-sess-x-1-x-proc1-x-proc2
  3. replace the id with a guid, and store the full tree of scans / assessors that contributed to an artefact in the session:
    • ['1 -> proc1', 'proc1 -> proc2']

I raise option three because the id currently seems to be serving both as an audit trail and as an id. If the id doesn't need to be human readable, we can instead generate an audit trail that captures the graph of scans / assessors that generated an assessor output. This could be very useful if we need to select between a set of scans, one of which was used to generate an assessor, but which is subsequently needed as an input to other, dependent, assessors.
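To make option three concrete, here is a minimal sketch of an audit trail kept as an edge list, with a query for the full set of upstream inputs of an artefact (names and representation are mine, not a proposal for the storage format):

```python
# Edges record 'input -> output' relationships between artefacts,
# matching the ['1 -> proc1', 'proc1 -> proc2'] example above.
audit = [('1', 'proc1'), ('proc1', 'proc2')]

def inputs_of(artefact, edges):
    # All direct and transitive inputs of an artefact.
    direct = [src for src, dst in edges if dst == artefact]
    result = set(direct)
    for src in direct:
        result |= inputs_of(src, edges)
    return result

inputs_of('proc2', audit)  # → {'1', 'proc1'}
```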

Here is an example:


t1w_1 ------\
             proc_1 ---> proc_2
flair_1 ----/---------/

If I understand correctly, this full graph of dependencies isn't available at present: proc_1 relies on multiple inputs, but only one of those inputs gets to contribute to the id, and the id is the only source of information about how a particular artefact has been generated. Of course, I may be wrong about that.

Feedback please.

justinblaber commented 6 years ago

Here are my opinions:

For post 1: I think we should keep things simple, but also allow extensions for more complex cases. I like that ScanSpiders generate one assessor per scan (given a scan type) and that SessionSpiders create one assessor per session. This format has worked pretty well for us and covers the majority of use cases. I think if we want more complex logic, it should be done on a case-by-case basis using something like CustomSpiders, which would allow an engineer to encode any sort of logic you wanted (i.e. forming an assessor for each element of a cartesian product of two scan types, forming an assessor for a "zip-like" format, etc...). I don't think it's a good idea to hardcode anything more complex than the scan and session spider format we already have. I think dax should do something like:

switch spider_type:
    case 'scan':
        for scan containing scan type:
            create scan_assessor
    case 'session':
        create session_assessor
    case 'custom':
        custom_assessors  = custom_spider.get_assessors(info)
        for custom_assessor in custom_assessors:
             create custom_assessor

I think the "select" option should also be simple. For session spiders I think it should be along the lines of all / first / unique / custom, with a similar dispatch to the above. "all" would grab all scans matching the scan type. "first" would just grab the first scan matching the scan type. "unique" would grab a single scan; if fewer or more than one of the scan type exists, it would throw an exception. Then "custom" could be some custom logic.

For post 2:

1) I believe this is currently how things are basically done.

2) Seems really cool and interesting. However, I think it will probably break some existing internal dax logic (probably some assumptions are made where assessor_label.split('-x-') is performed and then the proctype is assumed to be located at index 4 or something).

3) I think a design decision needs to be made as to whether the assessor label contains a full audit trail or whether the log file is sufficient (or some other means, like possibly using a field on the assessor type to display a dependency DAG).

atbenmurray commented 6 years ago

Hi Justin, and thanks for the response. This is a placeholder post; Marc and I have discussed a couple of other motivating examples, and I now have a better understanding of the Auto Processor/Spider approach vs. the Scan/Session Processor/Spider approach. I'm putting together the motivating examples to have a further discussion. What I'm talking about really applies to the Auto Processor/Spider code-path, but I should be able to provide a proper response soon.

justinblaber commented 6 years ago

Sounds great!

atbenmurray commented 6 years ago

I should say that the goal with all of this, as I understand it, is to make the yaml configuration of processors as broadly applicable and simple as possible. The thinking presented here is based on the idea that Marc's example is one part of a larger set of use-cases for yaml pipelines.

Handling Multiple Assessor of Multiple Assessor use-case

Back to the problem as initially described in this issue.

proc1 is a scan-based assessor that takes scans 1 and 2 and generates an assessor for each of them.

proc2 is a session-based assessor that expects only one proc1 assessor artefact, and so only generates one proc2 data artefact. The desired behaviour is that a proc2 assessor is generated for each proc1 assessor.

Is there any real difference between an assessor that operates on a per-scan basis and an assessor that operates on a per-assessor basis? It seems that they are both essentially 'artefact'-level assessors and should be able to handle scan-type inputs and assessor-type inputs interchangeably. I'll use the term 'artefact-level assessor' for the rest of the post to mean that.

If the prior assumption is correct, then proc2 in Marc's motivating example is an artefact-level assessor.

Let's make the example a little more complex:

Now we have the problem that proc2 cannot be an artefact-level assessor because it needs multiple inputs, and that implies session-level association (that 1 and proc1 are part of the same session). There are some ways around this, however.

For Marc's example, the scan used to generate proc1 is detailed in proc1's id so it should be possible for an artefact-level assessor to get the scan from which proc1 was generated without having to be a session-level assessor.

Let's make the example a little more complex again:

The problems are as follows:

A gold-standard solution could be to have state on each assessor that explicitly details what entities it was generated from. This would allow a downstream assessor that needs the scans/assessors that a given assessor was generated with to select those unambiguously regardless of whether a session has multiple artefacts of that type, or even whether it is a session-level or artefact-level assessor itself.
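As a sketch of what such state could look like (the 'generated_from' field and the selection helper are hypothetical, not existing DAX structures):

```python
# Each assessor record carries the entities it was generated from, so a
# downstream processor can pick its input unambiguously even when the
# session holds several artefacts of the same proctype.
assessors = [
    {'label': 'sess-x-1-x-proc1', 'proctype': 'proc1', 'generated_from': ['1']},
    {'label': 'sess-x-2-x-proc1', 'proctype': 'proc1', 'generated_from': ['2']},
]

def assessor_for_scan(assessors, proctype, scan):
    # Select the unique assessor of the given proctype generated from 'scan'.
    matches = [a for a in assessors
               if a['proctype'] == proctype and scan in a['generated_from']]
    if len(matches) != 1:
        raise ValueError('ambiguous or missing %s input for scan %s'
                         % (proctype, scan))
    return matches[0]

assessor_for_scan(assessors, 'proc1', '2')['label']  # → 'sess-x-2-x-proc1'
```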

It should be noted that Marc's example problem only seems to require a generalisation of scan-level semantics to artefact-level. It might be worth having a more general-case solution if the more complex examples here are realistic.

Apologies once more for the long post. Does it make sense?

mmodat commented 6 years ago

@justinblaber, @BennettLandman We (aka @atbenmurray) are starting to have a good idea of the modifications that would be required. Would you have time for a catch-up? Ideally, we would like your views in case you foresee any downsides.

atbenmurray commented 6 years ago

@justinblaber @BennettLandman I'd like to organise a call to go over what I have been discussing with @mmodat with you guys. I think the posts I'm making are too long without further context and a discussion would make that much simpler. Can we organise a day for next week? Any day other than Tuesday is fine for me. Thanks, Ben

bud42 commented 6 years ago

Implemented as of release v0.9.0