BIDMCDigitalPsychiatry / LAMP-platform

The LAMP Platform (issues and documentation).
https://docs.lamp.digital/

Automations framework beta #382

Open avaidyam opened 3 years ago

avaidyam commented 3 years ago

LAMP-worker currently uses the filesystem (code here), but there should be zero filesystem usage across any LAMP backend code, as there is no guarantee that these files/folders will remain available across container restarts or resets.

Linoy339 commented 3 years ago

@avaidyam. Currently, we write to the filesystem in LAMP-worker and use that path for execution inside the container.

Just to clarify: we should take the zip file from the Tag via the Tag API (as it currently exists) and run the base64-encoded file directly inside the Docker container instance. Right?

avaidyam commented 3 years ago

Yes. You should not be using the filesystem at all. Stream the files to the container if needed instead.

Linoy339 commented 3 years ago

Thanks @avaidyam. We will update that.

Linoy339 commented 3 years ago

@avaidyam. As we said, the base64-encoded string (script) uploaded against a token contains a zip file, and this zip file in turn contains script.js (or .py) and requirements.txt. With the present implementation (writing to the filesystem), we get each file and its content. Now we are trying to do this without writing to the filesystem, which makes it difficult to read the individual files inside.

Can we tweak the script upload so that the base64-encoded string itself contains the requirements and the script file? For example, the uploaded string would be: type:js,requirements:node-fetch;jsencrypt,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAAANS//8IBKE0DHxgljNBA000DSSSSSAO==

Here, type is the file type (js or py), the semicolon-separated strings inside requirements are the dependencies of the script file, and the part after requirements is the script file itself (a base64-encoded string). So whenever LAMP-worker gets this string for a token, it can easily split the whole string into the corresponding requirements and script file. We would like to go with this. Can you suggest?

avaidyam commented 3 years ago

The contents of the ZIP file should be provided directly to the Docker container - we do not want to "break out" the requirements.txt contents like this. Why does LAMP-worker need to know what the contents of the requirements.txt file are? It should not need to know this info.

Linoy339 commented 3 years ago

Okay. But in that case, reading the content inside the base64-encoded string becomes difficult. Do you have an idea of how to read zip file data without writing it to the filesystem? To clarify with an example: assume script.zip is our file and it contains requirements.txt and scriptfile.js. script.zip gets base64-encoded and becomes iDDgAAAAUAAAAAANS//8IBKE0DHxgljNBA000DSSSSSAO==. From this, it is difficult to read the individual files (requirements.txt and scriptfile.js) and their contents. If the file is written to disk, we can easily do this using the library we currently use (adm-zip). If it is not written, reading each file and its content becomes difficult: reading via a Buffer gives the whole content, and separating requirements.txt from scriptfile.js becomes complicated. If you have any thoughts, please let us know.

avaidyam commented 3 years ago

We do not want to, nor do we need to, read the contents of the ZIP file. Let me recap this format for posterity, with further explanation.


The Tag's value should be in this format: data:application/vnd-lamp-automation;language=python3;trigger=/participant/U1234/sensor_event/lamp.gps;base64,<some base64 text representing a zip file here>

  1. data:: This is the marker for a Data URL, similar to http: or https:. In http:, for example, a URL looks like http://www.example.com/index.html; the URL scheme is http: and the URL location is //www.example.com/index.html. Here, the // indicates "this is an absolute path, not a relative path."
  2. application/vnd-lamp-automation: This is the MIME type representing the data in this URL. We've made up lamp-automation for ourselves, and vnd- is short for "third-party vendor provided," meaning this is not an official MIME type as registered with the IANA.
  3. language=: Data URLs support arbitrary companion parameters, and we declare language and trigger as parameters we need to know for the automation code.
  4. python3: The format of the language parameter is just a set of possible strings; here we support python3, javascript (or js), and rscript. These are arbitrary strings that we can change/control in the LAMP-worker code to map to the proper Dockerfile/base container required to run the automation code.
  5. /participant/U1234/sensor_event/lamp.gps: The format of the trigger parameter is identical to the NATS topic name, except using / instead of . to match actual API endpoints (this is what it should also be in NATS, if possible).
  6. base64: This indicates the contents of this Data URL are base64-encoded.
  7. <some data>: This is the actual base64-encoded data; we require this to be a ZIP file. In the case of python3 (the contents can differ depending on the language), it contains:
    1. an __init__.py file with a main function. [REQUIRED]
    2. a requirements.txt file with pip statements for requirements to be installed. [OPTIONAL]
    3. any other supporting files/libraries as required by the code.
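
For illustration only, a minimal parser for this Tag value format might look like the sketch below. The `Automation` shape and the function names are hypothetical, not the actual LAMP-worker code:

```ts
// Hypothetical shape for a parsed automation; field names are illustrative.
interface Automation {
  language: string    // e.g. "python3", "javascript", "rscript"
  triggers: string[]  // e.g. ["/participant/U1234/sensor_event/lamp.gps"]
  zip: Buffer         // decoded bytes of the base64 ZIP payload
}

const PREFIX = "data:application/vnd-lamp-automation;"

// Parse a Tag value of the form:
//   data:application/vnd-lamp-automation;language=python3;trigger=/a/b;base64,<data>
function parseAutomation(value: string): Automation | null {
  if (!value.startsWith(PREFIX)) return null
  const comma = value.indexOf(",")
  if (comma < 0) return null
  const params = value.slice(PREFIX.length, comma).split(";")
  if (params.pop() !== "base64") return null // the final parameter must be "base64"
  const triggers: string[] = []
  let language = ""
  for (const param of params) {
    const eq = param.indexOf("=")
    const key = param.slice(0, eq), val = param.slice(eq + 1)
    if (key === "language") language = val
    else if (key === "trigger") triggers.push(val) // "trigger" may repeat
  }
  if (!language || triggers.length === 0) return null
  return { language, triggers, zip: Buffer.from(value.slice(comma + 1), "base64") }
}
```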

The LAMP-worker code should:

  1. Upon initialization, locate all Tags declared this way (specified above, value begins with data:application/vnd-lamp-automation).
  2. Register the topic handlers with NATS that match the triggers specified by these automations.
  3. Listen for NATS callbacks on these topics. If a callback is received:
    1. Create a new Docker container with the base image specified (i.e. python3).
    2. Copy the base64 ZIP file into this container and unpack it in a standard location.
    3. Invoke the script with the payload received from NATS.
    4. Wait for script completion OR automatic timeout (say, 5 minutes maximum).
      1. Once termination is required, tear down and delete the Docker container.
      2. Record execution stats (trigger, running length, etc.) to a log.
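
As a rough sketch of this listen-and-run loop, assuming the `nats` and `dockerode` npm packages and the `Automation` shape from the parsing sketch above. The image names, the entrypoint convention, and the `LAMP_PAYLOAD` variable are placeholders; the 5-minute timeout is from step 4:

```ts
import { connect, StringCodec } from "nats"
import Docker from "dockerode"

type Automation = { language: string; triggers: string[]; zip: Buffer }

const docker = new Docker({ socketPath: "/var/run/docker.sock" })
const sc = StringCodec()

// Placeholder mapping from declared language to a base container image.
const IMAGES: Record<string, string> = { python3: "python:3", javascript: "node:alpine" }

// Steps 2-3: subscribe to each trigger and run the automation on every callback.
async function listen(automation: Automation) {
  const nc = await connect({ servers: process.env.NATS_SERVER })
  for (const trigger of automation.triggers) {
    const sub = nc.subscribe(trigger)
    ;(async () => {
      for await (const msg of sub) await run(automation, sc.decode(msg.data))
    })()
  }
}

// Steps 3.1-3.4: create a container, run the script, tear down, log stats.
async function run(automation: Automation, payload: string) {
  const started = Date.now()
  const container = await docker.createContainer({
    Image: IMAGES[automation.language],
    // Assumes an entrypoint in the image that unpacks the ZIP (streamed in,
    // e.g. via putArchive, which takes a tar wrapping of the ZIP) and then
    // invokes the script with the NATS payload.
    Cmd: ["/entrypoint.sh"],
    Env: [`LAMP_PAYLOAD=${payload}`], // hypothetical variable name
  })
  await container.start()
  const timer = setTimeout(() => container.stop().catch(() => {}), 5 * 60 * 1000)
  await container.wait() // completes on script exit or the timeout-stop above
  clearTimeout(timer)
  await container.remove({ force: true })
  console.log(`automation ran for ${Date.now() - started}ms`) // execution stats
}
```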

I realize that not all of these pieces are built yet - that is fine. Following this above specification (and NO deviation from it), @Linoy339, please go ahead with any code changes required.

Linoy339 commented 3 years ago

Thanks @avaidyam.
We believe all pieces except removing the filesystem usage are done in LAMP-worker, as we discussed in https://github.com/BIDMCDigitalPsychiatry/LAMP-platform/issues/150.

The Tag's value should be in this format: data:application/vnd-lamp-automation;language=python3;trigger=/participant/U1234/sensor_event/lamp.gps;base64,<some base64 text representing a zip file here>

This can be done.

The changes would be:

1) Change the tag value as mentioned above; the key in the tag would be like study.cadn2efwhrxkppq9tn9t.activity.pqhm5zd2rbgpt5bjx5by (it's already there).

2) Instead of the filesystem, we should copy the whole string (tag value) into the new Docker container, and unpack and process it there. Here I have some doubts: we need to run the container based on the language (py or js) used in the tag value, right? If so, for Python it would be the python3 image; otherwise, node:alpine. Is that correct? Also, the image used for JS should contain npm or yarn, because installing these during script execution is time-consuming and might result in memory issues. Can you confirm these things? Also a suggestion: instead of pulling a base image, can we have our own images with all of these (JS, Python, npm, pip, etc.) preinstalled?

3) Both script completion and automatic timeout have already been done in LAMP-worker (https://github.com/BIDMCDigitalPsychiatry/LAMP-worker/blob/master/src/helpers/ScriptRunner.ts#L252). We can implement accordingly.

4) Also, in our implementation, we find the tag for a token when it gets published to NATS. For example, if study.cadn2efwhrxkppq9tn9t.activity.pqhm5zd2rbgpt5bjx5by is the token we get in LAMP-worker, we call the Tag API with that attachment key, so that we get the script file (base64-encoded) in the tag value as mentioned above (data:application/lamp-automation;language=python3;trigger=/participant/U1234/sensor_event/lamp.gps;base64,...). Please confirm.

5) In our implementation, only packages (py or js) will be listed in requirements.txt, not any statements. If py, we use pip; if js, we use npm. That would be fine, right?

avaidyam commented 3 years ago

We need to run the container based on the language (py or js) used in the tag value, right?

Correct.

If so, for Python it would be the python3 image; otherwise, node:alpine. Is that correct?

Correct.

Also, the image used for JS should contain npm or yarn, because installing these during script execution is time-consuming and might result in memory issues. Can you confirm these things? Also a suggestion: instead of pulling a base image, can we have our own images with all of these (JS, Python, npm, pip, etc.) preinstalled?

Correct, and this makes sense.

Also, in our implementation, we find the tag for a token when it gets published to NATS.

This implementation is the reverse of what we should do. We need to locate all the Automations first, and set up ONLY the listeners required to handle them. To minimize code change at this point, upon worker start, build a dictionary mapping of these listener tokens mapped to the automation itself.

In our implementation, only packages (py or js) will be listed in requirements.txt, not any statements. If py, we use pip; if js, we use npm. That would be fine, right?

This is not correct. requirements.txt is a Python (specifically pip) format listing the code's dependencies; you should use pip install -r requirements.txt. JS code has package.json instead and should use npm install there. We should either have one Dockerfile containing support for all supported languages, or an individual Dockerfile for each supported language.
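
For example, the per-language dependency handling could be captured in a small table like this (a sketch only; the exact images and commands are whatever the Dockerfiles end up using):

```ts
// Illustrative mapping from declared language to its base image and the
// command that installs that language's dependency manifest.
const DEPENDENCY_SETUP: Record<string, { image: string; install: string }> = {
  python3: { image: "python:3", install: "pip install -r requirements.txt" },
  js: { image: "node:alpine", install: "npm install" }, // reads package.json
}
```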

If you need examples, there are existing systems like Fx or OpenFaaS that do this already.

Linoy339 commented 3 years ago

@avaidyam. Thanks for the explanation and example. Given the discussion we had in https://github.com/BIDMCDigitalPsychiatry/LAMP-platform/issues/150#issuecomment-845844999, can you please elaborate on what you said here: This implementation is the reverse of what we should do. We need to locate all the Automations first, and set up ONLY the listeners required to handle them. To minimize code change at this point, upon worker start, build a dictionary mapping of these listener tokens mapped to the automation itself.

When LAMP-worker starts and we need to locate all the Automation tags, which Tag API can be used? Also, please look at this thread (https://github.com/BIDMCDigitalPsychiatry/LAMP-platform/issues/150#issuecomment-845958052), where we already discussed an issue with saving the key as 'lamp.automation'.

If we store the key as 'lamp.automation', fetching data from the tag database for a particular token becomes complicated, as the data-url (data:application/lamp-automation;language=python3;trigger=/participant/U1234/sensor_event/lamp.gps;base64,...) is mixed into the value field. In this case, we would need to query with _parent:researcher_id, key:"lamp.automation". There could be multiple records with binary data, and fetching them all would be complicated (it may result in a data transfer error).

avaidyam commented 3 years ago

Right, I think a good compromise right now is:

  1. Get the list of all Researchers
  2. For each, check if Tag "lamp.automation" exists.
  3. If a new Tag is uploaded the Worker should be getting a NATS message about it - check if it is a "lamp.automation" key.
  4. If a "lamp.automation" tag is created, updated, or deleted, update our internal mapping.

This is a simple solution that will work for now. In the future we will want a more complex system instead but that will be too much work to implement for this beta release of Automations.

Linoy339 commented 3 years ago

Okay. So the internal mapping would be something like this: {researcherid_1: ['study.123.activity.*', 'study.123.activity.124'], researcherid_2: ['study.345.participant.*', 'study.123.activity.567'], ...}. When a researcher uploads a script for a particular token (the key would be lamp.automation, as said above), LAMP-worker will learn of this via NATS and add it to this mapping. Is this right? Also, do we need to store the whole script in this internal mapping? That seems like a bad idea.

  1. For each, check if Tag "lamp.automation" exists: can you please tell us what to do if it exists?

avaidyam commented 3 years ago

You want the opposite mapping: the NATS token should map to an array of Researcher IDs. This way you know where the automation code is located for when the NATS message is received. (If the "lamp.automation" tag exists, we add the researcher ID to the mapping we created.)

We don't need to store the base64 data in memory, that is not a good idea as you have pointed out correctly.
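
A minimal sketch of that reverse mapping (trigger token to Researcher IDs), assuming the automation itself is re-fetched from the Tag API at execution time rather than held in memory; the function names are illustrative:

```ts
// Trigger token -> Researcher IDs whose "lamp.automation" Tag declares it.
const triggers: Record<string, string[]> = {}

// Called during the startup scan, and again whenever a "lamp.automation"
// Tag create/update message arrives over NATS.
function register(trigger: string, researcherID: string) {
  ;(triggers[trigger] ??= []).push(researcherID)
}

// On delete, the researcher is removed from every trigger entry instead.
function unregister(researcherID: string) {
  for (const key of Object.keys(triggers))
    triggers[key] = triggers[key].filter((id) => id !== researcherID)
}
```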

Linoy339 commented 3 years ago

Hi @avaidyam. Can you please answer this: suppose a particular researcher R1 has uploaded scripts for 2 tokens, study.123.activity.* and study.456.activity.2, and the scripts uploaded for these two tokens are script1<base64 encoded> and script2<base64 encoded> respectively.

How would this appear in the tag database?

Sample answer expected:

{
_id:60a765103a2802fcfcd1e1e7,
_parent:"R1",
key:"lamp.automation",
type:"me",
_deleted:false,
value:"[
data:application/lamp-automation;language=python3;trigger=/study/123/activity/*;base64;script1,
data:application/lamp-automation;language=js;trigger=/study/456/activity/2;base64;script2]"
}

Thanks

avaidyam commented 3 years ago

How would this appear in the tag database?

This is not actually allowed or possible. A researcher-level user should not be able to create more than one Tag with the same key (here, lamp.automation). The lamp.automation key specifically needs to be a STRING in data-url format (as you have here). We do not need to implement support for multiple scripts with different triggers/languages at this time.


For future reference only, if there is a need to subscribe to multiple topics (i.e. multiple triggers), we can support multiple trigger attributes (i.e. data:application/lamp-automation;language=python3;trigger=/study/123/activity/*;trigger=/study/456/activity/2;base64,...) or some other syntax (i.e. data:application/lamp-automation;language=python3;trigger=/study/123/activity/*&/study/456/activity/2;base64,...). Either will run into the issue of not being able to mix and match different languages (Python/JS, in your example), which is fine as a limitation for now.


Also, please note that according to the data-url syntax, base64; should instead be base64, (with a COMMA, not a semicolon).

Further, please try to use correct formatting syntax on GitHub issues (like ` for inline code or ``` for code blocks). I've edited your comment for readability.

Linoy339 commented 3 years ago

@avaidyam. Sorry for my syntax.

So, you are saying that the Data-URL itself is the key here:

{
_id:60a765103a2802fcfcd1e1e7,
_parent:"R1",
key:"data:application/lamp-automation;language=python3;trigger=/study/123/activity/*;base64",
type:"me",
_deleted:false,
value:"[ script1]"
}

{
_id:80a7652103a2802fcfcd1e1e7,
_parent:"R1",
key:"data:application/lamp-automation;language=js;trigger=/study/456/activity/2;base64",
type:"me",
_deleted:false,
value:"[ script2]"
}

We need to get all the tags of this researcher when LAMP-worker starts (API: https://api.lamp.digital/type/R1/attachment) and, if lamp-automation matches among the retrieved keys, pick it and push it to our map array. Is that correct? Can you confirm?

avaidyam commented 3 years ago

So, you are saying that the Data-URL itself is the key here:

No, I apologize: the key is still lamp.automation, and the value is the data-url string, in the format data:application/vnd-lamp-automation;language=python3;trigger=/study/123/activity/*;base64,<some base64 zip file here containing code and such>. (There should not be [ or ] around the base64 zip file string.)

We need to get all the tags of this researcher when LAMP-worker starts and, if lamp-automation matches among the retrieved keys, pick it and push it to our map array. Is that correct?

Yes, this part is correct. This needs to be done for ALL non-deleted researchers.

Linoy339 commented 3 years ago

Yes, this part is correct. This needs to be done for ALL non-deleted researchers.

That is understood.

So the value itself is assumed to contain multiple scripts (i.e. data:application/lamp-automation;language=python3;trigger=/study/123/activity/*;base64,script1|data:application/lamp-automation;language=js;trigger=/study/456/activity/2;base64,script2, or something like this). Right?

avaidyam commented 3 years ago

No. I have specifically stated in my prior comment:

We do not need to implement support for multiple scripts with different triggers/languages at this time.

Linoy339 commented 3 years ago

Okay. So, what if a particular researcher has 2 scripts for 2 different activities (or studies, or sensors, ...) respectively?

avaidyam commented 3 years ago

Let's go with this:

[...] if there is a need to subscribe to multiple topics (i.e. multiple triggers), we can support multiple trigger attributes (i.e. data:application/lamp-automation;language=python3;trigger=/study/123/activity/*;trigger=/study/456/activity/2;base64,...)

It should also be possible to use trigger=/study/*/activity/* - is this implemented?

Linoy339 commented 3 years ago

It should also be possible to use trigger=/study/*/activity/* - is this implemented?

No. Right now, the key is the token and the value is the script for that particular key. So even if a researcher has multiple scripts, it is easy for them to upload each one under a new token. The structure in the present implementation would be:

{
_id:60a765103a2802fcfcd1e1e7,
_parent:"R1",
key:"study.123.activity.*",
type:"me",
_deleted:false,
value:"script1..." //base64 script
}

{
_id:80a765103a2802fcfcd1e1e7,
_parent:"R1",
key:"study.456.activity.2",
type:"me",
_deleted:false,
value:"script2..." //base64 script
}

[...] if there is a need to subscribe to multiple topics (i.e. multiple triggers), we can support multiple trigger attributes (i.e. data:application/lamp-automation;language=python3;trigger=/study/123/activity/*;trigger=/study/456/activity/2;base64,...)

My question is: each trigger would have a different script, and these should all be included in the value itself (i.e. trigger=/study/123/activity/*;base64,script1;trigger=/study/456/activity/2;base64,script2). Right? If yes, the value would grow very large for a researcher with many scripts.

avaidyam commented 3 years ago

No.

Let's implement proper wildcarding (i.e. trigger=/study/*/activity/*) as a priority.

The structure in the present implementation would be:

Right, so we want to move away from this to the implementation I shared in prior comments.

Right? If yes, the value would grow very large for a researcher with many scripts.

No - we do not want to support this. Please note in my example that it is one script/data-url that has multiple triggers. We should allow one script to have multiple triggers but not multiple scripts.
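
One way to sketch that wildcard matching: treat each `*` in a trigger pattern as exactly one path segment and compile the pattern to a RegExp. Illustrative only, not the shipped code:

```ts
const escapeRegExp = (s: string) => s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")

// Match a published topic against a trigger pattern such as "/study/*/activity/*",
// where each "*" stands for a single path segment.
function matchesTrigger(pattern: string, topic: string): boolean {
  const body = pattern
    .split("/")
    .map((seg) => (seg === "*" ? "[^/]+" : escapeRegExp(seg)))
    .join("/")
  return new RegExp(`^${body}$`).test(topic)
}

// matchesTrigger("/study/*/activity/*", "/study/123/activity/456") // -> true
// matchesTrigger("/study/*/activity/*", "/study/123/sensor/456")   // -> false
```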

Linoy339 commented 3 years ago

Okay. So in the script/data-url, it should be a zip file even for multiple triggers. Right?

If yes, would there be multiple scripts inside the zip file? (i.e. if we unzip it, do we get multiple folders, say study_456_activity_2 and study_123_activity_*?)

If yes, how would we invoke only one particular script (e.g. study/456/activity/2)?

avaidyam commented 3 years ago

So in the script/data-url, it should be a zip file even for multiple triggers. Right?

Yes.

If yes, would there be multiple scripts inside the zip file?

No.

If yes, how would we invoke only one particular script?

It is not possible. There is only one script, which runs when any of its triggers are activated. The script needs to be informed of which trigger activated it.

Linoy339 commented 3 years ago

Okay, now understood. So we can assume there would be something like a switch case inside the script to run the code for a particular trigger (e.g. study/*/activity/*, study/123/activity/456, ...), which we don't need to take care of.

We just need to give the token to the script and the script will do the rest. Is that right?

avaidyam commented 3 years ago

Yes. You can pass the trigger that invoked the script to the script as an environment variable (i.e. LAMP_TRIGGER).
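
Inside an automation's entry script, that could look something like the following; the branch logic and messages are hypothetical:

```ts
// The worker passes the activating trigger in as LAMP_TRIGGER (see above).
const trigger = process.env.LAMP_TRIGGER ?? ""

if (trigger.startsWith("/study/123/activity/")) {
  console.log("handling a study 123 activity event") // hypothetical branch
} else {
  console.log(`no specific handler for ${trigger}`) // fallback branch
}
```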

Linoy339 commented 3 years ago

Okay. Understood @avaidyam

To minimize code change at this point, upon worker start, build a dictionary mapping of these listener tokens mapped to the automation itself.

Can we make this mapping with Redis cache support? I.e., each token (study.*.activity.*, study.123.activity.456, ...) would act as a key and the value would be an array of researchers.

avaidyam commented 3 years ago

You don't need Redis for this - a simple in-memory dictionary will do. Once we have a working implementation we can consider alternate ones.

Linoy339 commented 3 years ago

Okay. So it will be a variable like this:

const triggers = {
  'activity_event':[],
  'participant.*.activity_event':[],
  'activity.*.activity_event':[],
  'participant.*.activity.*.activity_event':[],
  'activity':[],
  'activity.*':[],
  'study.*.activity':[]
}

avaidyam commented 3 years ago

Yes.

Linoy339 commented 3 years ago

Okay. Thanks. I'll detail all the steps discussed above:

  1. We will initialize the trigger dictionary in LAMP-worker as:

    export const triggers = {
    'researcher.*':[],
    'researcher.*.study.*':[],
    'study.*.participant.*':[],
    'study.*.activity.*':[],
    'study.*.sensor.*':[],
    'participant.*.activity.*.activity_event':[],
    'participant.*.sensor.*.sensor_event':[]  
    }
  2. When LAMP-worker starts, locate the automations of all researchers using the API:

    (await LAMP.Type.getAttachment(
      researcher.id,
      `lamp.automation`
    )) as any
  3. If the automation tag exists for a particular researcher, we will process the value (a base64-encoded string) in the following way:

    • Identify the trigger (study.*.activity.*, researcher.*, study.123.activity.456, ...) from the value and push the particular researcher id into our internal dictionary mapping (triggers), with the trigger as the key. If a trigger specified in the value does not exist in the triggers dictionary (for example, study.123.activity.456 won't be in the initialized dictionary), that key will be created and the researcher id pushed to it. A sample dictionary mapping would be: {'study.*.activity.*': ['R1','R2','R3'], 'researcher.*': ['R1','R3','R5'], 'study.123.activity.456': ['R5'], ...}
  4. In parallel, if any researcher uploads a script, the particular trigger is published along with the researcher_id, and LAMP-worker will take this trigger and either push the researcher_id to an existing key or create a new key in the dictionary and push the researcher_id to it.

  5. Now, when a LAMP API call (POST, PUT, or DELETE) for one of the resources researcher, study, participant, activity, sensor, activity_event, or sensor_event is made, the individual token gets published, and we cross-check it against our dictionary mapping keys; if an entry exists for any researcher, we call the API with the researcher id as a parameter to retrieve the script:

    (await LAMP.Type.getAttachment(
      researcher.id,
      `lamp.automation`
    )) as any
  6. The script fetched from the above API is inspected to identify the language, and a new Docker container is created based on the language given in the base64 string in the value. Next, the script alone is captured and passed to the Docker container (JS or Python) with the payload from NATS, and the token published to NATS is also provided to the container as an environment variable (LAMP_TRIGGER) in order to invoke the exact function inside the script.
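
Condensing steps 2-4 into a sketch: the `LAMP.Type.getAttachment` call is from the snippet above, while `parseTriggers`, the `triggers` dictionary from step 1, and the assumption that the attachment resolves to the data-url string are all illustrative:

```ts
declare const LAMP: { Type: { getAttachment(id: string, key: string): Promise<any> } }
declare function parseTriggers(dataUrl: string): string[] // extracts each trigger= param
declare const triggers: Record<string, string[]>          // dictionary from step 1

async function scanAutomations(researchers: { id: string }[]) {
  for (const researcher of researchers) {
    // Assumes getAttachment resolves to the data-url string of the Tag value.
    const tag = (await LAMP.Type.getAttachment(researcher.id, `lamp.automation`)) as any
    if (!tag) continue // no automation uploaded by this researcher
    for (const trigger of parseTriggers(tag)) {
      ;(triggers[trigger] ??= []).push(researcher.id) // step 3: extend the mapping
    }
  }
}
```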

Please confirm

avaidyam commented 3 years ago

Exactly - this flow looks good to me. Once this is implemented it will be easier to grasp too.

Linoy339 commented 3 years ago

Thanks @avaidyam. Let us make this live ASAP.

Linoy339 commented 3 years ago

@avaidyam. Can we make a change to how the script file is uploaded: can we upload the base64-encoded script file itself instead of the zip file? For requirements.txt, we could include those in the data-url. The data-url would then change to: data:application/vnd-lamp-automation;language=js;trigger=study.*.activity.*;requirements=node-fetch;base64,UEsDBBQAAAAIAKB8fVJuS== where UEsDBBQAAAAIAKB8fVJuS== is the plain script file (JS/PY) and requirements (in this case, the node-fetch package) lists the packages required for this script to work, which would be installed using npm (or pip, if Python).

Please suggest

avaidyam commented 3 years ago

@Linoy339 It needs to be a ZIP file as it could be multiple script files or libraries. We cannot add the requirements.txt file to the data-url.

Linoy339 commented 3 years ago

Okay, understood. I have tweaked the data-url in a small way in order to include the main script file: data:application/vnd-lamp-automation;driverscript=SampleScript.js;language=js;trigger=activity.ey0y0hdbrvwcxzvhqwfr.participant.*;base64,UEsDBBQAAAAIAJWWTFN1XajdDAAAAAoAAAAQAA

Here, I have added a parameter called driverscript, which will act as the main script file to be executed. Everything else remains unchanged.

In the above case, SampleScript.js will be the main script and will be available after unzipping the file inside the container.

avaidyam commented 3 years ago

@Linoy339 While I appreciate the usefulness of this parameter, maybe we can set the default value to be "main.py" or "main.js" or "main.r", etc. depending on the script language? That way this parameter does not have to be declared unless someone needs to change it.
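
That default could be as simple as the following sketch; the file names come from the suggestion above, while the lookup itself is illustrative:

```ts
// Per-language default entry file, used when driverscript is not declared.
const DEFAULT_SCRIPT: Record<string, string> = {
  js: "main.js",
  python3: "main.py",
  rscript: "main.r",
}

function entrypoint(language: string, driverscript?: string): string {
  return driverscript ?? DEFAULT_SCRIPT[language]
}
```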

Linoy339 commented 3 years ago

@avaidyam. That is also fine. Thanks.

Linoy339 commented 3 years ago

@avaidyam. Please see https://github.com/BIDMCDigitalPsychiatry/LAMP-worker. We have tested it in our local environment and found it to be working. We will test it with some scripts in staging and let you know.

Linoy339 commented 3 years ago

@avaidyam. Also, we need to mount the volume (/var/run/docker.sock) in the staging stack to make this work. Can we do that?

avaidyam commented 3 years ago

@lukeoftheshire can add the docker socket mount to staging and production preemptively. We'll ping you back here when that's done and you can test on staging then. Could you share the testing files and procedure here in this issue as well?

Linoy339 commented 3 years ago

Sure @avaidyam. Please find attached the zip files we tested: JS.zip PY.zip

Please note the following:

1) For JS scripts, requirements.txt should be space-separated, as you can see in the attachment JS.zip.

2) For Py scripts, requirements.txt should be line-separated, as you can see in the attachment PY.zip.

3) Multiple triggers can be uploaded using the & symbol, e.g. activity.ey0y0hdbrvwcxzvhqwfr.participant.*&activity.*.participant.*

4) Sample body of the automation tag to be uploaded by researchers: "data:application/vnd-lamp-automation;language=js;trigger=activity.ey0y0hdbrvwcxzvhqwfr.participant.*&activity.*.participant.*;base64,UEsDBBQAAAAIADNZTVPOW/I37QIAAEUIAAAHAAAAbWFpbi5qc41VUW+bMBB+R+I/XP1SqChoeczUaVPXrZu6tmrzMk3TysAEr2Az26SNKv77bIMbTNI1zgPx3Xd3330+jO+RumFcQoFlVkLBWQ2IshwfGwPyvSSBjFFhESfA8d+WcByMYeEYJ0agw0Icjp2llM04h96jcP8q+tcDU75cKVzDWYaFiPX2x+yn9ba8Uk6TXsyTJG1IXKV1E+dkSWRaJQLTPKFMkoJkqSQqpm81OfrvgqPkmSurcFyxZYDOzy4uruDbd/h49uEGbk9vvlwvUPT19uoyblIucKC5hQN535N8/eR7oFZiaIBDo/do++XYHGyle2tz7AA/oXu8RvNZp1EdZKnWNMCcMx7CUFw7fM82DVtMjAeOwD7fq+JpDUZ3Ya1KjaKlmcbvoGGgtqDvmYfqfjC4IupoQpeuGLFa+tR1NAwDwhopTmwKuW7wHJCiJklGmpRKgaLeNbbNLX7i+EXyed9Q7FojC4ZNnCSy0sXME4Sapgr3L0xNaK6ny5bWq1ZDmS41fvj3akSqZFwRuR6RsiZw1yam871+1/VW854Eavgj23GNZclyrRET0pb7zfL1HMxICcmV7KRYB1rYcACUOM0x17oBOmVUYiqPF0prpBKlTVMNB5T8EYwi6ExUF/axsSwxDQKO1dGfvIMn55iVVZ9ppL22DVJAcKAd7D60tCfSl5w9AMUPcKZnOLg7XyyuwczzAQiZylbcPafrAFcCjxM5DB4fUaQiw41bTXIh4gdOJP5EKnVniTYzd4p8lIcRoFvVPTiTjSJd3HTnpOl70dm3HPuw0Itj2XI6MXeTfZIUiihoyupoYOBbtFW1dpBduK3mkOv5tMzlEAQ4nLTj0J3NZmO+hoSrmTmMN4NkeKqPCdhWJ0n2LQYTaQZj5+xelcXgNqI4Vc1kAaYZa9W8c5yDvZI+n36DphWlczcN95JN98Il61RYrVamLzbqbIeIMyui3uwU8gUZdxbblNop4VjAfeQbTZRpW31FfO8fUEsDBBQAAAAIAFarTFMDl83PugAAACgBAAAMAAAAcGFja2FnZS5qc29uTY87D8IwDIT3/gorQyeoQGxdEQNzRx5SlVzVQJtEScpDiP8OTkAw+rvz+fwoiIRpR4iahHNOzBhc4IO2htmyWlSLTBWC9NrFj5Lh2Oo0Ne3oBjTJUJ1CFuPdpeDRqmlAZjkjvDHfZhNCZBNkb2kvNt5bX5OxxAIFB6k7DbUXVJaEm460FO/NZ0o74361XnHc7pBIO8Xe+l/BQUuYkGpsm/X3EwejYKTGXxFjFeYdouzZfFylz/lS8SxeUEsDBBQAAAAIANeMTFP4zfsQJQAAACMAAAAQAAAAcmVxdWlyZW1lbnRzLnR4dMvLT0nVTUstSc5QyCrOzytPTSrJz07NU0gsrsxL1s0tLUmtAABQSwECHwAUAAAACAAzWU1TzlvyN+0CAABFCAAABwAkAAAAAAAAACAAAAAAAAAAbWFpbi5qcwoAIAAAAAAAAQAYAOX1Tbf0v9cB5fVNt/S/1wHOwgyb9L/XAVBLAQIfABQAAAAIAFarTFMDl83PugAAACgBAAAMACQAAAAAAAAAIAAAABIDAABwYWNrYWdlLmpzb24KACAAAAAAAAEAGAChYc/Bgb/XAdJsDZv0v9cB0mwNm/S/1wFQSwECHwAUAAAACADXjExT+M37ECUAAAAjAAAAEAAkAAAAAAAAACAAAAD2AwAAcmVxdWlyZW1lbnRzLnR4dAoAIAAAAAAAAQAYAIgXlOhhv9cBGX4Om/S/1wGEVg6b9L/XAVBLBQYAAAAAAwADABkBAABJBAAAAAA="

5) The API to upload the tag would be: https://api.lamp.digital/type/7s9ts30kq0w67tdg1qh2/attachment/lamp.automation/me
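
To illustrate the two requirements.txt conventions above (the package names here are examples only, not the contents of the attached zips), the install step would differ per language, e.g.:

```ts
// JS: requirements.txt is space-separated, e.g. "node-fetch jsencrypt"
// PY: requirements.txt is line-separated,  e.g. "numpy\npandas"
function installCommand(language: string, requirements: string): string {
  return language === "js"
    ? `npm install ${requirements.trim().split(/\s+/).join(" ")}` // install each listed package
    : "pip install -r requirements.txt" // pip consumes the line-separated file directly
}
```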

avaidyam commented 3 years ago

@Linoy339 Thanks. Minor fix: we already confirmed the format of the trigger string should use / and NOT . and that the way to add multiple triggers is to use trigger=/activity/123;trigger=/activity/456 (please take note of the / in the beginning of the string and the repetition of trigger= with the ; separator) instead of &. These details are minor but important - could you please fix those?

Linoy339 commented 3 years ago

Yes, we can do that. We will fix it and ping you.

lukeoftheshire commented 3 years ago

@Linoy339 @avaidyam Thank you for your patience. Docker socket mount is now present in staging and production. Please ping me if you have any issues.

Linoy339 commented 3 years ago

Thanks @lukeoftheshire. Can you please check the volume (/var/run/docker.sock:/var/run/docker.sock:ro) which you have mounted, as I am unable to make the automation work.

lukeoftheshire commented 3 years ago

Hi @Linoy339 - that is the volume I have mounted to the LAMP-worker (which I assume is where the volume should be mounted). I just updated the container again, so please try again (or let me know if you need something changed)

Linoy339 commented 3 years ago

Okay. The issue is that we are unable to connect to the Docker daemon (/var/run/docker.sock:ro). I don't understand why this is happening. Can you please check the permissions/privileges?

Linoy339 commented 3 years ago

Hi @lukeoftheshire, @avaidyam. It is able to communicate and is working now; I am unsure why it was not working earlier. I did a stack restart in staging, and now it is able to create containers and run the scripts inside.

@Linoy339 Thanks. Minor fix: we already confirmed the format of the trigger string should use / and NOT . and that the way to add multiple triggers is to use trigger=/activity/123;trigger=/activity/456 (please take note of the / in the beginning of the string and the repetition of trigger= with the ; separator) instead of &. These details are minor but important - could you please fix those?

@avaidyam: We have fixed this and merged it into staging. So, a sample body of the automation tag to be uploaded by researchers would be:

"data:application/vnd-lamp-automation;language=js;trigger=/activity/ey0y0hdbrvwcxzvhqwfr/participant/*;trigger=/activity/*/participant/*;base64,UEsDBBQAAAAIANeMTFP4zfsQJQAAACMAAAAQAAAAcmVxdWlyZW1lbnRzLnR4dMvLT0nVTUstSc5QyCrOzytPTSrJz07NU0gsrsxL1s0tLUmtAABQSwMEFAAAAAgAPHBOU7RLMhD9AQAAXAQAAAcAAABtYWluLmpzhVPBitswEL0b/A+zPskhsaHHlCyFpbQUui00t1J2VVu21dqSkMa7hOJ/70i2krjdUhEi+80bzczTc5rIwWiL0AisOmisHiBTuha7AGRpUmnlELhtn+AAxupKOFf416+vvsXoaHsKZh2icfuy5EbuHPJWqrbo+WCKWrYSeV86oepSaZSNrDhKyqUCaE+/0gRolYEAK8Ic8fj9Ncw+fPl0XxhunWC+mTx/TbQ0maDifhAmrNU2h+VkH0iTNClLKDcb+KtMiMAG4v6GTuYDhLFdRMs0aUZVef4LDQVqLJgmYaPRYgdeKN2Lotcty3w2ibOetKCV5T53ZiNog+4QD8CTEXvIqDGUlTRcocu2c+ga20f+H4EHWe/ncYo1uo1kuOShxN4XCzs4usJezN4YpKr9lcbSfg3kCN56/vL03wxOIj5JPF01FSFYr0vOlCbz2zSjwZ+MnLeNEw8CO117jbTDWO67rk97CG5xaEl02ZyYFzZfCJ3gtbBeN8jutEKhcHckrTM6iBvTL9dT/nBaZTCFrCmfcwvshGLMCrr4w+1FedkAuyG00D/zC7pSuLP6GZR4hrfeqOzx/fH4GYJpb4C+HRzdYx7pE4jeifNBZ3h+ODcTnM+YWPdypq8cGKqCUJUeaWQraoiefHf3EczoupU5yZjXxf75lS1/1Bn9fgNQSwMEFAAAAAgAVqtMUwOXzc+6AAAAKAEAAAwAAABwYWNrYWdlLmpzb25NjzsPwjAMhPf+CitDJ6hAbF0RA3NHHlKVXNVAm0RJykOI/w5OQDD6u/P5/CiIhGlHiJqEc07MGFzgg7aG2bJaVItMFYL02sWPkuHY6jQ17egGNMlQnUIW492l4NGqaUBmOSO8Md9mE0JkE2RvaS823ltfk7HEAgUHqTsNtRdUloSbjrQU781nSjvjfrVecdzukEg7xd76X8FBS5iQamyb9fcTB6NgpMZfEWMV5h2i7Nl8XKXP+VLxLF5QSwECHwAUAAAACADXjExT+M37ECUAAAAjAAAAEAAkAAAAAAAAACAAAAAAAAAAcmVxdWlyZW1lbnRzLnR4dAoAIAAAAAAAAQAYAIgXlOhhv9cBVi0pZdHA1wG6kChl0cDXAVBLAQIfABQAAAAIADxwTlO0SzIQ/QEAAFwEAAAHACQAAAAAAAAAIAAAAFMAAABtYWluLmpzCgAgAAAAAAABABgAc3j58tXA1wFzePny1cDXAUpTKWXRwNcBUEsBAh8AFAAAAAgAVqtMUwOXzc+6AAAAKAEAAAwAJAAAAAAAAAAgAAAAdQIAAHBhY2thZ2UuanNvbgoAIAAAAAAAAQAYAKFhz8GBv9cBUAErZdHA1wHe2Spl0cDXAVBLBQYAAAAAAwADABkBAABZAwAAAAA="