prbf2-tools / svctl


Artifacts uploader #5

Open emilekm opened 2 months ago

emilekm commented 2 months ago

Sysadmin should be able to configure automatic upload of artifacts (chatlogs and demos).

Methods of transport:

Both should support SSL. Credentials should be fetched from env vars, files or global config.

Each type of artifact should have

There might be a need to consolidate location and filename pattern with templates, so they are not duplicated and don't conflict. See https://github.com/sboon-gg/svctl/issues/6.

Pickup algorithms:

After upload there might be a need to move or remove them.

Upload failures should be logged and cached so user can manually try to reupload them.

Abdullah-Khawahir commented 2 months ago

I was trying to make a solution and I came up with this for now: the artifacts to upload must be specified in the config.yaml file as follows:

artifacts:
  - path: "./folderA/file*.txt" # whatever matches the pattern, POST it to the destination
    destination: "http://127.0.0.1:8080"
  - path: "./folderB/report*.txt" # whatever matches the pattern, transport it via FTP
    destination: "ftp://user:pass@127.0.0.1:8765"

If the destination URL starts with ftps or https, it will be handled accordingly. The path will be either a Regexp or a Glob expression (glob is the UNIX-style pathname expansion). The artifacts list will be mapped to an ArtifactsConfig struct. Then, for each entry of the artifacts, an Uploader interface implementation will be assigned based on the destination. For example: the path ./folderA/file*.txt will be assigned to httpsUploader, and the path ./folderB/report*.txt will be assigned to ftpUploader.
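The scheme-based dispatch described above could be sketched roughly like this. This is only an illustration of the strategy-pattern idea, not svctl's actual code: the type names (`Uploader`, `httpUploader`, `ftpUploader`, `pickUploader`) are hypothetical, and the `Upload` bodies are stubs.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// Uploader is the strategy interface each destination scheme implements.
type Uploader interface {
	Upload(path string) error
}

type httpUploader struct{ destination string }

func (u httpUploader) Upload(path string) error {
	// a real implementation would POST the file contents to u.destination
	fmt.Printf("POST %s -> %s\n", path, u.destination)
	return nil
}

type ftpUploader struct{ destination string }

func (u ftpUploader) Upload(path string) error {
	// a real implementation would open an FTP(S) connection and store the file
	fmt.Printf("FTP %s -> %s\n", path, u.destination)
	return nil
}

// pickUploader selects a strategy based on the destination URL's scheme.
func pickUploader(destination string) (Uploader, error) {
	u, err := url.Parse(destination)
	if err != nil {
		return nil, err
	}
	switch strings.ToLower(u.Scheme) {
	case "http", "https":
		return httpUploader{destination}, nil
	case "ftp", "ftps":
		return ftpUploader{destination}, nil
	default:
		return nil, fmt.Errorf("unsupported scheme %q", u.Scheme)
	}
}

func main() {
	up, err := pickUploader("ftp://user:pass@127.0.0.1:8765")
	if err != nil {
		panic(err)
	}
	_ = up.Upload("./folderB/report1.txt")
}
```

New schemes (or the Google Drive / Discord cases mentioned later in the thread) would then just be additional cases in `pickUploader` plus another type implementing `Uploader`.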

As for the algorithm: after each failed upload, the artifact will be appended to a file for later handling.

  1. sort all files
  2. get last uploaded file if exists
  3. get the next file
  4. upload file
  5. if successful append the file name to successful file uploads file else append to failed uploads file
  6. repeat from step 2

Steps 1 and 2 should fulfill the requirement that "there should be a mechanism for picking up artifacts created when the daemon wasn't running (downtime)."
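The six steps above could be sketched as follows. This is a minimal illustration, not the actual module: the function name `uploadPending` and the `done` map (standing in for the "successful uploads" file) are hypothetical, and the uploader is injected as a plain function for testability.

```go
package main

import (
	"fmt"
	"sort"
)

// uploadPending sketches steps 1-6: sort the matched files, skip those
// already recorded as uploaded, try the rest, and collect successes and
// failures for later bookkeeping.
func uploadPending(files []string, done map[string]bool, upload func(string) error) (ok, failed []string) {
	sort.Strings(files) // step 1: sort all files
	for _, f := range files {
		if done[f] { // steps 2-3: resume after the last uploaded file
			continue
		}
		if err := upload(f); err != nil { // step 4: upload file
			failed = append(failed, f) // step 5: record failure for retry
			continue
		}
		ok = append(ok, f) // step 5: record success
		done[f] = true
	}
	return ok, failed // step 6: caller appends these to the tracking files
}

func main() {
	done := map[string]bool{"a.demo": true} // a.demo was uploaded before the restart
	ok, failed := uploadPending([]string{"b.demo", "a.demo", "c.demo"}, done,
		func(f string) error {
			if f == "c.demo" {
				return fmt.Errorf("network error")
			}
			return nil
		})
	fmt.Println(ok, failed) // [b.demo] [c.demo]
}
```

Because already-uploaded files are skipped rather than re-sent, running this after downtime naturally picks up whatever accumulated while the daemon was off.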


I skipped the defaults for now. I want to know where to put this module, and is there a module or function which handles reading the config file?

emilekm commented 2 months ago

Great idea with one string for the destination, I wasn't sure how to achieve that. One thing that bugs me is how to pass an authorization header...

for the algorithm now after each failed upload the artifact will be appended to a file for later handling

  1. sort all files
  2. get last uploaded file if exists
  3. get the next file
  4. upload file
  5. if successful append the file name to successful file uploads file else append to failed uploads file
  6. repeat step 2

This should consider the different ways files are created by the process - IIRC:

We can implement that further down the line, so we can stick with this approach right now, but keep that in mind.

I skipped the defaults for now , and I want to know where to put this module ? and is there a module or function which handles the reading the config file ?

There is only one config file read here. You should hook your config in there. The default/example values should be generated programmatically, in the Initialize function.

The whole uploading logic should live in internal/server/uploader - then we can hook it up to FSM, and also hook it to a command for uploading failed uploads.

We can discuss the code design when you raise the PR.

Abdullah-Khawahir commented 1 month ago

One thing that bugs me is how to pass an authorization header...

I saw this coming; that is why I used the strategy pattern. This can be done easily, and we can even handle more cases like API keys, Google Drive APIs, Discord webhooks, etc. As for the headers case, I will need to do a bit of refactoring later, and it is going to be something like the following: we can pass optional fields in the YAML config file like this:

artifacts:
  - path: "./folderA/file*.txt" # whatever matches the pattern, POST it to the destination
    destination: "http://127.0.0.1:8080"
    http-headers:
      Authorization: Basic admin:123
      Accept: text/html

and this must be handled under the HttpPostUploadStrategy struct:

type HttpPostUploadStrategy struct {
    Headers map[string]string `yaml:"http-headers"`
}

and in the Upload function:

    // somewhere in the Upload function ...
    req, err := http.NewRequest("POST", destination, bytes.NewBuffer(fileBytes))
    if err != nil {
        return err
    }

    for header, value := range uploader.Headers {
        req.Header.Add(header, value)
    }

I have not given this much thought yet, and there must be a better way to do this so we can handle more complex cases in a modular, testable way.