s3rius / rustus

TUS protocol implementation in Rust.
https://s3rius.github.io/rustus/
MIT License

Error with HTTPS/TLS using hybrid-s3 storage #146

Open Tigranchick opened 10 months ago

Tigranchick commented 10 months ago

I deployed rustus with the Helm chart, changing only these values:

env:
  RUSTUS_DIR_STRUCTURE: "{year}/{month}/{day}"
  RUSTUS_MAX_BODY_SIZE: "100000000"
  RUSTUS_MAX_FILE_SIZE: "1000000000"
  RUSTUS_LOG_LEVEL: "debug"
  RUSTUS_STORAGE: "hybrid-s3"
  RUSTUS_S3_URL: https://s3.eu-central-1.amazonaws.com
  RUSTUS_S3_BUCKET: my-bucket-name
  RUSTUS_S3_REGION: "eu-central-1"
  RUSTUS_S3_ACCESS_KEY: "<AWS_ACCESS_KEY>"
  RUSTUS_S3_SECRET_KEY: "<AWS_SECRET_KEY>"
  RUSTUS_HOOKS: "post-finish"

persistence:
  enabled: true

  existingClaim: "rustus-pvc"

ingress:
  enabled: true
  className: "nginx"
  annotations: 
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: "letsencrypt-prod"
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: rustus.mydomain.com
      paths:
        - path: /
          pathType: Prefix
  tls: 
   - secretName: rustus-tls-secret
     hosts:
       - rustus.mydomain.com

Everything deployed as expected, including the certificate being issued by cert-manager, but that shouldn't be related to the error.

Some info about the cluster: it's a fairly simple EKS cluster with the nginx ingress controller, cert-manager, and the EBS CSI controller.

But on the cluster I got this:


[2023-11-19][17:08:52+00:00][DEBUG] Starting uploading f44d552d-1d58-4650-99fb-c5dd40b79f68 to S3 with key `2023/11/19/f44d552d-1d58-4650-99fb-c5dd40b79f68`
[2023-11-19][17:08:52+00:00][DEBUG] starting new connection: https://my-bucket-name.s3.eu-central-1.amazonaws.com/
[2023-11-19][17:08:52+00:00][DEBUG] resolving host="my-bucket-name.s3.eu-central-1.amazonaws.com"
[2023-11-19][17:08:52+00:00][DEBUG] connecting to <some ip>:443
[2023-11-19][17:08:52+00:00][DEBUG] connected to <some ip>:443
[2023-11-19][17:08:52+00:00][ERROR] Found S3 error: reqwest: error sending request for url (https://my-bucket-name.s3.eu-central-1.amazonaws.com/2023/11/19/f44d552d-1d58-4650-99fb-c5dd40b79f68): error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1919: (unable to get local issuer certificate)
[2023-11-19][17:08:52+00:00][DEBUG] Error in response: S3Error(Reqwest(reqwest::Error { kind: Request, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("my-bucket-name.s3.eu-central-1.amazonaws.com")), port: None, path: "/2023/11/19/f44d552d-1d58-4650-99fb-c5dd40b79f68", query: None, fragment: None }, source: hyper::Error(Connect, Ssl(Error { code: ErrorCode(1), cause: Some(Ssl(ErrorStack([Error { code: 337047686, library: "SSL routines", function: "tls_process_server_certificate", reason: "certificate verify failed", file: "ssl/statem/statem_clnt.c", line: 1919 }]))) }, X509VerifyResult { code: 20, error: "unable to get local issuer certificate" })) }))

Is this an OpenSSL-related error, due to the connection to S3 over HTTPS? Or am I just misunderstanding the configuration and need to provide more env variables in prod, like a session token, etc.? And please correct me if I'm wrong in how I'm using the S3 URL.

(I also cloned the original repo and tested locally with the same env and S3 syncing; everything works as expected without errors, and I can see my files on the S3 side.)

Also, in any case, I want to thank you for the work you've done; this project is very cool, well thought out, and interesting! Thanks in advance for any help.
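
A quick way to check whether the container is simply missing the system CA bundle (a sketch, not from the thread; the deployment name rustus is hypothetical, and it assumes a Debian-based image, as the apt-get usage later in this thread suggests):

# If the CA bundle is missing inside the running pod, this should fail with
# "No such file or directory", which matches the "unable to get local issuer
# certificate" error in the log above.
kubectl exec deploy/rustus -- ls -l /etc/ssl/certs/ca-certificates.crt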

s3rius commented 10 months ago

Hi, and thanks for all the warm words. I guess that error might happen because of some SSL issue within the Docker container. Can you build an image based on rustus with OpenSSL installed and try it once again?

FROM s3rius/rustus:0.7.6

RUN apt-get update \
    && apt-get install -y libssl-dev \
    && apt-get clean

Tigranchick commented 10 months ago

I figured it out.

Unfortunately, this did not work:

Hi, and thanks for all the warm words. I guess that error might happen because of some SSL issue within the Docker container. Can you build an image based on rustus with OpenSSL installed and try it once again?

FROM s3rius/rustus:0.7.6

RUN apt-get update \
    && apt-get install -y libssl-dev \
    && apt-get clean

But I remembered running into a problem with the reqwest crate in another project: it could not send HTTPS requests from my production environment, and these dependencies helped me back then. Thank you for suggesting a solution. After that, I created the image below and used it in the values; everything worked, and now the files are uploaded to S3. Also, your chart is organized very flexibly, which makes it very convenient to configure.

FROM s3rius/rustus:0.7.6

RUN apt-get update && apt install -y openssl

RUN apt-get update \
    && apt-get install -y ca-certificates tzdata \
    && rm -rf /var/lib/apt/lists/*
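
Since the underlying error was "unable to get local issuer certificate", the ca-certificates package is most likely the part that actually fixes it. A minimal single-layer sketch of the same fix (keeping openssl and tzdata as in the snippet above; this is an assumption, not a verified build of the official image):

FROM s3rius/rustus:0.7.6

# Install OpenSSL, the system CA bundle, and tzdata in one layer,
# then drop the apt package lists to keep the image small.
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssl ca-certificates tzdata \
    && rm -rf /var/lib/apt/lists/*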

s3rius commented 10 months ago

Thank you for figuring it out. Let's keep the issue open until ca-certs are added to the image.

Tigranchick commented 10 months ago

I found that resuming uploads was not working with my configuration. Just a caution for anyone who uses the ingress-nginx controller: you must use the annotations below to enable resumable uploads, and adjust the limits to be the same as or above RUSTUS_MAX_BODY_SIZE and RUSTUS_MAX_FILE_SIZE.

I also found RUSTUS_BEHIND_PROXY="true" in the docs, which enables client IP resolving behind an nginx proxy, but it was not applicable in my case.


ingress:
  enabled: true
  className: "nginx"
  annotations: 
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-body-size: "0" 
    nginx.ingress.kubernetes.io/client-body-buffer-size: "0" 

  hosts:
    - host: rustus.mydomain.com
      paths:
        - path: /
          pathType: Prefix
  tls: 
   - secretName: rustus-tls-secret
     hosts:
       - rustus.mydomain.com

s3rius commented 10 months ago

It would be nice to add this entry somewhere in the docs. I'll do it.

Tigranchick commented 10 months ago

It would be nice to add this entry somewhere in the docs. I'll do it.

That would be nice.