michalby24 opened 2 weeks ago
0.19 is not a SFTPGo version. This report also lacks basic info such as the storage provider you are using and a standalone reproducer. If the problem occurs on every upload, it should be very easy to provide a standalone reproducer.
A problem like this would be noticed by many users, we have several installations with millions of uploads per day and no memory issues.
We are not sure what is happening in your case and unfortunately we do not have the time or motivation to help you, but we think it is quite unlikely that this is a bug that occurs all the time. In any case you have access to the source code, feel free to debug the issue and if you find a bug/edge case, please submit a PR. As a community user this is a way to help the project you are using for free. Thanks
I got confused by the image version of SFTPGo; it is v2.5.0, with an OpenShift PVC backed by Azure Files as the storage provider. We find SFTPGo a good fit for our clients, as it is a Kubernetes-native application for large file uploads to the cloud (Azure). On more than a few occasions, our clients experienced disconnections from the service while uploading large files (~500 GB). Debugging showed that the SFTPGo pods were being restarted due to OOM kills. Attached above is a screenshot from last week, during which we uploaded several files in 3 separate batches, as described in the last comment. Is there any more information you would like? Our clients and we would very much appreciate your help.
I had the same problem. In our case, the issue was related to an S3 backend used in a virtual folder without a specified root directory. If the "Root directory" is not specified for a virtual folder, all data goes to RAM first and is copied to the S3 backend only after a successful upload.
It would be great if file system creation worked the same way for users and virtual folders. For example, when a user is created with S3 storage, the "Root directory" is automatically set to "/srv/sftpgo/data/%user%". For virtual folders with an S3 backend, however, the "Root directory" is left empty, so all data goes to memory first instead of temporary storage.
SFTPGo does not work as you describe. Are you writing this because you have examined the code, or because you assume so? Please also read the description provided above in the initial report.
Bug description
We are experiencing an issue with SFTPGo where the memory usage continuously rises during file uploads. Memory usage gradually increases while uploading, and upon completion, there is a noticeable spike in memory usage. After this spike, the memory usage settles at a higher level than before the upload started and does not decrease over time. This cycle repeats with each file upload, eventually leading to the pod being killed by OOM (Out Of Memory).
Steps to reproduce
Expected behavior
Memory usage should increase during the upload process and should return to normal levels after the upload completes. Memory should not continuously increase after each upload, nor should it cause the pod to be OOM killed.
SFTPGo version
0.19.0
Data provider
postgresql
Installation method
Community Docker image
Configuration
sftpgo:
  volumes:
    - name: sftpgo-pvc
      persistentVolumeClaim:
        claimName: sftpgo-pvc
  volumeMounts:
    - name: sftpgo-pvc
      mountPath: /mnt
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10000"
  resources:
    requests:
      cpu: 100m
      memory: 350Mi
    limits:
      cpu: 400m
      memory: 350Mi
  config:
    data_provider:
      create_default_admin: true
      driver: postgresql
      name: sftpgo
      host: POSTGRES_SERVER
      port: 5432
      username: POSTGRES_ADMIN
      password: POSTGRES_ADMIN_USER
  envVars:
    - name: SFTPGO_DEFAULT_ADMIN_PASSWORD
      valueFrom:
        secretKeyRef:
          name: sftpgo-admin-creds
          key: password
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: ''
    targetMemoryUtilizationPercentage: 80
Relevant log output
No response
What are you using SFTPGo for?
Small business (10-person firm with file exchange)
Additional info