mario16 opened 1 year ago
@mario16
Please check out the latest release of the GroupDocs.Conversion Cloud Docker container. It is now possible to connect an AWS S3 cloud storage by setting environment variables:
| Name | Description |
|---|---|
| S3_STORAGE_BUCKET | Bucket ID |
| S3_STORAGE_ACCESS_KEY | S3 API access key |
| S3_STORAGE_SECRET_KEY | S3 API secret key |
| S3_STORAGE_REGION | AWS S3 region |
Windows (PowerShell)
```powershell
docker run `
  -p 8080:80 `
  -v "${pwd}/data:/data" `
  -e "S3_STORAGE_BUCKET=main_bucket" `
  -e "S3_STORAGE_ACCESS_KEY=XXXXXXXXXXXXXXXXXXX" `
  -e "S3_STORAGE_SECRET_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" `
  -e "S3_STORAGE_REGION=us-west-2" `
  --name conversion_cloud `
  groupdocs/conversion-cloud
```
Linux (bash)
```bash
docker run \
  -p 8080:80 \
  -v "$(pwd)/data:/data" \
  -e S3_STORAGE_BUCKET=main_bucket \
  -e S3_STORAGE_ACCESS_KEY=XXXXXXXXXXXXXXXXXXX \
  -e S3_STORAGE_SECRET_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
  -e S3_STORAGE_REGION=us-west-2 \
  --name conversion_cloud \
  groupdocs/conversion-cloud
```
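The same settings can also be kept in a Compose file. This is only a sketch of an equivalent configuration, assuming standard Docker Compose conventions; the bucket name and credentials are placeholders, as above:

```yaml
# docker-compose.yml (sketch; credentials are placeholders)
services:
  conversion_cloud:
    image: groupdocs/conversion-cloud
    ports:
      - "8080:80"          # host 8080 -> container 80
    volumes:
      - ./data:/data       # local working directory
    environment:
      S3_STORAGE_BUCKET: main_bucket
      S3_STORAGE_ACCESS_KEY: XXXXXXXXXXXXXXXXXXX
      S3_STORAGE_SECRET_KEY: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      S3_STORAGE_REGION: us-west-2
```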
This should help you accomplish your requirements. You can post your query in our free support forum for any further assistance.
Hi @tilalahmad, we already started a container with the latest image (23.2) and configured the env vars.
However, when we try to list the files and folders, it ignores the `storageName` argument (I don't know where to set the storage name in this scenario); it always searches the bind-mounted volume instead.

```java
folderApi.getFilesList(new GetFilesListRequest("/", "bucket-name"))
```

The `storageApi` always returns `true` from the `exist` function, and the same occurs with the REST API.
Note: the same functions work if we use the cloud version instead of the local container.
Thanks in advance.
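For reference, this is roughly how we point the SDK at the local container. It is only a sketch: the import paths, the `Configuration` constructor, and `setApiBaseUrl` are written from memory and may need adjusting against the actual groupdocs-cloud Java SDK; the credentials are placeholders.

```java
// Sketch only: assumes the SDK exposes Configuration.setApiBaseUrl and that the
// container is listening on localhost:8080. Not verified SDK usage.
import com.groupdocs.cloud.conversion.api.FolderApi;
import com.groupdocs.cloud.conversion.client.Configuration;
import com.groupdocs.cloud.conversion.model.requests.GetFilesListRequest;

public class ListFiles {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration("client_id", "client_secret");
        configuration.setApiBaseUrl("http://localhost:8080"); // local container, not api.groupdocs.cloud
        FolderApi folderApi = new FolderApi(configuration);
        // "bucket-name" is the storageName we expected to select the S3 bucket
        System.out.println(folderApi.getFilesList(new GetFilesListRequest("/", "bucket-name")));
    }
}
```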
It works!
I've noticed that the `latest` tag (of the Docker image), from Apr 19, 2023 at 5:01 am, is older than the `23.2` tag, from May 4, 2023 at 5:13 am. With the `latest` tag it works fine.
I have some doubts: with this GroupDocs server running in the container, does it check any configuration from my cloud account, such as Applications (client_id, client_secret) or Storages?
What if I want to track different clients? Could I configure more than one client_id in the env vars of the container?
Thanks!
@mario16
> It works! I've noticed that the `latest` tag (of the Docker image), from Apr 19, 2023 at 5:01 am, is older than the `23.2` tag, from May 4, 2023 at 5:13 am. With the `latest` tag it works fine.
We were publishing the GroupDocs.Conversion Cloud Docker container with the `latest` tag only. Recently, we found a regression in the latest version, 23.4, and published an older version under the `23.2` tag as well; that is why you noticed the newer date on the 23.2 release.
> I have some doubts: with this GroupDocs server running in the container, does it check any configuration from my cloud account, such as Applications (client_id, client_secret) or Storages?
No, the Docker container does not check any information from your GroupDocs cloud-hosted account. The container uses a metered license and periodically sends API usage to our billing server.
> What if I want to track different clients? Could I configure more than one client_id in the env vars of the container?
I will check this out and get back to you soon.
Thank you very much. I will wait for your response on that.
@mario16
> What if I want to track different clients? Could I configure more than one client_id in the env vars of the container?
I am afraid that currently, we only support a single Client ID in the Docker container. However, we have logged a ticket to configure multiple Client IDs and will notify you as soon as the ticket is resolved.
Awesome! I will stay tuned; we are going to purchase the license.
What happens if we need to use different S3 buckets? Can I set an S3 bucket in the `storageName` attribute of `ConvertSettings`, or is the one in the env vars the only one allowed?
@mario16
> What happens if we need to use different S3 buckets? Can I set an S3 bucket in the `storageName` attribute of `ConvertSettings`, or is the one in the env vars the only one allowed?
Yes, that's right: you can only have one bucket, configured at container initialization time.
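One possible workaround (my assumption, not an officially documented setup) is to run a separate container per bucket, each on its own host port, and route requests to the matching instance. A Compose sketch, with placeholder bucket names, ports, and credentials:

```yaml
# docker-compose.yml sketch: one container per bucket (unofficial workaround;
# service names, ports, buckets, and credentials are placeholders)
services:
  conversion_bucket_a:
    image: groupdocs/conversion-cloud
    ports:
      - "8080:80"
    environment:
      S3_STORAGE_BUCKET: bucket-a
      S3_STORAGE_ACCESS_KEY: XXXXXXXXXXXXXXXXXXX
      S3_STORAGE_SECRET_KEY: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      S3_STORAGE_REGION: us-west-2
  conversion_bucket_b:
    image: groupdocs/conversion-cloud
    ports:
      - "8081:80"
    environment:
      S3_STORAGE_BUCKET: bucket-b
      S3_STORAGE_ACCESS_KEY: XXXXXXXXXXXXXXXXXXX
      S3_STORAGE_SECRET_KEY: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      S3_STORAGE_REGION: us-west-2
```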
Java, macOS, development environment.
We need to convert every kind of file to PDF (groupdocs-conversion). Other challenges:
- generate thumbnails
- analyze content (OCR)

We've already started a Docker container locally, with the AWS S3 bucket configuration. The issue is that we can't connect to the S3 bucket using the Java SDK (groupdocs-cloud, conversion).