K8s secret aws-s3-secret was created beforehand with AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION.
@jyuwei do you mind sharing how your AWS secret looks? Currently, we only support secrets that are in the AWS credentials format, something like:
[default]
aws_access_key_id=<>
aws_secret_access_key=<>
region=<>
Once you have your credentials file in this format, you can create your secret using:
$ kubectl create secret generic aws-s3-secret --from-file=credentials=<path to your creds file in above format>
Unfortunately, we don't support providing/overriding region and endpoints using ais config:
ais config cluster backend.conf='{"aws":{"cloud_region": "us-east-1", "endpoint": "s3://694596843551.s3-control.us-east-1.amazonaws.com"}}'
One option is to provide an AWS config file along with the credentials, if you plan on having additional AWS-specific config:
$ kubectl create secret generic aws-s3-secret --from-file=credentials=<path to your creds file in above format> --from-file=config=<path to file containing your aws config>
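For reference, the file passed via --from-file=config is the standard AWS CLI config format; a minimal example (the region value below is a placeholder, not taken from this thread):
[default]
region = us-west-1
output = json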
@saiprashanth173 - Thanks for the response. I had created an Opaque-type K8s secret with kubectl apply -f <path_to_aws_s3_secret_resource>:
apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: QUxxxxxxxx=
  AWS_DEFAULT_REGION: dXMtd2VzdC0x
  AWS_SECRET_ACCESS_KEY: a3xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
kind: Secret
metadata:
  name: aws-s3-secret
type: Opaque
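(A hedged aside: standard kubectl can be used to double-check what such a secret actually contains; the ais namespace is assumed from the commands later in this thread, and the decoded region below follows from the base64 value above.)
$ kubectl -n ais get secret aws-s3-secret -o yaml
$ kubectl -n ais get secret aws-s3-secret -o jsonpath='{.data.AWS_DEFAULT_REGION}' | base64 -d
us-west-1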
I will try the kubectl create secret generic aws-s3-secret --from-file=credentials= approach as you suggested.
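A sketch of that conversion (hedged: the file name is arbitrary, and the values stand for the decoded contents of the Opaque secret above):
$ cat aws_credentials
[default]
aws_access_key_id = <decoded AWS_ACCESS_KEY_ID>
aws_secret_access_key = <decoded AWS_SECRET_ACCESS_KEY>
region = us-west-1
$ kubectl -n ais delete secret aws-s3-secret
$ kubectl -n ais create secret generic aws-s3-secret --from-file=credentials=./aws_credentials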
A couple of follow-up questions:
- Was the error Error: aws-error[MissingRegion: could not find region configuration] due to the misconfigured K8s secret for the S3 credentials?
Thanks again.
in re remote AIS:
"backend": {"ais":{"ais2":["https://ais2.xxxxx.xom"]}
- xom here is probably a typo
proxy.go:3033 p[rlMBSvXn]: retrying remais ver=0
- retrying and asking targets (in this cluster) about the remote one, and not succeeding - getting an empty response, essentially
export AIS_ENDPOINT=http://aistore-proxy:51080
- this cluster is HTTP based. But the remote one is evidently HTTPS - see the first bullet above. That won't work. HTTP or HTTPS is a global choice - if the cluster listens on HTTP, it'll use HTTP for all external and intra-cluster communications. And vice versa.
$ ais config cluster net --json
"net": {
"l4": {
"proto": "tcp",
"sndrcv_buf_size": 131072
},
"http": {
"server_crt": "server.crt",
"server_key": "server.key",
"write_buffer_size": 65536,
"read_buffer_size": 65536,
"use_https": false,
"skip_verify": false,
"chunked_transfer": true
}
}
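(Hedged aside: if both clusters need to talk to each other over HTTPS, the same key=value syntax used above for backend.conf should apply to this net.http section. The key names are taken from the dump above; the exact CLI invocation may differ by version, and the certificate paths are placeholders.)
$ ais config cluster net.http.use_https=true
$ ais config cluster net.http.server_crt=<path to cert on the nodes>
$ ais config cluster net.http.server_key=<path to key on the nodes>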
Separately, it'll help if you start using the latest master which is almost ready for v3.18 - will release it soon.
@alex-aizman - Thank you for the response!
"backend": {"ais":{"ais2":["https://ais2.xxxxx.xom"]}
- Yes, the xom should be com; it was a typo when I made the comment. In my setup I had deployed an Nginx ingress controller and created a K8s ingress to route https://ais2.xxxxxx.com to the aistore-proxy. I was able to use the ais CLI to interface with the cluster via this URL (I anonymized the domain name):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: aistore
    component: proxy
    function: gateway
  name: aistore-proxy
  namespace: ais
spec:
  rules:
  - host: ais1.xxxxxx.com
    http:
      paths:
      - backend:
          service:
            name: aistore-proxy
            port:
              number: 51080
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - ais1.xxxxxx.com
    secretName: example-tls-cert
proxy.go:3033 p[rlMBSvXn]: retrying remais ver=0
- QUESTION: Does this error point to a potential configuration issue?
export AIS_ENDPOINT=http://aistore-proxy:51080
- Apologies for the confusion, but in my example above I was logged into the aisnode-debug pod using this command: kubectl --kubeconfig /root/.kube/ais1-us-west -n ais exec -it aisnode-debug -- /bin/bash, so the AIS_ENDPOINT was configured using the in-cluster host name. If I run the remote-attach command locally, I get the same error.
# cat /root/.config/ais/cli/cli.json
{
    "cluster": {
        "url": "https://ais1.xxxxxx.com",
        "default_ais_host": "https://ais1.xxxxxx.com",
        "default_docker_host": "http://172.50.0.2:8080",
        "skip_verify_crt": false
    },
    "timeout": {
        "tcp_timeout": "60s",
        "http_timeout": "0s"
    },
    "auth": {
        "url": "http://127.0.0.1:52001"
    },
    "aliases": {
        "cp": "bucket cp",
        "create": "bucket create",
        "get": "object get",
        "ls": "bucket ls",
        "put": "object put",
        "start": "job start",
        "stop": "job stop",
        "wait": "job wait"
    },
    "default_provider": "ais",
    "no_color": false
}
# ais cluster remote-attach ais2=https://ais2.xxxxxx.com
Remote cluster (ais2=https://ais2.xxxxxx.com) successfully attached
# ais config cluster backend.conf -j
"backend": {"ais":{"ais2":["https://ais2.xxxxxx.com"]},"aws":{"cloud_region":"us-east-1","endpoint":"s3://69xxxxxxxx.s3-control.us-east-1.amazonaws.com"}}
I'll try with the latest master as well. Thanks for letting me know.
@jyuwei I could reproduce your issue on a local deployment (non-k8s). Creating the secret with the correct credentials format should fix your issue with AWS.
$ cat ~/.aws/credentials
AWS_ACCESS_KEY_ID=<key>
AWS_DEFAULT_REGION=<>
AWS_SECRET_ACCESS_KEY=<>
$ make deploy
Enter number of storage targets:
1
Enter number of proxies (gateways):
1
Number of local mountpaths (enter 0 for preconfigured filesystems):
1
Select backend providers:
Amazon S3: (y/n) ?
y
Google Cloud Storage: (y/n) ?
n
Azure: (y/n) ?
n
HDFS: (y/n) ?
n
Loopback device size, e.g. 10G, 100M (creating loopbacks first time may take a while, press Enter to skip):
Building aisnode 1a6eafa73 [build tags: aws mono]
go: downloading github.com/aws/aws-sdk-go v1.44.264
done.
Proxy is listening on port: 8080
$ ais ls aws:// --all
E 09:54:04.959094 t[iQit8081]: failed to list buckets s3://, err: aws-error[MissingRegion: could not find region configuration]: GET /v1/buckets (called by p[akvp8080]) (stack: [htrun.go:1155 <- tgtbck.go:149 <- tgtbck.go:70 <- target.go:537])
E 09:54:04.962679 t[iQit8081]: failed to list buckets s3://, err: aws-error[MissingRegion: could not find region configuration]: GET /v1/buckets (called by p[akvp8080]) (p[akvp8080]: htrun.go:1155 <- proxy.go:2055 <- proxy.go:564 <- proxy.go:372])
Error: t[iQit8081]: failed to list buckets s3://, err: aws-error[MissingRegion: could not find region configuration]
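For contrast, the same file rewritten in the profile format described earlier (values are placeholders); with this in place the SDK can resolve the region, and the MissingRegion error should go away:
$ cat ~/.aws/credentials
[default]
aws_access_key_id = <key>
aws_secret_access_key = <secret>
region = <region>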
Hello @alex-aizman, @saiprashanth173,
Sorry for hijacking this issue for something not directly related to it.
I am trying to configure a specific AWS IAM policy and attach it to a specific AWS IAM user, so that we allow access to one specific AWS S3 bucket only. However, I am receiving an error when using ais ls s3://test-bucket-aisdev, while it works fine with aws s3 ls s3://test-bucket-aisdev (see details below). Of course we use the same AWS credentials and region. Any idea what I am doing wrong?
user@aisbox-1:~/go/src/github.com/NVIDIA/aistore$ ais ls --all
NAME PRESENT
s3://bucket_1 no
s3://bucket_2 no
...
s3://test-bucket-aisdev no
Total: [AWS buckets: 100 (0 present)] ========
user@aisbox-1:~/go/src/github.com/NVIDIA/aistore$
user@aisbox-1:~/go/src/github.com/NVIDIA/aistore$ ais ls s3://test-bucket-aisdev
E 10:49:08.985605 t[GJDt8088]: failed to HEAD remote bucket s3://test-bucket-aisdev, err: aws-error[AccessDenied: Access Denied]: HEAD /v1/buckets/test-bucket-aisdev (called by p[mgSp8080]) (stack: [htrun.go:1155 <- tgtbck.go:518 <- target.go:548])
E 10:49:08.986266 t[GJDt8088]: failed to HEAD remote bucket s3://test-bucket-aisdev, err: aws-error[AccessDenied: Access Denied]: HEAD /v1/buckets/test-bucket-aisdev (called by p[mgSp8080]) (p[mgSp8080]: htrun.go:1155 <- prxbck.go:249 <- prxbck.go:238 <- proxy.go:597 <- proxy.go:372])
Error: t[GJDt8088]: failed to HEAD remote bucket s3://test-bucket-aisdev, err: aws-error[AccessDenied: Access Denied]
user@aisbox-1:~/go/src/github.com/NVIDIA/aistore$
user@aisbox-1:~$ aws s3 ls s3://test-bucket-aisdev
PRE test-dir-aisdev
user@aisbox-1:~$
My AWS IAM policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowRWBucketAndObjects",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:ListBucketVersions",
                "s3:ListBucketMultipartUploads",
                "s3:ListMultipartUploadParts",
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:GetObjectTagging",
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::test-bucket-aisdev",
                "arn:aws:s3:::test-bucket-aisdev/*"
            ]
        },
        {
            "Sid": "AllowListAllBuckets",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}
By the way: Can I have more than one AWS profile configured to access multiple S3 buckets from different AWS organizations for example?
I am using Version: 3.17.3cf1d5271 at the moment.
Best Regards, bboychev
maybe add
"Action": [
    "s3:GetBucketVersioning"
]
It is easy to check - here's the piece of code and the two operations it executes using aws-sdk-go:
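(The linked snippet is not reproduced in this thread. As a rough, hedged sketch of what those two operations look like with aws-sdk-go: the bucket name is taken from the example above, and the exact call sites in AIStore may differ.)
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Shared credentials/config are read from ~/.aws, i.e. the same files the AIS target uses.
	sess := session.Must(session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
	}))
	svc := s3.New(sess)
	bucket := aws.String("test-bucket-aisdev")

	// Operation 1: resolve the bucket's region (needs s3:GetBucketLocation, already in the policy).
	loc, err := svc.GetBucketLocation(&s3.GetBucketLocationInput{Bucket: bucket})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("location:", aws.StringValue(loc.LocationConstraint))

	// Operation 2: read the bucket's versioning state (needs s3:GetBucketVersioning, the missing permission).
	ver, err := svc.GetBucketVersioning(&s3.GetBucketVersioningInput{Bucket: bucket})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("versioning:", aws.StringValue(ver.Status))
}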
Hello @alex-aizman,
Thank you! It works fine after appending the "s3:GetBucketVersioning" action to the AWS IAM policy above:
user@aisbox-1:~/go/src/github.com/NVIDIA/aistore$ ais ls s3://test-bucket-aisdev
NAME SIZE CACHED
test-dir-aisdev/ 0B no
user@aisbox-1:~/go/src/github.com/NVIDIA/aistore$
I have tried to configure more than one profile (besides the default) in ~/.aws/credentials and ~/.aws/config, but I was not able to make it work with the ais CLI. I also do not see an option to select a specific AWS profile.
Can I have more than one AWS profile configured to access multiple S3 buckets from different AWS organizations for example?
Best Regards, bboychev
Setup
Environment: K8s
Deploy method: ais-k8s operator
Operator image used: aistorage/ais-operator:0.94
AIStore cluster custom resource:
Notes
- aws-s3-secret was created beforehand with AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION
- aistore-proxy
Issues
Unable to add S3 remote backend
Using the aisnode-debug pod, I tried to add S3 as a remote backend, but received the following error:
Error: aws-error[MissingRegion: could not find region configuration]
aistore-proxy pod log message:
Unable to attach a remote AIS cluster
I have used the same method to set up another AIStore cluster on K8s: ais2. But when trying to attach it as a remote cluster on ais1, the remote cluster was not added, even though the CLI returned a success message.
aistore-proxy pod log message: