aws / aws-sdk-java

The official AWS SDK for Java 1.x (In Maintenance Mode, End-of-Life on 12/31/2025). The AWS SDK for Java 2.x is available here: https://github.com/aws/aws-sdk-java-v2/
https://aws.amazon.com/sdkforjava
Apache License 2.0

Unable to load AWS credentials from any provider in the chain #1324

Closed poonamtr closed 7 years ago

poonamtr commented 7 years ago

I have my application running on EC2 and am trying to access DynamoDB via the SDK from my Java application, but every operation fails with "Unable to load AWS credentials from any provider in the chain".

Code:

try {
    AWSCredentialsProvider provider = new DefaultAWSCredentialsProviderChain();
    AWSCredentials credentials = provider.getCredentials();
    if (credentials != null) {
        LOG.info("Credentials Key: " + credentials.getAWSAccessKeyId());
        LOG.info("Credentials secret: " + credentials.getAWSSecretKey());
    }
} catch (Exception e) {
    // Pass the exception to the logger; concatenating e.getStackTrace()
    // only prints an array reference, not the stack trace.
    LOG.info("Exception in credentials; cause: " + e.getCause()
            + "; message: " + e.getMessage(), e);
}

The getCredentials() line itself throws an exception with the message "Unable to load AWS credentials from any provider in the chain". If I do not try to get the credentials here and instead pass the provider to the DynamoDB client:

dynamoDBClient = builder
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(url, region.getName()))
        .withCredentials(credentialProvider())
        .withClientConfiguration(clientConfig)
        .build();

Then it throws the same exception when I try to do any operation.

I tried setting up the proxy too:

ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProxyHost("yyy");
clientConfig.setProxyPort(port);
clientConfig.setNonProxyHosts("xxx");
clientConfig.setProtocol(Protocol.HTTP);

and then passing it while creating the client:

dynamoDBClient = builder
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(url, region.getName()))
        .withCredentials(credentialProvider())
        .withClientConfiguration(clientConfig)
        .build();

But no success.

Can someone guide me on what I am missing here?

varunnvs92 commented 7 years ago

The SDK has credential resolution logic that determines which AWS credentials to use when interacting with AWS services. See: http://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html

When you see the error "Unable to load AWS credentials from any provider in the chain", it means credentials could not be found in any of the places the DefaultAWSCredentialsProviderChain looks. Please make sure the credentials are located in at least one of the places mentioned in the link above.
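To make those lookup locations concrete, here is a hedged, stdlib-only diagnostic sketch (the class and helper names are made up for illustration; this is not an SDK API) that reports which of the chain's local sources appear to be configured. The real chain also consults ECS container and EC2 instance metadata, which a local check like this cannot see.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CredentialSourceCheck {

    /** True if both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set. */
    static boolean hasEnvCredentials(java.util.Map<String, String> env) {
        return env.get("AWS_ACCESS_KEY_ID") != null
                && env.get("AWS_SECRET_ACCESS_KEY") != null;
    }

    /** True if the JVM system properties aws.accessKeyId / aws.secretKey are set. */
    static boolean hasSystemPropertyCredentials(java.util.Properties props) {
        return props.getProperty("aws.accessKeyId") != null
                && props.getProperty("aws.secretKey") != null;
    }

    /** True if a shared credentials file exists at the conventional path. */
    static boolean hasCredentialsFile(Path home) {
        return Files.isRegularFile(home.resolve(".aws").resolve("credentials"));
    }

    public static void main(String[] args) {
        System.out.println("env vars:         " + hasEnvCredentials(System.getenv()));
        System.out.println("system props:     " + hasSystemPropertyCredentials(System.getProperties()));
        System.out.println("credentials file: "
                + hasCredentialsFile(Paths.get(System.getProperty("user.home"))));
    }
}
```

If all three print false and the process is not running with an instance profile or container role, the chain has nothing to resolve and this exception is expected.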

varunnvs92 commented 7 years ago

Feel free to reopen if you still face the issue.

HSDen commented 7 years ago

@varunnvs92 This issue still exists for me. I am running a plain Java application (not Spark) on an EMR cluster, and while trying to access S3 I am facing the same issue.

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .withRegion("us-east-1") // note: no leading space in the region string
        .build();

try {
    s3Client.getObjectAsString(bucket_name, object_key);
} catch (AmazonClientException e) {
    log.error("AmazonClientException occurred: " + e.getMessage());
}

I am accessing S3 from EMR through an IAM role, not secret or access keys.
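If the role really is attached, the instance metadata service on the node should list it. Below is a hedged, stdlib-only sketch of that check; the URL is the real IMDSv1 path, everything else (class name, timeouts) is illustrative. It assumes IMDSv1 is enabled; where IMDSv2 is enforced, a session-token header is additionally required.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class InstanceProfileCheck {

    // IMDSv1 path that lists the IAM role(s) attached to this EC2/EMR instance.
    static final String ROLE_LIST_URL =
            "http://169.254.169.254/latest/meta-data/iam/security-credentials/";

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(ROLE_LIST_URL).openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            // Prints the attached role name, if any; a timeout or 404 suggests
            // no instance profile is available to the credentials chain.
            System.out.println(in.readLine());
        }
    }
}
```

Note also that the snippet above uses AWSStaticCredentialsProvider; when relying on the instance role, letting the builder default to the DefaultAWSCredentialsProviderChain (or using InstanceProfileCredentialsProvider) is the usual approach.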

LarsAlmgren commented 5 years ago

@HSDen did you find a solution to this problem?

AmitZoh commented 5 years ago

If anybody stumbles upon this issue - in my case it turned out to be a missing trust relationship with the cluster nodes.

venkateshkonduru1 commented 5 years ago

I am still facing this issue in a Jenkins environment with the AWS SM service. Any solution?

monigala commented 4 years ago

If anybody stumbles upon this issue - in my case it turned out to be a missing trust relationship with the cluster nodes.

Can you expand on this solution? How was the fix implemented?

csllc-one commented 4 years ago

I encountered this issue too and resolved it by ensuring there was a literal default profile in the credentials file (~/.aws/credentials). The default profile can be any IAM user, but it must be defined under the default profile name.

oniseun commented 4 years ago

Just as @csllc-one mentioned, I changed the profile name to default and it fixed the issue. Go to the command line and open the credentials file:

open ~/.aws/credentials

Then change whatever name is in [whatevername] to [default]:

[default]
aws_access_key_id = *************************
aws_secret_access_key = *********************************************
aws_session_token =**************************************************
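As a quick programmatic sanity check for the point above, here is a hedged, stdlib-only sketch (a hypothetical helper, not an SDK API) that verifies a credentials-file text contains the literal [default] header, which the v1 profile provider reads unless a different profile is selected:

```java
public class DefaultProfileCheck {

    /**
     * True if the given shared-credentials-file text contains a literal
     * [default] profile header on its own line.
     */
    static boolean hasDefaultProfile(String credentialsFileText) {
        for (String line : credentialsFileText.split("\\R")) {
            if (line.trim().equals("[default]")) {
                return true;
            }
        }
        return false;
    }
}
```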

albertoandreottiATgmail commented 4 years ago

I have set the secret information everywhere (in all the recommended places) and it is unable to find it anywhere. This is very broken.

therealppk commented 3 years ago

I've set up a Spark standalone cluster on EC2 instances (1 master, 2 workers) and am trying to deploy an application in cluster mode. The application jar is in S3. I'm getting the same error.

Command:

spark/bin/spark-submit --deploy-mode client --master spark://xxxx:7077 --class RawProcessingHandler s3a://xxxxx/spark.jar some args

Output:

21/01/05 09:15:38 INFO SecurityManager: Changing view acls to: root
21/01/05 09:15:38 INFO SecurityManager: Changing modify acls to: root
21/01/05 09:15:38 INFO SecurityManager: Changing view acls groups to: 
21/01/05 09:15:38 INFO SecurityManager: Changing modify acls groups to: 
21/01/05 09:15:38 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
21/01/05 09:15:38 INFO Utils: Successfully started service 'driverClient' on port 43569.
21/01/05 09:15:38 INFO TransportClientFactory: Successfully created connection to xxxx:7077 after 84 ms (0 ms spent in bootstraps)
21/01/05 09:15:38 INFO ClientEndpoint: Driver successfully submitted as driver-20210105091538-0021
21/01/05 09:15:38 INFO ClientEndpoint: ... waiting before polling master for driver state
21/01/05 09:15:43 INFO ClientEndpoint: ... polling master for driver state
21/01/05 09:15:43 INFO ClientEndpoint: State of driver-20210105091538-0021 is ERROR
21/01/05 09:15:43 ERROR ClientEndpoint: Exception from cluster was: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
    at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)
    at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
    at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1866)
    at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:721)
    at org.apache.spark.util.Utils$.fetchFile(Utils.scala:509)
    at org.apache.spark.deploy.worker.DriverRunner.downloadUserJar(DriverRunner.scala:155)
    at org.apache.spark.deploy.worker.DriverRunner.prepareAndRunDriver(DriverRunner.scala:173)
    at org.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:92)
21/01/05 09:15:43 INFO ShutdownHookManager: Shutdown hook called
21/01/05 09:15:43 INFO ShutdownHookManager: Deleting directory /tmp/spark-40618d5d-af47-4700-b8db-1d4befea22eb

I've added the credentials as environment variables and also set them using aws configure. Can you please help?

kwiecien commented 3 years ago

I had the same error. In my case it helped to add the STS dependency. If you use an AWS profile, STS has to be on the class path.

My solution for mvn project with AWS SDK Java v2 :

        <!-- necessary to be on the class path for ProfileCredentialsProvider -->
        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>sts</artifactId>
        </dependency>

A solution for SDK v1 is similar - aws-java-sdk-sts module must be on the class path.
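For reference, a v1 Maven snippet along these lines might look like the following (the version shown is the one cited elsewhere in this thread; align it with your other aws-java-sdk modules):

```xml
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-sts</artifactId>
    <version>1.11.956</version>
</dependency>
```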

rajeevprasanna commented 3 years ago

Adding this dependency solved the problem: api("com.amazonaws:aws-java-sdk-sts:1.11.956")

jrichardsz commented 3 years ago

If someone has problems with the credentials provider chain: I was able to connect to AWS using environment variables (best practice) instead of the chain (properties or other sources):

EnvironmentVariableCredentialsProvider credentialsProvider = new EnvironmentVariableCredentialsProvider();
CloudWatchLogsClient logsClient = CloudWatchLogsClient.builder()
        .region(Region.of(regionId))
        .credentialsProvider(credentialsProvider)
        .build();

And these dependencies:

<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>logs</artifactId>
  <version>2.0.0-preview-4</version>
</dependency>

Just export these variables before the execution:

AWS_ACCESS_KEY_ID=changeme
AWS_SECRET_ACCESS_KEY=dontseeme

zoltangoendoes commented 2 years ago

Another cause could be that you are using a proxy. In that case, make sure you allow direct access to 169.254.170.2 (the container credentials endpoint) and 169.254.169.254 (the EC2 instance metadata endpoint), which are the addresses queried when using a role or instance profile.
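One hedged mitigation sketch, assuming your JVM routes traffic through a proxy via the standard networking system properties (the ClientConfiguration.setNonProxyHosts(...) setter shown earlier in this thread is the SDK-level equivalent; the class and helper below are made up for illustration):

```java
public class ProxyExemption {

    /**
     * Appends the link-local credential endpoints to an existing
     * http.nonProxyHosts value so the SDK can reach them directly.
     */
    static String withMetadataExemptions(String existingNonProxyHosts) {
        String metadataHosts = "169.254.169.254|169.254.170.2";
        return (existingNonProxyHosts == null || existingNonProxyHosts.isEmpty())
                ? metadataHosts
                : existingNonProxyHosts + "|" + metadataHosts;
    }

    public static void main(String[] args) {
        System.setProperty("http.nonProxyHosts",
                withMetadataExemptions(System.getProperty("http.nonProxyHosts")));
    }
}
```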

RandLVT commented 2 years ago

You can also go to your AWS account, select "Command line or programmatic access" on the right side, and copy option 2 into your credentials file. Next, remove [NUMBERS_PowerUserAccess] and replace it with the profile name. The only downside is that you will have to copy the session token every four hours.

babaralishah commented 2 years ago

Putting default at the top of the AWS configuration file really resolved my issue. Thank you so much!

DarkBitz commented 2 years ago

The problem is that the AWS credentials provider is simply unreliable; if you stick to the best practice of using IAM roles, you will run into this error sooner or later.

ShanikaEdiriweera commented 1 year ago

I am experiencing this issue when an EKS container tries to call cognitoidentityprovider.DefaultCognitoIdentityProviderClient.getUser. Do I need to grant any Cognito-related permission to the pod role?

I am curious because I did not need any Cognito-specific permissions when I ran the same app on ECS Fargate.

ShanikaEdiriweera commented 1 year ago

This helped me: https://github.com/aws/aws-sdk-java-v2/issues/2961 (adding the sts dependency).