mdeering24 closed this issue 4 years ago
Switched the credential provider to use ecsCredentials but still getting a 404.
$this->container['SecretsManagerClient'] = function ($c) {
    try {
        $provider = \Aws\Credentials\CredentialProvider::ecsCredentials();

        return new \Aws\SecretsManager\SecretsManagerClient([
            'version'     => '2017-10-17',
            'region'      => 'us-east-1',
            'credentials' => $provider
        ]);
    } catch (\Aws\Exception\AwsException $e) {
        echo $e->getMessage() . "\n";
        echo $e->getAwsRequestId() . "\n";
        echo $e->getAwsErrorType() . "\n";
        echo $e->getAwsErrorCode() . "\n";
    }
};
Error retrieving credential from ECS (Client error: `GET http://169.254.170.2` resulted in a `404 Not Found` response:
404 page not found
)
#0 /tmp/vendor/guzzlehttp/promises/src/Promise.php(203): Aws\Credentials\EcsCredentialProvider->Aws\Credentials\{closure}(Object(GuzzleHttp\Exception\ClientException))
#1 /tmp/vendor/guzzlehttp/promises/src/Promise.php(156): GuzzleHttp\Promise\Promise::callHandler(2, Array, Array)
#2 /tmp/vendor/guzzlehttp/promises/src/TaskQueue.php(47): GuzzleHttp\Promise\Promise::GuzzleHttp\Promise\{closure}()
#3 /tmp/vendor/guzzlehttp/guzzle/src/Handler/CurlMultiHandler.php(98): GuzzleHttp\Promise\TaskQueue->run()
#4 /tmp/vendor/guzzlehttp/guzzle/src/Handler/CurlMultiHandler.php(125): GuzzleHttp\Handler\CurlMultiHandler->tick()
#5 /tmp/vendor/guzzlehttp/promises/src/Promise.php(246): GuzzleHttp\Handler\CurlMultiHandler->execute(true)
#6 /tmp/vendor/guzzlehttp/promises/src/Promise.php(223): GuzzleHttp\Promise\Promise->invokeWaitFn()
#7 /tmp/vendor/guzzlehttp/promises/src/Promise.php(267): GuzzleHttp\Promise\Promise->waitIfPending()
#8 /tmp/vendor/guzzlehttp/promises/src/Promise.php(225): GuzzleHttp\Promise\Promise->invokeWaitList()
#9 /tmp/vendor/guzzlehttp/promises/src/Promise.php(62): GuzzleHttp\Promise\Promise->waitIfPending()
#10 /tmp/vendor/aws/aws-sdk-php/src/AwsClientTrait.php(58): GuzzleHttp\Promise\Promise->wait()
#11 /tmp/vendor/aws/aws-sdk-php/src/AwsClientTrait.php(86): Aws\AwsClient->execute(Object(Aws\Command))
#12 /app/lib/Fibroblast/Handler/GetSecretKeyHandler.php(71): Aws\AwsClient->__call('getSecretValue', Array)
#13 /app/lib/Fibroblast/Bus/StandardBus.php(76): Fibroblast\Handler\GetSecretKeyHandler->handleCommand(Object(Fibroblast\Command\GetSecretKeyCommand))
#14 /app/lib/Fibroblast/Controller/AbstractApplication.php(350): Fibroblast\Bus\StandardBus->execute(Object(Fibroblast\Command\GetSecretKeyCommand))
#15 /app/lib/Fibroblast/Controller/AbstractApplication.php(456): Fibroblast\Controller\AbstractApplication->afterCheckCurrentUser()
#16 /app/lib/Fibroblast/Controller/AbstractApplication.php(553): Fibroblast\Controller\AbstractApplication->after()
#17 /tmp/vendor/digintent/mefworks/lib/MVC/Dispatcher.php(141): Fibroblast\Controller\AbstractApplication->respond(Object(mef\MVC\MatchedRoute))
#18 /tmp/vendor/digintent/mefworks/lib/MVC/Dispatcher.php(92): mef\MVC\Dispatcher->handleRequest(Object(mef\HTTP\Request), Object(mef\MVC\MatchedRoute))
#19 /app/lib/Fibroblast/Application.php(350): mef\MVC\Dispatcher->process(Object(mef\HTTP\Request))
#20 /app/app/index.php(54): Fibroblast\Application->run(Object(mef\HTTP\Request))
#21 {main}
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
gives me the correct object back with RoleArn, AccessKeyId, SecretAccessKey
But if I do shell_exec(`curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI`)
it fails with a 404 page not found.
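That matches what the ECS provider does at a high level: it issues a GET against http://169.254.170.2 plus whatever AWS_CONTAINER_CREDENTIALS_RELATIVE_URI evaluates to inside the PHP process. A rough illustration (not the SDK's actual code) of why a missing variable produces exactly this 404, run through the same SAPI that fails:

<?php
// Illustration only: if the relative URI is not visible to this PHP process,
// the request target collapses to the bare metadata root, which answers
// with "404 page not found".
$relative = getenv('AWS_CONTAINER_CREDENTIALS_RELATIVE_URI') ?: '';
var_dump($relative);
echo 'GET http://169.254.170.2' . $relative . PHP_EOL;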
I have followed the advice posted by AWS on this issue here: https://forums.aws.amazon.com/thread.jspa?threadID=273767
The two suggested workarounds do not help with the SDK.
The first option got the AWS CLI working, but that is it.
- Export in the .profile file
By exporting the environment variables in the .profile file, additional processes can access them. This can be accomplished in the Dockerfile; for example, a RUN instruction can be used to add the export command to /root/.profile during the build of the container image.
RUN echo 'export $(strings /proc/1/environ)' >> /root/.profile
This statement would export all environment variables. Alternatively, to export only AWS_CONTAINER_CREDENTIALS_RELATIVE_URI:
RUN echo 'export $(strings /proc/1/environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)' >> /root/.profile
Please note: this solution has a dependency on the strings and grep commands.
This solution did not work either.
- Export in the wrapper script
If a wrapper script is being used, by modifying it to first export the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI variable, it will be set in the environment of child processes.
#!/bin/bash
export AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
# Run other processes
/root/start.sh
Hi @mdeering24, thanks for reaching out to us. If you're relying on instance profile credentials, this behavior is likely caused by recent changes in the Instance Metadata Service. You should be able to use EC2's ModifyInstanceMetadataOptions call to increase the hop limit to allow the SDK to retrieve instance profile credentials from IMDS as expected.
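For EC2-backed tasks, that call can also be made from the PHP SDK itself. A minimal sketch, assuming a placeholder instance ID and region (and only applicable when the task actually runs on an EC2 instance, not on Fargate):

<?php
require 'vendor/autoload.php';

// Placeholder instance ID and region; adjust for your environment.
$ec2 = new \Aws\Ec2\Ec2Client([
    'version' => '2016-11-15',
    'region'  => 'us-east-1',
]);

// Raise the IMDS hop limit so a containerized SDK can still reach
// instance metadata (this is the ModifyInstanceMetadataOptions call).
$ec2->modifyInstanceMetadataOptions([
    'InstanceId'              => 'i-0123456789abcdef0',
    'HttpEndpoint'            => 'enabled',
    'HttpPutResponseHopLimit' => 2,
]);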
I got the same error as @mdeering24. I created an issue here: https://github.com/aws/aws-sdk-php-laravel/issues/176
It is driving me crazy :(
Hi @diehlaws, we are using ECS Fargate; there is no EC2 instance here. How can that be?
Hi @at-bachhuynh,
I actually figured out my issue and forgot to post the solution and close this ticket. We were running a monolithic application in the container, with Supervisor as a wrapper for everything. The two steps we took to overcome this issue were:
1) Export in the .profile file
By exporting the environment variables in the .profile file, additional processes can access them. This can be accomplished in the Dockerfile; for example, a RUN instruction can be used to add the export command to /root/.profile during the build of the container image.
RUN echo 'export $(strings /proc/1/environ)' >> /root/.profile
Please note: this solution has a dependency on the strings and grep commands.
2) Update the PHP-FPM configuration
Change the PHP-FPM pool configuration to allow worker processes to access environment variables. By default PHP-FPM does not expose this information to workers for security reasons, but we felt comfortable turning this feature off. All we added was clear_env = no. See this link: https://www.php.net/manual/en/install.fpm.configuration.php and look up 'clear_env'.
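To double-check that the change took effect, a throwaway script served through PHP-FPM (the filename is arbitrary, e.g. env-check.php) can confirm the workers now see the variable; a minimal sketch:

<?php
// With clear_env = no in the FPM pool config this should print the
// /v2/credentials/... path; with the default clear_env it prints bool(false).
var_dump(getenv('AWS_CONTAINER_CREDENTIALS_RELATIVE_URI'));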
Hi @mdeering24
Really happy to get your response.
I tried adding clear_env = no
but it is still not working.
Could you please take a look at my issue: https://github.com/aws/aws-sdk-php-laravel/issues/176
Would you have time to chat with me via Facebook or WhatsApp?
Here is the result of the command:
ps e -p 1
It lists the process with PID 1 and its environment:
1 ? Ss 0:00 sh -c set -x;php artisan config:cache;php artisan migrate;service php7.2-fpm start ;ps e -p 1;nginx -g "daemon off;" PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-22-31-114.ap-northeast-1.compute.internal MAIL_ENCRYPTION=null MIX_CLIENT_ID=XXXXX CHANNEL_ID_FAMILY_LOGIN=XXXXX CHANNEL_ACCESS_TOKEN_2=xxxxx CHANNEL_SECRET_2=xxxx DB_DATABASE=uni ADMIN_TOKEN_EXPIRATION_TIME=60 MAIL_HOST=smtp.sendgrid.net MIX_END_POINT=https://xxxxx/api/ APP_KEY=xxxxxx CHANNEL_SECRET_1=xxxxx ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/393e0417-c1d1-4b2a-b4a4-fd254f50de47 MIX_LIFF_ID_CATEGORIES_FAMILY=null MAIL_FROM_ADDRESS=null MIX_LIFF_ID_CATEGORIES=null MIX_LIFF_ID_SETTING=null MIX_LIFF_ID_QUESTIONS_ANSWERS=xxx APP_DEBUG=TRUE LOG_CHANNEL=errorlog MIX_LIFF_ID_INTRODUCESERVICE=null APP_NAME=Pigeon RICH_MENU_ID_1=xxxx CALLBACK_FAMILY_LOGIN=xxxx MAIL_DRIVER=smtp AWS_EXECUTION_ENV=AWS_ECS_FARGATE AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/790e6073-8eb7-4835-b438-919b679e8879 MAIL_USERNAME=null MIX_CHANNEL_ID_MAIN_PERSON=x MAIL_PORT=null RICH_MENU_ID_2=xxxx MIX_JWT_SECRET=xxxx APP_URL=https://xxxx PROLOGUE_RICH_MENU_ID=xxxx S3_TEMPORARY_URL_FILE_EXPIRED=null MIX_LIFF_ID_EDIT_ANSWER=null MIX_LIFF_ID_CONFIRMRECOMMEND=null AWS_REGION=ap-northeast-1 MIX_CLIENT_SECRET=xxxxx GET_FRIEND_STATUS_LINE_ENDPOINT=https://api.line.me/friendship/v1/status JWT_SECRET=xxxxx MIX_LIFF_ID_DISCLOSURE_FAMILY=null DB_PASSWORD=xxxxx MAIL_PASSWORD=null AWS_BUCKET=xxxx MIX_LIFF_ID_CATEGORY=null DB_CONNECTION=mysql DB_USERNAME=xxx CHANNEL_ACCESS_TOKEN_1=xxxxx DB_HOST=xxxx DB_PORT=3306 AWS_DEFAULT_REGION=ap-northeast-1 MIX_END_POINT_ADMIN=https://xxxxx/admin/api/ APP_ENV=stg DB_NAME=xxxx MAIL_FROM_NAME=null NGINX_VERSION=1.17.4 NJS_VERSION=0.3.5 PKG_RELEASE=1 HOME=/root
And I see that the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is available
But my application still gets the error:
2019/12/19 16:15:02 [error] 39#39: *89 FastCGI sent in stderr: "PHP message: [2019-12-19 16:15:02] stg.ERROR: Aws\Exception\CredentialsException: Error retrieving credential from ECS (Client error: `GET http://169.254.170.2` resulted in a `404 Not Found` response:
404 page not found
Could you please take a look? @mdeering24 @diehlaws
Why is this closed? This still seems to be an issue. Your steps are very helpful @mdeering24, but surely this should work without having to go through these convoluted steps?
Turns out that those steps don't work anyway; I believe it's simply because /root/.profile won't be run when apache2 fires requests off to php-fpm.
Looks like things were moved to https://github.com/aws/aws-sdk-php-laravel/issues/176
Which is a bit weird, since this is a general SDK error, not Laravel-specific.
I ended up putting this in my entrypoint. It's super hacky, but I hope it can help people until this issue is properly fixed.
if [ -f /proc/1/environ ]; then
    # Pull the ECS task environment variables from PID 1 into this shell
    export $(strings /proc/1/environ)

    # Expose AWS_CONTAINER_CREDENTIALS_RELATIVE_URI to the PHP-FPM workers
    # if the pool config doesn't reference it already
    if ! grep -q 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI' /etc/php/7.2/fpm/pool.d/www.conf; then
        echo "clear_env = no" >> /etc/php/7.2/fpm/pool.d/www.conf
        echo "env[AWS_CONTAINER_CREDENTIALS_RELATIVE_URI] = $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" \
            >> /etc/php/7.2/fpm/pool.d/www.conf
    fi
else
    echo "Warning, couldn't find ECS environ file"
fi
I also have this error. artisan commands that interact with AWS services work fine, whereas FPM fails. We are using Fargate.
We have our application in ECS Fargate.
We are trying to have our application retrieve a secret from Secrets Manager. The SDK is getting a permissions error, while the AWS CLI works and is able to retrieve the secret fine.
Our application sets up a container with the secrets manager inside it:
We then have a Command Bus call on the SecretsManager:
Can you please help me understand why this issue is happening? Does Fargate store role credentials in a different area? Does the SDK not check that area? My current understanding of the SDK is that it knows to curl the container/instance metadata for the credentials, but that obviously doesn't seem to be the case.
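For reference, this is roughly the setup we expected to work out of the box on Fargate: no explicit keys anywhere, letting the SDK's default provider chain discover the task-role credentials through the container credentials endpoint. A minimal sketch (the SecretId below is a placeholder, not our real secret name):

<?php
require 'vendor/autoload.php';

// No 'credentials' key: the default chain tries environment variables,
// the shared config/credentials files, the ECS container credentials
// endpoint, and finally the EC2 instance metadata service.
$client = new \Aws\SecretsManager\SecretsManagerClient([
    'version' => '2017-10-17',
    'region'  => 'us-east-1',
]);

$result = $client->getSecretValue([
    'SecretId' => 'my-app/example-secret', // placeholder
]);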