yegenpres opened this issue 3 weeks ago
Hey @yegenpres, if the storage was created using `defineStorage`,
you should be able to use the `access` property to grant the function access to the bucket.
```typescript
import { defineStorage } from '@aws-amplify/backend';
import { myFunction } from './my-function/resource';

export const storage = defineStorage({
  name: 'myStorage',
  access: (allow) => ({
    'media/*': [allow.resource(myFunction).to(['read', 'write', 'delete'])]
  })
});
```
This should provide an environment variable with the format `<storage-name>_BUCKET_NAME`
in the function.
For details, refer to the documentation: https://docs.amplify.aws/react/build-a-backend/storage/authorization/
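As an illustration of that naming convention: the helper below is mine, not part of the Amplify API, and the camelCase-to-SCREAMING_SNAKE_CASE mapping is inferred from the examples later in this thread (e.g. `appStorage` yields `APP_STORAGE_BUCKET_NAME`).

```typescript
// Illustrative sketch only: defineStorage({ name }) exposes an environment
// variable <NAME_IN_SCREAMING_SNAKE_CASE>_BUCKET_NAME to functions that were
// granted access. This helper just demonstrates the naming convention.
const bucketEnvVarName = (storageName: string): string =>
  storageName
    .replace(/([a-z0-9])([A-Z])/g, "$1_$2") // split camelCase words
    .toUpperCase() + "_BUCKET_NAME";

console.log(bucketEnvVarName("myStorage"));  // MY_STORAGE_BUCKET_NAME
console.log(bucketEnvVarName("appStorage")); // APP_STORAGE_BUCKET_NAME
```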
@ykethan take a look at this error: https://github.com/aws-amplify/amplify-backend/issues/1769
This approach does not work.
@ykethan I have tried one more time, and it does not work.
This is my storage resource:
```typescript
export const storage = defineStorage({
  name: 'appStorage',
  access: (allow) => ({
    'as_images/*': [
      allow.authenticated.to(['read', 'write', 'delete'])
    ],
    'audio/*': [
      allow.resource(textToAudio).to(['read', 'write', 'delete']),
      allow.authenticated.to(['read', 'write', 'delete'])
    ],
  })
});
```
and my backend file:
```typescript
const textToAudioLambda = backend.textToAudio.resources.lambda;

const pollyPolicyStatement = new iam.PolicyStatement({
  actions: ['polly:*'], // Full access to the Polly service
  resources: ['*'], // Apply to all resources
});

const translatePolicyStatement = new iam.PolicyStatement({
  actions: ['translate:*'], // Full access to the Translate service
  resources: ['*'], // Apply to all resources
});

textToAudioLambda.addToRolePolicy(pollyPolicyStatement);
textToAudioLambda.addToRolePolicy(translatePolicyStatement);
```
So in that case, when I try to save a file to S3...
Hey @yegenpres, thank you for providing the definitions used. Could you provide the full error message you are observing? Does the error reference the resource-based policy or the identity-based policy? An access denied error may also occur due to the Block Public Access settings on the account. For reference, see this document: https://docs.aws.amazon.com/AmazonS3/latest/userguide/troubleshoot-403-errors.html#access-denied-message-examples
@ykethan `d4fcdcde-051a-4a42-898f-e34ec0ceed04 INFO error AccessDenied: Access Denied`
That is all I can see about the error in CloudWatch.
Hey @yegenpres, interesting. Would you be open to a quick chat on Discord to dive into this? My handle on Discord is ykethan.
@ykethan that is all CloudWatch shows. Nothing more. CloudTrail is empty. I don't know how to show you more.
You can easily reproduce this error from an empty Amplify project.
Just create a function and try to put an object to S3, with an IAM policy set up like mine.
@yegenpres I did try to reproduce this using the following handler:
```typescript
import { env } from "$amplify/env/api-function";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { APIGatewayProxyHandler } from "aws-lambda";

const s3Client = new S3Client({
  region: env.AWS_DEFAULT_REGION || "us-east-1",
});
const bucketName = env.APP_STORAGE_BUCKET_NAME;

export const handler: APIGatewayProxyHandler = async (event, _context) => {
  try {
    const fileName = "example.txt"; // Example file name
    const fileContent = "This is a sample file content"; // Example file content
    const params = {
      Bucket: bucketName,
      Key: `audio/${fileName}`,
      Body: fileContent,
      ContentType: "text/plain",
    };
    const command = new PutObjectCommand(params);
    const result = await s3Client.send(command);
    return {
      statusCode: 200,
      body: JSON.stringify({
        message: "File uploaded successfully",
        location: `https://${bucketName}.s3.amazonaws.com/audio/${fileName}`,
      }),
    };
  } catch (error) {
    console.error("Error uploading file: ", error);
    let errorMessage = "Unknown error";
    if (error instanceof Error) {
      errorMessage = error.message;
    }
    return {
      statusCode: 500,
      body: JSON.stringify({
        message: "Error uploading file",
        error: errorMessage,
      }),
    };
  }
};
```
with this storage resource:
```typescript
import { defineStorage } from "@aws-amplify/backend";
import { myApiFunction } from "../functions/test-function/resource";

export const storage = defineStorage({
  name: "appStorage",
  access: (allow) => ({
    "as_images/*": [allow.authenticated.to(["read", "write", "delete"])],
    "audio/*": [
      allow.resource(myApiFunction).to(["read", "write", "delete"]),
      allow.authenticated.to(["read", "write", "delete"]),
    ],
  }),
});
```
I then invoked the function in the console for a test, which returned:
```json
{
  "statusCode": 200,
  "body": "{\"message\":\"File uploaded successfully\",\"location\":\"<bucketName>/audio/example.txt\"}"
}
```
I also verified the permissions on the function's role policy:
"Statement": [
{
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<bucket-name>/audio/*",
"Effect": "Allow"
},
{
"Condition": {
"StringLike": {
"s3:prefix": [
"audio/*",
"audio/"
]
}
},
"Action": "s3:ListBucket",
"Resource": "<bucket-name>",
"Effect": "Allow"
},
{
"Action": "s3:PutObject",
"Resource": "<bucket-name>/audio/*",
"Effect": "Allow"
},
{
"Action": "s3:DeleteObject",
"Resource": "<bucket-name>/audio/*",
"Effect": "Allow"
}
Hi @ykethan, try also adding these permissions to the current function. I think the problem appears when I assign any other permissions via the CDK API:

```typescript
const textToAudioLambda = backend.textToAudio.resources.lambda;

const pollyPolicyStatement = new iam.PolicyStatement({
  actions: ['polly:*'], // Full access to the Polly service
  resources: ['*'], // Apply to all resources
});
const translatePolicyStatement = new iam.PolicyStatement({
  actions: ['translate:*'], // Full access to the Translate service
  resources: ['*'], // Apply to all resources
});

textToAudioLambda.addToRolePolicy(pollyPolicyStatement);
textToAudioLambda.addToRolePolicy(translatePolicyStatement);
```
@yegenpres I did ensure the Polly and Translate permissions were added as well.
Does the project have any permissions boundary set? For testing, are you invoking the function in the AWS console? If yes, does the user logged in to the console have Lambda invoke permissions attached? Additionally, on the deployed Lambda function under Configuration -> Permissions, in the role document, do you observe PutObject permissions for the audio path?
@ykethan the Polly and Translate permissions seem to work fine, because when I remove the permissions from the storage resource file and instead add them via the CDK API like this:

```typescript
textToAudioLambda.addToRolePolicy(s3PolicyStatement);

const s3ExternalReadOnlyPolicyStatement = new iam.PolicyStatement({
  actions: [
    "s3:GetObject",
    "s3:GetObjectVersion",
    "s3:GetObjectTagging",
    "s3:GetObjectVersionTagging",
    "s3:ListBucket",
  ],
  resources: [
    `arn:aws:s3:::${externalBucket.bucket_name}`,
    `arn:aws:s3:::${externalBucket.bucket_name}/*`,
  ],
});
```
the function works fine and does all its work, but in that case I need to pass the app storage bucket name to the function manually, which is not correct.
@yegenpres how is the key for the S3 SDK call being passed in? Are you passing it as `/audio/<item>` or as `audio/<item>`?
I did notice that when using `/audio/`, the S3 SDK call returns an Access Denied error, since it resolves against the root path; when using just `audio/`, the call succeeded.
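Since a leading slash makes the key fall outside an `audio/*` path grant, one way to guard against it is to normalize keys before building the `PutObjectCommand` input. A minimal sketch; the helper name is mine, not from the SDK:

```typescript
// Strip any leading slashes so "audio/file.mp3" is used rather than
// "/audio/file.mp3", which would not match an "audio/*" policy resource.
const normalizeKey = (key: string): string => key.replace(/^\/+/, "");

console.log(normalizeKey("/audio/example.mp3")); // audio/example.mp3
console.log(normalizeKey("audio/example.mp3"));  // audio/example.mp3
```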
Additionally, the permissions you included grant the Lambda access to the bucket root and all nested objects (see https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-policy-language-overview.html):

```typescript
`arn:aws:s3:::${externalBucket.bucket_name}`,
`arn:aws:s3:::${externalBucket.bucket_name}/*`
```
But if you would like to manually pass the name of the storage bucket as an environment variable to a function, you can use the `addEnvironment` method on the function:

```typescript
backend.myApiFunction.addEnvironment(
  "s3Name",
  backend.storage.resources.bucket.bucketName
);
```
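Inside the handler, a variable added with `addEnvironment` is then read from `process.env` under the same key. A sketch assuming the `s3Name` key from above; the helper is illustrative, not part of the Amplify API:

```typescript
// Read the bucket name injected via addEnvironment("s3Name", ...).
// The environment parameter defaults to process.env; passing it explicitly
// makes the helper testable without touching real environment variables.
const getBucketName = (
  environment: Record<string, string | undefined> = process.env
): string => environment.s3Name ?? "";
```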
@ykethan this is my Lambda policy when I grant access to S3 from the storage `resource.ts` file, as the documentation says:
{
"partial": false,
"policies": [
{
"type": "inline",
"name": "storageAccess268C740FB",
"document": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<name>/audio/*",
"Effect": "Allow"
},
{
"Condition": {
"StringLike": {
"s3:prefix": [
"audio/*",
"audio/"
]
}
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::<name>/audio/*",
"Effect": "Allow"
},
{
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::<name>/audio/*",
"Effect": "Allow"
},
{
"Action": "s3:DeleteObject",
"Resource": "arn:aws:s3:::<name>/audio/*",
"Effect": "Allow"
}
]
}
},
{
"type": "inline",
"name": "textToAudiolambdaServiceRoleDefaultPolicyF5680C55",
"document": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "polly:*",
"Resource": "*",
"Effect": "Allow"
},
{
"Action": "translate:*",
"Resource": "*",
"Effect": "Allow"
},
{
"Action": "ssm:GetParameters",
"Resource": "arn:aws:ssm:eu-central-1:957807784596:parameter/amplify/resource_reference/<path>/APP_STORAGE_BUCKET_NAME",
"Effect": "Allow"
}
]
}
},
{
"type": "managed",
"name": "AWSLambdaBasicExecutionRole",
"arn": "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
"id": "ANPAJNCQGXC42545SKXIK",
"document": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
}
],
"resources": {
"s3": {
"service": {
"name": "Amazon S3",
"icon": "data:image"
},
"statements": [
{
"service": "s3",
"effect": "Allow",
"action": "s3:GetObject",
"resource": "arn:aws:s3:::<name>/audio/*",
"source": {
"policyName": "storageAccess268C740FB",
"policyType": "inline",
"index": "0"
}
},
{
"service": "s3",
"effect": "Allow",
"action": "s3:ListBucket",
"resource": "arn:aws:s3:::<name>",
"source": {
"policyName": "storageAccess268C740FB",
"policyType": "inline",
"index": "1"
}
},
{
"service": "s3",
"effect": "Allow",
"action": "s3:PutObject",
"resource": "arn:aws:s3:::<name>/audio/*",
"source": {
"policyName": "storageAccess268C740FB",
"policyType": "inline",
"index": "2"
}
},
{
"service": "s3",
"effect": "Allow",
"action": "s3:DeleteObject",
"resource": "arn:aws:s3:::<name>/audio/*",
"source": {
"policyName": "storageAccess268C740FB",
"policyType": "inline",
"index": "3"
}
}
]
},
"polly": {
"service": {
"name": "Amazon Polly",
"icon": "data:image"
},
"statements": [
{
"service": "polly",
"effect": "Allow",
"action": "polly:*",
"resource": "*",
"source": {
"policyName": "textToAudiolambdaServiceRoleDefaultPolicyF5680C55",
"policyType": "inline",
"index": "0"
}
}
]
},
"translate": {
"service": {
"name": "Amazon Translate",
"icon": "data:image"
},
"statements": [
{
"service": "translate",
"effect": "Allow",
"action": "translate:*",
"resource": "*",
"source": {
"policyName": "textToAudiolambdaServiceRoleDefaultPolicyF5680C55",
"policyType": "inline",
"index": "1"
}
}
]
},
"ssm": {
"service": {
"name": "AWS Systems Manager",
"icon": "data:image"
},
"statements": [
{
"service": "ssm",
"effect": "Allow",
"action": "ssm:GetParameters",
"resource": "arn:aws:ssm:eu-central-1:957807784596:parameter/amplify/resource_reference/<path>/APP_STORAGE_BUCKET_NAME",
"source": {
"policyName": "textToAudiolambdaServiceRoleDefaultPolicyF5680C55",
"policyType": "inline",
"index": "2"
}
}
]
},
"logs": {
"service": {
"name": "Amazon CloudWatch Logs",
"icon": "data:image"
},
"statements": [
{
"service": "logs",
"effect": "Allow",
"action": "logs:CreateLogGroup",
"resource": "*",
"source": {
"policyName": "AWSLambdaBasicExecutionRole",
"policyType": "managed",
"index": "0"
}
},
{
"service": "logs",
"effect": "Allow",
"action": "logs:CreateLogStream",
"resource": "*",
"source": {
"policyName": "AWSLambdaBasicExecutionRole",
"policyType": "managed",
"index": "0"
}
},
{
"service": "logs",
"effect": "Allow",
"action": "logs:PutLogEvents",
"resource": "*",
"source": {
"policyName": "AWSLambdaBasicExecutionRole",
"policyType": "managed",
"index": "0"
}
}
]
}
},
"roleName": "amplify-easywordseditor-a-textToAudiolambdaServiceR-<someId>",
"trustedEntities": [
"lambda.amazonaws.com"
]
}
@yegenpres thanks for the information; the permissions on the function appear to be correct. Could you provide the handler/S3 API call for the function so we can reproduce this issue?
@ykethan

```typescript
export const handler: Schema["textToAudio"]["functionHandler"] = async (event, context) => {
  const { text, path } = event.arguments;
  const params = {
    Text: text ?? "",
    OutputFormat: OutputFormat.MP3,
    VoiceId: VoiceId.Ruth,
    LanguageCode: LanguageCode.en_US,
    Engine: Engine.NEURAL,
  };
  try {
    const command = new SynthesizeSpeechCommand(params);
    const result = await client.send(command);
    const chunks = [];
    // @ts-ignore
    for await (const chunk of result.AudioStream) {
      chunks.push(chunk);
    }
    const audioBuffer = Buffer.concat(chunks);
    const bucketName = env.APP_STORAGE_BUCKET_NAME;
    const key = `audio/${path}.mp3`;
    const input = {
      Body: audioBuffer,
      Bucket: bucketName,
      Key: key,
      Tagging: "key1=value1&key2=value2",
      ContentType: "audio/mpeg",
      ContentDisposition: "inline",
    };
    const s3res = await s3Client.send(new PutObjectCommand(input));
    return {
      isSuccess: true,
      data: `https://${bucketName}.s3.amazonaws.com/${key}`,
      message: "",
    };
  } catch (error) {
    return {
      isSuccess: false,
      data: "",
      message: error?.toString() ?? "some error",
    };
  }
};
```
Hey @yegenpres, thank you for the handler. On testing it, I was able to reproduce the access denied error on `Tagging` when running the query using the AppSync console:

```
AccessDenied: User: arn:aws:sts:::assumed-role/<role-name> is not authorized to perform: s3:PutObjectTagging on resource: "<s3-arn>/audio/testa.mp3" because no identity-based policy allows the s3:PutObjectTagging action
```
After adding the following policy to the function role, the query succeeded:
```typescript
const myLambda = backend.myApiFunction.resources.lambda;

myLambda.role?.attachInlinePolicy(
  new iam.Policy(backend.storage.resources.bucket, "allows3PutObjectTagging", {
    statements: [
      new iam.PolicyStatement({
        actions: ["s3:PutObjectTagging"],
        resources: [backend.storage.resources.bucket.bucketArn + "/audio/*"],
      }),
    ],
  })
);
```
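For reference, the `Tagging` parameter on `PutObjectCommand` takes a URL-encoded query string. A small helper to build it safely from a record; the function is my own, not part of the SDK:

```typescript
// Build the URL-encoded "k1=v1&k2=v2" string that PutObjectCommand's
// Tagging parameter expects; keys and values are percent-encoded so that
// special characters do not corrupt the query-string format.
const toTaggingString = (tags: Record<string, string>): string =>
  Object.entries(tags)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join("&");

console.log(toTaggingString({ key1: "value1", key2: "value2" }));
// key1=value1&key2=value2
```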
@ykethan thank you so much! 1) Are you going to include this policy in Amplify's "write" action, or is there another way to fix this problem? 2) For the future, how can I get error details just from the error object?
@yegenpres, marking this as a feature request for further evaluation of adding tagging permissions to the storage resource. For reproduction, I used the same example you provided:
```typescript
import { env } from "$amplify/env/api-function";
import {
  Engine,
  LanguageCode,
  OutputFormat,
  PollyClient,
  SynthesizeSpeechCommand,
  VoiceId,
} from "@aws-sdk/client-polly";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import type { Schema } from "../../data/resource";

const s3Client = new S3Client({
  region: env.AWS_DEFAULT_REGION || "us-east-1",
});
const client = new PollyClient({
  region: env.AWS_DEFAULT_REGION || "us-east-1",
});

export const handler: Schema["echo"]["functionHandler"] = async (
  event,
  context
) => {
  const { text, path } = event.arguments;
  const params = {
    Text: text ?? "",
    OutputFormat: OutputFormat.MP3,
    VoiceId: VoiceId.Ruth,
    LanguageCode: LanguageCode.en_US,
    Engine: Engine.NEURAL,
  };
  try {
    const command = new SynthesizeSpeechCommand(params);
    const result = await client.send(command);
    const chunks = [];
    // @ts-ignore
    for await (const chunk of result.AudioStream) {
      chunks.push(chunk);
    }
    const audioBuffer = Buffer.concat(chunks);
    const bucketName = env.APP_STORAGE_BUCKET_NAME;
    const key = `audio/${path}.mp3`;
    const input = {
      Body: audioBuffer,
      Bucket: bucketName,
      Key: key,
      Tagging: "key1=value1&key2=value2",
      ContentType: "audio/mpeg",
      ContentDisposition: "inline",
    };
    const s3res = await s3Client.send(new PutObjectCommand(input));
    // console.log(s3res);
    return {
      isSuccess: true,
      data: `https://${bucketName}.s3.amazonaws.com/${key}`,
      message: "",
    };
  } catch (error) {
    return {
      isSuccess: false,
      data: "",
      message: error?.toString() ?? "some error",
    };
  }
};
```
Environment information
Description
```typescript
// lambdaResource.ts
export const textToAudio = defineFunction({
  name: 'textToAudio',
  entry: './handler.ts',
  timeoutSeconds: 60,
  environment: {
    BUCKET_NAME: "amplify-some-bucet-name-XXXX"
  }
});
```

```typescript
// lambdaHandler.ts
export const handler: Schema = async (event, context) => {
  const bucketName = env.BUCKET_NAME;
};
```
The only way to pass the bucket name to the Lambda is to set the variable manually after the first deploy, but this approach is bad because the bucket must already exist. It also makes it impossible to deploy multiple stacks, because the bucket names will differ.
Question: how can a Lambda get access to the bucket name so it can run S3 commands?
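The resolution discussed earlier in this thread avoids the manual variable entirely: grant the function access in the storage resource and let Amplify inject the generated `<storage-name>_BUCKET_NAME` variable. A config sketch under assumptions (the `./text-to-audio/resource` import path is illustrative, not from this project):

```typescript
// storage/resource.ts (sketch): granting textToAudio access here makes
// Amplify inject APP_STORAGE_BUCKET_NAME into the function's environment,
// so no bucket name has to be hard-coded or set after the first deploy.
import { defineStorage } from "@aws-amplify/backend";
import { textToAudio } from "./text-to-audio/resource"; // hypothetical path

export const storage = defineStorage({
  name: "appStorage",
  access: (allow) => ({
    "audio/*": [allow.resource(textToAudio).to(["read", "write", "delete"])],
  }),
});
```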