aws-amplify / amplify-category-api

The AWS Amplify CLI is a toolchain for simplifying serverless web and mobile development. This plugin provides functionality for the API category, allowing for the creation and management of GraphQL and REST based backends for your amplify project.
https://docs.amplify.aws/
Apache License 2.0

500 Number of resources limit and 1000000 Template size limit #2550

Open MarlonJD opened 6 months ago

MarlonJD commented 6 months ago

How did you install the Amplify CLI?

npm

If applicable, what version of Node.js are you using?

v18.20.2

Amplify CLI Version

1.0.1

What operating system are you using?

MacOS

Did you make any manual changes to the cloud resources managed by Amplify? Please describe the changes made.

No

Describe the bug

I have 60 models and I'm trying to migrate to Gen 2. The npx ampx sandbox command tried to push, and then I got this warning:

Number of resources: 421 is approaching allowed maximum of 500
Template size is approaching limit: 824168/1000000. Split resources into multiple stacks or set suppressTemplateIndentation to reduce template size. [ack: @aws-cdk/core:Stack.templateSize]

Then I tried to deliberately exceed the limit and got an error like this:

Caused By: Error: Number of resources in stack 'amplify-amplifygen2demo-marlonjd-sandbox-608dc1dabf/data/amplifyData/ConnectionStack': 505 is greater than allowed maximum of 500: AWS::AppSync::FunctionConfiguration (336), AWS::AppSync::Resolver (168), AWS::CDK::Metadata (1)

So I was able to fix the resource limit issue by placing some resolvers in custom stacks, as described in the docs under "Place AppSync Resolvers in Custom-named Stacks" (sketched below), and I could fix the template size limit issue by running amplify push --minify.
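For reference, in Gen 1 that mapping goes in amplify/backend/api/&lt;api-name&gt;/transform.conf.json, per the linked doc. A minimal sketch, keeping whatever keys your existing file already has and treating the resolver logical IDs and stack name as placeholders:

```json
{
  "Version": 5,
  "StackMapping": {
    "TodochildrenResolver": "CustomResolverStack1",
    "SubscriptiononCreateTodoResolver": "CustomResolverStack1"
  }
}
```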

I couldn't do these in Gen 2. So how can we fix this?

Just for info: I saw this example of splitting stacks with the CDK, and it may help. I also saw an article where people solved this issue with aws-cdk and the Serverless Framework by using split-stacks.

Expected behavior

The stack should update successfully.

Reproduction steps

Creating a big schema will produce this error. I added some test models like the ones below. If the models have many relations, the resource count increases quickly.

const schema = a.schema({
  Test0: a
    .model({
      name: a.string(),
      test1Id: a.id(),
      test1: a.belongsTo("Test1", "test1Id"),
      test2id: a.id(),
      test2: a.belongsTo("Test2", "test2id"),
    })
    .authorization((allow) => [allow.authenticated()]),

  Test1: a
    .model({
      name: a.string(),
      test0s: a.hasMany("Test0", "test1Id"),
      test2id: a.id(),
      test2: a.belongsTo("Test2", "test2id"),
      test7s: a.hasMany("Test7", "test1id"),
    })
    .authorization((allow) => [allow.authenticated()]),

  Test2: a
    .model({
      name: a.string(),
      test1s: a.hasMany("Test1", "test2id"),
      test3id: a.id(),
      test3: a.belongsTo("Test3", "test3id"),
      test7s: a.hasMany("Test7", "test2id"),
      test0s: a.hasMany("Test0", "test2id"),
    })
    .authorization((allow) => [allow.authenticated()]),

  Test3: a
    .model({
      name: a.string(),
      test2s: a.hasMany("Test2", "test3id"),
      test4id: a.id(),
      test4: a.belongsTo("Test4", "test4id"),
      test5id: a.id(),
      test5: a.belongsTo("Test5", "test5id"),
      test7s: a.hasMany("Test7", "test3id"),
    })
    .authorization((allow) => [allow.authenticated()]),

  Test4: a
    .model({
      name: a.string(),
      test3s: a.hasMany("Test3", "test4id"),
      test6id: a.id(),
      test6: a.belongsTo("Test6", "test6id"),
      test7s: a.hasMany("Test7", "test4id"),
    })
    .authorization((allow) => [allow.authenticated()]),

  Test5: a
    .model({
      name: a.string(),
      test3s: a.hasMany("Test3", "test5id"),
      test7s: a.hasMany("Test7", "test5id"),
    })
    .authorization((allow) => [allow.authenticated()]),

  Test6: a
    .model({
      name: a.string(),
      test4s: a.hasMany("Test4", "test6id"),
      test7id: a.id(),
      test7: a.belongsTo("Test7", "test7id"),
      test8id: a.id(),
      test8: a.belongsTo("Test8", "test8id"),
    })
    .authorization((allow) => [allow.authenticated()]),

  Test7: a
    .model({
      name: a.string(),
      test6s: a.hasMany("Test6", "test7id"),
      test1id: a.id(),
      test1: a.belongsTo("Test1", "test1id"),
      test2id: a.id(),
      test2: a.belongsTo("Test2", "test2id"),
      test3id: a.id(),
      test3: a.belongsTo("Test3", "test3id"),
      test4id: a.id(),
      test4: a.belongsTo("Test4", "test4id"),
      test5id: a.id(),
      test5: a.belongsTo("Test5", "test5id"),
    })
    .authorization((allow) => [allow.authenticated()]),

  Test8: a
    .model({
      name: a.string(),
      test6s: a.hasMany("Test6", "test8id"),
    })
    .authorization((allow) => [allow.authenticated()]),
});

Project Identifier

No response

Log output

No response

Additional information

No response


MarlonJD commented 6 months ago

I think I fixed this by adding the following code to backend.ts:

import { NestedStack } from "aws-cdk-lib";

// amplifyDataStack is assumed to have been collected earlier from the
// backend's data resources (e.g. the nested stacks under backend.data).
// Get the Amplify data stack
const amplifyData = amplifyDataStack[0];

// Find the ConnectionStack child of the data stack
const connectionStack = amplifyData.node.children.filter((child) =>
  child.node.id.includes("ConnectionStack")
);

// If the ConnectionStack has more than 500 children
if (connectionStack[0].node.children.length > 500) {
  console.log("All Resources Count:", connectionStack[0].node.children.length);

  // Create new nested stacks; each nested stack holds up to 200 resources.
  // connectionStack[0].node.children.length / 200 = number of nested stacks
  const nestedStacksCount = Math.ceil(
    connectionStack[0].node.children.length / 200
  );

  console.log("Splitting, Nested Stacks Count:", nestedStacksCount);

  for (let i = 0; i < nestedStacksCount; i++) {
    const nestedStack = new NestedStack(
      connectionStack[0],
      `ConnectionStackSplit${i}`
    );

    // Take the children from index i * 200 to (i + 1) * 200
    const startIndex = i * 200;
    const endIndex = (i + 1) * 200;

    // Record a dependency on each child (note: this only adds dependencies,
    // it does not re-parent the resources into the nested stack)
    connectionStack[0].node.children
      .slice(startIndex, endIndex)
      .forEach((child) => {
        nestedStack.node.addDependency(child);
      });

    amplifyData.node.addDependency(nestedStack);
  }

  // Remove the original children from the ConnectionStack
  connectionStack[0].node.children.forEach((child) => {
    connectionStack[0].node.tryRemoveChild(child.node.id);
  });
}

I'm still running some tests; I'll post any news here.

AnilMaktala commented 6 months ago

Hey @MarlonJD, Thank you for raising this issue and sharing the alternative solution. We will incorporate the workaround instructions into the troubleshooting section of the documentation. Additionally, we are marking this as a bug for further evaluation by the team.

MarlonJD commented 6 months ago

Hello @AnilMaktala. Thanks for your reply. I'm glad to hear that, and glad there's a solution, even if it's only temporary for now.

MarlonJD commented 6 months ago

Hello @AnilMaktala

The workaround does not work, on old or new versions, when the stack is being created from scratch. I created the split NestedStacks, but I cannot move the existing resources into the new nested stacks. It's a really important issue. If anyone can help with moving resources between CDK/CloudFormation stacks, it should work.

It only works when an existing deployment is updated with 500 or fewer resources at a time, so I cannot do a first-time deploy; I have to deploy the schema in something like 3-4 parts, which is an awful solution.

I cannot migrate to Gen 2 because of this. It's a huge blocker for a big project. I hope it will be fixed soon.

AnilMaktala commented 5 months ago

I can replicate this issue using a schema with 70+ models.

[screenshot attached]

LukaASoban commented 3 months ago

I am about to hit this limit. Is there any workaround for this for now?

Template size is approaching limit: 903572/1000000. Split resources into multiple stacks or set suppressTemplateIndentation to reduce template size.

MarlonJD commented 3 months ago

You can fix this error with custom stack mapping, but it's not merged yet in Gen 2; you have to modify the source and push manually for now. I hope it can be fixed soon.

LukaASoban commented 3 months ago

@MarlonJD would you kindly be able to share your workaround for this? So you are saying that with this workaround I can no longer "push" in one go and have to split it up into multiple deployments?

MarlonJD commented 3 months ago

Hey @LukaASoban, which one are you using, Amplify Gen 1 or Gen 2?

LukaASoban commented 3 months ago

Gen 2

MarlonJD commented 3 months ago

Hey @LukaASoban, you have to build manually and push from your local machine when you use this workaround, because custom stack mapping is not merged yet. However, @AnilMaktala has already created a test version of these changes, so you can try that first; if it works, you should be able to use auto build. You need to use these specific versions in your backend (or edit it manually to use them; see the package.json sketch after the list):

@aws-amplify/auth-construct@0.0.0-test-20240603204424
@aws-amplify/backend@0.0.0-test-20240603204424
@aws-amplify/backend-data@0.0.0-test-20240603204424
@aws-amplify/backend-function@0.0.0-test-20240603204424
@aws-amplify/backend-storage@0.0.0-test-20240603204424
@aws-amplify/backend-cli@0.0.0-test-20240603204424
@aws-amplify/client-config@0.0.0-test-20240603204424
@aws-amplify/platform-core@0.0.0-test-20240603204424
@aws-amplify/sandbox@0.0.0-test-20240603204424
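If it helps, pinning those versions in package.json looks roughly like this; exactly which entries belong under devDependencies versus overrides (or resolutions for Yarn) depends on your package manager and how your project already declares them:

```json
{
  "devDependencies": {
    "@aws-amplify/backend": "0.0.0-test-20240603204424",
    "@aws-amplify/backend-cli": "0.0.0-test-20240603204424"
  },
  "overrides": {
    "@aws-amplify/auth-construct": "0.0.0-test-20240603204424",
    "@aws-amplify/backend-data": "0.0.0-test-20240603204424",
    "@aws-amplify/backend-function": "0.0.0-test-20240603204424",
    "@aws-amplify/backend-storage": "0.0.0-test-20240603204424",
    "@aws-amplify/client-config": "0.0.0-test-20240603204424",
    "@aws-amplify/platform-core": "0.0.0-test-20240603204424",
    "@aws-amplify/sandbox": "0.0.0-test-20240603204424"
  }
}
```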

Then go to your backend-folder/data/resource.ts file:

Find this code:

export const data = defineData({
  schema,
  authorizationModes: {
    defaultAuthorizationMode: "userPool",
    apiKeyAuthorizationMode: {
      expiresInDays: 365,
    },
  },
});

and add experimentalStackMapping to defineData; it should look like this:

export const data = defineData({
  schema,
  authorizationModes: {
    defaultAuthorizationMode: "userPool",
    apiKeyAuthorizationMode: {
      expiresInDays: 365,
    },
  },
  experimentalStackMapping: {
    EmployeehappyLevelsResolver: "SplittedCustomStack17",
    EmployeehappyTransactionsResolver: "SplittedCustomStack18",
  },
});

About the resolver names, in short: if the Todo model has a children relation, the mapping key will be TodochildrenResolver: "WhateverStackName". For example, for this Todo model:

  Todo: a
    .model({
      name: a.string(),
      children: a.hasMany("Children", "todoId"),
    })
    .authorization((allow) => [
      allow.authenticated(),
    ]),

So just find the hasMany fields on your models and add mappings for them until you're within the resource limit; a hypothetical mapping for the Todo example is sketched below.
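Pulling the pieces above together, a complete amplify/data/resource.ts might look roughly like this with the test builds that add experimentalStackMapping; the Children model and the stack name here are illustrative:

```ts
import { a, defineData, type ClientSchema } from "@aws-amplify/backend";

const schema = a.schema({
  Todo: a
    .model({
      name: a.string(),
      children: a.hasMany("Children", "todoId"),
    })
    .authorization((allow) => [allow.authenticated()]),

  Children: a
    .model({
      name: a.string(),
      todoId: a.id(),
      todo: a.belongsTo("Todo", "todoId"),
    })
    .authorization((allow) => [allow.authenticated()]),
});

export type Schema = ClientSchema<typeof schema>;

export const data = defineData({
  schema,
  authorizationModes: {
    defaultAuthorizationMode: "userPool",
  },
  // Only available in the test builds / PR mentioned above, not in the
  // released @aws-amplify/backend at the time of writing.
  experimentalStackMapping: {
    // "Todo" model + "children" hasMany field -> TodochildrenResolver
    TodochildrenResolver: "ConnectionSplitStack1",
  },
});
```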

I didn't try this method with auto build, because I'm already using custom functions and the change isn't merged yet. If you specify these versions, it could run on auto build; if it doesn't work, I'll try to help you with local editing.

LukaASoban commented 3 months ago

Thanks @MarlonJD, I will try this later. My only worry is: if and when the Amplify team fixes this, will it cause me issues, since they might not move forward with @AnilMaktala's approach?

MarlonJD commented 3 months ago

@LukaASoban I didn't fully understand what you mean, but you can use this approach for now if you need it immediately. If the team releases a new fix in a stable release, I think we can migrate to that solution. I don't think it will matter much, because we're just moving some stacks into other sub-stacks.

LukaASoban commented 3 months ago

I am not an expert with the CDK, but I guess I was worried about whether the logical IDs would be modified by moving resources to sub-stacks.

vinothj-aa commented 2 months ago

@MarlonJD Is there an ETA for the "experimentalStackMapping" feature to be officially part of Amplify Gen 2?

I'm facing the same issue with Amplify Gen 2: Template may not exceed 1000000 bytes in size.

To give you some context (as of September 16, 2024):

Models: 1
Custom Types: 32 (we have duplication check & aggregation)
Custom Queries: 36
Custom Mutations: 43

And we have several more modules to be developed, so the counts will be significantly higher. I'm stuck with this error and I'm not sure how to move forward from here!

I would really appreciate it if you could help me solve this issue with Amplify Gen 2. Thanks in advance.

MarlonJD commented 2 months ago

Hey @vinothj-aa, it's been 3 months and there has been no change on this issue. Right now I've only manually edited the Amplify backend and I push from my local machine, so I cannot use auto build. I think we need the stackMapping parameter to fix this issue, @AnilMaktala.

vinothj-aa commented 2 months ago

Experimental stackmapping changes for Gen2 aws-amplify/amplify-backend#1593

Do you mind sharing the steps? It would be very useful in our case of custom types. Also, I believe I can't use the minification and suppressTemplateIndentation options in Amplify Gen 2.

MarlonJD commented 2 months ago

You can directly use this method: https://github.com/aws-amplify/amplify-category-api/issues/2550#issuecomment-2245588706. I recommend editing your local Amplify with this PR, https://github.com/aws-amplify/amplify-backend/pull/1593, and then adding your resolvers to custom stack mappings. Did you use stack mappings on Gen 1? It's still the same; you can check here: https://docs.amplify.aws/gen1/javascript/build-a-backend/graphqlapi/modify-amplify-generated-resources/#place-appsync-resolvers-in-custom-named-stacks

If you have any issues while trying this, I'll try to help again.

vinothj-aa commented 2 months ago

> You can directly use this method: #2550 (comment). I recommend editing your local Amplify with this PR, aws-amplify/amplify-backend#1593, and then adding your resolvers to custom stack mappings. Did you use stack mappings on Gen 1? It's still the same; you can check here: https://docs.amplify.aws/gen1/javascript/build-a-backend/graphqlapi/modify-amplify-generated-resources/#place-appsync-resolvers-in-custom-named-stacks
>
> If you have any issues while trying this, I'll try to help again.

Sure, I'll check these out. Thank you for your response.

vinothj-aa commented 1 month ago

@MarlonJD I tried the steps mentioned in your comment; however, the stack is not split, and the custom types, queries, and mutations remain in the (nested) data stack.

Steps that I followed:

1. Installed the dependencies, for example "@aws-amplify/auth-construct": "^0.0.0-test-20240603204424", etc.
2. Updated the amplify/data/resource.ts file with the following:

experimentalStackMapping: {
    ResolverQuerylistRegionBusinessUnits: "SplitCustomQPAdminStack",
    ResolverQuerygetRegionBusinessUnit: "SplitCustomQPAdminStack",
    ResolverMutationaddRegionBusinessUnit: "SplitCustomQPAdminStack",
    ResolverMutationdeleteRegionBusinessUnit: "SplitCustomQPAdminStack",
    ResolverMutationupdateRegionBusinessUnit: "SplitCustomQPAdminStack",
}

I did not face any issues or errors but the stack remains the same. Am I missing something?

Note: I'm using external DynamoDB tables (for custom queries and mutations) by following the docs here: https://docs.amplify.aws/nextjs/build-a-backend/data/connect-to-existing-data-sources/connect-external-ddb-table/. I'm using custom JavaScript resolvers.
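For context, a custom query backed by an external table and a JavaScript resolver generally looks something like this in amplify/data/resource.ts; the type, field, data source, and file names below are illustrative, not taken from this project:

```ts
import { a, defineData } from "@aws-amplify/backend";

const schema = a.schema({
  RegionBusinessUnit: a.customType({
    id: a.id().required(),
    name: a.string(),
  }),

  listRegionBusinessUnits: a
    .query()
    .returns(a.ref("RegionBusinessUnit").array())
    .authorization((allow) => [allow.authenticated()])
    .handler(
      a.handler.custom({
        // Must match the name passed to addDynamoDbDataSource in backend.ts
        dataSource: "SampleTableDS",
        entry: "./listRegionBusinessUnits.js",
      })
    ),
});

export const data = defineData({
  schema,
  authorizationModes: { defaultAuthorizationMode: "userPool" },
});
```

Each such query or mutation adds an AppSync resolver (and its CloudFormation resource) to the data stack, which is presumably why the template byte count climbs quickly here.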

MarlonJD commented 1 month ago

Hey @vinothj-aa, did you try a manual deploy? Are you already under the 500 limit now? Can you share the output? The stack mapping may need more entries; I have nearly 70 models, and I have 50+ custom stack mappings for them. BTW, you cannot make big changes at once in Gen 2; @AnilMaktala and the Amplify team have been trying to fix this issue for a long time. Let's take a look at the output, maybe we can find a solution for you.

You can build manually with this command:

export CI=1 && npx ampx pipeline-deploy --branch <YOUR APP BRANCH> --app-id <YOUR APP ID> --outputs-out-dir <YOUR OUTPUT DIR>

MarlonJD commented 1 month ago

@AnilMaktala I just created a new PR, the same as yours but without the "experimental" prefix. I hope we can get this parameter soon; we cannot use auto build without it. If there are any tests I should run, just tell me and I can try.

vinothj-aa commented 1 month ago

> Hey @vinothj-aa, did you try a manual deploy? Are you already under the 500 limit now? Can you share the output? The stack mapping may need more entries; I have nearly 70 models, and I have 50+ custom stack mappings for them. BTW, you cannot make big changes at once in Gen 2; @AnilMaktala and the Amplify team have been trying to fix this issue for a long time. Let's take a look at the output, maybe we can find a solution for you.
>
> You can build manually with this command:
>
> export CI=1 && npx ampx pipeline-deploy --branch <YOUR APP BRANCH> --app-id <YOUR APP ID> --outputs-out-dir <YOUR OUTPUT DIR>

Did you mean that I have to deploy manually to the Amplify console using the above command?

Here's the count:

Models: 1
Custom Types: 32 (we have duplication check & aggregation)
Custom Queries: 36
Custom Mutations: 43

Backend Deployed Resources: 380

I'll be creating plenty more custom types/models as we just started the development.

vinothj-aa commented 1 month ago

> Hey @vinothj-aa, did you try a manual deploy? Are you already under the 500 limit now? Can you share the output? The stack mapping may need more entries; I have nearly 70 models, and I have 50+ custom stack mappings for them. BTW, you cannot make big changes at once in Gen 2; @AnilMaktala and the Amplify team have been trying to fix this issue for a long time. Let's take a look at the output, maybe we can find a solution for you.
>
> You can build manually with this command:
>
> export CI=1 && npx ampx pipeline-deploy --branch <YOUR APP BRANCH> --app-id <YOUR APP ID> --outputs-out-dir <YOUR OUTPUT DIR>

Please find the attached amplify_outputs.json file: amplify_outputs_trimmed.json

MarlonJD commented 1 month ago

@vinothj-aa You can use auto build if you're already using the npm test versions. What's the output of the build?

vinothj-aa commented 1 month ago

> @vinothj-aa You can use auto build if you're already using the npm test versions. What's the output of the build?

The build was successful, but there are still no split stacks. Here is the output: amplify_outputs_trimmed.json

MarlonJD commented 1 month ago

@vinothj-aa did you check in CloudFormation? This just splits the CloudFormation stack; it only solves the total-number-of-resources issue. If you haven't otherwise hit the 500-resource limit or the size limit, splitting your stacks in CloudFormation resolves it and you can update your backend. So can you now update your backend without an issue?

vinothj-aa commented 1 month ago

> @vinothj-aa did you check in CloudFormation? This just splits the CloudFormation stack; it only solves the total-number-of-resources issue. If you haven't otherwise hit the 500-resource limit or the size limit, splitting your stacks in CloudFormation resolves it and you can update your backend. So can you now update your backend without an issue?

I still get this warning: Template size is approaching limit: 998877/1000000. Split resources into multiple stacks or set suppressTemplateIndentation to reduce template size. [ack: @aws-cdk/core:Stack.templateSize]

As you can see, the limit is almost reached, and I'm unable to add more custom types, queries, or mutations, as it still leads to the original error: Template may not exceed 1000000 bytes in size.

Are you suggesting to add more resolvers to this section?

experimentalStackMapping: {
    ResolverQuerylistRegionBusinessUnits: "SplitCustomQPAdminStack",
    ResolverQuerygetRegionBusinessUnit: "SplitCustomQPAdminStack",
    ResolverMutationaddRegionBusinessUnit: "SplitCustomQPAdminStack",
    ResolverMutationdeleteRegionBusinessUnit: "SplitCustomQPAdminStack",
    ResolverMutationupdateRegionBusinessUnit: "SplitCustomQPAdminStack",
}

If yes, then the issue is that I'm unable to add more custom types, as the template size goes beyond 1000000 bytes. So I'm not sure how to add more custom types, queries, and mutations even by splitting the stack.

Right now, I'm adding an existing DynamoDB table like this:

./backend.ts

import { aws_dynamodb } from "aws-cdk-lib";

// externalDataSourcesStack and ddb2AppSyncRole are defined earlier in backend.ts
// (e.g. a stack created with backend.createStack(...) and an IAM role for AppSync,
// per the linked docs)
const externalSampleTable = aws_dynamodb.Table.fromTableName(
  externalDataSourcesStack,
  "MyExternalSampleTable",
  "SampleTable"
);

const sampleDS = backend.data.addDynamoDbDataSource(
  "SampleTableDS",
  externalSampleTable
);
sampleDS.ds.serviceRoleArn = ddb2AppSyncRole.roleArn;

I'm unable to add more existing DynamoDB tables like the one above (I need to do this to use transactions, batch operations, etc.) because of the template size error.

MarlonJD commented 1 month ago

@vinothj-aa Relations account for a lot of the resolver count in models. I'm not sure which type of resolvers is increasing your resources, but you can check. You need to add new resolvers to the stack mapping; you can find the resolvers by searching for "Resolver" in <backend_folder>/.amplify/artifacts/cdk.out/manifest.json. You'll see your resolvers; you can add them and check your template size with the push command, then cancel it with Ctrl+C in your terminal. When you're within the limit, you can push. You'll see the resource count drop quickly once you find the right resolver strings for experimentalStackMapping. (A small script for listing the resolver logical IDs is sketched below.)
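A minimal sketch of that manifest scan, assuming a Node/TypeScript script run from the backend folder (the manifest path is the one mentioned above; the printed stack name is just a placeholder):

```ts
import { readFileSync } from "node:fs";

// Path mentioned above; adjust to your project layout.
const manifestPath = ".amplify/artifacts/cdk.out/manifest.json";
const manifest = JSON.parse(readFileSync(manifestPath, "utf8"));

// The manifest nests metadata entries of the form
// { "type": "aws:cdk:logicalId", "data": "<LogicalId>" } under each artifact.
const logicalIds = new Set<string>();

const walk = (value: unknown): void => {
  if (Array.isArray(value)) {
    value.forEach(walk);
  } else if (value && typeof value === "object") {
    const entry = value as Record<string, unknown>;
    const data = entry.data;
    if (entry.type === "aws:cdk:logicalId" && typeof data === "string") {
      logicalIds.add(data);
    }
    Object.values(entry).forEach(walk);
  }
};

walk(manifest);

// Print only the resolver-looking logical IDs, ready to paste into experimentalStackMapping.
for (const id of [...logicalIds].sort()) {
  if (id.includes("Resolver")) {
    console.log(`${id}: "SplitCustomStack1",`);
  }
}
```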

vinothj-aa commented 1 month ago

> @vinothj-aa Relations account for a lot of the resolver count in models. I'm not sure which type of resolvers is increasing your resources, but you can check. You need to add new resolvers to the stack mapping; you can find the resolvers by searching for "Resolver" in <backend_folder>/.amplify/artifacts/cdk.out/manifest.json. You'll see your resolvers; you can add them and check your template size with the push command, then cancel it with Ctrl+C in your terminal. When you're within the limit, you can push. You'll see the resource count drop quickly once you find the right resolver strings for experimentalStackMapping.

I got the resolver names from manifest.json file and included a few more in resource.ts file (to test if the mapping works).

export const data = defineData({
  schema,
  authorizationModes: {
    defaultAuthorizationMode: "apiKey",
    apiKeyAuthorizationMode: { expiresInDays: 90 },
  },
  experimentalStackMapping: {
    ListDoctorConsultantResolver: "SplitCustomQPAdminStack",
    GetDoctorConsultantResolver: "SplitCustomQPAdminStack",
    CreateDoctorConsultantResolver: "SplitCustomQPAdminStack",
    UpdateDoctorConsultantResolver: "SplitCustomQPAdminStack",
    DeleteDoctorConsultantResolver: "SplitCustomQPAdminStack",
    SubscriptiononCreateDoctorConsultantResolver: "SplitCustomQPAdminStack",
    SubscriptiononUpdateDoctorConsultantResolver: "SplitCustomQPAdminStack",
    SubscriptiononDeleteDoctorConsultantResolver: "SplitCustomQPAdminStack",
    ResolverQuerylistRegionBusinessUnits: "SplitCustomQPAdminStack",
    ResolverQuerygetRegionBusinessUnit: "SplitCustomQPAdminStack",
    ResolverMutationaddRegionBusinessUnit: "SplitCustomQPAdminStack",
    ResolverMutationdeleteRegionBusinessUnit: "SplitCustomQPAdminStack",
    ResolverMutationupdateRegionBusinessUnit: "SplitCustomQPAdminStack",
    MutationAddUserToGroupResolver: "SplitCustomQPAdminStack",
    MutationconvertIncCatTemplateResolver: "SplitCustomQPAdminStack",
    MutationcreateUserResolver: "SplitCustomQPAdminStack",
    MutationupdateUserResolver: "SplitCustomQPAdminStack",
    MutationremoveUserFromGroupResolver: "SplitCustomQPAdminStack",
  },
});

When I try to deploy the changes, I get this warning:

Template size exceeds limit: 1022366/1000000. Split resources into multiple stacks or set suppressTemplateIndentation to reduce template size. [ack: @aws-cdk/core:Stack.templateSize]

Finally, the deployment fails with the same error:

UPDATE_FAILED        | AWS::CloudFormation::Stack          | data.NestedStack/data.NestedStackResource (data7552DF31) Template may not exceed 1000000 bytes in size.

My observation: the mapped stack appears in the manifest.json file only for model-based resolvers and not for custom resolvers.

Model-based resolver (auto-generated by AppSync): you can see the mapped stack name (SplitCustomQPAdminStack) in the key:

"/amplify-xxxxx/data/amplifyData/SplitCustomQPAdminStack/queryListDoctorConsultantsResolver": [
          {
            "type": "aws:cdk:logicalId",
            "data": "ListDoctorConsultantResolver"
          },
          {
            "type": "graphqltransformer:resourceName",
            "data": "Query.listDoctorConsultants"
          }
        ],

Whereas for a custom resolver, the mapped stack name is not part of the key:

"/amplify-xxxxx/data/Resolver_Query_listRegionBusinessUnits": [
          {
            "type": "aws:cdk:logicalId",
            "data": "ResolverQuerylistRegionBusinessUnits"
          }
        ],

Does this stack mapping help in reducing the template size? The template size keeps increasing when I add more resolvers to the experimentalStackMapping object.

MarlonJD commented 1 month ago

@vinothj-aa Can you try putting the first 5 in SplitCustomQPAdminStack1 and then the rest in SplitCustomQPAdminStack2? Maybe that works. Sometimes these mappings don't change the resource count or template size, and it's odd that it increases. Maybe splitting into 2-3 parts works. If it lets you update your backend, you can ignore it this time; if you're making a lot of changes in the backend, split them into 2 or 3 parts. Stack mapping just gets you past the error; we haven't found a real solution for a long time. Even when you split into parts it can still give nested stack issues, so make small changes, update, then rest, and repeat. For example, I have 70 models and I built this backend in 9 parts; if you make major changes in one go, it gives an error, so please try it like this.

vinothj-aa commented 1 month ago

@MarlonJD In my case, I have just 1 model and the rest are custom types. Like I mentioned in my previous comment, the split stack works for the following resolvers of that model:

ListDoctorConsultantResolver: "SplitCustomQPAdminStack",
GetDoctorConsultantResolver: "SplitCustomQPAdminStack",
CreateDoctorConsultantResolver: "SplitCustomQPAdminStack",
UpdateDoctorConsultantResolver: "SplitCustomQPAdminStack",
DeleteDoctorConsultantResolver: "SplitCustomQPAdminStack",

The remaining resolvers belong to custom types, and those are not modified or affected by this approach. I really hope there will be an official solution from AWS soon for splitting stacks even for custom types.

MarlonJD commented 1 month ago

@AnilMaktala was working on a solution. I think this splitting should be done automatically during updates, because there is also a nested stack limit and that isn't being handled. I hope it will be solved in the near future, but right now, if you can already update your backend, just make your changes small. I don't know what else we should do.

vinothj-aa commented 1 month ago

> @AnilMaktala was working on a solution. I think this splitting should be done automatically during updates, because there is also a nested stack limit and that isn't being handled. I hope it will be solved in the near future, but right now, if you can already update your backend, just make your changes small. I don't know what else we should do.

You are correct. I'm making small, incremental changes and I was cruising well before hitting the template size limit issue. Now, everything has come to a standstill. I'm exploring other options and I'll post updates if I'm able to fix this issue. Thank you for your suggestions.

thomasoehri commented 1 month ago

I'm running into the same problem:

[Warning at /amplify-xxxx/data] Template size is approaching limit: 925984/1000000. Split resources into multiple stacks or set suppressTemplateIndentation to reduce template size. [ack: @aws-cdk/core:Stack.templateSize]

MarlonJD commented 1 month ago

Hey there @LukaASoban @vinothj-aa @thomasoehri, it seems we can now disable some auto-generated resources, which may help us decrease our resources and finally push without an error. Has anybody tried this? https://github.com/aws-amplify/amplify-category-api/issues/2559#issuecomment-2396461242

thomasoehri commented 1 month ago

> Hey there @LukaASoban @vinothj-aa @thomasoehri, it seems we can now disable some auto-generated resources, which may help us decrease our resources and finally push without an error. Has anybody tried this? #2559 (comment)

Hi @MarlonJD, for me, disabling all the auto-generated queries/mutations/subscriptions that I have custom replacements for reduced my template size from 925984/1000000 to 842567/1000000.
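If the feature referenced there is the disableOperations model modifier (available in recent @aws-amplify/data-schema versions; worth verifying against the linked comment), a minimal sketch of skipping generation for operations you replace with custom ones might look like this; the model and the chosen operation groups are illustrative:

```ts
import { a, defineData } from "@aws-amplify/backend";

const schema = a.schema({
  Post: a
    .model({
      title: a.string(),
      content: a.string(),
    })
    .authorization((allow) => [allow.authenticated()])
    // Assumption: skip generating these operation groups because custom
    // queries/mutations cover them; fewer generated resolvers means a
    // smaller template and a lower resource count.
    .disableOperations(["subscriptions", "mutations"]),
});

export const data = defineData({ schema });
```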

MarlonJD commented 1 month ago

Hey @thomasoehri, what about the resource count, did that also decrease?

vinothj-aa commented 1 month ago

> Hey there @LukaASoban @vinothj-aa @thomasoehri, it seems we can now disable some auto-generated resources, which may help us decrease our resources and finally push without an error. Has anybody tried this? #2559 (comment)

> Hi @MarlonJD, for me, disabling all the auto-generated queries/mutations/subscriptions that I have custom replacements for reduced my template size from 925984/1000000 to 842567/1000000.

This is good news for people using models; however, in our case we have just 1 model and the rest are custom types. As we add the data sources for the custom types to Amplify data, the template size exceeds 1000000 bytes and our deployments fail. That said, I was able to come up with a workaround to unblock ourselves, and I'll be happy to share my approach if anyone else is facing this issue with custom types.

MarlonJD commented 1 month ago

Hey @vinothj-aa, happy to hear that you already fixed your issue. Can you share with us how you did it?

vinothj-aa commented 3 weeks ago

> Hey @vinothj-aa, happy to hear that you already fixed your issue. Can you share with us how you did it?

Sure!

Here are the steps:

Please let me know if you need more details.