aws-amplify / amplify-category-api

The AWS Amplify CLI is a toolchain for simplifying serverless web and mobile development. This plugin provides functionality for the API category, allowing for the creation and management of GraphQL and REST based backends for your amplify project.
https://docs.amplify.aws/
Apache License 2.0

Amplify push got error "Message: Resource is not in the state stackUpdateComplete" #92

Open lehoai opened 2 years ago

lehoai commented 2 years ago

Before opening, please confirm:

How did you install the Amplify CLI?

npm

If applicable, what version of Node.js are you using?

No response

Amplify CLI Version

Using the latest version via Amplify CI/CD

What operating system are you using?

Mac

Did you make any manual changes to the cloud resources managed by Amplify? Please describe the changes made.

No manual changes made

Amplify Categories

Not applicable

Amplify Commands

push

Describe the bug

I am using CI/CD linked with my GitHub master branch. Until a few days ago it worked properly, but now when I merge source into the master branch, I get the error:

```
[WARNING]: ✖ An error occurred when pushing the resources to the cloud
[WARNING]: ✖ There was an error initializing your environment.
[INFO]: DeploymentError: ["Index: 1 State: {\"deploy\":\"waitingForDeployment\"} Message: Resource is not in the state stackUpdateComplete"]
    at /root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/iterative-deployment/deployment-manager.ts:159:40
    at Interpreter.update (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:267:9)
    at /root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:112:15
    at Scheduler.process (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/scheduler.js:69:7)
    at Scheduler.flushEvents (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/scheduler.js:60:12)
    at Scheduler.schedule (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/scheduler.js:49:10)
    at Interpreter.send (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:106:23)
    at _a.id (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:1017:15)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
```

Then I tried with the Amplify CLI and got the same error.

Expected behavior

Push succeeds.

Reproduction steps

I added a @connection, a @key, and a few @aws_subscribe directives, then pushed.

GraphQL schema(s)

```graphql
# Put schemas below this line
```

Log output

```
# Put your logs below this line
```

Additional information

No response

akshbhu commented 2 years ago

Hi @lehoai

Can you share your GraphQL schema and the categories you have added, so that I can reproduce it on my end?

Also, can you share the debug logs present here: ~/.amplify/logs/amplify-cli-<issue-date>.log

Also, may I know which Amplify version you are using?

batical commented 2 years ago

Got the same issue. Very problematic; I'm not able to push anything (dev or production).

My logs are 20k lines long.

```
2022-03-11T12:00:38.110Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (1 of 3)"}])
2022-03-11T12:00:38.110Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:00:38.111Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (1 of 3)"}])
2022-03-11T12:00:39.234Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (1 of 3)"}])
2022-03-11T12:04:41.051Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Waiting for DynamoDB indices to be ready"}])
2022-03-11T12:04:44.266Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:04:44.493Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (2 of 3)"}])
2022-03-11T12:04:44.493Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:04:44.494Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (2 of 3)"}])
2022-03-11T12:04:45.634Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (2 of 3)"}])
2022-03-11T12:08:47.462Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Waiting for DynamoDB indices to be ready"}])
2022-03-11T12:09:51.696Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:09:51.920Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (3 of 3)"}])
2022-03-11T12:09:51.921Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:09:51.924Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (3 of 3)"}])
2022-03-11T12:09:52.941Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Deploying (3 of 3)"}])
2022-03-11T12:21:58.236Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:21:58.239Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (1 of 3)"}])
2022-03-11T12:21:58.240Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (2 of 3)"}])
2022-03-11T12:21:58.241Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:21:58.242Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (2 of 3)"}])
2022-03-11T12:21:58.478Z|error : amplify-provider-awscloudformation.deployment-manager.startRolbackFn([{"index":2}])
Error: Cannot start step then the current step is in ROLLING_BACK status.
2022-03-11T12:21:59.401Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (2 of 3)"}])
2022-03-11T12:26:01.228Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (2 of 3)"}])
2022-03-11T12:26:04.433Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:26:04.648Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (3 of 3)"}])
2022-03-11T12:26:04.650Z|info : amplify-provider-awscloudformation.aws-s3.uploadFile.s3.putObject([{"Key":"[***]ment-[***]json","Bucket":"[***]it-[***]ev-[***]161237-[***]ment"}])
2022-03-11T12:26:04.654Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (3 of 3)"}])
2022-03-11T12:26:05.732Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (3 of 3)"}])
2022-03-11T12:30:07.578Z|info : amplify-provider-awscloudformation.deployment-manager.deploy([{"spinner":"Rolling back (3 of 3)"}])
```

I'm using the latest version of the Amplify CLI, 7.6.23.

The excerpt above is the part of the log where the rollback starts.

ecc7220 commented 2 years ago

I have the same issue, I pushed a few destructive changes to my GraphQL model and it failed because a token expired during the push.

```
2022-03-11T12:22:24.363Z|error : amplify-provider-awscloudformation.aws-s3.uploadFile.s3([{"Key":"[***]ment-[***]json","Bucket":"[***]ify-[***]pool-[***]ing-[***]316-[***]ment"}])
ExpiredToken: The provided token has expired.
2022-03-11T12:22:24.363Z|error : amplify-provider-awscloudformation.deployment-manager.startRolbackFn([{"index":2}])
ExpiredToken: The provided token has expired.
2022-03-11T12:22:38.638Z|error : amplify-provider-awscloudformation.deployment-manager.getTableStatus([{"tableName":"[***]er-[***]6sfqmjiikqe-[***]ing"}])
ExpiredTokenException: The security token included in the request is expired
```

I tried it again and got:

```
2022-03-11T12:24:18.467Z|info : amplify-provider-awscloudformation.deployment-manager.rollback([{"spinner":"Waiting for previous deployment to finish"}])
2022-03-11T12:24:18.526Z|error : amplify-provider-awscloudformation.deployment-manager.DeploymentManager([{"stateValue":"failed"}])
DeploymentError: ["Index: 3 State: {\"preRollback\":\"previousDeploymentReadyCheck\"} Message: Resource is not in the state stackUpdateComplete"]
```

Then I pulled the latest env, pushed, and got again:

```
2022-03-11T12:35:34.798Z|info : amplify-provider-awscloudformation.deployment-manager.rollback([{"spinner":"Waiting for previous deployment to finish"}])
2022-03-11T12:35:34.834Z|error : amplify-provider-awscloudformation.deployment-manager.DeploymentManager([{"stateValue":"failed"}])
DeploymentError: ["Index: 3 State: {\"preRollback\":\"previousDeploymentReadyCheck\"} Message: Resource is not in the state stackUpdateComplete"]
```

How can I solve this? I'm stuck. Please provide steps to fix this, even if I need to remove some stuff; I need to get it working today!

My CLI version is 7.6.23, using a Cloud9 instance. Only some destructive updates to the model since the last push. Recreating tables is not an issue right now.

ecc7220 commented 2 years ago

I did some research and there are tons of "Resource not in state stackUpdateComplete" issues that were never solved but simply closed. Those users have probably all recreated their whole environments.

Related issues containing "stackUpdateComplete" start at aws-amplify/amplify-cli#82 and go all the way up to aws-amplify/amplify-category-api#95, with 18 issues still open and 149 closed, including this one (#9925). Drifts in the stacks happen; you need to handle this properly, IMHO. In my case it was the long time the update took, which caused a token to expire.

Amazon, are you watching this very common issue?

This is a no-go.

This is clearly a high-priority issue that should be solved once and for all. Recovery from push failures and stack drifts is essential; this should simply just work all the time, like a file system.

EDIT:

I found some more hints: if you change more than two indexes in a GraphQL model, the push fails, complaining that too many parallel indexes were changed or deleted on a single table. I can't find the log line about it; it was displayed on the terminal, but I lost it, and the Amplify log file in ~/.amplify/logs/ is too long already to find anything. After that, you will get the message "Resource is not in the state stackUpdateComplete" and you are stuck. The only way I found to get it back to work is the deletion and recreation of the environment. Steps I did (at your own risk):

  1. Save all the data that is in the Amplify-controlled storage, and archive the whole environment.
  2. Back up all data in your database tables; you will need it later to import back into the new tables. In step 5 the env will be deleted, and all tables and storage S3 buckets will be deleted with it! Be careful; you proceed at your own risk.
  3. Create a new env with: amplify env add your_new_env_name
  4. You will now be in your new env; the code and backend config should still be there. You can verify that with: amplify status
  5. Now delete your old env: amplify env remove broken_env_name (at your own risk)
  6. Create a new env with the same name; this env will work: amplify env add broken_env_name
  7. You should be in the new env now; check with: amplify status
  8. Push your backend into the new env; this will take a while...
  9. Now it's time to re-import all the data you saved in steps 1 and 2.
The whole procedure takes a long time. Is there a better or faster way to do it?

EDIT:

I'm now fairly sure that this is caused by too many simultaneous DynamoDB index updates on the same table, which hits a DynamoDB limit, and that the expired-token message was somehow only related to the first error. This also explains why so many people encounter this when switching from V1 to V2 GraphQL models.

So we have two issues:

  1. A failed push, for whatever reason, needs to be recoverable.
  2. GraphQL model updates occasionally trigger simultaneous index updates or deletions, which abort the running amplify push and leave you in an unrecoverable state.

lehoai commented 2 years ago
(Screenshots of the schema attached: Screen Shot 2022-03-12 at 9:35:16 AM, Screen Shot 2022-03-12 at 9:36:47 AM)

@akshbhu Sorry, I was busy. I've changed my schema as shown above, plus a few @subscription and @function directives. @ecc7220 I can't create a new env because this is a production env.

batical commented 2 years ago

I was able to get through by pushing my changes step by step.

The issue came from an @index error, my bad, but at least a proper error message would have been helpful.

lehoai commented 2 years ago

@batical Can you give me more info? The Amplify error log is not useful at all!

ecc7220 commented 2 years ago

@lehoai I don't know how to recover from this error without recreating the env, sorry; somebody with deeper knowledge needs to help out. The only thing I could find out is the cause of the error, which is the index issue I mentioned above. So don't change too much at once in the model; it could leave you with a broken env.

ecc7220 commented 2 years ago

Issue aws-amplify/amplify-category-api#88 is also related. I had also tried everything, including deleting the "deployment-state.json" file in the corresponding Amplify stage bucket. The issue described there is very similar; only in my case nothing helped.

@lehoai I would give it a try: delete the file and push again. If you are lucky, you can get it working again.

lehoai commented 2 years ago

@ecc7220 Thanks. I will try.

ecc7220 commented 2 years ago

@lehoai if this is not working, also try the solution for aws-amplify/amplify-category-api#88. This involves modifying the deployment bucket as well. It is a much better solution than my solution of recreating everything. If I have a similar issue next time, I will try the modification suggested in aws-amplify/amplify-category-api#88. Good luck!

lehoai commented 2 years ago

@ecc7220 Thanks, I will try. I promise you this is the last time I work with Amplify. The most terrible tool ever: unstable, slow support, and a lot of bugs!

josefaidt commented 2 years ago

Hey @lehoai :wave: apologies for the delay! Can you share the CloudFormation errors that are printed prior to receiving the Resource is not in the state stackUpdateComplete message? Typically when we see this error the CloudFormation errors provide additional insight

alharris-at commented 2 years ago

Hi @lehoai, in addition to the schema errors Josef mentioned above, could you also provide a bit more data about your environment? There are two things we'd like to understand in more detail: auth token expiration and its impact on your deployment, and the contents of the deployment itself.

  1. What auth mechanism are you using for your account? i.e. are you using user access/secret keys for a given user, or are you using something like STS to generate short-lived federated tokens?
  2. The schema you're starting out with when you kick off a deployment (previous schema).
  3. Schema you are attempting to deploy.

This will help us get an understanding of the changes being applied during the deployment. We can also set up a call if you'd like; rather than sharing the schema publicly on GitHub, you can reach out to amplify-cli@amazon.com.

lehoai commented 2 years ago

@alharris-at Sorry for the late reply. In the end, we had to create a new env and redeploy the whole project; then it worked. The old env was deleted. So I think it's not a problem with the schema (I didn't create or update any index).

  1. I link Amplify with GitHub in the AWS console, so it automatically re-deploys every time the source code is merged. I don't use access/secret keys.
  2./3. As I said, I didn't create or update any index, just added a few subscriptions and columns. I can't show the details of the schema.

I've checked the error log many times, and there is only one message: "Resource is not in the state stackUpdateComplete", nothing more. (I know this error sometimes shows up when another error occurs, but not in my case; only "Resource is not in the state stackUpdateComplete" is thrown.)

alharris-at commented 2 years ago

I see, thank you for the update @lehoai, we're going to create a new bug related to force push behavior in the AWS Console, which sounds related to what you're seeing here. Is there anything else specific we can help you out with on this issue?

josefaidt commented 2 years ago

Hey @lehoai :wave: thank you for those details! To clarify, do you have the affected backend files and would you be willing to send us a zip archive to amplify-cli@amazon.com? If so, we would like to take a look and see if we are able to reproduce the issue using your backend definition as we have been unable to reproduce this ourselves.

lehoai commented 2 years ago

@josefaidt @alharris-at Thank you for your response. Honestly, I really want to share the details of the schema and backend files, but there is an NDA contract, so I can't. I gave you everything I can share above: the error log, part of the schema...

I think you should give more detail in the error log so that developers can investigate the cause.

naingaungphyo commented 2 years ago

I ran amplify push and also got the same "Resource is not in the state stackUpdateComplete" error after changing many indexes and the primary key of one of my models. I tried setting enableIterativeGsiUpdates to true in amplify/cli.json and also used the --force and --allow-destructive-graphql-schema-updates CLI flags according to the troubleshooting guide, but none of them worked.
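For reference, a minimal sketch of where that feature flag lives; the key layout below is my best guess at the usual cli.json structure and should be checked against your own project's amplify/cli.json:

```json
{
  "features": {
    "graphqltransformer": {
      "enableiterativegsiupdates": true
    }
  }
}
```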

By the way, I didn't do any manual modification. (eg. from console etc)

My workaround was removing the API and adding it again, as below.

  1. Back up my schema file
  2. amplify remove api
  3. amplify push
  4. amplify add api and use my backup schema
  5. amplify push

The CLI version used is 8.0.2, with the v2 transformer.
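That workaround can be sketched as a shell session; the schema path is an assumption based on the typical Amplify project layout, so adjust <api-name> for your project:

```shell
# 1. Back up the schema before removing the API category
#    (path is the typical Amplify location; adjust <api-name> to your project)
cp amplify/backend/api/<api-name>/schema.graphql ~/schema-backup.graphql

# 2-3. Remove the API and push the removal to the cloud
amplify remove api
amplify push

# 4. Re-add the API, then restore the backed-up schema
amplify add api
cp ~/schema-backup.graphql amplify/backend/api/<api-name>/schema.graphql

# 5. Deploy the re-created API
amplify push
```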

josefaidt commented 2 years ago

Hey @lehoai no worries on sending the schema. We will continue to investigate this issue.

@naingaungphyo are you seeing any CloudFormation errors outputted to your terminal and would you mind sharing the logs at ~/.amplify/logs the day this occurred? And finally, approximately how many changes were applied prior to receiving this error?

naingaungphyo commented 2 years ago

@josefaidt I added two bidirectional One-to-Many relationship indexes, changed the primary key of a model and added a new GSI.

One thing I forgot to mention is that I ran amplify add storage together with the above changes and tried to amplify push all of them at once. Later, I removed both storage and api, then added them back one by one with a push at each step.

I attached my log, starting from adding storage and ending at the error. I masked some information. log.txt

josefaidt commented 2 years ago

Hey @naingaungphyo thanks for the clarification and for posting your logs! I'm taking a further look at this 🙂

Tony-OAA commented 2 years ago

Same Error here!!!

Started on the dev environment after updating the CLI last week. We had to delete the dev env and rebuild.

Tried pushing to prod today: same error!

["Index: 1 State: {\"deploy\":\"waitingForDeployment\"} Message: Resource is not in the state stackUpdateComplete"]

This is a nightmare; I've wasted a whole day on this so far trying to resolve it. Can you please add better error log information that might give us a clue? The fact that the whole stack gets rebuilt on even a tiny change means a long wait and reams of useless log files to go through. I don't want to be an expert in debugging CloudFormation; isn't that the point of Amplify?

sachscode commented 2 years ago

@naingaungphyo Could you please share whether in add storage you deployed an S3 bucket or a DynamoDB table.

naingaungphyo commented 2 years ago

@sachscode

did you deploy S3 bucket or DynamoDB table

I tried to add an S3 bucket and used amplify push to deploy, but it failed.

jakejcheng commented 2 years ago

I added a new model and a relationship in the Studio, but after doing amplify pull and trying auto-deploy via GitHub or amplify push, I'm now just getting the "Resource is not in the state stackUpdateComplete" error as well.

jakejcheng commented 2 years ago

> I run amplify push and also got the same "Resource is not in the state stackUpdateComplete" error after changing many indexes and the primary key of one of my models. I tried with this amplify/cli/json setting of enableIterativeGsiUpdates as true and also used --force and --allow-destructive-graphql-schema-updates cli flags according to this troubleshooting guide, but none of them works.
>
> By the way, I didn't do any manual modification. (eg. from console etc)
>
> My workaround was removing the api and adding api again as below.
>
> backup my schema file amplify remove api amplify push amplify add api and use my backup schema amplify push used cli version is 8.0.2 and v2 transformer

I tried this but I'm still getting the "ResourceNotReady: Resource is not in the state stackUpdateComplete" error on the resource "UpdateRolesWithIDPFunctin".

naingaungphyo commented 2 years ago

@jakejcheng I think there were multiple updates at once (for example, updating roles and api both at once). Generally speaking, it should work if you update one thing at a time and amplify push after each update.

jakejcheng commented 2 years ago

> @jakejcheng I think that there are some multiple updates at once. (example, updating roles and api both at once) Generally speaking, it should work if you update one by one and amplify push them after each update.

@naingaungphyo I didn't touch the roles/auth though. I only created a new model and changed an existing model in the console, did an amplify pull, and can no longer successfully push. I tried many solutions from multiple closed/open issues dating back to 2019, but to no avail. I was able to push last night, before I made the change to the schema in the console. This is extremely frustrating.

josefaidt commented 2 years ago

Hey @jakejcheng would you mind using the CloudFormation console to detect drift in your app's stack? It is recommended to not make changes in the console as that can potentially create drift. Are you continuously experiencing this issue?
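For anyone unfamiliar with the console flow, drift detection can also be run from the AWS CLI; the stack name below is a placeholder for your Amplify root stack:

```shell
# Start a drift-detection run on the Amplify root stack (name is a placeholder)
aws cloudformation detect-stack-drift --stack-name amplify-myapp-dev-12345

# Poll the run's status using the detection id returned above
aws cloudformation describe-stack-drift-detection-status \
  --stack-drift-detection-id <detection-id>

# Once complete, list which resources have drifted and how
aws cloudformation describe-stack-resource-drifts \
  --stack-name amplify-myapp-dev-12345 \
  --stack-resource-drift-status-filters MODIFIED DELETED
```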

jakejcheng commented 2 years ago

> Hey @jakejcheng would you mind using the CloudFormation console to detect drift in your app's stack? It is recommended to not make changes in the console as that can potentially create drift. Are you continuously experiencing this issue?

I'm not 100% sure what I'm doing or what it means, but I clicked on "Detect drift" and both the auth and unauth roles were drifted.

However, I was able to "fix" it after I reverted the schema back to the one from before I made any changes in the console two days ago. I never had issues using the console to update the schema until two days ago, so something must have changed in the backend in the past month or two.

djeter commented 2 years ago

Not certain if this helps, but I tried removing the API and got this message:

"amplify push --iterative-rollback" to rollback the prior deployment
"amplify push --force" to re-deploy

I am running the iterative rollback currently. I will update you when it is complete.

josefaidt commented 2 years ago

Hey @jakejcheng apologies for the delay here but glad to hear you're back up and running! Out of curiosity, what were the drift results for the auth and unauth roles?

@djeter were you able to resolve your issue? Would you mind opening a separate bug report for this occurrence?

djeter commented 2 years ago

@josefaidt I was able to deploy once I did the iterative rollback and amplify push; however, I found the root of my issue. I changed the case of one of my tables from formTypes to FormTypes, which apparently caused an issue with the DataStore model. I deleted the table and the join from the Form table and did a push, then added the tables back. The changes were reflected in Amplify Studio.

I do like using DataStore a lot. I am having a query issue though. I have a User table with a manyToMany join to a ChatRoom. I would like to get all of the users in a ChatRoom by the user IDs so I can determine which chatroom to send them to. There are only two users to a chatroom.

josefaidt commented 2 years ago

Hey @djeter glad to hear you're back up and running! Would you mind opening a separate report for the DataStore query issue here? https://github.com/aws-amplify/amplify-js/issues/new/choose

With that I will close this issue for now, however if this comes up again please reply back to this thread and we can re-open to investigate further 🙂

ataibarkai commented 2 years ago

I've run into a similar issue and have been stuck on it since yesterday.

@josefaidt @alharris-at since it seems this issue is as difficult to reproduce/triage as it is critical, I would like to offer you an opportunity for extensive collaboration on this today and the rest of this week, until we get to the bottom of the matter. I will make myself as available as necessary, including screen-sharing, phone calls, sharing logs, etc.

As you can see in the discussion above, this issue is severe enough to make some (rightfully) question the suitability of amplify as a hosting platform entirely... and yet it remains unsolved despite many recurrences.

Some more context on my particular situation: the original change I made was turning 2 @hasMany relations into @manyToMany relations. I've tried amplify push --iterative-rollback, deleting deployment-state.json from the bucket, and a few other things, to no avail.

Please let me know if you'd like to work with me to solve this issue. I'm atai#6010 on the AWS Amplify Discord server. My phone number is (650) [ignore these ~10 words inside brackets, here to confuse bots] 521-2930

johndrewery commented 1 year ago

I also have the same issue. My only change was renaming a table in my GraphQL schema. It comes right after I had to delete and reinstall my entire Amplify installation, including all tables, stacks, users, etc., due to a 'duplicate template' issue that could not be resolved. I am at an early stage in development and considering abandoning AWS entirely due to these multiple problems with Amplify.

lewisdonovan commented 1 year ago

I have this as well. I created several new types in GraphQL and pushed, now I just get the error ["Index: 0 State: {\"deploy\":\"waitingForDeployment\"} Message: Resource is not in the state stackUpdateComplete"].

The new types have interdependent relationships, so I can't really push one at a time without having to leave out indexes and add them later (so ultimately there will still be a final push with changes to all of them).

Of all the Amplify errors I've encountered (and there are MANY), this is the most stupid.

jayKayEss commented 1 year ago

I'm experiencing this now on a CI deployment adding a single index to an existing model. There are three nested stacks in the state UPDATE_ROLLBACK_COMPLETE that seem to be causing the issue. However, since the rollbacks are "complete" I don't see what the problem is.

Attempting to fix this manually with the Amplify CLI yields this output:

```
A deployment is in progress.
If the prior rollback was aborted, run:
"amplify push --iterative-rollback" to rollback the prior deployment
"amplify push --force" to re-deploy
```

However running with --iterative-rollback yields this:

```
⠧ Waiting for previous deployment to finish
✖ An error occurred when pushing the resources to the cloud
🛑 An error occurred during the push operation: /
["Index: 1 State: {\"preRollback\":\"previousDeploymentReadyCheck\"} Message: Resource is not in the state stackUpdateComplete"]
```

There doesn't appear to be a way to roll this stack forward OR backward...

lewisdonovan commented 1 year ago

@jayKayEss have you checked to see if there is a file called deployment-state.json in the environment's deployment bucket in S3? If it's there, delete it and try amplify push --iterative-rollback again.
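If you're unsure where that file lives, the check and deletion can be sketched with the AWS CLI; the bucket name below is a placeholder (Amplify deployment buckets typically contain "deployment" in the name):

```shell
# Find the Amplify deployment bucket for the environment
aws s3 ls | grep deployment

# Check whether a stale deployment-state.json is present
aws s3 ls s3://amplify-myapp-dev-12345-deployment/deployment-state.json

# Remove it so the CLI no longer thinks a deployment is in progress
aws s3 rm s3://amplify-myapp-dev-12345-deployment/deployment-state.json
```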

marendra commented 1 year ago

I found a solution. I got the same error because I wanted to change a primary key, so I did what Amplify told me and used --allow-destructive-graphql-schema-updates, but it failed. Then I just deleted the tables in the schema (the ones whose primary key I wanted to change), pushed to the cloud, and after it finished, put the tables back and pushed again. It worked.

Ternst12 commented 1 year ago

@marendra Thank you for sharing. This solution worked for me as well!

gregorsiwinski commented 1 year ago

I spent hours trying to recover from a deployment that timed out in a limbo state, and the solution proposed by @lewisdonovan was the only thing that worked. I deleted the deployment-state.json and used amplify push to redeploy successfully.

Alex-Github-Account commented 1 year ago

Got the same error after "amplify upgrade". I changed nothing in the schema/DB. Think twice before updating a buggy tool. No luck with the logs (myapiname AWS::CloudFormation::Stack UPDATE_FAILED is the only other string in the output).

DanielhCarranza commented 1 year ago

Even with the latest version, as of today the problem still happens and is annoying. Deleting the S3 file doesn't work either, and I see no point in deleting my API and then reinstalling it and losing my data. Has anyone found a better solution to this?

Alex-Github-Account commented 1 year ago

There is no better solution, but there is an alternative: use Google Cloud. It has bugs too, but at least they are a different kind of bugs, and far fewer of them stay unfixed for years, unlike in AWS products. I am not advocating for Google; I'll even add that GCS storage was partly unreachable for almost a day(!) a few days ago, on a specific project at least. But at least Google Cloud works and will not ruin a project with magic like 'Stack UPDATE_FAILED'.

ucheNkadiCode commented 1 year ago

I feel so lucky I got out of Amplify while still in beta. I have users that will depend on this app in production, so deleting everything as a solution is unacceptable. After this third "stackUpdateComplete" bug (one of them involved amplify pull overwriting ALL of my OAuth settings in a way that was unfixable), I'm leaving Amplify, and I'm seriously disappointed in this platform. Luckily I chose to implement all of the authentication screens myself, so I didn't rely on the AWS Auth UI.

Going to try React Native Firebase. I tried Amplify because I was told of the cost-saving benefits, but at this point it isn't worth it when I have to spend this much dev time debugging the cloud infra I'm paying to "just work". Disappointed in Amplify. This SDK is totally underserved.

Alex-Github-Account commented 1 year ago

And the thing is, people don't even ask for a solution.

People ask for at least a meaningful error text.

That's it. We are programmers. We can debug and fix. But we need something more than 'Stack UPDATE_FAILED'. Some backtrace would save the day; the last function/command that failed would help too. The big people at Amazon will never see threads of pain like this.

johndrewery commented 1 year ago

Amen. I think this is the push I need to get Amplify out of my life, like a toxic relationship.


lewisdonovan commented 1 year ago

I've written this extensive StackOverflow answer for anyone still stuck on this error. It explains why it happens and provides several techniques that will usually help diagnose and fix it, although I can't be sure it's exhaustive, because the nature of this error is that you never really know the cause, and some occurrences seem to happen spontaneously.

AWS, you really need to fix the reporting of this. The whole point of Amplify is that it's supposed to be simple. If people have to go digging through CloudFormation for clues, where's the motivation to use Amplify over Serverless Framework where they can define the stacks themselves (or just migrate to another cloud platform altogether)? This lack of proper error reporting literally defeats the purpose of Amplify's existence.

I found ChatGPT to be more helpful than AWS support, even when you ask for the response to be complete nonsense. (Screenshot attached: 2022-12-14 at 23:33)