aws-amplify / amplify-category-api

The AWS Amplify CLI is a toolchain for simplifying serverless web and mobile development. This plugin provides functionality for the API category, allowing for the creation and management of GraphQL and REST based backends for your amplify project.
https://docs.amplify.aws/
Apache License 2.0

Amplify push got error "Message: Resource is not in the state stackUpdateComplete" #92

Open lehoai opened 2 years ago

lehoai commented 2 years ago


How did you install the Amplify CLI?

npm

If applicable, what version of Node.js are you using?

No response

Amplify CLI Version

Using the latest version in Amplify CI/CD

What operating system are you using?

Mac

Did you make any manual changes to the cloud resources managed by Amplify? Please describe the changes made.

No manual changes made

Amplify Categories

Not applicable

Amplify Commands

push

Describe the bug

I am using CI/CD linked to my GitHub master branch. Until a few days ago it worked properly, but now when I try to merge into the master branch I get the error:

[WARNING]: ✖ An error occurred when pushing the resources to the cloud
[WARNING]: ✖ There was an error initializing your environment.
[INFO]: DeploymentError: ["Index: 1 State: {\"deploy\":\"waitingForDeployment\"} Message: Resource is not in the state stackUpdateComplete"]
    at /root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/amplify-provider-awscloudformation/src/iterative-deployment/deployment-manager.ts:159:40
    at Interpreter.update (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:267:9)
    at /root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:112:15
    at Scheduler.process (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/scheduler.js:69:7)
    at Scheduler.flushEvents (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/scheduler.js:60:12)
    at Scheduler.schedule (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/scheduler.js:49:10)
    at Interpreter.send (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:106:23)
    at _a.id (/root/.nvm/versions/node/v14.18.1/lib/node_modules/@aws-amplify/cli/node_modules/xstate/lib/interpreter.js:1017:15)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)

Then I tried with the Amplify CLI and got the same error.

Expected behavior

amplify push succeeds.

Reproduction steps

I added a @connection, a @key, and a few @aws_subscribe directives, then pushed.
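
For context, a change like that might look roughly like the sketch below. This is a purely hypothetical schema (the reporter's actual schema was not attached), shown only to illustrate where the @key, @connection, and @aws_subscribe directives typically sit in a transformer v1 schema.

```graphql
# Hypothetical Order/OrderItem models -- not from this issue.
type Order @model {
  id: ID!
  # @connection joins Order to OrderItem through the "byOrder" key.
  items: [OrderItem] @connection(keyName: "byOrder", fields: ["id"])
}

type OrderItem @model @key(name: "byOrder", fields: ["orderID", "createdAt"]) {
  id: ID!
  orderID: ID!
  createdAt: String!
  label: String
}

type Subscription {
  # @aws_subscribe wires this subscription to the createOrderItem mutation.
  onCreateOrderItemByOrder(orderID: ID!): OrderItem
    @aws_subscribe(mutations: ["createOrderItem"])
}
```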

GraphQL schema(s)

```graphql
# Put schemas below this line
```

Log output

```
# Put your logs below this line
```

Additional information

No response

ucheNkadiCode commented 1 year ago

Hey @lewisdonovan, I really appreciate that write-up you posted on StackOverflow! I actually found it and tried everything, but none of the solutions worked for me. I tried those solutions on all three of the occasions when I eventually had to destroy and recreate things from scratch.

It's just such a shame how bad my experience with Amplify was. Glad I was able to switch before my app releases publicly

CameronSima commented 1 year ago

I am having the same issue, over and over again. This last time, I erroneously added MaxReceiveCount to an SQS queue instead of BatchSize on the Lambda that consumes it. Nbd, I'll just remove it -- nope, even after deleting the single offending property in CloudFormation, I can't push because Amplify errors with extraneous key [MaxReceiveCount] is not permitted, even though it is long deleted.

Alex-Github-Account commented 1 year ago

> I am having the same issue, over and over again. This last time, I erroneously added MaxReceiveCount to an SQS queue instead of BatchSize on the Lambda that consumes it. Nbd, I'll just remove it -- nope, even after deleting the single offending property in CloudFormation, I can't push because Amplify errors with extraneous key [MaxReceiveCount] is not permitted, even though it is long deleted.

Sorry to hear that. This issue thread is quite old and buried deep among hundreds of other issues. Maybe we could gather a bit more attention from the Amazon team if you comment in the recent 'issue-awareness' thread I've created here (in it, contributor @josefaidt even promised to 'look into' it).

offspring commented 1 year ago

I have the same issue, stuck forever in a loop of:

🛑 ["Index: 1 State: {\"preRollback\":\"previousDeploymentReadyCheck\"} Message: Resource is not in the state stackUpdateComplete"]

AcuamaticDev commented 1 year ago

I have the same problem. I'm trying to update the schema with destructive changes and it gives me this error:

πŸ›‘ ["Index: 0 State: {\"deploy\":\"waitingForDeployment\"} Message: Resource is not in the state stackUpdateComplete"]

MuhammadShahiryar commented 1 year ago

Any update on this? I've been stuck on this for the past four days.

hackmajoris commented 1 year ago

This also happens to me when trying to add a sort key to the primary id, e.g. id: ID! @primaryKey(sortKeyFields: ["start"]). I will dig deeper. I have more indexes on the current model.

Updates: How to reproduce:

  1. Here is the initial schema:
type ShorterLink @model {
  id: ID! @primaryKey(sortKeyFields: ["createdAt"])
  name: String!
  createdAt: String!
  shorterUrl: String
  originalUrl: String
  clicks: Int
  lastOpen: String
  description: String
  info: String
  logs: [Log] @hasMany
}

type Log @model {
  id: ID! @primaryKey(sortKeyFields: ["createdAt"])
  createdAt: String!
  title: String!
  link: ShorterLink @belongsTo
}
  2. Run amplify push -y.

  3. After the push, update the Log model (remove sortKeyFields):

    type Log @model  {
    id: ID!
    createdAt: String!
    title: String!
    link: ShorterLink @belongsTo
    }
  4. Run amplify push --allow-destructive-graphql-schema-updates.

Result: 🛑 ["Index: 0 State: {\"deploy\":\"waitingForDeployment\"} Message: Resource is not in the state stackUpdateComplete"]

It also works in reverse: create the Log model without sortKeyFields, then update the model to add sortKeyFields.

More info:

In CloudFormation, the latest deployment failed with status UPDATE_ROLLBACK_COMPLETE (drift detection screenshot attached).

@josefaidt @AnilMaktala

AnilMaktala commented 1 year ago

Hi everyone, we deeply apologize for the delay. Our team is actively engaged in resolving this issue, and we are seeking to determine whether the underlying root cause is consistent across all related issues. Would you kindly run the command below and share the project identifier with us?

 amplify diagnose --send-report

Please refer here.

AnilMaktala commented 1 year ago

Hey @hackmajoris 👋, we are able to reproduce the issue with the provided steps and the schema below. Steps to reproduce:

Here is the initial schema:

type ShorterLink @model {
  id: ID! @primaryKey(sortKeyFields: ["createdAt"])
  name: String!
  createdAt: String!
  shorterUrl: String
  originalUrl: String
  clicks: Int
  lastOpen: String
  description: String
  info: String
  logs: [Log] @hasMany
}

  type Log @model {
    id: ID! @primaryKey(sortKeyFields: ["createdAt"])
    createdAt: String!
    title: String!
    link: ShorterLink @belongsTo
  }

Run amplify push -y. After the push, update the Log model (remove sortKeyFields):

  type Log @model  {
    id: ID! @primaryKey
    createdAt: String!
    title: String!
    link: ShorterLink @belongsTo
  }

Run amplify push --allow-destructive-graphql-schema-updates.

(error screenshot attached)

rjm13 commented 1 year ago

This error happens when you change your primaryKey. Once an @index is created, you have to be careful changing or deleting it. This error occurred for me because I deleted two indexed fields in my schema before running amplify push. It's a pain, but if you're deleting indexes you have to do it one at a time, as sketched below.
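
As a minimal sketch of that advice (hypothetical Post model, not from this thread), remove one @index per push instead of both at once:

```graphql
# Starting point: a model with two secondary indexes.
type Post @model {
  id: ID! @primaryKey
  title: String!
  category: String @index(name: "byCategory", sortKeyFields: ["createdAt"])
  owner: String @index(name: "byOwner", sortKeyFields: ["createdAt"])
  createdAt: String!
}
```

First push: delete only the "byCategory" index and run amplify push.

```graphql
# After the first push: one index removed, the other still in place.
type Post @model {
  id: ID! @primaryKey
  title: String!
  category: String
  owner: String @index(name: "byOwner", sortKeyFields: ["createdAt"])
  createdAt: String!
}
```

Second push: remove the remaining "byOwner" index the same way and push again.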

Tshetrim commented 1 year ago

I also just ran into this issue; sharing in case it's of help (I'll likely just remake the env):

My initial test model was:

type Comment @model @auth(rules: [{ allow: public, operations: [read] }, { allow: owner }]) {
    id: ID! @primaryKey(sortKeyFields: ["createdAt"])
    content: String!

    grantID: ID!
    grant: Grant! @belongsTo(fields: ["grantID"])

    userEmail: String!
    user: User! @belongsTo(fields: ["userEmail"])

    createdAt: AWSDateTime!
}

I then modified it to the version below (changing the primary key from a composite key to a simple key and adding a new index):

type Comment @model @auth(rules: [{ allow: public, operations: [read] }, { allow: owner }]) {
    id: ID! @primaryKey
    content: String!

    grantID: ID! @index(name: "byGrant", sortKeyFields: ["createdAt"])
    grant: Grant! @belongsTo(fields: ["grantID"])

    userEmail: String!
    user: User! @belongsTo(fields: ["userEmail"])

    createdAt: AWSDateTime!
} 

I got an alert that this would require destroying the table, was fine with that and pushed, but then hit the exact same issue: certain stacks ended up in UPDATE_ROLLBACK_COMPLETE and others in CREATE_COMPLETE.

and finally the Result:

⠹ Waiting for previous deployment to finish.
Deployment completed.

🛑 ["Index: 1 State: {\"preRollback\":\"previousDeploymentReadyCheck\"} Message: Resource is not in the state stackUpdateComplete"]

⠸ Waiting for previous deployment to finish.

Session Identifier: bd1c4911-09eb-4914-9b5c-c32381bc73a3

AnilMaktala commented 1 year ago

Hi @Tshetrim, thank you for informing us of this matter. Prior to creating a new environment, would you be willing to attempt the following steps?

  1. Make modifications to the primary key and execute amplify push.
  2. Next, add the index for grantID and execute amplify push.

Please let us know if this two-step workaround resolves the issue (a sketch of the intermediate schemas follows below).
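
For reference, a sketch of the two intermediate schemas this suggests, based on the Comment model posted above (relationship fields trimmed for brevity):

```graphql
# Step 1: change only the primary key (composite -> simple), then amplify push.
type Comment @model @auth(rules: [{ allow: public, operations: [read] }, { allow: owner }]) {
  id: ID! @primaryKey
  content: String!
  grantID: ID!
  createdAt: AWSDateTime!
}
```

```graphql
# Step 2: in a separate push, add the new secondary index on grantID.
type Comment @model @auth(rules: [{ allow: public, operations: [read] }, { allow: owner }]) {
  id: ID! @primaryKey
  content: String!
  grantID: ID! @index(name: "byGrant", sortKeyFields: ["createdAt"])
  createdAt: AWSDateTime!
}
```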

Tshetrim commented 1 year ago

Hello,

1) I reverted the primary key to what it initially was and pushed with --force, but the issue is that the deployment won't even be pushed because of the same error (screenshot attached):

 Waiting for previous deployment to finish.
Deployment completed.

πŸ›‘ ["Index: 1 State: {\"preRollback\":\"previousDeploymentReadyCheck\"} Message: Resource is not in the state stackUpdateComplete"]

⠼ Waiting for previous deployment to finish.

Session Identifier: baefa47a-7065-4454-8230-54362ae42266

2) The same issue occurs. I can't even push the changes without being blocked by the 'Waiting for previous deployment to finish' message. I have a feeling it's due to something not correctly recognizing or labeling the stacks as being in a completed state when they actually are.

AnilMaktala commented 1 year ago

Hi @Tshetrim, thank you for attempting the workaround; I'm sorry to hear that it did not resolve your issue. I came across a similar issue here. Would it be possible for you to delete the deployment.json file as mentioned in that ticket and try again?

Before executing these steps in production, please make sure to verify them in lower environments.

hackmajoris commented 1 year ago

@Tshetrim Not related to your issue, but out of curiosity: what editor/extension are you using to highlight the AppSync-related .graphql files? (From your screenshot.)

Tshetrim commented 1 year ago

@hackmajoris haha, the editor is VSCode and the extensions are shown in the attached screenshot.

The theme is shown in the second screenshot.

Enjoy!

@AnilMaktala I already deleted and restarted in another environment, but I will recreate the environment, repeat my steps to get to the error, and try that solution.

Tshetrim commented 1 year ago

@AnilMaktala Hello, I went to S3 and deleted the deployment-state.json like others did.

After doing so, the push did go through this time - I was not immediately stopped by the index issue. Unfortunately, after it tried to deploy for a bit, the same error eventually came up, just like the first time.


I tried it twice for good measure.


KenObie commented 1 year ago

So...the solution is to stop using Amplify? Cheers

judygab commented 1 year ago

> Hey @hackmajoris 👋, we are able to reproduce the issue with the provided steps and the schema below. Steps to reproduce:
>
> Here is the initial schema:
>
> type ShorterLink @model {
>   id: ID! @primaryKey(sortKeyFields: ["createdAt"])
>   name: String!
>   createdAt: String!
>   shorterUrl: String
>   originalUrl: String
>   clicks: Int
>   lastOpen: String
>   description: String
>   info: String
>   logs: [Log] @hasMany
> }
>
>   type Log @model {
>     id: ID! @primaryKey(sortKeyFields: ["createdAt"])
>     createdAt: String!
>     title: String!
>     link: ShorterLink @belongsTo
>   }
>
> Run amplify push -y. After the push, update the Log model (remove sortKeyFields):
>
>   type Log @model  {
>     id: ID! @primaryKey
>     createdAt: String!
>     title: String!
>     link: ShorterLink @belongsTo
>   }
>
> Run amplify push --allow-destructive-graphql-schema-updates.
>
> (error screenshot attached)

So what is the right way of updating primary keys on a table? I am also encountering the same error when trying to do so.

judygab commented 1 year ago

For anyone still wondering: what worked for me was deploying without the table whose primaryKey I wanted to change, then adding the updated table back and re-deploying. Since the table will be deleted anyway because of the index change, I figured I would delete it myself; just updating the primaryKey didn't work for me.
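
A sketch of that sequence, reusing the ShorterLink/Log models from the reproduction above (other ShorterLink fields trimmed): the first push removes the type together with the relationship field that points at it, and the second push re-adds it with the new primary key.

```graphql
# Push 1: Log is removed entirely; the hasMany field on ShorterLink has to go
# too, otherwise the schema no longer compiles.
type ShorterLink @model {
  id: ID! @primaryKey(sortKeyFields: ["createdAt"])
  name: String!
  createdAt: String!
}
```

```graphql
# Push 2: re-add Log with the new (simple) primary key and restore the relationship.
type ShorterLink @model {
  id: ID! @primaryKey(sortKeyFields: ["createdAt"])
  name: String!
  createdAt: String!
  logs: [Log] @hasMany
}

type Log @model {
  id: ID! @primaryKey
  createdAt: String!
  title: String!
  link: ShorterLink @belongsTo
}
```

Note that this recreates the Log table, so any data in it is lost.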

Tshetrim commented 1 year ago

@judygab Hi Judy, thanks for the heads up. I've tried deleting the table through DynamoDB before deploying, but I still got the same error.

Just to clarify, if you can: did you remove the table from your schema, delete the table through DynamoDB, and then recompile and deploy?

And did that deploy successfully, and were you then able to add the table back to your schema and deploy successfully?

This was the sort of workaround I was hoping to find, so it would be awesome if this works!

KenObie commented 1 year ago

For those who are still stuck. The root cause is a stuck deployment.json file that stores the deployment status. Ignore this horrible design choice and follow these steps.

  1. Rollback deployment using the cli
  2. Delete deployment.json in the root of the amplify s3 directory
  3. Redeploy
  4. Delete primary index and redeploy (1 table at a time).
judygab commented 1 year ago

> @judygab Hi Judy, thanks for the heads up. I've tried deleting the table through DynamoDB before deploying, but I still got the same error.
>
> Just to clarify, if you can: did you remove the table from your schema, delete the table through DynamoDB, and then recompile and deploy?
>
> And did that deploy successfully, and were you then able to add the table back to your schema and deploy successfully?
>
> This was the sort of workaround I was hoping to find, so it would be awesome if this works!

Yes, so my steps were:

  1. Remove the table from schema
  2. Re-deploy(deployment was successful)
  3. Add the table back with updated primary keys
  4. Delete deployment.json from the S3 bucket (I tried re-deploying but it got stuck with no errors, so I wasn't able to stop it or try again because of the stored deployment state)
  5. Re-deploy
malmgrens4 commented 1 year ago

Ran into this as well. Deleting the deployment-status.json in S3 didn't work for me. I ended up making a backup of my changes and pulling back down from the server with amplify pull to overwrite my local changes, then redeploying the changes incrementally. Amplify really needs more detailed errors and better tolerance for multiple changes; they're bound to happen in the early stages of a project.

omegabyte commented 1 year ago

What's the point of --force if it doesn't force?

hackmajoris commented 1 year ago

C'mon guys. Please prioritise this critical issue.

mgrabka commented 1 year ago

> Ran into this as well. Deleting the deployment-status.json in S3 didn't work for me. I ended up making a backup of my changes and pulling back down from the server with amplify pull to overwrite my local changes, then redeploying the changes incrementally. Amplify really needs more detailed errors and better tolerance for multiple changes; they're bound to happen in the early stages of a project.

Ran into this as well, and it's funny because even pulling and trying to push afterwards doesn't work for me.

nxia416 commented 1 year ago

Having the same problem.

malmgrens4 commented 1 year ago

One issue I ran into was trying to change which field was the primary key. I couldn't push after that.

chrisl777 commented 1 year ago

I ran into the same issue.

hackmajoris commented 1 year ago

> I ran into the same issue.

If losing the existing data is not a problem, you can delete the existing schema/table and then re-create it with the required keys.

evan1108 commented 1 year ago

I had the same problem. Removing the tables and re-adding them one by one with the updated primary key, as described above, worked for me.

chrisl777 commented 1 year ago

In my case, I had the above error message with Storage, not the API.

What helped in my case was to run amplify update storage and run through all my settings.

At first, when changing settings such as updating permissions or adding a trigger, I was getting an error: Resolution error: statement.freeze is not a function.

In my case, I think what was causing the issue was that I had a Lambda trigger for S3 that was not set up properly; I removed the link between S3 and the Lambda trigger and re-linked it. I also have an Admin group for Auth, and I needed to add Storage permissions for that group as well. I also had an override.ts for my Storage where I had a policy:

   resources.s3GuestReadPolicy.policyDocument.statements.push({
     Effect: "Allow",
     Action: "s3:GetObject",
     Resource: `${resources.s3Bucket.attrArn}/public/*` 
   })

I think this policy may have been conflicting with the cli-inputs, so I commented out this policy.

After making these changes, I stopped getting errors when running amplify update storage and then my backend build succeeded.

Just putting it out there since I didn't see anyone mention Storage as throwing the error message in the original post above.

donkee1982 commented 11 months ago

I'm not sure if this will be helpful or if it might be a special case, but I'll describe the situation in which I resolved the same error. In short, I checked and modified the contents of team-provider-info.json; after doing so, the error was resolved and the deployment succeeded. In more detail: in my case I was sharing a single backend between two applications, and there was a discrepancy in the environment variables of the functions section in team-provider-info.json. Once I corrected this, the error was resolved. It seems that even after running amplify pull, team-provider-info.json was not updated, which led to this situation. This was not shown in the Amplify Console logs; I noticed it purely by chance. I hope this proves useful to someone.

hackmajoris commented 11 months ago

> I'm not sure if this will be helpful or if it might be a special case, but I'll describe the situation in which I resolved the same error. In short, I checked and modified the contents of team-provider-info.json; after doing so, the error was resolved and the deployment succeeded. In more detail: in my case I was sharing a single backend between two applications, and there was a discrepancy in the environment variables of the functions section in team-provider-info.json. Once I corrected this, the error was resolved. It seems that even after running amplify pull, team-provider-info.json was not updated, which led to this situation. This was not shown in the Amplify Console logs; I noticed it purely by chance. I hope this proves useful to someone.

Could you check your solution against these reproduction steps? https://github.com/aws-amplify/amplify-category-api/issues/92#issuecomment-1552080042

nikolaigeorgie commented 11 months ago

Not sure if anyone saw this, but the top Stack Overflow answer laid out the best options. For me, any change to schema.graphql would fail with the same error described here. I had to:

  1. Delete deployment-state.json in the S3 bucket for the deployment.
  2. amplify push --force (failed for the same reason).
  3. Step 1 again.
  4. amplify push --force.

And the second time it worked 🍭. So perhaps deleting it 2-3 times is the trick 👀 lol

drewjhart commented 7 months ago

Has there been any movement on this? I get this whenever I want to add an @index to an existing table. Currently, I am recreating the env as a workaround, but that is less than ideal because everything in the DynamoDB tables has to be deleted.

KarthikPoonjar commented 4 months ago

What worked for me is the following:

  1. First remove all the fields that have @hasMany and @belongsTo, then push.
  2. Add/remove the primaryKey field and push (a sketch of step 1 follows below).
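
A sketch of step 1 against the ShorterLink/Log models from earlier in the thread (trimmed): the relationship directives are removed and pushed first, and only then is the primary key changed in a second push.

```graphql
# Step 1: push with the @hasMany/@belongsTo fields removed.
type ShorterLink @model {
  id: ID! @primaryKey(sortKeyFields: ["createdAt"])
  name: String!
  createdAt: String!
}

type Log @model {
  id: ID! @primaryKey(sortKeyFields: ["createdAt"])
  createdAt: String!
  title: String!
}
```

Step 2 is then the primary-key change itself (for example, dropping the sortKeyFields on Log), pushed on its own as in the sketches above.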
djom202 commented 3 months ago

Today I had the same issue. While looking for a solution or workaround, I found that one of the things that produces this issue is a conflicting change in the infra code. Since the env in the cloud was working fine, I made a backup of the amplify folder so I wouldn't forget which changes were made, then deleted the amplify folder and pulled the entire env again. It's just a band-aid; only a temporary fix until a permanent solution is found.