Closed by andreialecu 3 years ago
I have noticed an additional issue related to this.
Repro:
1) modify the schema
2) amplify push
3) once it starts updating the CloudFormation stacks, modify the schema again and save it
4) wait for step 3 to end
5) amplify push again -> it will report no changes
It's unclear whether simply adding whitespace to the schema and pushing again will properly pick up ALL the changes.
Doing amplify pull --force will revert the schema to what it was during step 2, even though it reports no changes at step 5.
@andreialecu this latter one is "by design": the timestamp of the schema file is updated in the meta file after a successful push operation. It requires changes to the change detection to work around edge cases like this.
It could still be fixed by storing the timestamp of the file in memory from when the schema file was read, and on a successful push persisting that one instead of assuming the file didn't change in the meantime.
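The suggested fix can be sketched as follows. This is a minimal illustration, not amplify-cli's actual implementation; the names (`SchemaState`, `readSchema`, `persistPushedState`, `lastPushTimeStamp`) are assumptions for the sake of the example.

```typescript
import * as fs from "fs";

interface SchemaState {
  content: string;
  // mtime captured at the moment the schema was read, before the push starts
  mtimeAtRead: number;
}

function readSchema(schemaPath: string): SchemaState {
  const content = fs.readFileSync(schemaPath, "utf8");
  const mtimeAtRead = fs.statSync(schemaPath).mtimeMs;
  return { content, mtimeAtRead };
}

// After a successful push, persist the mtime from read time rather than
// re-reading the file's current mtime. If the file was edited mid-push,
// its on-disk mtime is newer than mtimeAtRead, so the next change
// detection still sees a difference instead of silently absorbing it.
function persistPushedState(
  state: SchemaState,
  meta: { lastPushTimeStamp?: number }
): void {
  meta.lastPushTimeStamp = state.mtimeAtRead;
}
```

With this approach, an edit saved while the push is running leaves the file's mtime newer than the persisted value, so the following `amplify push` would correctly report a change.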
Lots of hours have been wasted troubleshooting seemingly weird bugs because amplify push reports no changes when in fact there are changes that were not detected properly. This is especially annoying when making very small schema changes.
I saw that amplify pull can retrieve the schema from AppSync, so maybe a full byte-for-byte comparison could be made.
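The comparison idea could look something like the sketch below. Note the caveats: `fetchRemoteSchema` is a placeholder for an AppSync API call, and the deployed (transformed) schema may legitimately differ from the locally authored one, so a real implementation would need to normalize both sides before comparing. Everything here is illustrative, not amplify-cli code.

```typescript
import { createHash } from "crypto";

// Hash a schema after light normalization so that a CRLF/LF or
// trailing-whitespace difference alone does not register as a change.
function schemaDigest(schema: string): string {
  const normalized = schema.replace(/\r\n/g, "\n").trim();
  return createHash("sha256").update(normalized).digest("hex");
}

// Byte-for-byte (post-normalization) comparison of local vs remote schema.
function schemasMatch(local: string, remote: string): boolean {
  return schemaDigest(local) === schemaDigest(remote);
}
```

This avoids the timestamp problem entirely, at the cost of an extra network round trip per status check.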
@andreialecu Which schema are you modifying? What's the location of this schema file?
amplify/backend/api/apiname/schema.graphql
So you're saying - after you make a change to that file - your amplify status doesn't show any status change?
To clarify, assume the scenario in https://github.com/aws-amplify/amplify-cli/issues/4066#issuecomment-619527853
It's very common to make additional changes to the schema while amplify push is running, especially during initial prototyping. It could be something as simple as changing the type of a field or adding a connection.
If I save the schema during this time, a subsequent amplify push will detect no changes, and it is unclear what the state of the cloud deployment is.
It's possible my initial report was caused by this, instead.
So you're saying - after you make a change to that file - your amplify status doesn't show any status change?
Yes, and @atillah confirmed that this can happen in his comment above.
@andreialecu Ah, got you! So basically changing the schema while amplify push is running will result in this. You can use the amplify push --force command in that scenario.
I understand, that's a potential workaround.
However:
I don't always remember whether I made changes to the schema while amplify push was running, depending on what else I'm working on, and it takes a long time to push every time.
I'd like to avoid having to do random additional forced pushes to ensure the deployment is up to date. :)
I feel this can cause a lot of frustration and can result in impossible-to-debug problems, especially for newcomers.
And since I have your attention :)
Here's an additional oddity:
1) start with a fresh git repository with all changes committed
2) delete an api
3) amplify push, wait for it to finish
4) revert the git changes made by steps 2 and 3
5) amplify push reports no changes
Now it's the first time I've thought of using amplify push --force. I never knew about the timestamps problem.
@andreialecu About the original issue: I can't reproduce it; it works as expected and the changes were deployed for me. Based on the conversation, though, I think you were changing the schema during push, and that's how you ended up in this corrupt state.
Besides timestamps, we store the hashes of the given directories, but as I wrote, that happens after push. I'll have to look into how this could be moved to before push; since this behavior spans other categories, it is a change in logic.
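The directory-hash detection described above can be sketched roughly as follows. The helper names are assumptions for illustration, not amplify-cli internals; the key point is in the comment on `hasChanged`: computing the hash before push and persisting that value afterwards would avoid silently absorbing edits made while the push was running.

```typescript
import { createHash } from "crypto";
import * as fs from "fs";
import * as path from "path";

// Compute a single digest over every file under a directory.
// Entries are sorted so the digest is stable across filesystems.
function hashDirectory(dir: string): string {
  const hash = createHash("sha256");
  for (const entry of fs.readdirSync(dir).sort()) {
    const full = path.join(dir, entry);
    if (fs.statSync(full).isDirectory()) {
      hash.update(entry).update(hashDirectory(full));
    } else {
      hash.update(entry).update(fs.readFileSync(full));
    }
  }
  return hash.digest("hex");
}

// Compare the directory's current digest with the one persisted at the
// last successful push. If lastPushedHash was captured *before* that
// push started, mid-push edits still show up as a change here.
function hasChanged(dir: string, lastPushedHash: string | undefined): boolean {
  return hashDirectory(dir) !== lastPushedHash;
}
```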
On this last one: the amplify-meta.json file is NOT under source control, and that is what carries the timestamp and hash information for pushed resources. So when you revert in git, I think doing an amplify pull could solve the problem of getting up-to-date information about the status of the resources.
@andreialecu I changed the title and marked as an enhancement as it is something that could require a platform change to be done differently.
UPDATE: marked it as a bug to get this prioritized.
A known issue similar to this is when only the auth settings are updated with an 'api update' command, those changes are stored outside of the API folder so not taken into account during the change detection. It will be addressed as well as part of this issue.
There are a number of similar cases (not necessarily about changes failing to be detected) that require "tricks" to work around, such as having to push new GSIs one at a time, and probably a couple of others I can't remember. Maybe a troubleshooting / known issues section could be added to the docs?
@saevarb that is unrelated and a known limitation of CFN+DynamoDB, it is documented in the CLI docs here: https://docs.amplify.aws/cli/graphql-transformer/directives#generates-3
If you have a suggestion on how we can improve the docs, please give feedback; we also welcome PRs to the docs repo.
Closing this as a duplicate of #4914 which we're tracking.
This issue has been automatically locked since there hasn't been any recent activity after it was closed. Please open a new issue for related bugs.
Looking for a help forum? We recommend joining the Amplify Community Discord server and using the *-help channels for those types of questions.
Describe the bug
amplify push does not update AppSync resolvers when changing a type declaration.

To Reproduce
Consider this schema:
Run amplify push
Change the type of AvailabilityHours like the following:
Run amplify push again. The schema is not updated in AppSync and the updated type is not usable, resulting in being unable to query the model at all.
I was able to work around it by deleting availability from User, pushing it, then adding it back, then pushing again. But this is not great for production.

Expected behavior
Schema should be updated properly.