Open ioweb-gr opened 3 years ago
Hi @ioweb-gr. Thank you for your report. To help us process this issue please make sure that you provided the following information:
Please make sure that the issue is reproducible on a vanilla Magento instance by following the Steps to reproduce. To deploy a vanilla Magento instance on our environment, please add a comment to the issue:
@magento give me 2.4-develop instance (deploys the upcoming 2.4.x release)
For more details, please review the Magento Contributor Assistant documentation.
Please, add a comment to assign the issue: @magento I am working on this
Join Magento Community Engineering Slack and ask your questions in #github channel.
:warning: According to the Magento Contribution requirements, all issues must go through the Community Contributions Triage process. Community Contributions Triage is a public meeting.
:clock10: You can find the schedule on the Magento Community Calendar page.
:telephone_receiver: The triage of issues happens in the queue order. If you want to speed up the delivery of your contribution, please join the Community Contributions Triage session to discuss the appropriate ticket.
:movie_camera: You can find the recording of the previous Community Contributions Triage on the Magento Youtube Channel
:pencil2: Feel free to post questions/proposals/feedback related to the Community Contributions Triage process to the corresponding Slack Channel
I'm not able to test this on the demo instance.
Hi @engcom-November. Thank you for working on this issue. In order to make sure that the issue has enough information and is ready for development, please read and check the following instructions: :point_down:
[ ] 1. Verify that the issue has all the required information (Preconditions, Steps to reproduce, Expected result, Actual result). If the issue has a valid description, the label Issue: Format is valid will be added to the issue automatically. Please edit the issue description if needed, until the label Issue: Format is valid appears.
[ ] 2. Verify that the issue has a meaningful description and provides enough information to reproduce it. If the report is valid, add the Issue: Clear Description label to the issue yourself.
[ ] 3. Add Component: XXXXX label(s) to the ticket, indicating the components it may be related to.
[ ] 4. Verify that the issue is reproducible on the 2.4-develop branch.
  - Add the comment @magento give me 2.4-develop instance to deploy a test instance on Magento infrastructure.
  - If the issue is reproducible on the 2.4-develop branch, please add the label Reproduced on 2.4.x.
  - If the issue is not reproducible, add a comment that the issue is not reproducible, close the issue, and stop the verification process here!
[ ] 5. Add the label Issue: Confirmed once verification is complete.
[ ] 6. Make sure that the automatic system confirms that the report has been added to the backlog.
Let me add more info here. I tracked down this post
https://maxchadwick.xyz/blog/lessons-learned-during-a-recent-magento-2-deploy
It seems that during operations which lock the tables, dropping or creating a trigger has to wait for all locks on those tables to be released.
Looking further into the process, I noticed that in my case the indexers were rebuilding the catalog rule price replica tables, locking the tables with queries like this:
DELETE FROM `catalogrule_product_price_replica` WHERE (product_id NOT IN ((SELECT `crp`.`product_id` FROM `catalogrule_product_price`
This causes a lot of queries to wait during setup:upgrade, making the downtime extremely high. Unfortunately, even if you build the files on a different server, the setup:upgrade procedure still has to run on production.
This should never happen and cause such unpredictable downtime.
This is what the end result in mytop looks like in such cases
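To confirm from the database side what the mytop screenshot shows, here is a diagnostic sketch (illustrative only: it relies on performance_schema.metadata_locks, which needs the 'wait/lock/metadata/sql/mdl' instrument enabled (on by default in MySQL 8.0, off by default in 5.7), and the table names are simply the ones from the query above):

```sql
-- Which sessions hold or wait for metadata locks on the tables involved?
-- A trigger DROP/CREATE issued by setup:upgrade shows up here as a
-- PENDING EXCLUSIVE lock while the long-running index query keeps its
-- SHARED_* lock GRANTED.
SELECT object_schema,
       object_name,
       lock_type,
       lock_status,
       owner_thread_id
FROM performance_schema.metadata_locks
WHERE object_type = 'TABLE'
  AND object_name IN ('catalogrule_product_price', 'catalogrule_product_price_replica');

-- Or simply check which long-running statements are keeping those locks alive:
SHOW FULL PROCESSLIST;
```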
I think I have the same issue. I will track this behavior and see.
I've seen it happen in 3 installations on 3 different servers, and now before setup:upgrade I make sure all indexers are green to avoid the issue. I will report back if this stops occurring, but it would be great if it could be verified.
Just an update: while following this practice, the downtimes have been reduced significantly because the locks are held for a shorter period. The issue still persists, but it is mitigated this way.
Basically, I can verify that if the indexers are running in scheduled mode, the issue can happen with setup:upgrade whenever they have items in the backlog.
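As a rough illustration of that pre-deploy backlog check (a sketch only, assuming the standard mview_state and *_cl changelog tables; catalog_product_price is just an example view id):

```sql
-- Compare the version an indexer has processed with the newest changelog
-- entry. A large gap means the mview has a backlog, and setup:upgrade is
-- likely to contend on table locks while that backlog is being worked off.
SELECT s.view_id,
       s.mode,
       s.status,
       s.version_id AS processed_version,
       (SELECT MAX(version_id) FROM catalog_product_price_cl) AS latest_version
FROM mview_state AS s
WHERE s.view_id = 'catalog_product_price';
```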
Hi @engcom-Delta. Thank you for working on this issue. In order to make sure that the issue has enough information and is ready for development, please read and check the following instructions: :point_down:
[ ] 1. Verify that the issue has all the required information (Preconditions, Steps to reproduce, Expected result, Actual result). If the issue has a valid description, the label Issue: Format is valid will be added to the issue automatically. Please edit the issue description if needed, until the label Issue: Format is valid appears.
[ ] 2. Verify that the issue has a meaningful description and provides enough information to reproduce it. If the report is valid, add the Issue: Clear Description label to the issue yourself.
[ ] 3. Add Component: XXXXX label(s) to the ticket, indicating the components it may be related to.
[ ] 4. Verify that the issue is reproducible on the 2.4-develop branch.
  - Add the comment @magento give me 2.4-develop instance to deploy a test instance on Magento infrastructure.
  - If the issue is reproducible on the 2.4-develop branch, please add the label Reproduced on 2.4.x.
  - If the issue is not reproducible, add a comment that the issue is not reproducible, close the issue, and stop the verification process here!
[ ] 5. Add the label Issue: Confirmed once verification is complete.
[ ] 6. Make sure that the automatic system confirms that the report has been added to the backlog.
Hi @ioweb-gr,
It works fine on a Magento 2.4-develop instance. I would request you to try once again and confirm. Hence, I have added the label 'Needs update'.
Thanks
Hi, what do you mean it's working fine?
Let's take it one step at a time.
Did you verify that during setup:upgrade the triggers are dropped and recreated?
Hi @ioweb-gr, we could not observe that triggers are dropped and recreated during setup:upgrade. Thanks
@engcom-Delta
I see. Here's how to guarantee that you'll see the triggers being dropped and recreated.
Step 1: Put all indexers in scheduled mode.
Step 2: Modify the database table mview_state and add the following check:
CHECK (mode = 'enabled')
Step 3: Execute setup:upgrade.
How does this guarantee you'll notice the issue?
First of all, when indexers change mode from save to schedule and vice versa, triggers are created and dropped in order for the indexers to function.
By adding the above check, setup:upgrade will throw an exception when it executes, because of a DB-level constraint.
At that point you'll know for sure that the indexer's mode was changed from scheduled to save and that the setup:upgrade command tried to drop and recreate the triggers.
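For illustration, a minimal sketch of the temporary constraint described above (the constraint name is arbitrary; note that MySQL only enforces CHECK constraints from 8.0.16 onward, while older versions parse but ignore them, and MariaDB enforces them from 10.2):

```sql
-- Make any attempt to flip a view out of 'enabled' (scheduled) mode fail loudly.
ALTER TABLE mview_state
    ADD CONSTRAINT chk_mview_mode_enabled CHECK (mode = 'enabled');

-- Now run bin/magento setup:upgrade. If it unsubscribes the views
-- (switching mode to 'disabled' and dropping the triggers), the UPDATE on
-- mview_state violates the constraint and the command throws an exception.

-- Remove the constraint again once the test is done
-- (DROP CONSTRAINT needs MySQL 8.0.19+ or MariaDB; use DROP CHECK on 8.0.16-8.0.18):
ALTER TABLE mview_state
    DROP CONSTRAINT chk_mview_mode_enabled;
```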
Could you give it a try?
Hi @engcom-Hotel. Thank you for working on this issue. In order to make sure that the issue has enough information and is ready for development, please read and check the following instructions: :point_down:
[ ] 1. Verify that the issue has all the required information (Preconditions, Steps to reproduce, Expected result, Actual result). If the issue has a valid description, the label Issue: Format is valid will be added to the issue automatically. Please edit the issue description if needed, until the label Issue: Format is valid appears.
[ ] 2. Verify that the issue has a meaningful description and provides enough information to reproduce it. If the report is valid, add the Issue: Clear Description label to the issue yourself.
[ ] 3. Add Component: XXXXX label(s) to the ticket, indicating the components it may be related to.
[ ] 4. Verify that the issue is reproducible on the 2.4-develop branch.
  - Add the comment @magento give me 2.4-develop instance to deploy a test instance on Magento infrastructure.
  - If the issue is reproducible on the 2.4-develop branch, please add the label Reproduced on 2.4.x.
  - If the issue is not reproducible, add a comment that the issue is not reproducible, close the issue, and stop the verification process here!
[ ] 5. Add the label Issue: Confirmed once verification is complete.
[ ] 6. Make sure that the automatic system confirms that the report has been added to the backlog.
Hello @ioweb-gr,
We have tried to reproduce the issue on the Magento 2.4-develop branch with the following data:
But for us setup:upgrade is not taking that much time. We have recorded a video related to that; please have a look and let us know if we have missed anything:
Thanks
Hi, that only means that you cannot simulate real conditions this way.
On our client's store we have around 60k products, 500 orders daily, 50-150 concurrent customers browsing the website at all times, 5 people doing data entry, multiple catalog price rules, 5 sources, 11 stocks, etc.
The indexers constantly have items in the queue to reindex, and while setup:upgrade is not always extremely slow, I have seen cases where it takes a long time (even 15 minutes), as opposed to the normal runs that need only a few seconds.
Usually the queries running in the background, as shown by mytop, are related to the MSI stock indexers or, in our specific case, the catalog rule pricing indexers.
The best way I found to replicate this is to have a reindexing process running on an amount of data large enough to lock the tables, and then try to run setup:upgrade. Because the lock exists, the process will hang for an indefinite period of time until the lock is released.
But in spite of this side effect, the underlying issue still puzzles me.
If I've decided to set my indexers to schedule mode, why would they need to switch to save mode and then back to schedule during setup:upgrade? There's nothing to gain by that; it only adds time to the setup:upgrade process.
@kandy: maybe it would help if a senior developer could have a look here? The engcom squad does not seem to understand this problem; I think they lack some deeper knowledge about this rather complicated part of Magento's functionality.
Same issue here on a busy store.
Any solution? @joeshelton-wagento @ioweb-gr
Not really. I'm expecting Magento to at least confirm this is happening, so a few more voices might help.
After an internal discussion, and based on multiple complaints, we have confirmed this issue. For this type of issue it is not trivial to describe simple (end-user) steps to reproduce.
Also, it is hard to predict the final performance results. Theoretically, the numbers should be better. So, the Acceptance Criteria for this issue should be a performance measurement before and after.
Suggested approach
cc: @ioweb-gr @hostep cc2: @engcom-Hotel @kandy @sidolov @akaplya
:white_check_mark: Jira issue https://jira.corp.magento.com/browse/AC-2423 is successfully created for this GitHub issue.
:white_check_mark: Confirmed by @sdzhepa. Thank you for verifying the issue.
Issue Available: @sdzhepa, You will be automatically unassigned. Contributors/Maintainers can claim this issue to continue. To reclaim and continue work, reassign the ticket to yourself.
Hello
Due to questions in slack channels about the status of this issue, I think it is better to provide answers here. This post is based on information that I found in several Jira tickets related to this issue and reflects the current status.
Development of the fix is going on in the internal ticket ACP2E-963, so after the merge the related commits can be found by this ticket id, e.g. https://github.com/magento/magento2/search?q=ACP2E-963&type=commits
As far as I can see, the initial development (fix) and testing are already done, and the ticket is at the preparation-for-delivery stage. I think it will be merged into 2.4-develop soon.
The target release version for this fix is 2.4.6.
cc: @coderimus
The issue has been fixed by the Adobe team in the scope of the internal Jira ticket ACP2E-963.
Related commits: https://github.com/magento/magento2/search?q=ACP2E-963&type=commits
@sdzhepa is the target release for this fix still 2.4.6 (aka, not before March)?
@lauraseidler
According to the Jira ticket ACP2E-963, the target release is 2.4.6.
Hi guys, thanks for this fix!
We were struggling with this bug.
Any chance to be released in a Quality Patch before 2.4.6 release?
It would help a lot of merchants!
> Any chance to be released in a Quality Patch before 2.4.6 release?
Indeed, just yesterday I had to wait 35 minutes for Magento to finish running setup:upgrade due to dropping and re-adding the triggers. It was painful, and the downtime cost the merchant a lot of money. A quality patch would be awesome.
We have created a patch with the check statements that are part of ACP2E-963 on 2.4.5-p1, but unfortunately it does not solve the issue; at each deployment, the triggers are created again ...
This appears to be reproducible on 2.4.3 as well, just to add.
We are still experiencing this issue on 2.4.6.
@sdzhepa we're also still facing this, could you reopen please?
We are reopening this issue for further analysis.
Hi @engcom-Hotel. Thank you for working on this issue. In order to make sure that the issue has enough information and is ready for development, please read and check the following instructions: :point_down:
[ ] Add Area: XXXXX label(s) to the ticket, indicating the functional areas it may be related to.
[ ] Verify that the issue is reproducible on the 2.4-develop branch. Add the comment @magento give me 2.4-develop instance to deploy a test instance on Magento infrastructure. If the issue is reproducible on the 2.4-develop branch, please add the label Reproduced on 2.4.x.
[ ] Add the label Issue: Confirmed once verification is complete.
Hello @ioweb-gr,
We have rechecked this issue, and it seems the triggers are still being dropped and created on each bin/magento setup:upgrade run. In order to reproduce the issue, we followed the steps below:
1. Enable the db.log via the following command: bin/magento dev:query-log:enable
2. Debug the subscribe and unsubscribe methods of the class below: https://github.com/magento/magento2/blob/adc4105fcfbeee29d534482d8c6d9c5c1a193a0c/lib/internal/Magento/Framework/Mview/View.php#L1
3. Run the bin/magento setup:upgrade command.
We can see in the db.log screenshots below that the triggers are always dropped and created:
[screenshot: Dropping Triggers]
[screenshot: Creating Triggers]
In the case of a huge catalog with many categories, the bin/magento setup:upgrade command may therefore take a long time. Hence we are confirming the issue.
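For reference, the statements that show up in db.log at this point look roughly like the following (a sketch only; the trigger name, base table and changelog table are illustrative examples, and the real trigger bodies generated by Magento contain more logic):

```sql
-- Recorded while Mview\View::unsubscribe() runs:
DROP TRIGGER IF EXISTS trg_catalog_product_entity_after_insert;

-- ...and while Mview\View::subscribe() recreates it:
CREATE TRIGGER trg_catalog_product_entity_after_insert
AFTER INSERT ON catalog_product_entity FOR EACH ROW
BEGIN
    -- push the changed entity into the indexer's changelog
    INSERT IGNORE INTO catalog_product_price_cl (entity_id) VALUES (NEW.entity_id);
END;

-- Both statements need an exclusive metadata lock on catalog_product_entity,
-- so they block behind any long-running query that still touches the table.
```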
Thanks
:x: Cannot export the issue. This GitHub issue is already linked to Jira issue(s): https://jira.corp.adobe.com/browse/AC-2423
@engcom-Hotel is there any progress on this issue?
@engcom-Hotel let me also add one more related issue here, #36667, because when the queries for the related / upsell / cross-sell blocks are running as well and are stuck (in the "Sending data" phase), setup:upgrade won't be able to progress either: in order to drop and re-add the triggers it needs to take an exclusive lock on the tables, leading to an indefinite processing time.
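A quick way to spot those stuck statements (a generic MySQL sketch, not specific to Magento): list queries that have been in the "Sending data" state for a while, since they are the ones holding the shared metadata locks that block the trigger DDL.

```sql
-- Long-running statements that keep shared metadata locks on their tables.
SELECT id,
       time AS seconds_running,
       state,
       LEFT(info, 120) AS query_snippet
FROM information_schema.processlist
WHERE command = 'Query'
  AND state = 'Sending data'
  AND time > 60
ORDER BY time DESC;
```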
Hello,
As far as I can see, this issue was fixed in the scope of the internal Jira ticket ACP2E-963 by the internal team. Related commits: https://github.com/search?q=repo%3Amagento%2Fmagento2+ACP2E-963&type=commits
Based on the Jira ticket, the target version is 2.4.6.
Thanks
@engcom-Hotel we are still seeing this in 2.4.6
I can confirm it is still happening in 2.4.6
Let me reopen it for further investigation.
@magento I am working on this
According to this comment https://github.com/magento/magento2/issues/33386#issuecomment-1043290448 and multiple complaints related to this issue, we are reconfirming this issue.
Thanks
:x: Cannot export the issue. This GitHub issue is already linked to Jira issue(s): https://jira.corp.adobe.com/browse/AC-2423
According to the release notes, this issue has been solved in Magento 2.4.6:
Apparently it's not working properly though
Hi, is there already a quality patch for this issue? We are facing it on 2.4.7-p2.
Hi, in order to work around this core issue, we have been using the following module for a while now without any problems: https://github.com/pykettk/module-indexer-deploy-config
I have a fairly large catalog on a website, and I've noticed that sometimes setup:upgrade takes a very long time to complete, more than 30 minutes. It doesn't always happen, which makes it hard to track down, but by carefully examining what is going on I noticed that the command drops and recreates the database triggers when the indexers are set to scheduled mode. However, with the indexers in save mode, the backend takes a very long time to save products and especially categories.
So our only option on big catalogs is to use the indexers in schedule mode.
Examining further, I forced a check on the mview_state table to see what script is changing the indexer mode during setup:upgrade, and the command threw an exception, as expected, when executing the following query
Then I was able to see that the methods \Magento\Framework\Mview\View::unsubscribe and \Magento\Framework\Mview\View::subscribe are executed for all indexers regardless, which triggers dropping and re-adding the database triggers.
However, dropping and recreating a trigger on a fairly large database takes a long time to complete, causing significant downtime.
It seems I'm not the only one facing this issue: https://magento.stackexchange.com/questions/330245/magento-2-indexers-sometimes-switchon-their-own-to-update-on-save-when-config
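For completeness, here is a quick way to observe the recreation from the database side (a sketch; the trg_ prefix matches the mview trigger names Magento generates, but verify against your own schema):

```sql
-- List the mview triggers and their creation timestamps; after running
-- bin/magento setup:upgrade the CREATED values change, showing that the
-- triggers were dropped and recreated even though no indexer mode changed.
SELECT trigger_name,
       event_object_table,
       action_timing,
       event_manipulation,
       created
FROM information_schema.triggers
WHERE trigger_schema = DATABASE()
  AND trigger_name LIKE 'trg\_%'
ORDER BY event_object_table, trigger_name;
```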
Preconditions (*)
Steps to reproduce (*)
Expected result (*)
Actual result (*)
Additional Information
Please provide Severity assessment for the Issue as Reporter. This information will help during Confirmation and Issue triage processes.