kevinohara80 / sfdc-trigger-framework

A minimal trigger framework for your Salesforce Apex Triggers
MIT License
952 stars 515 forks

Issue with addToLoopCount method #19

Open darko-dj opened 5 years ago

darko-dj commented 5 years ago

Hi Kevin,

I believe there's an issue with the addToLoopCount method, but only when the number of records affected by a DML operation is greater than 200.

It's easier to explain with an example, so let's assume the following:

Now if we run a DML operation on 600 Account records, it will result in 3 trigger batches (in a trigger context, records are processed in chunks of 200):

Therefore, I don't think we can keep track of execution counts per handler alone. It would need to be per record per handler, I believe.
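
A minimal sketch of what per-record tracking could look like. This is not the framework's actual implementation; the map name `recordLoopCounts` and the simplified count handling (the real framework uses a `LoopCount` inner class) are illustrative assumptions:

```apex
// Hypothetical sketch: count loop executions per record per handler
// instead of per handler only, so a second chunk of fresh records
// does not trip the limit that the first chunk already consumed.
private static Map<String, Map<Id, Integer>> recordLoopCounts
    = new Map<String, Map<Id, Integer>>();

private void addToLoopCount() {
    String handlerName = getHandlerName();
    if (!recordLoopCounts.containsKey(handlerName)) {
        recordLoopCounts.put(handlerName, new Map<Id, Integer>());
    }
    Map<Id, Integer> counts = recordLoopCounts.get(handlerName);
    for (Id recordId : Trigger.newMap.keySet()) {
        Integer seen = counts.containsKey(recordId) ? counts.get(recordId) : 0;
        if (seen >= maxLoopCount) {
            throw new TriggerHandlerException(
                'Max loop count of ' + maxLoopCount
                + ' reached for record ' + recordId);
        }
        counts.put(recordId, seen + 1);
    }
}
```

With this shape, a 600-record DML would increment each record's counter once per chunk, rather than incrementing a single handler-wide counter three times.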

Cheers

fbouzeraa commented 4 years ago

Hi darko-dj, yes, exactly! I didn't see your post before, so I posted the same issue; you can see my proposed solution there. The loop control is currently based on the handler name alone, which blocks the run completely from the second run onward for all events the trigger listens to, so we should control the loop by handlerName + event. I've attached my correction. Best regards,
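
A one-line sketch of the handlerName + event keying being suggested here, as a hypothetical change to how the framework builds its loop-count map key (the key format is an assumption, not the framework's actual code):

```apex
// Hypothetical: key the loop count by handler name plus trigger event,
// so e.g. BEFORE_UPDATE and AFTER_UPDATE runs are counted independently.
String loopKey = getHandlerName() + ':' + Trigger.operationType.name();
```

Note that, as pointed out below, this addresses cross-event counting but not the multiple-chunks-of-200 problem in this issue.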

gvgramazio commented 4 years ago

@fbouzeraa You posted an issue (#27) on a slightly different point. Yours is based on differentiating loops by trigger operation, while the OP is more concerned with differentiating runs of the same trigger operation on more than 200 records.

sfdevarch17 commented 2 years ago

We are facing the same issue: for bulk job processing, after the first chunk of 200 records the trigger is bypassed (I modified it to use TriggerHandler.bypass instead of throwing an exception). I am considering two options:

  1. compare the Trigger.new record set in each chunk with the previous chunk (if not null) and bypass the trigger if it is the same, else reset the max loop count; or
  2. check Trigger.new.size() in the individual handler, and only call setMaxLoopCount if it is less than 200, else clearMaxLoopCount.

I like option 1 better. What do you guys think? Any alternate approach?
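
Option 1 could be sketched roughly like this in a handler. This is a hypothetical illustration, not tested framework code; `previousChunkIds` and the use of `beforeUpdate` are assumptions:

```apex
// Hypothetical sketch of option 1: remember the previous chunk's ids and
// skip only when the exact same chunk fires again (re-entrancy), which
// implicitly "resets" the guard whenever a new chunk of records arrives.
private static Set<Id> previousChunkIds = new Set<Id>();

public override void beforeUpdate() {
    Set<Id> currentIds = Trigger.newMap.keySet();
    if (currentIds.equals(previousChunkIds)) {
        return; // same chunk re-entering: bypass to avoid recursion
    }
    previousChunkIds = currentIds.clone();
    // ... handler logic ...
}
```

One caveat with comparing whole chunks: a recursive update that touches only a subset of the chunk would not match and would run again.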

darko-dj commented 2 years ago

Yeah, I would recommend keeping a set of record ids you've already processed, or a map of record id to number of times processed, and skipping a record once it hits the max loop count. Depending on what you're trying to accomplish, you may not even need a max loop count; a simple set of record ids could do the trick.
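
The simple set-of-ids variant could look like the following hedged sketch (Account and `afterUpdate` are placeholder choices; any SObject and event would work the same way):

```apex
// Hypothetical sketch: process each record at most once per transaction.
// The static set survives across re-entrant trigger invocations within
// the same transaction, including subsequent 200-record chunks.
private static Set<Id> alreadyProcessed = new Set<Id>();

public override void afterUpdate() {
    List<Account> toProcess = new List<Account>();
    for (Account acc : (List<Account>) Trigger.new) {
        if (!alreadyProcessed.contains(acc.Id)) {
            alreadyProcessed.add(acc.Id);
            toProcess.add(acc);
        }
    }
    // run the handler's logic only on toProcess
}
```

Because each chunk of a bulk DML contains different record ids, fresh chunks pass through while genuine re-processing of the same records is skipped.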


fbouzeraa commented 2 years ago

Hi @darko-dj, yes, you're right, my solution does not cover the case you discuss (many chunks of 200 records). We should key on record ids, as we always did before this framework. It would be best if we changed the framework's bypass API to use this principle.