jongpie / NebulaLogger

The most robust logger for Salesforce. Works with Apex, Lightning Components, Flow, Process Builder & Integrations. Designed for Salesforce admins, developers & architects.
https://nebulalogger.com
MIT License

Additional Log__c records generated after archiving older Log__c records using the Big Object Archiving plugin #694

Open pankaj0509 opened 1 month ago

pankaj0509 commented 1 month ago

Package Edition of Nebula Logger

Unlocked Package

Package Version of Nebula Logger

v4.13.11

New Bug Summary

Hi @jongpie ,

I am using the custom field mappings feature branch: https://github.com/jongpie/NebulaLogger/tree/feature/custom-field-mappings . We have included the Big Object Archiving plugin in our project file (sfdx-project.json):

```json
{
  "name": "Nebula Logger",
  "namespace": "",
  "sourceApiVersion": "60.0",
  "sfdcLoginUrl": "https://login.salesforce.com",
  "plugins": {
    "sfdx-plugin-prettier": { "enabled": true }
  },
  "packageDirectories": [
    {
      "package": "Nebula Logger - Core",
      "path": "./nebula-logger/core",
      "definitionFile": "./config/scratch-orgs/base-scratch-def.json",
      "scopeProfiles": true,
      "versionNumber": "4.13.11.NEXT",
      "versionName": "Custom Field Mappings Support",
      "versionDescription": "Added the ability to set & map custom fields, using a new CMDT LoggerFieldMapping__mdt and new instance methods LogEntryEventBuilder.setField() & setFields()",
      "releaseNotesUrl": "https://github.com/jongpie/NebulaLogger/releases",
      "unpackagedMetadata": { "path": "./nebula-logger/extra-tests" },
      "default": true
    },
    {
      "default": false,
      "package": "Nebula Logger - Core Plugin - Async Failure Additions",
      "path": "./nebula-logger/plugins/async-failure-additions/plugin",
      "versionName": "Added logging for Screen Flow failures",
      "versionNumber": "1.0.2.NEXT",
      "versionDescription": "Allows unhandled exceptions within screen flows to be automatically logged (toggleable, default off)"
    },
    {
      "package": "Nebula Logger - Core Plugin - Big Object Archiving",
      "path": "./nebula-logger/plugins/big-object-archiving/plugin",
      "versionName": "Beta Release",
      "versionNumber": "0.9.0.NEXT",
      "versionDescription": "Initial beta version of new plugin",
      "default": false
    },
    {
      "path": "nebula-logger",
      "package": "testunlocked5",
      "versionName": "ver 0.1",
      "versionNumber": "0.1.0.NEXT",
      "default": false,
      "versionDescription": "testunlocked package"
    }
  ],
  "packageAliases": {
    "testunlocked5": "0HoGB000000wk3R0AQ",
    "testunlocked5@0.1.0-1": "04tGB000003rdwyYAA"
  }
}
```

FYI, in the above config I am also creating my own unlocked package named testunlocked5.

After we installed this unlocked package, everything worked as expected. However, we noticed something when we ran LogBatchPurger after making sure the Log__c records had a "Log Purge Action" of "Archive" and a "Log Retention Date" <= TODAY.

The Log__c records are getting archived, and I am able to see the corresponding log entries in LogEntryArchive__b. But we also found that new Log__c records are getting created.
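For reference, a purge run like the one described above can be kicked off with anonymous Apex along these lines (a sketch; LogBatchPurger is Nebula Logger's batch job, run here with its default batch size):

```apex
// Sketch: run Nebula Logger's purge batch job via anonymous Apex.
// Log__c records whose purge action is 'Archive' and whose
// LogRetentionDate__c has passed are processed; with the Big Object
// Archiving plugin installed, their entries are copied into
// LogEntryArchive__b before the Log__c records are deleted.
Database.executeBatch(new LogBatchPurger());
```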

Is this normal behavior?

Do you have any documentation on how to use the Big Object Archiving plugin?

Pankaj

jongpie commented 4 weeks ago

Hi @pankaj0509, based on what you've described, I think it's working as expected - but there are a couple of ways to use the Big Object archiving plugin, depending on how you'd like it to behave. Unfortunately, the documentation needs to be updated/expanded, but the original release notes for the plugin have some details about how you can configure it (copied below). At the moment, it sounds like you're using option 4 below, but you may want to consider using option 1 instead.

  1. LoggerSettings__c.DefaultSaveMethod__c = EVENT_BUS, LoggerSettings__c.DefaultPlatformEventStorageLocation__c = BIG_OBJECT - with these options, Nebula Logger will still leverage the Event Bus, which ensures that log entries are saved, even if an exception is thrown. This may not be ideal for all orgs/users due to org limits for platform events, but this would provide the most reliable way of logging directly to LogEntryArchive__b & circumvent the custom objects Log__c, LogEntry__c and LogEntryTag__c
  2. LoggerSettings__c.DefaultSaveMethod__c = BIG_OBJECT_IMMEDIATE - with this option, Nebula Logger will skip the Event Bus, and instead try to write directly to the Big Object LogEntryArchive__b. Any Big Object records that are saved will not be rolled back if there are any exceptions in the transaction - however, this option only works if you save the Big Objects before performing DML on any "normal" SObjects. If you perform DML on another SObject first, and then attempt to save directly to the Big Object, the platform will throw a mixed DML exception, and no Big Object records will be saved.
  3. LoggerSettings__c.DefaultSaveMethod__c = BIG_OBJECT_QUEUEABLE - with this option, Nebula Logger will asynchronously save Big Object records using a queueable job. This is helpful in avoiding hitting limits in the original transaction, and also avoids the mixed DML exception that can occur when using BIG_OBJECT_IMMEDIATE (above). However, if an exception occurs in the current transaction, then the queueable job will not be enqueued.
  4. LoggerSettings__c.DefaultSaveMethod__c = EVENT_BUS, LoggerSettings__c.DefaultPlatformEventStorageLocation__c = CUSTOM_OBJECTS, LoggerSettings__c.DefaultLogPurgeAction__c = Archive - with these options configured, Nebula Logger will utilize the Event Bus to ensure any log entries are published (even if an exception occurs), and the data is then initially stored in the custom objects Log__c, LogEntry__c and LogEntryTag__c. Once the log's retention date has passed (Log__c.LogRetentionDate__c <= System.today()), the plugin will archive the custom object data into LogEntryArchive__b before the custom object data is deleted.
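If option 1 is the better fit, the settings can be switched with a few lines of anonymous Apex - a sketch, assuming you want to change the settings record for the running user (the same fields can also be edited declaratively in Setup under the LoggerSettings__c custom setting):

```apex
// Sketch: point Nebula Logger at the Big Object (option 1 above).
// Logger.getUserSettings() returns the LoggerSettings__c record that
// applies to the current user, creating one in memory if none exists.
LoggerSettings__c settings = Logger.getUserSettings();
settings.DefaultSaveMethod__c = 'EVENT_BUS';
settings.DefaultPlatformEventStorageLocation__c = 'BIG_OBJECT';
upsert settings;
```

With this in place, published LogEntryEvent__e records are stored directly in LogEntryArchive__b, so no Log__c, LogEntry__c or LogEntryTag__c records are created in the first place.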