rollbar / pyrollbar

Error tracking and logging from Python to Rollbar
https://docs.rollbar.com/docs/python/
MIT License

batched transform #421

Closed ijsnow closed 1 year ago

ijsnow commented 1 year ago

Description of the change

Add a transform that applies every transform during a single traversal of each object, rather than traversing the object once per transform. The hope is that this will reduce CPU usage, since the number of nodes in an object will almost always be larger than the number of transforms, and in the cases where that is not true, the performance difference should be negligible.

This is an opt-in feature enabled via the batch_transforms field on the settings object, but all tests pass when using the BatchedTransform manually.
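
For illustration, here is a minimal sketch of the single-pass idea. It is not the PR's actual BatchedTransform; the apply_batched helper and the generic t.transform(...) hook are hypothetical names used only for this sketch, not part of pyrollbar's API.

def apply_batched(node, transforms, key=()):
    # Walk the object once, applying every transform at each leaf instead
    # of re-walking the whole object once per transform.
    if isinstance(node, dict):
        return {k: apply_batched(v, transforms, key + (k,))
                for k, v in node.items()}
    if isinstance(node, (list, tuple)):
        items = [apply_batched(v, transforms, key + (i,))
                 for i, v in enumerate(node)]
        return tuple(items) if isinstance(node, tuple) else items
    # Leaf value: chain the transforms here, feeding each result into the
    # next transform.
    for t in transforms:
        node = t.transform(node, key=key)
    return node

The assumption behind the CPU argument above is that the cost is dominated by the number of nodes visited, so visiting each node once and looping over the usually much shorter list of transforms there should be cheaper than one full traversal per transform.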

danielmorell commented 1 year ago

This is related to #412.

ijsnow commented 1 year ago

Do we care about the depth of a transformed node changing? I'm only asking because the existing tests pass using the batched transform. I wrote a test case that isn't supported by this change:

def test_switching_depth(self):
    # BoxedNumberTransformer changes a number's depth (0 -> {"num": 0});
    # NumberToStringTransformer should still see the nested number afterwards.
    class BoxedNumberTransformer(Transform):
        def transform_number(self, o, key=None):
            return {"num": o}

    class NumberToStringTransformer(Transform):
        def transform_number(self, o, key=None):
            return str(o)

    input = ("hello", {"there": 0})

    got = transform(
        input,
        [BoxedNumberTransformer(), NumberToStringTransformer()],
        batch_transforms=True,
    )

    self.assertEqual(got, ("hello", {"there": {"num": "0"}}))

I'm trying to get a nested traverse call to work, but it's proving a little complicated.
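
For reference, one possible shape of that nested traverse is sketched below. This is purely illustrative and not code from this PR; the apply_with_retraverse helper and the generic t.transform(...) hook are hypothetical names.

CONTAINERS = (dict, list, tuple)

def apply_with_retraverse(node, transforms, key=()):
    if isinstance(node, dict):
        return {k: apply_with_retraverse(v, transforms, key + (k,))
                for k, v in node.items()}
    if isinstance(node, (list, tuple)):
        items = [apply_with_retraverse(v, transforms, key + (i,))
                 for i, v in enumerate(node)]
        return tuple(items) if isinstance(node, tuple) else items
    for i, t in enumerate(transforms):
        new = t.transform(node, key=key)
        if isinstance(new, CONTAINERS) and not isinstance(node, CONTAINERS):
            # The transform changed the node's depth (e.g. 0 -> {"num": 0}),
            # so re-traverse the new subtree with the remaining transforms
            # so they still see the nested values.
            return apply_with_retraverse(new, transforms[i + 1:], key)
        node = new
    return node

With something like this, the test_switching_depth case above would produce ("hello", {"there": {"num": "0"}}), since the boxed number gets re-visited by the remaining string transform.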

shortcut-integration[bot] commented 1 year ago

This pull request has been linked to Shortcut Story #123195: pyrollbar: resource usage problems.

mudetroit commented 1 year ago

> Do we care about the depth of a transformed node changing? I'm only asking because the existing tests pass using the batched transform. I wrote a test case that isn't supported by this change.

I feel like this is probably okay, but I'm interested in @danielmorell's take on it. I feel like the behavior shown is what I would actually expect, but perhaps I'm wrong.

danielmorell commented 1 year ago

> Do we care about the depth of a transformed node changing? I'm only asking because the existing tests pass using the batched transform. I wrote a test case that isn't supported by this change.

@ijsnow If I understand correctly, the test you show is failing with the changes in this PR?

danielmorell commented 1 year ago

I was able to do some profiling and performance testing. Initially this does not look much different from the numbers I was seeing before, but I need to make sure that I'm not messing it up somehow.
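
As a rough, self-contained way to sanity-check whether a single pass helps at all, one can time a toy model of the two strategies. This is unrelated to pyrollbar's actual transform() internals and to the profiling described above; the payload and no-op transforms are placeholders.

import timeit

def walk(node, fn):
    # Recursively visit a nested structure, applying fn at every leaf.
    if isinstance(node, dict):
        return {k: walk(v, fn) for k, v in node.items()}
    if isinstance(node, list):
        return [walk(v, fn) for v in node]
    return fn(node)

payload = {"k%d" % i: list(range(100)) for i in range(100)}
fns = [lambda x: x for _ in range(4)]  # stand-ins for four transforms

def sequential():
    out = payload
    for fn in fns:          # one full traversal per transform
        out = walk(out, fn)
    return out

def batched():
    def apply_all(x):       # one traversal, all transforms per leaf
        for fn in fns:
            x = fn(x)
        return x
    return walk(payload, apply_all)

print("sequential:", timeit.timeit(sequential, number=20))
print("batched:   ", timeit.timeit(batched, number=20))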

ijsnow commented 1 year ago

@danielmorell

> @ijsnow If I understand correctly, the test you show is failing with the changes in this PR?

Yes, and it looks like a test that should pass. However, if I make sure every usage in the tests goes through the batched transform, all of the existing tests still pass, which is the only reason I'm asking whether this is intended to be supported.

Do we have any way of knowing how many custom transforms are in use in the wild? I'm giving it another quick pass to try to get that test to work, but since it's behind a flag in the settings, maybe we could just document that caveat, if it does indeed help with performance.