scikit-hep / pyhf

pure-Python HistFactory implementation with tensors and autodiff
https://pyhf.readthedocs.io/
Apache License 2.0

replace einsum shuffling in tensorviewer with variable indexing #594

Open lukasheinrich opened 5 years ago

lukasheinrich commented 5 years ago

Description

I just realized that the tensor backends support indexing like

import torch

a = torch.randn(1, 2, 3)
a[..., 0]
a[..., 1]
a[..., 2]

to access the last axis. In light of this, the einsum shuffling in TensorViewer can probably be replaced.
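For concreteness, a minimal sketch of the equivalence with the torch backend (the einsum string below is illustrative, not the actual TensorViewer code):

import torch

a = torch.randn(1, 2, 3)

# einsum-style shuffle: permute the last axis to the front, then index it
shuffled = torch.einsum("ijk->kij", a)

# ellipsis indexing reaches the same slices directly
for i in range(a.shape[-1]):
    assert torch.equal(a[..., i], shuffled[i])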

kratsg commented 5 years ago

Why or how is this different from a[:,:,0], a[:,:,1], and so on? I guess the ... means an arbitrary number of dimensions? Is there a performance difference? Maybe a[...,N] = a.T[N]?
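For reference, a quick NumPy check of the a[...,N] = a.T[N] guess; it only holds up to a transpose once there are more than two dimensions:

import numpy as np

a = np.random.randn(1, 2, 3)

# for a 3-d array, a[..., n] is exactly a[:, :, n]
assert np.array_equal(a[..., 0], a[:, :, 0])

# a.T reverses *all* axes, so a.T[n] is the transpose of a[..., n];
# the two only coincide for 2-d arrays
assert np.array_equal(a.T[0], a[..., 0].T)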

lukasheinrich commented 5 years ago

Yeah, it’s the same as the colon version, just for an arbitrary number of dimensions. I could see that this does not change the memory layout, so it should be faster.
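A rough way to probe that claim with the torch backend (the shapes and timings here are illustrative; results will vary with hardware and tensor size):

import timeit

import torch

a = torch.randn(500, 500, 10)

# a[..., 0] is a strided view into the same storage: nothing is copied
assert a[..., 0].data_ptr() == a.data_ptr()

# crude timing of the two access patterns
t_index = timeit.timeit(lambda: a[..., 0], number=10_000)
t_einsum = timeit.timeit(lambda: torch.einsum("ijk->kij", a)[0], number=10_000)
print(f"ellipsis indexing: {t_index:.3f} s, einsum shuffle: {t_einsum:.3f} s")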


kratsg commented 5 years ago

I could see that this does not change the memory layout, so it should be faster.

This is new to me (surprisingly!), but is it faster than the transpose version, or than the colon access?
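For what it’s worth, a NumPy sketch of why one might expect little difference: all three spellings return views of the same buffer, so any gap should come from per-call overhead rather than data movement (view semantics hold for NumPy and torch; other backends may copy):

import numpy as np

a = np.random.randn(4, 5, 6)

colon = a[:, :, 2]                     # explicit colon access
dots = a[..., 2]                       # ellipsis access
transposed = np.moveaxis(a, -1, 0)[2]  # transpose-then-index

# none of these copies data; each is a view into `a` with the same values
for view in (colon, dots, transposed):
    assert np.shares_memory(view, a)
    assert np.array_equal(view, colon)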