Closed: Phosphor-Bai closed this 4 hours ago
Hi @Phosphor-Bai,
That's how it should work.
When possible, einops creates a view of the existing tensor. To put it simply, values in the result actually point to the values in the input tensor. Since you use repeat, several output elements can point to the same original value. This strategy spends ~zero additional memory and performs zero copying in most cases.
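To illustrate the point, here is a minimal NumPy sketch (einops delegates to the backend, and NumPy's broadcasting uses the same striding idea):

```python
import numpy as np

a = np.arange(3).reshape(1, 3)

# Repeating a length-1 axis can be expressed as a stride-0 view:
# every "row" of b reads from the same memory as a, so no data is copied.
b = np.broadcast_to(a, (4, 3))

assert b.strides[0] == 0       # stride 0 along the repeated axis
assert np.shares_memory(a, b)  # zero-copy: b is a view of a
```

Because `b` is a view, writing through the original array is immediately visible in every repeated row, which is exactly the "interlocked" behavior described in the issue.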
When creating a view is impossible, einops creates a new tensor (in other words, it materializes the result).
For instance, np.reshape/torch.reshape have the same semantics: a view when possible, otherwise a copy.
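A quick NumPy demonstration of the "view when possible, otherwise copy" semantics (a sketch; `np.shares_memory` is used here just to check whether a copy was made):

```python
import numpy as np

x = np.arange(6).reshape(2, 3)

v = x.reshape(3, 2)            # contiguous input -> reshape returns a view
assert np.shares_memory(x, v)

t = x.T                        # transpose is a non-contiguous strided view
c = t.reshape(6)               # no single stride can describe this layout -> copy
assert not np.shares_memory(t, c)
```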
To understand why creating a view is sometimes impossible, you need to understand how striding works: https://stackoverflow.com/questions/53097952/how-to-understand-numpy-strides-for-layman Then you should be able to see exactly how this works in the examples you posted above.
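Connecting striding to the behavior reported below, here is a NumPy sketch of why a length-1 axis can be repeated as a view while a longer axis cannot (the einops internals may differ in detail, but the striding constraint is the same):

```python
import numpy as np

a = np.arange(3.0).reshape(3, 1)

# Repeating a length-1 axis is pure broadcasting: a stride-0 view,
# so the result stays "interlocked" with the input.
v = np.broadcast_to(a, (3, 4))
assert np.shares_memory(a, v)

# Repeating a length-3 axis means merging the broadcast copies into a
# single axis. The element addresses no longer follow one constant
# stride, so reshape has to materialize a copy -- not interlocked.
c = np.broadcast_to(a.reshape(1, 3), (4, 3)).reshape(12)
assert not np.shares_memory(a, c)
```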
Describe the bug
When using `einops.repeat` on a dimension of length 1, the values of the repeated items are interlocked (they share memory), but if the length is not 1, the values are not interlocked. By contrast, with PyTorch's `repeat_interleave` the values are never interlocked.
Reproduction steps
Steps to reproduce the behavior:
Expected behavior
The results of `a` and `b` are not consistent with those of `c` and `d`. I'm confused about why dim 1 in `b` shares the same value and why it is also shared with `a`.
Your platform
I reproduced this problem on both Python 3.9 and Python 3.10, with einops version 0.8.0 in both cases.