The commit 7c916c13675bb05b1a5522a9c797b33e997e4f19 tries to give proxies the identifiers observed in the source of the traced program. However, such identifiers sometimes refer to thunder's internal implementation of a language construct, which can be confusing.
Code sample
```python
import torch
import thunder

@thunder.jit
def f(xs, s):
    for i, x in enumerate(xs):
        s += x
    return s

n = 6
xs = [torch.zeros(n) for _ in range(n)]
s = torch.zeros(n)

f(xs, s)
print(thunder.last_traces(f)[-1])
```
These renamings happen in `thunder/core/jit_ext._maybe_update_proxy_name`, which is called from `thunder.core.interpreter._load_fast_handler`. `frame.code` reveals that the irrelevant variable names come from the lookasides implemented in `thunder.core.interpreter`: `res` is from `SequenceIter.__next__`, `elem` is from `_enumerate_lookaside`, and `b` is from `_binary_op`.

Ideal behavior
When deciding the identifiers, we can simply ignore those coming from thunder's interpreter, so that proxies only pick up names from the user's source.
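A minimal sketch of that filtering idea, using hypothetical helpers and a hypothetical module list rather than thunder's actual internals: only adopt a candidate name when the frame proposing it belongs to user code, not to `thunder.core.interpreter` (or `thunder.core.jit_ext`).

```python
import inspect

# Hypothetical sketch, not thunder's actual API: module prefixes whose frames
# should never contribute variable names to proxies.
_INTERNAL_MODULES = ("thunder.core.interpreter", "thunder.core.jit_ext")

def _name_comes_from_user_code(frame) -> bool:
    """True if the frame proposing a name belongs to user code, not thunder internals."""
    module = frame.f_globals.get("__name__", "")
    return not any(
        module == m or module.startswith(m + ".") for m in _INTERNAL_MODULES
    )

def pick_proxy_name(candidate: str, frame, fallback: str) -> str:
    """Keep ``candidate`` only when it was observed in user code; otherwise keep ``fallback``."""
    return candidate if _name_comes_from_user_code(frame) else fallback

# A name proposed from this (user) frame is kept; one proposed from a frame
# inside thunder.core.interpreter would fall back to the default proxy name.
print(pick_proxy_name("x", inspect.currentframe(), fallback="t0"))  # -> x
```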
Alternatively, we could perhaps label the identifier `x` with an index when `x` is bound to multiple proxies, as sketched below.
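A rough sketch of that alternative, again with a hypothetical helper rather than thunder's proxy machinery: when the same source identifier (such as the loop variable `x` above) is proposed for more than one proxy, suffix it with a running index so each proxy still gets a unique, recognizable name.

```python
from collections import defaultdict

# Hypothetical sketch, not thunder's implementation: hand out unique proxy names
# by indexing repeated source identifiers.
class ProxyNamer:
    def __init__(self):
        self._counts = defaultdict(int)

    def name_for(self, identifier: str) -> str:
        index = self._counts[identifier]
        self._counts[identifier] += 1
        return identifier if index == 0 else f"{identifier}_{index}"

namer = ProxyNamer()
print([namer.name_for("x") for _ in range(3)])  # -> ['x', 'x_1', 'x_2']
print(namer.name_for("s"))                      # -> s
```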
cc @t-vi @nikitaved