Lightning-AI / lightning-thunder
Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors at once, across one or thousands of GPUs.
Apache License 2.0 · 1.15k stars · 77 forks
Issues (sorted by newest)
#1113 · Adding shape prim · jjsjann123 · closed 2 weeks ago · 12 comments
#1112 · Remove redundant parametrizations for swiglu benchmark · IvanYashchuk · closed 3 weeks ago · 0 comments
#1111 · quantization: process tensors on meta device directly, maybe implement CPU quantization (if it is easy) · t-vi · opened 3 weeks ago · 4 comments
#1110 · patching inplace update nvfuser translation · jjsjann123 · closed 3 weeks ago · 3 comments
#1109 · nvfuser translation with inplace update is not correct · jjsjann123 · closed 3 weeks ago · 0 comments
#1108 · [benchmark_litgpt] always `destroy_process_group` with `try-except-finally` · crcrpar · closed 3 weeks ago · 0 comments
#1107 · try except as some `untyped_storage` is invalid for `data_ptr`, e.g. floa… · crcrpar · opened 3 weeks ago · 0 comments
#1106 · Define `torchsymbol` for `torch.ops.higher_order.autograd_function_apply` · crcrpar · closed 2 days ago · 8 comments
#1105 · Add a parametrized benchmark for swiglu · IvanYashchuk · closed 3 weeks ago · 2 comments
#1104 · pickling failure for auto registered ops fix · k223kim · closed 2 weeks ago · 2 comments
#1103 · Add liger kernels as an executor · mruberry · opened 3 weeks ago · 0 comments
#1102 · Use the proxy args in the `FSDPParamUnpaddingVisitor` · kiya00 · closed 3 weeks ago · 3 comments
#1101 · support list[int] as shape for torch.randn · kshitij12345 · closed 3 weeks ago · 0 comments
#1100 · switch ci jobs to 2.4 · t-vi · closed 3 weeks ago · 0 comments
#1099 · Transform to skip distributed collective ops when `group.size() == 1` · crcrpar · opened 3 weeks ago · 0 comments
#1098 · VJP rule for `PrimIDs.CLONE`, with some `torch.memory_format` handling logic · crcrpar · closed 2 weeks ago · 0 comments
#1097 · Register `PrimIDs.BITWISE_*` to `nondifferentiable_vjp_symbols` · crcrpar · closed 3 weeks ago · 0 comments
#1096 · nvfuserex to leverage static shape in trace · jjsjann123 · closed 2 weeks ago · 1 comment
#1095 · Add allow_cpu_scalar_tensors for shape,device check and broadcast functions · kiya00 · closed 2 weeks ago · 13 comments
#1094 · thunder + dynamo: use recompile instead of real_recompile · kshitij12345 · closed 2 weeks ago · 3 comments
#1093 · Expected dtype thunder.dtypes.bfloat16 but found thunder.dtypes.float32 for Dynamo+Thunder and Mixtral-8x7B-v0.1 · mpatel31415 · closed 2 weeks ago · 8 comments
#1092 · unhashable type: slice for Thunder and Nous-Hermes-13b · mpatel31415 · closed 3 weeks ago · 0 comments
#1091 · AttributeError for Dynamo+Thunder+FP8 and Phi-3-mini-4k-instruct: 'GraphModule' object has no attribute 'real_recompile' · mpatel31415 · closed 2 weeks ago · 1 comment
#1090 · Remove thunder+nvfuser+torch.compile executor from default list in targets.py · IvanYashchuk · closed 3 weeks ago · 0 comments
#1089 · [CI Fix] Update check-package to v0.11.7 · shino16 · closed 3 weeks ago · 0 comments
#1088 · bump pytorch/triton to 2.4/3.0.0 · crcrpar · closed 3 weeks ago · 1 comment
#1087 · bring back `IN_PLACE` tag to copy · crcrpar · closed 3 weeks ago · 0 comments
#1086 · After adding recomputing_symbols, sort the new consumer according to the data flow · kiya00 · closed 3 weeks ago · 1 comment
#1085 · `masked_fill` casts `value` to `a.dtype` · crcrpar · closed 3 weeks ago · 0 comments
#1084 · Allow different device and dtype for `prims.copy_` args · crcrpar · opened 4 weeks ago · 0 comments
#1083 · `thunder.jit`ted `Tensor.masked_fill` and `Tensor.masked_fill_` return `torch.float32` tensor even when input is `torch.int64` · crcrpar · closed 3 weeks ago · 0 comments
#1082 · Add option to `ThunderCompiler` to save `gm.code` or `gm.print_readable` to file · crcrpar · opened 4 weeks ago · 1 comment
#1081 · Bump pypa/gh-action-pypi-publish from 1.9.0 to 1.10.0 · dependabot[bot] · closed 3 weeks ago · 0 comments
#1080 · Bump nbsphinx from 0.9.4 to 0.9.5 · dependabot[bot] · closed 3 weeks ago · 0 comments
#1079 · Bump myst-parser from 1.0.0 to 4.0.0 · dependabot[bot] · closed 4 weeks ago · 2 comments
#1078 · Bump litgpt from 0.3.1 to 0.4.11 · dependabot[bot] · closed 3 weeks ago · 1 comment
#1077 · Bump ipython[all] from 8.25.0 to 8.27.0 · dependabot[bot] · closed 3 weeks ago · 0 comments
#1076 · Bump hypothesis from 6.104.2 to 6.111.2 · dependabot[bot] · closed 3 weeks ago · 0 comments
#1075 · thunder as torch.compile backend: add `splitter` to pass unsupported regions to inductor · kshitij12345 · closed 2 weeks ago · 9 comments
#1074 · Memory leak when using nn.Module hooks and thunder.jit · kshitij12345 · closed 6 days ago · 2 comments
#1073 · use `torch.OP##_` if available, not `torch.Tensor.OP##_` · crcrpar · closed 3 weeks ago · 0 comments
#1072 · `torch.exp_` is not supported, but `torch.Tensor.exp_` is · crcrpar · closed 3 weeks ago · 0 comments
#1071 · trace: add cursor for bsym insertion · t-vi · opened 1 month ago · 1 comment
#1070 · Add `TensorProxy.grad` attribute and proxify `Tensor.grad` · shino16 · closed 3 days ago · 1 comment
#1069 · revert using transform to execution, add ad-hoc fix for type_as · t-vi · closed 1 month ago · 1 comment
#1068 · Implement VJP for Clone · tfogal · closed 2 weeks ago · 2 comments
#1067 · Proxy order KeyError fix · riccardofelluga · opened 1 month ago · 7 comments
#1066 · Add graph-by-graph benchmarking of dynamo.ThunderCompiler · kiya00 · opened 1 month ago · 1 comment
#1065 · torch_compile_cat_ex doesn't work when an input is a registered buffer on the module · IvanYashchuk · closed 1 month ago · 5 comments
#1064 · Add a bunch of benchmarks from NeMo's NeVA · tfogal · closed 6 days ago · 10 comments