hikettei / Caten
[wip] A deep learning compiler built on a polyhedral compiler and lightweight IRs driven by an optimizing pattern matcher.
https://hikettei.github.io/Caten/
15 stars · 1 fork
Issues (newest first)
#148 Transformer > 70 Layers Graph Construction in 1s · hikettei opened 19 hours ago · 0 comments
#147 Implement Bitnet and Sparse Kernel · hikettei opened 1 week ago · 0 comments
#146 reimplement everything · hikettei opened 1 week ago · 1 comment
#145 Plans for rewriting Caten/ajit · hikettei opened 1 week ago · 1 comment
#144 The entire graph compilation and exporter, including Transformer, Tokenizer, Data Loader, weight parser, etc... · hikettei opened 2 weeks ago · 3 comments
#143 Enhancement: Add rearrange · hikettei opened 2 weeks ago · 0 comments
#142 Reconfigurable Polyhedral Compiler for both CPU and GPU · hikettei opened 2 weeks ago · 0 comments
#141 New Backends: CUDA, OpenCL, and (get back) METAL, Vulkan · hikettei opened 2 weeks ago · 0 comments
#140 TODO: Schedule Group Partitioning · hikettei opened 2 weeks ago · 0 comments
#139 Fix: Propagate Scalar anywhere · hikettei closed 2 weeks ago · 2 comments
#138 [Prepreq] Revisit the algorithm in transformer.lisp · hikettei closed 2 weeks ago · 0 comments
#137 Enhancement: XXX-Style Render · hikettei opened 3 weeks ago · 0 comments
#136 Scheduler: Support Symbolic Increment · hikettei closed 5 days ago · 1 comment
#135 Optimize: Scalar Promotion · hikettei closed 5 days ago · 3 comments
#134 [NOT FOR MERGE] Reconfigurable Polyhedral Scheduler · hikettei closed 2 weeks ago · 3 comments
#133 maximize_band_depth and Loop Fusion · hikettei closed 3 weeks ago · 0 comments
#132 Enhancement: defmodule/defclass is always AOT. · hikettei opened 3 weeks ago · 0 comments
#131 Set *no-grad* = T if there's no parameter · hikettei closed 3 weeks ago · 0 comments
#130 a lil improvement and bugfix to the scheduler · hikettei closed 3 weeks ago · 1 comment
#129 Fix: !randint dtype inference · hikettei opened 3 weeks ago · 0 comments
#128 Support INF/-INF/NaN · hikettei closed 3 weeks ago · 0 comments
#127 Feature: MLIR Renderer · hikettei opened 3 weeks ago · 0 comments
#126 Documentation · hikettei closed 3 weeks ago · 0 comments
#125 BugFix: 3D !tril/!triu should be in-place · hikettei closed 3 weeks ago · 0 comments
#124 Optimizer: Logical Operator Simplification by Pattern Matcher · hikettei opened 4 weeks ago · 0 comments
#123 New Linalg Ops: Einsum/Tril/Triu etc... · hikettei closed 4 weeks ago · 0 comments
#122 Fix: ConvND Scheduling · hikettei closed 5 days ago · 1 comment
#121 Enhancement: ShapeTracker produces LazyAssertion (Support Full Symbolic) · hikettei closed 5 days ago · 1 comment
#120 !reshape always use !contiguous · hikettei closed 4 weeks ago · 1 comment
#119 Enhancement: DOT=1 · hikettei closed 4 weeks ago · 0 comments
#118 Fix: ConvND Shape Inference · hikettei closed 4 weeks ago · 0 comments
#117 JIT: Post-MultiExpr Optimization between different iteration spaces (Final) · hikettei closed 1 month ago · 5 comments
#116 Post-MultiExpr Fusion in the equivalent domain · hikettei closed 1 month ago · 0 comments
#115 retrying with maximize_coincidence · hikettei closed 1 month ago · 0 comments
#114 Enhancement: MultiExpr in the same domain · hikettei closed 1 month ago · 0 comments
#113 Optimize: Broadcast+Matmul Fusion · hikettei closed 4 weeks ago · 1 comment
#112 JIT: Complete Implementing Post-MultiExpr and Index-Component Fusion · hikettei closed 1 month ago · 2 comments
#111 Enhancement: EXPR Simplifier · hikettei closed 1 month ago · 1 comment
#110 Various enhancements and refactorings on caten/ajit · hikettei closed 1 month ago · 5 comments
#109 Optimize: Matmul+Transpose Fusion · hikettei closed 2 weeks ago · 4 comments
#108 WIP: PyTorch Interop for writing tests · hikettei closed 1 month ago · 0 comments
#107 Calling PyTorch via py4cl for testing · hikettei closed 1 month ago · 0 comments
#106 Allowing (forward module tensor nil) · hikettei closed 1 month ago · 1 comment
#105 [BugFix] Eliminate Known issues in Simplifier · hikettei closed 1 month ago · 2 comments
#104 Enhancement: Export to dot · hikettei closed 4 weeks ago · 0 comments
#103 Optimize: forward is slow (graph construction) · hikettei closed 1 month ago · 3 comments
#102 Milestones · hikettei opened 1 month ago · 0 comments
#101 [Refactor] FastGraph: 10x faster Pattern Matcher · hikettei closed 1 month ago · 8 comments
#100 make-node: Check the type of attr slots · hikettei closed 1 month ago · 0 comments
#99 Implement Einsum (AOT) · hikettei closed 4 weeks ago · 0 comments