hikettei / cl-waffe2

[Experimental] Graph and Tensor Abstraction for Deep Learning all in Common Lisp
https://hikettei.github.io/cl-waffe2/
MIT License

[Optimize] 20x faster Unfold by optimizing the layout of tensors #128

Closed: hikettei closed this 9 months ago

hikettei commented 9 months ago

Changes

CL-WAFFE2-REPL> (with-cpu-jit (CPUTensor LispTensor)
          (proceed-bench
           (call (Conv2D 3 6 `(3 3)) (randn `(10 3 25 25))))
          nil)
[Sorted by Instructions]
 Time(s)  |   Instruction ( * - Beyonds the average execution time)
0.00153*  | <WfInst[op=IM2COLNODE-LISPTENSOR]       : <Input>TID168938 <= op(<Input>TID168925{float, (10 3 25 25)} <Input>TID168938{float, (10 3 3 3 23 23)})>
2.0e-5    | <WfInst[op=PERMUTE-NODE-T]              : <Input>TID168938 <= op(<Input>TID168938{float, (10 3 3 3 23 23)} <Input>TID168938{float, (10 23 23 3 3 3)})>
7.4e-5    | <WfInst[op=MOVETENSORNODE-JITCPUTENSOR] : TID168944 <= op(TID168944{float, (10 23 23 3 3 3)} <Input>TID168938{float, (10 23 23 3 3 3)})>
4.0e-6    | <WfInst[op=RESHAPETENSORNODE-T]         : TID169138 <= op(TID168944{float, (10 23 23 3 3 3)} TID168944{float, (5290 27)})>
3.0e-6    | <WfInst[op=RESHAPETENSORNODE-T]         : TID168968 <= op(<Param>TID168968{float, (6 3 3 3)} TID168968{float, (6 27)})>
7.0e-6    | <WfInst[op=PERMUTE-NODE-T]              : TID168968 <= op(TID168968{float, (6 27)} TID168968{float, (27 6)})>
1.0e-6    | <WfInst[op=LAZYTRANSPOSENODE-T]         : TID168968 <= op(TID168968{float, (27 6)})>
3.32e-4*  | <WfInst[op=MATMULNODE-CPUTENSOR]        : TID169032 <= op(TID169138{float, (5290 27)} SV4BW(TID168968{float, (27 6)}) TID169032{float, (5290 6)})>
1.0e-6    | <WfInst[op=FLEXIBLE-RANK-NODE-T]        : <Input>TID169067 <= op(<Param>TID169067{float, (6)})>
3.0e-6    | <WfInst[op=RESHAPETENSORNODE-T]         : TID169067 <= op(<Input>TID169067{float, (6)} TID169067{float, (1 6)})>
4.0e-6    | <WfInst[op=VIEWTENSORNODE-T]            : TID169015 <= op(TID169067{float, (5290 6)} TID169067{float, (1 6)})>
3.0e-5    | <WfInst[op=ADDNODE-JITCPUTENSOR]        : TID169032 <= op(TID169032{float, (5290 6)} TID169015{float, (5290 6)})>
3.0e-6    | <WfInst[op=RESHAPETENSORNODE-T]         : TID169032 <= op(TID169032{float, (5290 6)} TID169032{float, (10 23 23 6)})>
1.5e-5    | <WfInst[op=PERMUTE-NODE-T]              : TID169032 <= op(TID169032{float, (10 23 23 6)} TID169032{float, (10 6 23 23)})>

14 Instructions | 8 Tensors | Overheads due to SV4BW(...) -> 1.35e-4(s) 

 Total Time: 0.0020270003 sec

[Sorted by topK]
 Instruction                            | Total time (s) | Time/Total (n-sample=1)
<WfInst[op=IM2COLNODE-LISPTENSOR]       | 0.00153        | 75.480995%
<WfInst[op=MATMULNODE-CPUTENSOR]        | 3.32e-4        | 16.378883%
<WfInst[op=MOVETENSORNODE-JITCPUTENSOR] | 7.4e-5         | 3.6507149%
<WfInst[op=PERMUTE-NODE-T]              | 4.2e-5         | 2.0720272%
<WfInst[op=ADDNODE-JITCPUTENSOR]        | 3.0e-5         | 1.4800195%
<WfInst[op=RESHAPETENSORNODE-T]         | 1.3e-5         | 0.6413418%
<WfInst[op=VIEWTENSORNODE-T]            | 4.0e-6         | 0.19733594%
<WfInst[op=LAZYTRANSPOSENODE-T]         | 1.0e-6         | 0.049333986%
<WfInst[op=FLEXIBLE-RANK-NODE-T]        | 1.0e-6         | 0.049333986%
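
What the trace shows: IM2COLNODE produces a (10 3 3 3 23 23) buffer that is permuted to (10 23 23 3 3 3), copied into a contiguous layout, and reshaped into a (5290 27) matrix, so the whole convolution collapses into a single GEMM against the (27 6) reshaped kernel; im2col and that matmul together account for more than 90% of the total time. For scale, here is a minimal sketch (not part of this PR; it assumes !matmul is available alongside the randn and proceed-bench used above) that times only the GEMM this layout reduces the Conv2D to:

;; Hypothetical micro-benchmark, not from the PR: time only the
;; (5290 27) x (27 6) matmul that the unfolded layout feeds into.
;; Assumes cl-waffe2's !matmul, randn, and proceed-bench as used above.
(proceed-bench
 (!matmul (randn `(5290 27)) (randn `(27 6))))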

An elegant notation for repeating several models:

(defsequence NCompose (N)
    (RepeatN N
        (asnode #'!sin)
        (asnode #'!cos)))

(cl-waffe2:dprint (call (NCompose 2) (randn `(3 10))))
Op:COSNODE{CPUTENSOR}
 |Op:SINNODE{CPUTENSOR}
   |Op:COSNODE{CPUTENSOR}
     |Op:SINNODE{CPUTENSOR}
       |<TMP:CPUTENSOR>TID398579(3 10)
       |Op:MOVETENSORNODE{CPUTENSOR}
         |<Input:CPUTENSOR>TID398582(3 10)
     |Op:MOVETENSORNODE{CPUTENSOR}
       |<Input:CPUTENSOR>TID398599(3 10)
   |Op:MOVETENSORNODE{CPUTENSOR}
     |<Input:CPUTENSOR>TID398620(3 10)
 |Op:MOVETENSORNODE{CPUTENSOR}
   |<Input:CPUTENSOR>TID398637(3 10)

<Composite: NCOMPOSE{W170257}(
    <<1 Layers Sequence>>

[1/1]          ↓ 
<Composite: REPEATN-NODE{W170251}(
    <<Repeating x 3>> {
<Composite: ENCAPSULATED-NODE{W170253}(
    #<FUNCTION !SIN>
)>
<Composite: ENCAPSULATED-NODE{W170255}(
    #<FUNCTION !COS>
)>
}
)>)>
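
To actually evaluate the composed graph rather than only print it, a small usage sketch (assuming proceed compiles and runs the node eagerly, as in the cl-waffe2 README):

;; Usage sketch, not from the original post: compile and run the
;; composition; the result is a (3 10) tensor holding cos(sin(cos(sin(x)))).
(proceed (call (NCompose 2) (randn `(3 10))))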