Closed · gaopengff closed 1 week ago
Register `glu_jvp` and `glu_jvp_backward` for XPU. Only `glu_jvp` needs a dedicated XPU kernel; `glu_jvp_backward` reuses PyTorch's common (composite) implementation, just as CUDA does.
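For context, `glu_jvp` computes the forward-mode (JVP) derivative of GLU, where GLU splits its input along a dimension and returns `a * sigmoid(b)`. Below is a minimal pure-Python sketch of that math on scalars, expressed in terms of the forward result (as a fused kernel can be) and checked against a finite difference. This is only an illustration of the formula, not the actual XPU kernel; all helper names here are made up for the example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def glu(a, b):
    # GLU on one element pair of the split input: glu([a, b]) = a * sigmoid(b)
    return a * sigmoid(b)

def glu_jvp(res, a, b, da, db):
    # Forward-mode derivative of GLU, reusing the forward result `res`:
    #   d(a * sigmoid(b)) = da * sigmoid(b) + a * sigmoid(b) * (1 - sigmoid(b)) * db
    #                     = da * sigmoid(b) + res * (1 - sigmoid(b)) * db
    sig = sigmoid(b)
    return da * sig + res * (1.0 - sig) * db

a, b = 0.7, -0.3     # primal inputs (one pair from the split tensor)
da, db = 0.11, 0.23  # tangents
res = glu(a, b)
analytic = glu_jvp(res, a, b, da, db)

# Sanity check: central finite difference along the tangent direction
eps = 1e-6
numeric = (glu(a + eps * da, b + eps * db) - glu(a - eps * da, b - eps * db)) / (2 * eps)
assert abs(analytic - numeric) < 1e-6
```

Because the JVP can be written entirely in terms of the saved forward output and elementwise ops, only the fused forward-mode kernel needs a device-specific implementation; the backward of the JVP decomposes into existing primitives.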