d2l-ai / d2l-tvm

Dive into Deep Learning Compiler
https://tvm.d2l.ai

How To: Dynamic Batch, Low Bit Quantization Calibration, FP16 Inference, Training via TVM #1

Open · kalcohol opened 5 years ago

kalcohol commented 5 years ago

Most wanted:

- Dynamic Batch
- Low Bit Quantization Calibration
- FP16 Inference
- Training via TVM

Low level: adapting to a new CPU/GPU architecture, such as MIPS or specialized RISC-V (with a custom SIMD implementation).
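To make the FP16 inference item above concrete, here is a minimal sketch of converting a Relay model to mixed FP16 precision and compiling it with TVM. This is not from the book or this thread: the `ToMixedPrecision` pass, the ResNet-18 test workload, and the target strings are assumptions about a reasonably recent TVM build, so adjust the names to your TVM version.

```python
# Minimal sketch (assumes a recent TVM with relay.transform.ToMixedPrecision,
# relay.testing workloads, and graph_executor; not the book's code).
import numpy as np
import tvm
from tvm import relay
from tvm.relay import testing
from tvm.contrib import graph_executor

# A small test network shipped with TVM's testing utilities.
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

# Rewrite eligible ops to float16 (mixed precision).
mod = relay.transform.InferType()(mod)
mod = relay.transform.ToMixedPrecision("float16")(mod)

# Build for the local CPU. For the "new architecture" item, only the target
# string would change, e.g. "llvm -mtriple=riscv64-unknown-linux-gnu" for
# cross compilation (assumed triple, untested here).
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

dev = tvm.cpu()
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
print(module.get_output(0).shape)  # (1, 1000) class scores
```

For the low-bit quantization calibration item, TVM also ships a calibration-based flow under `tvm.relay.quantize`, which may be a starting point; whether and how the book covers it is a separate question.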

kalcohol commented 5 years ago

The more C++ coverage, the fewer injuries caused.

kalcohol commented 5 years ago

Add neural architecture search to the list?

yidawang commented 5 years ago

That depends on the latest advances in the area. We will cover some ongoing research topics at the end.