bzcheeseman / Hobbit

Learning about and using LLVM/Polly for DL
Apache License 2.0

[Suggestion] Do you consider accepting tensorflow model and then emit LLVM IR? #1

Open FrozenGene opened 6 years ago

FrozenGene commented 6 years ago

I see your tests just call your C++ API. Have you considered having the project accept a trained TensorFlow model, emit LLVM IR from it, and then JIT-compile and execute it?
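For concreteness, here is a minimal sketch of the "emit LLVM IR, JIT it, execute it" part using LLVM's ORC LLJIT API (assuming a recent LLVM, 15+). The model-import step is omitted, `model.ll` stands in for the IR that would be generated from the imported graph, and the entry-point name `forward` is just a placeholder:

```cpp
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/TargetSelect.h"

int main() {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();

  auto Ctx = std::make_unique<llvm::LLVMContext>();
  llvm::SMDiagnostic Err;
  // The IR emitted from the imported graph; the import step is omitted here.
  std::unique_ptr<llvm::Module> M = llvm::parseIRFile("model.ll", Err, *Ctx);
  if (!M)
    return 1;

  // Build the JIT and hand it the module.
  auto JIT = llvm::cantFail(llvm::orc::LLJITBuilder().create());
  llvm::cantFail(JIT->addIRModule(
      llvm::orc::ThreadSafeModule(std::move(M), std::move(Ctx))));

  // Look up the (assumed) entry point and call it like a normal function.
  auto Addr = llvm::cantFail(JIT->lookup("forward"));
  auto *Forward = Addr.toPtr<void (*)(const float *, float *)>();
  (void)Forward; // Forward(input, output) with real buffers in practice.
  return 0;
}
```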

bzcheeseman commented 6 years ago

You mean that Polly outputs vectorized IR and does the same work as a schedule in TVM. But how can Polly do as well as or better than TVM? TVM can tune the vectorized IR for a specific DNN model.

I mean I hope we can get close to TVM, but it may not be possible to beat it where it's Polly-optimized IR versus hand-optimized TVM schedules. The idea is that there are a lot of optimizations to be won from low-level transformations (tiling, register scheduling, vectorization) that Polly does already. Maybe I just don't understand your question, though?
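As an illustration, Polly can tile and vectorize a plain loop nest like the one below without any hand-written schedule; the flags in the comment are Polly's standard opt-in switches for a Polly-enabled clang:

```cpp
// A plain loop nest of the kind Polly can optimize automatically. With a
// Polly-enabled clang, e.g.
//   clang++ -O3 -mllvm -polly -mllvm -polly-vectorizer=stripmine gemm.cpp
// Polly detects this as a SCoP (static control part) and applies tiling
// and vectorization with no hand-written schedule.
void gemm(int N, const float *A, const float *B, float *C) {
  for (int i = 0; i < N; ++i)
    for (int j = 0; j < N; ++j) {
      float acc = 0.0f;
      for (int k = 0; k < N; ++k)
        acc += A[i * N + k] * B[k * N + j];
      C[i * N + j] = acc;
    }
}
```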

FrozenGene commented 6 years ago

I understand your meaning. You mean that if we introduce Polly, we can benefit from Polly's tiling, vectorization, and so on. Take ResNet-18 as an example: we probably can't beat TVM's hand-written optimization. But for ResNet-50, where TVM doesn't do hand-written optimization, we should get better performance. If so, what would happen if TVM introduced Polly into their project? Since TVM can emit LLVM IR, it could also use Polly.

bzcheeseman commented 6 years ago

If so, what would happen if TVM introduced Polly into their project? Since TVM can emit LLVM IR, it could also use Polly.

I would applaud them and gladly help out :) We're all in this together!

That being said, I don't think it's on their roadmap, because it looks like they're following the Halide approach of iterating quickly on manual schedules as the way to find the best possible solution. While there's no reason this project couldn't go that way as well, I think there's also something to be learned from figuring out how to do a lot of these optimizations automatically with optimizing passes. Eventually I would like to beat (or at least approach) TVM/Halide with well-designed optimizing passes as LLVM plugins, but for now just going from a DNN to an LLVM IR representation is a great first step, because we can always use the pass framework to do modifications later.
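To make that first step concrete, here's a rough, hedged sketch of what emitting a single op (an elementwise ReLU) as LLVM IR with IRBuilder could look like; the function name and signature are illustrative rather than this project's actual codegen, and it assumes a recent, opaque-pointer LLVM:

```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"

// Emits roughly: void relu(float* in, float* out)
//                { for (i = 0; i < N; ++i) out[i] = max(in[i], 0.0f); }
llvm::Function *emitRelu(llvm::Module &M, uint64_t N) {
  llvm::LLVMContext &Ctx = M.getContext();
  llvm::IRBuilder<> B(Ctx);

  auto *FloatTy = llvm::Type::getFloatTy(Ctx);
  auto *PtrTy = llvm::PointerType::getUnqual(Ctx);
  auto *FnTy = llvm::FunctionType::get(B.getVoidTy(), {PtrTy, PtrTy}, false);
  auto *Fn =
      llvm::Function::Create(FnTy, llvm::Function::ExternalLinkage, "relu", M);

  auto *Entry = llvm::BasicBlock::Create(Ctx, "entry", Fn);
  auto *Loop = llvm::BasicBlock::Create(Ctx, "loop", Fn);
  auto *Exit = llvm::BasicBlock::Create(Ctx, "exit", Fn);

  B.SetInsertPoint(Entry);
  B.CreateBr(Loop);

  // Loop body: load in[i], select max(x, 0), store to out[i].
  B.SetInsertPoint(Loop);
  auto *I = B.CreatePHI(B.getInt64Ty(), 2, "i");
  I->addIncoming(B.getInt64(0), Entry);

  auto *X = B.CreateLoad(FloatTy, B.CreateGEP(FloatTy, Fn->getArg(0), I));
  auto *Zero = llvm::ConstantFP::get(FloatTy, 0.0);
  auto *Y = B.CreateSelect(B.CreateFCmpOGT(X, Zero), X, Zero);
  B.CreateStore(Y, B.CreateGEP(FloatTy, Fn->getArg(1), I));

  // Increment and loop back until i == N.
  auto *Next = B.CreateAdd(I, B.getInt64(1));
  I->addIncoming(Next, Loop);
  B.CreateCondBr(B.CreateICmpULT(Next, B.getInt64(N)), Loop, Exit);

  B.SetInsertPoint(Exit);
  B.CreateRetVoid();
  return Fn;
}
```

A module of functions like this is exactly the kind of thing the pass framework (and Polly) can then tile, vectorize, and fuse after the fact.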

FrozenGene commented 6 years ago

In fact, I also talked with one of the TVM authors about integrating Polly. He told me that someone is trying automatic polyhedral scheduling, similar to Tensor Comprehensions. Whether it uses Polly, he didn't say.