zhangzhichen111 opened 2 years ago
I assume you mean the comparison to the autoregressive context model. Yes: in theory, the encoding complexity is at the same level as the autoregressive context model's. It is fast simply because the arithmetic coder is implemented in C++. The decoding of our method, however, is in theory faster than the autoregressive version, since autoregressive decoding must run the context model serially, symbol by symbol, while ours does not.
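To make the serial-vs-parallel point concrete, here is a minimal sketch (not the authors' code; `run_network` and `decode_symbol` are stand-in callables) contrasting the two decoding patterns. The autoregressive decoder must call the network once per symbol because each distribution depends on what has already been decoded; a hyperprior-style decoder predicts all distributions in one pass:

```python
# Hypothetical sketch of the two decoding loops discussed above.
# run_network / decode_symbol are placeholders, not real project APIs.

def decode_autoregressive(num_symbols, run_network, decode_symbol):
    """Serial: one network call per symbol, since each symbol's
    probability model is conditioned on previously decoded symbols."""
    decoded = []
    for _ in range(num_symbols):
        params = run_network(decoded)        # depends on everything decoded so far
        decoded.append(decode_symbol(params))
    return decoded

def decode_hyperprior(num_symbols, run_network, decode_symbol):
    """Parallel-friendly: a single network call predicts the
    distributions of all symbols at once (no feedback loop)."""
    params = run_network(None)               # side information only
    return [decode_symbol(params[i]) for i in range(num_symbols)]
```

The count of `run_network` calls is the point: N calls for the autoregressive loop versus one for the hyperprior-style decoder, which is why the latter is faster in theory regardless of the arithmetic coder's language.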
Sorry, I didn't make my question clear. I meant: why is your arithmetic coding implementation faster (about 10 times)? Is it because of the multi-process acceleration from the subprocess module you adopted in the entropy encoding part?
I noticed that you previously implemented the entropy coding part in Python. Is part of the speedup due to reimplementing it in C++?
Yes. Executing the Python AE multiple times introduces significant overhead. The C++ implementation shifts the bulk of the measured complexity onto the network execution rather than the AE.
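As an illustration of the overhead being discussed, here is a hedged sketch of the two invocation patterns when the coder runs as an external process. The `echo`/`cat` commands are POSIX stand-ins for a hypothetical C++ coder binary, not the project's actual tool; the point is only the cost structure of one launch per symbol versus one launch per batch:

```python
import subprocess

def encode_per_symbol(symbols):
    """Slow pattern: one process launch per symbol, so the fixed
    launch cost is paid len(symbols) times. 'echo' stands in for
    a hypothetical external coder binary."""
    return [
        subprocess.run(["echo", str(s)], capture_output=True).stdout.strip()
        for s in symbols
    ]

def encode_batched(symbols):
    """Fast pattern: launch the external coder once and stream all
    symbols through stdin, amortizing the launch cost. 'cat' stands
    in for the hypothetical coder binary."""
    payload = "\n".join(map(str, symbols)).encode()
    return subprocess.run(["cat"], input=payload, capture_output=True).stdout
```

With a real coder the same idea applies: a single C++ process that consumes the whole symbol stream keeps the entropy-coding cost negligible next to the network, whereas repeated Python (or subprocess) invocations make the coder dominate the timing.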
OK, thank you very much.
I'm sorry to bother you, but I don't understand why this encoder can be faster. Is it because Python's subprocess module speeds up the encoding process? It's important for me to know this, so thanks for your answer.