descriptinc / cargan

Official repository for the paper "Chunked Autoregressive GAN for Conditional Waveform Synthesis"
https://maxrmorrison.com/sites/cargan
MIT License

inference speed #6

Closed by forwiat 1 year ago

forwiat commented 2 years ago

Hello @maxrmorrison, thanks for your work on cargan. In my experiments, inference is slow even though I set AUTOREGRESSIVE=False. How can I speed up cargan to match the speed reported in the paper?
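One way to quantify "slow" is the real-time factor (RTF): synthesis time divided by the duration of the generated audio. The helper below is a hypothetical illustration, not part of the cargan codebase.

```python
def real_time_factor(generation_seconds, audio_seconds):
    """RTF = time to synthesize / duration of audio produced.
    RTF < 1 means the model runs faster than real time.
    (Hypothetical helper for illustration only.)"""
    return generation_seconds / audio_seconds

# Example: 0.5 s to synthesize 2 s of audio -> RTF of 0.25
rtf = real_time_factor(0.5, 2.0)
```

Comparing RTF rather than raw wall-clock time makes results easier to relate across different hardware and audio lengths.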

maxrmorrison commented 2 years ago

Inference speed depends on the hardware you are using. If you want results comparable to those in the paper, make sure your hardware matches the compute specifications given there. Also, the benchmark in the paper covers only the forward pass of the network; it does not include time spent loading, preprocessing, or saving results. See https://github.com/descriptinc/cargan/blob/master/cargan/evaluate/subjective/__main__.py.
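The distinction above (timing only the forward pass, excluding I/O and preprocessing) can be sketched as follows. This is a generic, hypothetical timing helper, not the repository's benchmark code; `generator` stands in for the model.

```python
import time

def benchmark_forward(generator, inputs, runs=10):
    """Average the time of only the network forward pass,
    excluding loading, preprocessing, and saving.
    (Hypothetical helper for illustration.)"""
    # Warm-up call so one-time setup cost is not measured
    generator(inputs)
    # Note: on a GPU, you would also need to synchronize the
    # device (e.g. torch.cuda.synchronize()) before reading the
    # clock, since CUDA kernels launch asynchronously.
    start = time.perf_counter()
    for _ in range(runs):
        generator(inputs)
    elapsed = time.perf_counter() - start
    return elapsed / runs

# Dummy stand-in for the model; a real benchmark would call
# the vocoder's forward pass here instead.
dummy = lambda x: [v * 2 for v in x]
per_call = benchmark_forward(dummy, list(range(1000)))
```

Measuring this way isolates the number the paper reports from disk and preprocessing overhead, which often dominates naive end-to-end timings.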

JohnHerry commented 2 years ago

The paper says "we can easily improve generation speed at the cost of reduced training speed, increased memory usage, and slightly increased pitch error by changing the chunk size". How should the chunk size be set so that the model becomes as fast as HiFi-GAN V2 while generating comparable audio quality?