Closed. taichuai closed this issue 4 years ago.
Similar to #95. There's a PR for computing it in parallel on the CPU. You can look into that: https://github.com/athena-team/athena/pull/117
Changing the device to GPU probably won't accelerate CMVN computation, because a GPU is good at parallel computation only when all input data is already prepared. In our case, feature extraction has to be performed sequentially for each wave file.
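To illustrate the CPU-parallel idea, here is a rough sketch (not the code from PR #117; `compute_one_file`, `accumulate_cmvn_stats`, and the statistics accumulation are hypothetical) of how feature extraction could be spread over CPU worker processes while accumulating the sums needed for CMVN:

```python
import multiprocessing as mp
import numpy as np


def compute_one_file(wav_path):
    """Hypothetical placeholder: extract a (num_frames, feature_dim)
    feature matrix for a single wave file, e.g. with the audio featurizer."""
    raise NotImplementedError


def accumulate_cmvn_stats(wav_paths, num_workers=8):
    """Extract features for many files in parallel on CPU workers and
    accumulate the running sums needed for CMVN (mean and variance)."""
    total_sum = 0.0
    total_square_sum = 0.0
    total_frames = 0
    with mp.Pool(processes=num_workers) as pool:
        for feats in pool.imap_unordered(compute_one_file, wav_paths):
            total_sum += feats.sum(axis=0)
            total_square_sum += np.square(feats).sum(axis=0)
            total_frames += feats.shape[0]
    mean = total_sum / total_frames
    variance = total_square_sum / total_frames - np.square(mean)
    return mean, variance
```

The per-file extraction still runs on the CPU, but many files are processed concurrently, which is where the speedup comes from.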
OK, I see, that sounds great for this problem! Have you tried using multithreaded computation?
OK, thanks for your reply.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue is closed. You can also re-open it if needed.
```python
feature_dim = self.audio_featurizer.dim * self.audio_featurizer.num_channels
with tf.device("/cpu:0"):
    self.feature_normalizer.compute_cmvn(
        self.entries, self.speakers, self.audio_featurizer, feature_dim
    )
self.feature_normalizer.save_cmvn()
return self
```
in the "compute_cmvn_if_necessary" function of athena/data/datasets/speech_recongnition.py, it usese cpu to compute cmv, can i change it by "with tf.device("/gpu:0")" to accelerate computing?