PacktPublishing / Learn-CUDA-Programming

Learn CUDA Programming, published by Packt
MIT License

Does the "softmax_loss_kernel" function only run in the first thread? #7

Closed · hitblackjack closed this issue 4 years ago

hitblackjack commented 4 years ago

At line 42 of chapter10/10_deep_learning/01_ann/src/loss.cu:

```cuda
__global__ void softmax_loss_kernel(......)
{
    int batch_idx = blockDim.x * blockIdx.x + threadIdx.x;
    ....
    if (batch_idx > 0)
        return;

    for (int c = 0; c < num_outputs; c++)
        loss += target[batch_idx * num_outputs + c] * logf(predict[batch_idx * num_outputs + c]);
    workspace[batch_idx] = -loss;
    ....
}
```

Since batch_idx must be zero here, why use batch_idx to index target, predict, and workspace?

haanjack commented 4 years ago

Thanks for pointing out the bug. You are correct. It seems I added `if (batch_idx > 0) return;` for debugging and didn't remove it. I'll remove those lines to support multi-batch loss calculation.

ManikandanKurup-Packt commented 4 years ago

Hi @haanjack, can we close this issue if it has been resolved? Thanks.