Closed: cuhkcarl closed this issue 5 years ago
Where do you parallelize? Do you call this function in parallel?
Yes, I have 25 threads calling this function.
Further, I read the c_api code. To my understanding, the lock in Booster::Predict is only there to protect output parameters like `double* out_result`, because I could not find any member variable of the boosting or predictor objects that would be modified during multithreaded prediction. Therefore I think it is naturally thread-safe, even without the lock.
If I am wrong, please tell me which lines of code would be at risk of being thread-unsafe.
If you can ensure the argument `preds` is not written to at the same index by different threads, it is thread-safe.
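For illustration, here is a minimal sketch (not from this thread) of that condition: each thread predicts its own block of rows into a disjoint slice of a shared output buffer, so no index of `preds` is written by two threads. `PredictRows`, `ParallelPredict`, and the 25-thread default are illustrative names; `LGBM_BoosterPredictForMat` and the `C_API_*` constants come from the LightGBM C API, whose exact signature (e.g. the `start_iteration` argument) depends on the LightGBM version, and the sketch assumes a single-output model (one prediction per row).

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

#include <LightGBM/c_api.h>

// Predict rows [row_begin, row_begin + row_count) into preds_slice.
// Each call writes only to its own slice, so concurrent calls never
// touch the same index of the shared output buffer.
void PredictRows(BoosterHandle booster, const float* matrix, int32_t ncol,
                 int32_t row_begin, int32_t row_count, double* preds_slice) {
  int64_t out_len = 0;
  // Signature shown is for LightGBM 3.x; older releases do not have the
  // start_iteration argument.
  LGBM_BoosterPredictForMat(booster,
                            matrix + static_cast<int64_t>(row_begin) * ncol,
                            C_API_DTYPE_FLOAT32, row_count, ncol,
                            /*is_row_major=*/1, C_API_PREDICT_NORMAL,
                            /*start_iteration=*/0, /*num_iteration=*/-1,
                            "", &out_len, preds_slice);
}

// Split the rows into disjoint chunks, one per thread.
void ParallelPredict(BoosterHandle booster, const float* matrix, int32_t nrow,
                     int32_t ncol, std::vector<double>* preds,
                     int num_threads = 25) {
  preds->assign(nrow, 0.0);  // one output per row (num_class == 1)
  const int32_t chunk = (nrow + num_threads - 1) / num_threads;
  std::vector<std::thread> workers;
  for (int t = 0; t < num_threads; ++t) {
    const int32_t begin = t * chunk;
    const int32_t count = std::min(chunk, nrow - begin);
    if (count <= 0) break;
    workers.emplace_back(PredictRows, booster, matrix, ncol, begin, count,
                         preds->data() + begin);
  }
  for (auto& w : workers) w.join();
}
```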
that's nice, thank you very much
There is a lock in predict now, because the boosting object is modified here: https://github.com/microsoft/LightGBM/blob/f01b2aca13e787e2945eba7c2bd85e31c65923f8/src/boosting/gbdt.h#L334-L345
However, I think `num_iteration_for_pred_` could be moved into the predictor; then we could remove this lock.
```cpp
bool LGBMPredictor::Predict(const float* matrix, int nrow, int ncol, std::vector<double>& preds)
{
    // Comm::LogErr("[LGBMPredictor][debug] nrow:%d, ncol:%d", nrow, ncol);
    // ... run prediction and fill preds ...
    return true;
}
```
I am going to write predict code like the above, without any thread lock. This function is supposed to be called from multiple threads; is that thread-safe?
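If each thread passes its own `preds` vector (or writes to a disjoint slice, as in the sketch above), the wrapper itself needs no lock. Here is a minimal sketch of such a body, assuming `LGBMPredictor` keeps a `BoosterHandle` member named `booster_` (a hypothetical name), a single-output model, and the LightGBM 3.x signature of `LGBM_BoosterPredictForMat`:

```cpp
bool LGBMPredictor::Predict(const float* matrix, int nrow, int ncol,
                            std::vector<double>& preds) {
  preds.resize(nrow);  // one prediction per row, assuming num_class == 1
  int64_t out_len = 0;
  const int err = LGBM_BoosterPredictForMat(
      booster_, matrix, C_API_DTYPE_FLOAT32, nrow, ncol,
      /*is_row_major=*/1, C_API_PREDICT_NORMAL,
      /*start_iteration=*/0, /*num_iteration=*/-1, "", &out_len, preds.data());
  return err == 0 && out_len == static_cast<int64_t>(preds.size());
}
```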