Closed: marianrh closed this pull request 4 years ago
Hello, thank you for the contribution!
Generally I try to avoid using global variables as much as possible, but I guess we don't have a lot of options on how to add such a limit given the current package design.
I believe the better way to set the limit would be using a function. In this case the variable can be unexported and accessed using atomics, e.g.:
var maxProcs int64
// ...
func SetMaxProcs(value int) {
atomic.StoreInt64(&maxProcs, int64(value))
}
// ...
limit := int(atomic.LoadInt64(&maxProcs))
if procs > limit && limit > 0 {
procs = limit
}
What do you think?
Regarding the documentation string, I think it would be better to change the phrase "parallel processing subroutines" to "concurrent processing goroutines".
Of course, you're right that this has to be set and read atomically. I've made the changes as you suggested.
I added a comment to SetMaxProcs. I guess we could remove the comment for maxProcs now that it's not exported anymore, what do you think?
Sure, we can remove it now.
Done.
Thank you!
Hi,
thanks a lot for your work and this great library!
I have a suggestion for limiting the number of parallel processing goroutines.
The reason for the proposal is my use case, where I'm using the Resize function in an image processing server. The server is accessed simultaneously by multiple clients. Besides resizing images, the server also has to perform other tasks. Therefore, it's problematic that a single Resize runs runtime.GOMAXPROCS goroutines via the parallel function.

This pull request proposes to add an exported global variable that can be used to limit the number of goroutines.
Best regards, Marian
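To illustrate the behavior being limited, here is a sketch of a parallel-style helper that splits work across goroutines. The name parallel matches the function mentioned above, but the signature and implementation are assumptions for illustration, not the library's actual API:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallel is a hypothetical helper: it splits count units of work
// across up to procs goroutines, each handling a half-open range.
func parallel(count, procs int, worker func(start, end int)) {
	if procs > count {
		procs = count
	}
	var wg sync.WaitGroup
	chunk := (count + procs - 1) / procs // ceil(count / procs)
	for i := 0; i < count; i += chunk {
		end := i + chunk
		if end > count {
			end = count
		}
		wg.Add(1)
		go func(s, e int) {
			defer wg.Done()
			worker(s, e) // each goroutine works on a disjoint range
		}(i, end)
	}
	wg.Wait()
}

func main() {
	squares := make([]int, 8)
	parallel(len(squares), runtime.GOMAXPROCS(0), func(s, e int) {
		for i := s; i < e; i++ {
			squares[i] = i * i
		}
	})
	fmt.Println(squares) // [0 1 4 9 16 25 36 49]
}
```

With an unlimited procs of runtime.GOMAXPROCS, a single call like this can saturate every core, which is exactly why a configurable cap is useful in a server that has other work to do.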