Currently, if a concurrent task is not set up in main(), the cancellation of the child concurrent task has to be handled by the programmer. It's fine to use a signal flag communicated over a channel, like in the demo below. Is this the suggested way in V? And I wonder if there are other possible implementations. By the way, will a context module (like Golang's) be provided in the future? And is it possible to push and pop user-defined lengths of data into or from a channel?
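(The original demo code did not survive in this copy of the issue. Judging from the replies below, which refer to a `mut` variable `f` of type `FifoData` and a `chan bool` used as a stop flag, it presumably looked roughly like the following reconstruction; the `id` parameter and the timings are my guesses, not the code actually posted.)

```v
import time

struct FifoData {
mut:
	data int
}

// hypothetical reconstruction of the demo - NOT the original code
fn coroutine(mut f FifoData, ch chan bool, id int) {
	for {
		if ch.closed { // main() closed the channel: stop
			return
		}
		f.data -= id // unsynchronized access to `f` - this is what the reply below objects to
		time.sleep(10 * time.millisecond)
	}
}

fn main() {
	mut f := FifoData{0}
	ch := chan bool{}
	go coroutine(mut f, ch, 1)
	f.data += 1 // also unsynchronized
	time.sleep(100 * time.millisecond)
	ch.close() // signal the coroutine to end itself
	time.sleep(20 * time.millisecond)
	println(f)
}
```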
I don't think everything is already decided. So feel free to make proposals (preferably based on Weave API) and don't forget to read https://github.com/vlang/v/issues/1868 .
Before answering your questions: when looking at your code I see one thing you should not do, neither in C, nor in Go, nor in V: modify/read a variable (`mut f`) concurrently in different threads without any synchronization mechanism. One proper way to do something like this is making `f` a `shared` variable and using `lock`:
```v
fn coroutine(shared f FifoData, ch chan bool) {
	...
	lock f {
		f.data -= id
	}
	...
}
```

```v
shared f := FifoData{0}
go coroutine(shared f, ch)
...
lock f {
	f.data += id
}
...
rlock f {
	println(f)
}
```
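For reference, a complete, runnable variant of these fragments might look like the following sketch; the `id` parameter of `coroutine` and the `ch <- true` handshake are my own additions to fill in the elided parts:

```v
struct FifoData {
mut:
	data int
}

fn coroutine(shared f FifoData, ch chan bool, id int) {
	lock f {
		f.data -= id
	}
	ch <- true // tell the spawning thread we are done
}

fn main() {
	shared f := FifoData{0}
	ch := chan bool{}
	id := 7
	go coroutine(shared f, ch, id)
	lock f {
		f.data += id
	}
	_ := <-ch // wait for the coroutine to finish
	rlock f {
		println(f) // both threads are done, so data is back to 0
	}
}
```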
Passing `mut` variables to concurrent threads will probably be considered `unsafe` in the near future.
> Is this the suggested way in V? And I wonder if there are other possible implementations.
Well, the "suggested way" to do something depends on what you want to accomplish. To tell a concurrent thread to end itself you need some atomic flag like sync.Semaphore
. The .closed
flag of a channel is atomic, too, so it's not wrong to abuse that. For a test case this seems an overshoot, but when dealing with real applications you will need some communication between threads anyway. Usually you will not check the flag, but do something like this:
```v
// receiving thread:
new_working_packet := <-ch or {
	// could not receive anything because `ch` was closed and nothing is left in the queue
	return maybe_some_result
}

// sending thread:
ch <- new_working_packet or {
	// could not push into the channel because it was closed
	return maybe_some_result
}
```
```v
fn th() ?f64 { ... }

// wait for a thread to finish
handle := go th()
// do something else in between
result := handle.wait() or {
	// some error handling
	3.141592653589793 // default result
}
```
> By the way, will a context module (like Golang's) be provided in the future?
I'm not aware of any plans, but it might be a nice idea to add something like this to the library. I'll think about it.
> And is it possible to push and pop user-defined lengths of data into or from a channel?
I'm not sure if I understand correctly what you mean by that. You can push/pop arrays like `[]f64` that have non-fixed size through channels. And you can define the queue length when initializing a channel: `ch := chan f64{cap: 100}`.
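As a small illustration of both points (this snippet is mine, not from the original comment), a buffered channel can carry dynamically sized arrays:

```v
fn main() {
	// a buffered channel (queue length 100) whose elements are dynamically sized arrays
	ch := chan []f64{cap: 100}
	ch <- [1.0, 2.0, 3.0] // push a 3-element array
	ch <- [4.0, 5.0] // push a 2-element array
	a := <-ch // pop [1.0, 2.0, 3.0]
	b := <-ch // pop [4.0, 5.0]
	println(a.len + b.len) // 5
}
```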
Maybe we should add a chapter "concurrency dos and don'ts" or "concurrency best practices" to the docs.
> By the way, will a context module (like Golang's) be provided in the future?
>
> I'm not aware of any plans, but it might be a nice idea to add something like this to the library. I'll think about it.
Having thought about it, I come to the conclusion that everything Go's `context` does can be handled in the ways described above. And if you look at the examples in Go's context module, you'll see that they need `select` to handle the channel and the context. This is not necessary when using V's `or` branches as above, so I personally think V's way is more elegant.

Go's `context` module supports a deadline, though. But implementing this is quite trivial:
```v
ch := chan f64{}
go fn (ch chan f64) {
	time.sleep(800 * time.millisecond)
	ch.close()
}(ch)
```
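To show the whole deadline pattern end to end, here is a self-contained sketch (the program structure and the fallback value are my additions): a receiver blocks on the channel and falls into its `or` branch once the channel is closed after 800 ms.

```v
import time

fn main() {
	ch := chan f64{}
	// "deadline": close the channel after 800 ms
	go fn (ch chan f64) {
		time.sleep(800 * time.millisecond)
		ch.close()
	}(ch)
	result := <-ch or {
		// the channel was closed by the "deadline" thread before any value arrived
		-1.0
	}
	println(result) // -1.0 after roughly 800 ms
}
```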
So, I think V doesn't need a `context` module. But I might be wrong - if you know any good (real-world) use cases where V's capabilities are not suitable, please feel free to describe them.
> ```v
> ch := chan f64{}
> go fn (ch chan f64) {
>     time.sleep(800 * time.millisecond)
>     ch.close()
> }(ch)
> ```

Seems small enough to put into the docs as an example (I find it generic enough to show how to do such stuff in V).
@UweKrueger A synchronization mechanism is undoubtedly needed if a shared instance has more than one producer. However, if the parallelism is achieved not by memory sharing but by communication, copies of the data may exist in different units and at most one producer exists at any time, so the lock can be avoided or at least relaxed to some degree. That's a special case that is particularly relevant in distributed computation.

As for communication between concurrent tasks, using a channel as a signal or using the select mechanism certainly works well for general usage. A context module may not be necessary, but it could strengthen such practice.

Final question: if user-defined lengths of data can't be passed through a channel, that means the user always needs to wrap the data buffer in a V struct, which, in my understanding, only provides reference sharing. Sometimes, direct data sharing may also be needed.
@amoyyy it has been a while since you created this issue. There already is a context module similar to Golang's context. You can check the docs here: https://modules.vlang.io/context.html 👌🏻 I'm the main maintainer of that module, so let me know if you find any issues while using it 👌🏻