twitter-archive / torch-ipc

A set of primitives for parallel computation in Torch
Apache License 2.0

Shared object accessible from map function by all threads #8

Closed octavian-ganea closed 8 years ago

octavian-ganea commented 8 years ago

My application requires that all threads that run inside map can access a big object shared in memory, without copying it for each thread or for each function call inside map.

However, even something simpler is not working, like the following code:

ipc = require 'libipc'
y = 5
local res = {ipc.map(3,
    function(mapid)
      print(y)
    end):join()}

which shows that y is nil in all threads. How can this be done without copying y for each thread (assuming that y is a big object)? Thanks.

zakattacktwitter commented 8 years ago

If y is a big tensor, the threads can definitely share it. Tensors are the only primitive that can be shared without copying. This was done by design, since sharing a single Lua environment across thread boundaries is very complex. If you want a threading library that works like Lua closures, I suggest you check out:

https://github.com/torch/threads

It may allow you to share your larger object without copying; you'd have to check the documentation to be sure.
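
For reference, here is a minimal sketch of that alternative, assuming torch/threads' sharedserialize mode works as its README describes (the pool size, tensor shape, and job are illustrative, not from this thread):

require 'torch'
local threads = require 'threads'
threads.serialization('threads.sharedserialize') -- pass tensors by pointer instead of copying
local y = torch.ones(1000, 1000) -- stand-in for the big shared object
local pool = threads.Threads(3, function() require 'torch' end)
for i = 1, 3 do
   pool:addjob(
      function()
         -- y is an upvalue; with sharedserialize its storage is shared, not copied
         return y:sum()
      end,
      function(s) print(s) end)
end
pool:synchronize()
pool:terminate()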

Hope this helps, Zak


octavian-ganea commented 8 years ago

ipc = require 'libipc'
y = torch.ones(5)
local res = {ipc.map(3,
    function(mapid)
      print(y)
    end):join()}

I tried with y as a tensor, but got the same behavior: y is nil inside each function.

octavian-ganea commented 8 years ago

Btw, it would be cool to allow the threads to share just one specific table (given as an additional parameter) where shared variables can be put. This should be enough for most applications.

zakattacktwitter commented 8 years ago

Hi,

You need to pass the tensor to the map function. Nothing is done automatically; no upvalues of the mapped function are traversed.

ipc = require 'libipc'
y = torch.ones(5)
local res = {ipc.map(3,
    function(y, mapid) -- first argument is y
      print(y)
    end, y):join()} -- pass y to the mapped functions


zakattacktwitter commented 8 years ago

Yeah, sharing a Lua table sounds easy, but it's actually quite hard: the table is embedded inside a particular Lua environment, and it's not just a contiguous block of memory that can be easily identified.
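
The thread doesn't show this, but a possible workaround for a table of big tensors, assuming ipc.map forwards several trailing arguments before mapid the same way it forwards one, is to unpack the table and pass each tensor separately (the table and field names here are hypothetical):

ipc = require 'libipc'
local shared = { weight = torch.randn(10), bias = torch.zeros(10) }
local res = {ipc.map(3,
    function(weight, bias, mapid) -- extra arguments arrive before mapid
      return weight:dot(bias) -- both tensors reference the parent's storage, no copies
    end, shared.weight, shared.bias):join()}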


octavian-ganea commented 8 years ago

I see, so passing a tensor y as a parameter to the map function does not perform any copy of y, but rather passes a pointer to each thread. Thanks for the clarification.
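
A quick sanity check of that, sketched under the assumption that an in-place write from a worker thread is acceptable for a one-off test: mutate the tensor in the mapped function and inspect it in the parent afterwards.

ipc = require 'libipc'
local y = torch.zeros(5)
ipc.map(1,
    function(y, mapid)
      y:fill(mapid) -- write through the shared storage
    end, y):join()
print(y) -- non-zero entries confirm y was passed by pointer, not copied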

zakattacktwitter commented 8 years ago

I am going to close this, as we won't ever be adding a way to share anything beyond tensors across thread boundaries. Thanks.