I figured out a way to broadcast the (LSTM) network weights and biases:
local ros = require 'ros'

print('==> ros publisher initializations')
ros.init('soft_robot')
local spinner = ros.AsyncSpinner()
spinner:start()

local nh = ros.NodeHandle()
local neural_weights = ros.MsgSpec('std_msgs/String')
local pub = nh:advertise("neural_net", neural_weights, 100, false)
ros.spinOnce()

local msg = ros.Message(neural_weights)
if pub:getNumSubscribers() == 0 then
  print('please subscribe to the /neural_net topic')
else
  print('publishing neunet weights: ', neunet)
  -- pull the weights and biases out of the trained container
  local weights = neunet.modules[1].recurrentModule.modules[7].weight
  local biases  = neunet.modules[1].recurrentModule.modules[7].bias
  print(weights, 'weights')
  msg.data = tostring(weights)  -- std_msgs/String only carries text
  pub:publish(msg)
end
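On the receiving side, a minimal subscriber counterpart might look like the sketch below (the node name neural_net_listener is made up; the calls follow the same torch-ros API as above):

-- Sketch: listen for the stringified weights on /neural_net.
local ros = require 'ros'

ros.init('neural_net_listener')
local nh = ros.NodeHandle()

local sub = nh:subscribe('neural_net', 'std_msgs/String', 100)
sub:registerCallback(function(msg, header)
  -- msg.data holds the stringified weight tensor published above
  print('received weights string:', msg.data)
end)

while ros.ok() do
  ros.spinOnce()
  ros.Duration(0.01):sleep()
end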
I had to convert the weights to a string first, though. If you have a better way to broadcast a network that is being trained in real time, please let me know.
Cheers!
Hi Lekan, publishing the weights as a string message is possible but definitely not the optimal solution. I have created an example of how to use std_msgs/Float64MultiArray messages; please see demo/publish_multi_array.lua. For non-periodic messages like neural network configurations I would recommend a service rather than publisher/subscriber...
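To make the follow-up code below concrete, here is a sketch of what a tensorToMsg helper along the lines of that demo might look like (the helper name and the exact handling of array fields are assumptions, not the demo's actual code; torch-ros is assumed to map float64[] message fields to torch tensors):

-- Illustrative helper: wrap a 2D torch tensor in a std_msgs/Float64MultiArray.
local function tensorToMsg(tensor)
  local msg = ros.Message('std_msgs/Float64MultiArray')
  local rows = ros.Message('std_msgs/MultiArrayDimension')
  rows.label  = 'rows'
  rows.size   = tensor:size(1)
  rows.stride = tensor:nElement()
  local cols = ros.Message('std_msgs/MultiArrayDimension')
  cols.label  = 'cols'
  cols.size   = tensor:size(2)
  cols.stride = tensor:size(2)
  msg.layout.dim = { rows, cols }
  msg.data = tensor:contiguous():view(-1)  -- flatten row-major
  return msg
end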
Thanks for your work. This works for single weight matrices. In recurrent modules, however, we often have one weight matrix for the recurrent layers of the network and another weight matrix for the output layer. This means the whole weight of the network has to be concatenated into a `[ (batchSize x r); (batchSize x o) ]` matrix, where r is the length of the recurrent weight matrix and o is the length of the output weight matrix (ideally, they should be the same). For example, if I have an rnn recurrent layer followed by a linear module (for a single-input, single-output system), the weights of the network may be aggregated as
local recWeights = tensorToMsg(netmods[1].recurrentModule.modules[4].weight) -- recurrent weights
local recBiases  = tensorToMsg(netmods[1].recurrentModule.modules[4].bias)   -- recurrent biases
local outWeights = tensorToMsg(netmods[1].module.modules[4].weight)          -- output linear layer weights
local outBiases  = tensorToMsg(netmods[1].module.modules[4].bias)            -- output linear layer biases

-- gather all the weights and biases in a lua table
local netparams = {recurrentWeights = recWeights, recurrentBiases = recBiases,
                   outWeights = outWeights, outBiases = outBiases}
The multi-dimensional array example works for each individual weight matrix. I am not sure, though, that it is efficient to publish the separate weight layers of a network asynchronously (using the ros::AsyncSpinner as you have), since that would defeat the purpose of using the network at the subscriber site whenever the weight messages arrive out of order. I know ros does not have an explicit way of publishing a lua table. I think it would be better if we wrote a ros function that can handle a lua table of weight and bias tensors, along the lines of the sketch below. What do you think?
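For instance, the whole table could be serialized and shipped in a single message so that all layers arrive together (a sketch; packNetParams is a hypothetical name, and it assumes torch7's torch.serialize/torch.deserialize plus torch-ros mapping uint8[] fields to ByteTensors):

-- Hypothetical helper: pack a lua table of tensors into one message.
local function packNetParams(netparams)
  local str   = torch.serialize(netparams)  -- table -> binary string
  local bytes = torch.ByteTensor(torch.ByteStorage():string(str))
  local msg   = ros.Message('std_msgs/UInt8MultiArray')
  msg.data = bytes
  return msg
end

-- Subscriber side would reverse the steps:
--   local str = msg.data:storage():string()
--   local netparams = torch.deserialize(str)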
The most natural way with ROS would be to define a custom message that contains all the information you want to transfer at once; e.g. it could contain an array of Float64MultiArray or multiple members with different names. You might check the Creating a ROS msg and srv tutorial.
Regarding out-of-order messages, I would recommend using a ros-service instead of normal pub/sub.
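As a sketch of what such a custom message could look like (the file name NetworkWeights.msg and the field names are made up for illustration; the tutorial above covers the build steps):

# NetworkWeights.msg -- bundles all layers in one atomic message
std_msgs/Float64MultiArray recurrent_weights
std_msgs/Float64MultiArray recurrent_biases
std_msgs/Float64MultiArray output_weights
std_msgs/Float64MultiArray output_biases

Because all four arrays travel in a single message, the ordering problem disappears; a service would simply return the same fields in its response.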
Hey,
Nice job on this wrapper.
Just a quickie here: what's your recommended way of advertising a trained neural network in torch, e.g. to move_it? I tried using an std_msgs/String and other std_msgs class representations such as std_msgs/Float64MultiArray.msg, but I keep running into errors. Would be glad to hear your opinion. Thank you!
EDIT 1
It won't take an std_msgs/Byte.msg either.