This issue was closed by @ghost 7 years ago.
Hi, Dang,
Yes, I will do so by next week (all my machines are currently occupied by a new project, which keeps me from double-checking the scripts before uploading :)).
Thanks for your interest!
Regards, Shu
On Wed, Jun 14, 2017 at 8:38 PM, Dang Kang wrote:
Dear Shu Kong:
I am wondering whether it is OK to release the training script for Cityscapes as well? I notice it has been released for NYUv2, but I am not sure the parameters will be the same for Cityscapes. I am working on an autonomous-vehicle project and would like to try that dataset.
Thanks very much for the help,
Yours, Dang Kang.
Hi Shu:
No problem. Thanks very much for your kind help!
Best Regards,
Yours, Dang Kang.
Hi, Dang,
I've uploaded the training script for Cityscapes. Please try it and let me know if you run into any problems with it :)
Regards, Shu
Hi, Shu:
Great to hear that :-). Thanks so much for the help!
Best Regards,
Yours, Dang Kang.
Hi @aimerykong, I was wondering how I can get the ground-truth depth images for Cityscapes? Specifically:
curGTDepthName = sprintf(opts.imdb.depth_path, mode, opts.imdb.images.city{images(img_i)}, % ????? where does this come from? I'm sure Cityscapes doesn't produce it.
gtDepthOrg = imread(curGTDepthName);
Also, I am trying to convert this into TensorFlow for a baseline. I have an issue when trying to print the model and create a graph of the network like this: [image] https://user-images.githubusercontent.com/26149657/28395893-5ddd4f28-6cac-11e7-9523-6fc70f551be2.png
For example, in the traincityscapes script, if you look at netbasemodel:
K>> netbasemodel
netbasemodel =
DagNN with properties:
layers: [1×495 struct]
vars: [1×498 struct]
params: [1×504 struct]
meta: [1×1 struct]
mode: 'normal'
holdOn: 0
accumulateParamDers: 0
conserveMemory: 1
parameterServer: []
device: 'cpu'
K>> netbasemodel.print()
In an assignment A(:) = B, the number of elements in A and B must be the same.
Error in dagnn.DagNN/getVarSizes (line 42)
sizes(out) = layer.block.getOutputSizes(sizes(in)) ;
Error in dagnn.DagNN/print (line 86)
varSizes = obj.getVarSizes(inputSizes) ;
Do you know how I can generate this kind of graph image for your model?
Hi, Athma,
Good to hear you are trying to convert this to tensorflow!
First, as for depth: we use the disparity map as the depth information. You can download the disparity maps from the Cityscapes website (https://www.cityscapes-dataset.com/downloads/).
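For anyone decoding those files: according to the Cityscapes README, the disparity PNGs are 16-bit images where a pixel value p > 0 encodes the disparity as (p - 1) / 256, and p == 0 marks pixels with no valid measurement. A minimal decoding sketch (the array below is synthetic; the function and variable names are my own, not from the repo):

```python
import numpy as np

def decode_disparity(raw):
    """Decode raw 16-bit Cityscapes disparity values (a numpy array).

    Per the dataset README: p > 0 encodes disparity (p - 1) / 256,
    and p == 0 marks pixels with no valid measurement.
    """
    disparity = (raw.astype(np.float32) - 1.0) / 256.0
    disparity[raw == 0] = 0.0  # zero out invalid pixels
    return disparity

# Synthetic example standing in for values read from a disparity PNG.
raw = np.array([[0, 1, 257],
                [513, 2561, 65535]], dtype=np.uint16)
disp = decode_disparity(raw)
```

In practice you would read the real PNG (e.g. with imread) and, if metric depth is needed, combine the decoded disparity with the camera baseline and focal length from the per-image calibration files that Cityscapes also provides.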
As I added new layers to our model, I didn't take care of the visualization part (drawing the DAG) in MatConvNet. But since the layer is simply an element-wise product, I believe that in TensorFlow this can be done with the built-in function tf.multiply, so if the layer is replaced with tf.multiply it should be straightforward to visualize with TensorFlow's tools.
Hope this helps.
Regards, Shu
Hi @aimerykong, thanks a lot for the quick reply. I had just one last question while converting it to TensorFlow.
I guess the only custom layer that needs to be implemented in TensorFlow is dagnn.MaskGating. The rest are basically conv, ASPP, batch normalization, pooling, softmax, and concatenation (all available in TF).
Could you tell me what the MaskGating layer actually does? I am sorry, I am new to MatConvNet and have difficulty finding where the custom layers are written.
For example, recurrentModule3_depthGatingLayer takes the following inputs: recurrentModule3_block2_pyramidAtrous_pool1_relu, recurrentModule3_block2_pyramidAtrous_pool2_relu, recurrentModule3_block2_pyramidAtrous_pool4_relu, recurrentModule3_block2_pyramidAtrous_pool8_relu, recurrentModule3_block2_pyramidAtrous_pool16_relu, recurrentModule3_depthSoftmax,
and outputs recurrentModule3_depthGatingLayer.
So my questions are: 1) What does MaskGating do to these layers? 2) How is it able to gate the depth maps? 3) What are the input dimensions and what are the output dimensions? 4) If it is a special operation, how do you backpropagate through it?
Thanks a lot for all the help!
The following Python snippet demonstrates the multiplicative gating. As it only involves an element-wise product, computing the gradients should be straightforward.
##################################################
import numpy as np

H, W, C = 10, 10, 8  # height, width, channels
scaleNum = 5         # number of scales

scaleFeaMapList = []
for i in range(scaleNum):
    scaleFeaMapList.append(np.random.rand(H, W, C).astype('f'))

mask = np.random.rand(H, W, scaleNum).astype('f')

result = np.zeros((H, W, C), dtype=np.float32)
for i in range(scaleNum):
    curMask = np.reshape(mask[:, :, i], (H, W, 1))
    curMask = np.repeat(curMask, C, axis=2)
    A = scaleFeaMapList[i]
    result += curMask * A  # element-wise product
##################################################
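Regarding the backpropagation question above: since the output is a sum of broadcast element-wise products, the gradients follow directly from the product rule. A hedged sketch (my own illustration, not the actual MatConvNet layer code), with a finite-difference check of the mask gradient:

```python
import numpy as np

H, W, C, S = 4, 4, 3, 2  # height, width, channels, number of scales
rng = np.random.default_rng(0)
feats = [rng.standard_normal((H, W, C)) for _ in range(S)]
mask = rng.standard_normal((H, W, S))

def forward(mask, feats):
    # result = sum_i mask[:, :, i] (broadcast over channels) * feats[i]
    return sum(mask[:, :, i, None] * feats[i] for i in range(len(feats)))

result = forward(mask, feats)

# Backward pass, given an upstream gradient dL/dresult:
grad_out = rng.standard_normal((H, W, C))
# dL/dfeats[i] is the broadcast mask channel times the upstream gradient.
grad_feats = [mask[:, :, i, None] * grad_out for i in range(S)]
# dL/dmask[:, :, i] sums the product over the channel axis.
grad_mask = np.stack([(grad_out * feats[i]).sum(axis=2) for i in range(S)],
                     axis=2)

# Finite-difference check on a single mask entry.
eps = 1e-6
mask2 = mask.copy()
mask2[0, 0, 0] += eps
fd = ((forward(mask2, feats) - result) * grad_out).sum() / eps
```

Because the layer is linear in both the mask and the feature maps, the finite-difference estimate should match grad_mask[0, 0, 0] up to floating-point error.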