AlexPalauDomenech opened this issue 7 years ago
Did you specify the Custom Class field when you created your own dataset?
No, I didn't. I thought that if I used "car" as the label, it was not necessary. What do you recommend I specify in that field?
https://github.com/NVIDIA/DIGITS/blob/master/digits/extensions/data/objectDetection/README.md See 'Custom class mappings' part
I thought that if I used "car" as the label, it was not necessary.
That's correct.
@ls27706 do you think an upright bounding box is a good localizer for your objects? Don't you think semantic segmentation, as in this medical imaging example, would be more appropriate? Object detection and image segmentation actually share a lot of common traits: the feature extractor is very similar (an FCN-type network) but the difference is in the network output. It takes some effort to tune the bounding-box regressor in DetectNet. In semantic image segmentation you do not have to do that, though you do need to post-process the network output to locate the area of interest.
@gheinrich Thanks for the answers, I really appreciate that you guys are investing your time to help me.
The reason for using object detection instead of image segmentation was that we already had the bounding boxes located manually by experts. We cannot spend all the resources needed to manually label the regions to segment the images (we have more than 120k images and we are only doing an internship).
In addition, the output we want is roughly a point at the center of the blood vessel... The aim is not to fit the blood vessel perfectly, but to mimic the bounding boxes placed by the doctor. Anyway, we will take a look at the image segmentation approach and how to make it run.
Do you still recommend image segmentation?
What do you think may be causing our model to not always place a bounding box? It would be much easier and faster for us if we could keep working on object detection instead of switching to image segmentation.
@ls27706 I am working on human detection in aerial images, and I ran into the same problem you described. When training on my own dataset, the precision, recall and mAP were always zero at every epoch, and at test time no bounding boxes are produced for any image.
So far I have no idea how to solve it. I suspect this kind of failure is related to the network or dataset settings.
Here are some details from when I created the dataset. An example label line:

```
Human 0.00 0 0.00 832.00 464.00 860.00 541.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
```

And some details from when I created the model. The detectnet_network.prototxt is taken from:
https://raw.githubusercontent.com/NVIDIA/caffe/caffe-0.15/examples/kitti/detectnet_network.prototxt
I only modified the image_size to fit my dataset and set crop_bboxes to false.
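As a sanity check, label lines like the one above can be validated with a short script (a hedged sketch; `parse_kitti_label` is a hypothetical helper, not part of DIGITS):

```python
# Hypothetical sanity check for a KITTI-style label line: 15 space-separated
# fields, with the bounding box given as left, top, right, bottom in fields 4-7.
def parse_kitti_label(line):
    fields = line.split()
    assert len(fields) == 15, "KITTI object labels need exactly 15 fields"
    cls = fields[0]
    left, top, right, bottom = map(float, fields[4:8])
    assert right > left and bottom > top, "bbox corners out of order"
    return cls, (left, top, right, bottom)

cls, bbox = parse_kitti_label(
    "Human 0.00 0 0.00 832.00 464.00 860.00 541.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00")
w, h = bbox[2] - bbox[0], bbox[3] - bbox[1]
print(cls, w, h)  # Human 28.0 77.0
```

Running every label file through a check like this catches malformed lines before they silently produce a database with no valid boxes.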
@gheinrich Can you give some advice? Thanks.
@look-recognize Hey, thanks for your reply! It's good to know I'm not the only one having issues with the mAP and the bounding boxes placement. Anyway, I think our problems come from a different source.
I would appreciate it if you could edit your post so the thread is more readable (for instance, you could upload the code as a txt file and link it in your post: Your network.txt), or create a new issue, to avoid diverting from my problem.
The reason I think we don't have the same problem source is that you have fewer than 200 images and have trained for only 30 epochs. I have 120k images (very different from the ones DetectNet was trained on, by the way) and I have been training the network for more than 500 epochs. Here are some issues on this forum where your problem is discussed; the usual advice is to wait a bit longer and add more images:
https://github.com/NVIDIA/DIGITS/issues/1279
Furthermore, I have some questions: How come your label shapes are different in training and validation? Are you sure your objects to detect are big enough (50x50 to 400x400)? In the example you copied here, the object seems to be just 28 pixels wide.
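One way to answer the size question across a whole dataset is to scan the label files and count boxes below the recommended 50x50 minimum (a rough sketch; `bbox_size_report` and `labels_dir` are placeholders, not DIGITS code):

```python
import glob
import os

# Count objects whose bounding box falls below the 50x50 minimum that the
# DetectNet docs recommend, across a directory of KITTI-style label files.
def bbox_size_report(labels_dir):
    too_small, ok = 0, 0
    for path in glob.glob(os.path.join(labels_dir, "*.txt")):
        for line in open(path):
            f = line.split()
            if len(f) != 15 or f[0].lower() == "dontcare":
                continue  # skip malformed lines and DontCare regions
            w = float(f[6]) - float(f[4])
            h = float(f[7]) - float(f[5])
            if w < 50 or h < 50:
                too_small += 1
            else:
                ok += 1
    return ok, too_small
```

If most of your objects land in the `too_small` bucket, that alone can explain a flat mAP.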
I think this problem arises from the fact that, for fine-tuning DetectNet, some layers must be renamed, but the author of the DetectNet tutorial missed that.
Otherwise a pre-trained network should converge much faster than this; as it stands, it is effectively training from scratch!
Hi, @ls27706 Thanks for your clarification and suggestion about this issue.
For your questions:
@ls27706
Do you still recommend image segmentation?
You can understand DetectNet as a network that does two things at the same time:
1. predicting, for each location on a grid, the likelihood that an object is present there (the coverage map);
2. predicting candidate bounding boxes and clustering them into final detections.

The coverage graph from the description of this issue suggests that DetectNet does a good job at (1). However, the fact that you don't get a positive mAP suggests that DetectNet didn't manage to solve (2). That is the issue most users face with DetectNet: there are a bunch of hyper-parameters to get right to solve (2). My suggestion, if you wish to experiment, is to carefully study the bounding-box generation path to understand how all those hyper-parameters work. Alternatively, you can train a regular image segmentation network and implement (2) on your end through a post-processing script; this way you might find it easier to tune it for your own application.
Hello @ls27706. Here is an explanation of what could be happening.
The bbox visualization you see is the predicted offsets for the bounding-box corners at each pixel, while the coverage map shows the likelihood that an object is present at each location. The clustering layer that post-processes the DetectNet output applies a threshold based on the coverage value, so even though the coverage map shows that the network is sensitive to the presence of the objects, the coverage values may not be exceeding this threshold.
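A toy illustration of that thresholding point (all numbers are made up): the coverage map can clearly "light up" over an object yet never cross the clustering threshold, in which case no box is emitted.

```python
import numpy as np

# A synthetic coverage map: the network responds to an object, but weakly.
cov = np.zeros((20, 20))
cov[8:12, 8:12] = 0.3

def n_above(cov, thresh):
    """Number of coverage cells that survive the threshold."""
    return int((cov >= thresh).sum())

print(n_above(cov, 0.5))  # 0  -> nothing passes, so no boxes are produced
print(n_above(cov, 0.2))  # 16 -> lowering the threshold recovers the object
```

This is why lowering the coverage threshold in the clustering layer is often the first thing to try when the coverage map looks sensible but no boxes appear.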
Credit where credit is due to @jbarker-nvidia.
Hi, I just want to add that I'm trying to use DetectNet for aerial object detection, and I'm having the same problem that @ls27706 described, so I'll be interested to see what it takes to solve this. I'm comparing DetectNet against several other networks and network structures, like SegNet and VGGNet. In my application, detection and segmentation are both valid approaches.
I have tried many variations on my KITTI output, and I have yet to get a DetectNet network to place a single BBOX on my images. I haven't figured out the problem yet, but after a lot of trial and error, I have two hunches:
One hunch involves the Dontcare class. I can't figure out how, but there seems to be a connection.

You shouldn't need to include any boxes explicitly marked as Dontcare. Also, you should be fine with DIGITS 4.
What do your training losses and metrics look like? In particular, are any of them pegged at zero for the whole time?
The most common cause of issues I've seen with training is that there is a problem with the data format at ingest time causing the database to not contain any valid bounding boxes for objects of interest.
I can confirm that: after tweaking the input parameters in the prototxt, I am getting amazingly good results.
Any ideas on what to modify for multi class detection? I've seen this: https://github.com/NVIDIA/caffe/pull/157 Any more info is appreciated
@bpinaya Here is an example prototxt for a two-class version of DetectNet. Tuning this model can be tricky, though, if you have objects of widely varying sizes and numbers.
@bpinaya
What input parameters did you modify?
@lesterlo In input_shape, check the dims; also, in the layer named cluster, you have to make the parameters suit your dimensions. Sometimes you might see your mAP not improving, but if you evaluate one image in the bbox/regressor visualization you can see a box being generated; that depends on the parameters of the cluster layer. As I said, you need to resize according to your image size and the minimum size of the object you are trying to detect. Also, when training, run it many times with different learning rates. At the start I couldn't get the mAP to move from 0, but in one epoch I got some non-zero values; retraining from those weights, I got better and better results. You have to play with it a bit, good luck! If you need further help, tell me. Check also this article from NVIDIA.
My image size is 1280x1024 and the bounding boxes are above 50x50. How should I set the cluster layer parameters?

```
layer {
  name: "cluster"
  type: "Python"
  bottom: "coverage"
  bottom: "bboxes"
  top: "bbox-list"
  include { phase: TEST }
  python_param {
    module: "caffe.layers.detectnet.clustering"
    layer: "ClusterDetections"
    param_str: "1280, 1024, 16, 0.6, 3, 0.02, 22, 1"
  }
}
layer {
  name: "cluster_gt"
  type: "Python"
  bottom: "coverage-label"
  bottom: "bbox-label"
  top: "bbox-list-label"
  include { phase: TEST stage: "val" }
  python_param {
    module: "caffe.layers.detectnet.clustering"
    layer: "ClusterGroundtruth"
    param_str: "1280, 1024, 20, 1"
  }
}
layer {
  name: "score"
  type: "Python"
  bottom: "bbox-list-label"
  bottom: "bbox-list"
  top: "bbox-list-scored"
  include { phase: TEST stage: "val" }
  python_param {
    module: "caffe.layers.detectnet.mean_ap"
    layer: "ScoreDetections"
  }
}
layer {
  name: "mAP"
  type: "Python"
  bottom: "bbox-list-scored"
  top: "mAP"
  top: "precision"
  top: "recall"
  include { phase: TEST stage: "val" }
  python_param {
    module: "caffe.layers.detectnet.mean_ap"
    layer: "mAP"
    param_str: "1280, 1024, 20"
  }
}
```
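For what it's worth, my reading of the ClusterDetections param_str field order is below; treat the names as an assumption and verify them against caffe/layers/detectnet/clustering.py in your install before relying on them:

```python
# Assumed field order for ClusterDetections' param_str (unverified; check
# caffe/layers/detectnet/clustering.py in your DIGITS/caffe version).
FIELDS = ["image_size_x", "image_size_y", "stride",
          "gridbox_cvg_threshold", "gridbox_rect_thresh",
          "gridbox_rect_eps", "min_height", "num_classes"]

def parse_param_str(s):
    """Turn a comma-separated param_str into a named dict for readability."""
    return dict(zip(FIELDS, (float(v) for v in s.split(","))))

print(parse_param_str("1280, 1024, 16, 0.6, 3, 0.02, 22, 1"))
```

Under that reading, the first two values must match your image size (1280, 1024 here), and the coverage threshold and minimum height are the knobs most likely to suppress boxes on small objects.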
We are using DIGITS 4 DetectNet for detecting and tracking tiny objects in our work. We created a dataset of 1299 training images and 107 validation images, with labels in this format:

```
Car 0.0 0 0.0 1153.5 122.5 1194.5 158.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0
```

Our images are 1000x1000 and the bounding boxes are above 50x50. We have now successfully created a model, but the mAP is zero throughout and 0 bounding boxes are detected. We used the custom classes Dontcare,Car and the network from the KITTI dataset example, changing only the image locations. My questions are:
1) How can we increase the mAP and get bounding boxes to form?
2) How do we modify the network used for the KITTI dataset for our grayscale images?
I have used the network from the KITTI dataset example.
@sulthanashafi I had a similar situation, and these were the steps I followed to fix the flat graph:
1. I made sure that all my images were 1248x384, object sizes were at least 50x50 px, and the bounding boxes actually bounded the objects correctly.
2. I increased my training and validation set sizes: somewhere around 2.5k images for training and 500 images for validation.
3. I used the following custom DetectNet network: customNetwork.txt https://github.com/NVIDIA/DIGITS/files/866700/customNetwork.txt

NOTE: You can use any image size for training; it does not have to be 1248x384, provided the necessary modifications are made to the network above. You may also want to revisit Point 2. After that, try the network attached in Point 3 and see if it helps your case.
Kind Regards, Shreyas Ramesh.
I am really thankful for the timely response and will try the customNetwork you provided. May I also know which pre-trained weights you used?
You can use the vanilla bvlc_googlenet model for pre-trained weights, as suggested by the DetectNet blog.
Hi,
I got a really good response for my input when I changed my custom net as you mentioned; however, it is still not drawing bounding boxes. So I increased my dataset to 6000 training and 650 validation images, but unfortunately I now face another problem loading the dataset:

```
File "/home/sfm/digits/digits/tools/create_generic_db.py", line 247, in run
  entry_value = self.extension.encode_entry(entry_id)
File "/home/sfm/digits/digits/extensions/data/objectDetection/data.py", line 57, in encode_entry
  img = digits.utils.image.load_image(image_filename)
File "/home/sfm/digits/digits/utils/image.py", line 67, in load_image
  raise errors.LoadImageError, 'IOError: %s' % e.message
LoadImageError: IOError: image file is truncated (0 bytes not processed)
```
I tried adding this change to image.py:

```
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
```

But it still gives this error. Have you resolved this type of problem? I am using png and txt files.
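Until the root cause is found, truncated PNGs can often be spotted before dataset creation with a stdlib-only scan (a hedged sketch; it relies on the fact that a complete PNG starts with the 8-byte signature and ends with an IEND chunk):

```python
import glob
import os

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"                  # 8-byte PNG signature
PNG_IEND = b"\x00\x00\x00\x00IEND\xaeB`\x82"      # final 12-byte IEND chunk

def find_bad_pngs(image_dir):
    """Return paths of PNGs that are missing the signature or IEND trailer,
    i.e. files that were truncated or corrupted during copying."""
    bad = []
    for path in glob.glob(os.path.join(image_dir, "*.png")):
        with open(path, "rb") as f:
            data = f.read()
        if not data.startswith(PNG_MAGIC) or not data.endswith(PNG_IEND):
            bad.append(path)
    return bad
```

Deleting or re-exporting whatever this flags should let `create_generic_db.py` run through cleanly.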
Unfortunately, I haven't come across the dataset-creation issue you're facing.
The best way around it may be to create datasets in incremental chunks, say 1.5k, 2k, 2.5k images and so on.
I should have added a disclaimer to my previous post that my model did not predict bounding boxes either. I wasn't very concerned about the missing boxes because my requirement was satisfied by the coverage prediction (which was pretty good).
The bounding boxes not appearing may have something to do with the coverage or clustering threshold. That's just a loose guess.
If you find a way around the problems, do let me know :)
I have solved the truncation problem; it was due to image corruption. However, even after modifying the custom net as you suggested, I am still unable to get bounding boxes. My objects are 20x20 pixels and the bounding boxes are about 50x50. How can I adjust the stride value along with the other parameters? Please help me figure it out.
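A quick back-of-envelope on why stride matters for objects that small (illustrative only; 16 is the stride used in the KITTI example network):

```python
# DetectNet predicts one coverage cell per `stride` pixels, so an object
# spans obj_px // stride cells along each side of the coverage grid.
def cells_covered(obj_px, stride):
    return obj_px // stride

print(cells_covered(20, 16))  # 1 -> a 20px object barely registers at stride 16
print(cells_covered(20, 8))   # 2 -> halving the stride gives it more grid cells
```

With only one coverage cell per object there is very little signal for the clustering stage to work with, which is one plausible reason tiny objects never produce boxes at the default stride.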
Also, I am attaching an excerpt of the log file of a running model. The pattern is the same at every test interval, so the repeated iterations are trimmed:

```
I0405 07:31:38.303221 10805 net.cpp:284] Network initialization done.
I0405 07:31:38.308969 10805 caffe.cpp:135] Finetuning from /home/ubuntu/DIGITS/examples/object-detection/bvlc_googlenet.caffemodel
I0405 07:31:38.449602 10805 net.cpp:791] Ignoring source layer data
... (loss1/*, loss2/*, loss3/*, pool4/3x3_s2 and pool5/7x7_s1 source layers ignored)
I0405 07:31:38.610388 10805 caffe.cpp:231] Starting Optimization
I0405 07:31:38.610407 10805 solver.cpp:305] Learning Rate Policy: step
I0405 07:31:38.616950 10805 solver.cpp:362] Iteration 0, Testing net (#0)
I0405 07:32:24.327752 10805 solver.cpp:429] Test net output #0: loss_bbox = 0 ( 2 = 0 loss)
I0405 07:32:24.327882 10805 solver.cpp:429] Test net output #1: loss_coverage = 37.2286 ( 1 = 37.2286 loss)
I0405 07:32:24.327898 10805 solver.cpp:429] Test net output #2: mAP = 0
I0405 07:32:24.327903 10805 solver.cpp:429] Test net output #3: precision = 0
I0405 07:32:24.327908 10805 solver.cpp:429] Test net output #4: recall = 0
I0405 07:32:53.049880 10805 solver.cpp:242] Iteration 0 (0 iter/s, 74.4405s/125 iter), loss = 185.764
I0405 07:40:03.954133 10805 solver.cpp:242] Iteration 125 (0.290083 iter/s, 430.911s/125 iter), loss = 0.000555313
I0405 07:47:13.932087 10805 solver.cpp:242] Iteration 250 (0.290708 iter/s, 429.984s/125 iter), loss = 0.000184559
... (every training iteration: loss_bbox = 0; loss_coverage falls below 1e-6)
I0405 08:30:14.154969 10805 solver.cpp:362] Iteration 1000, Testing net (#0)
I0405 08:30:46.368716 10805 solver.cpp:429] Test net output #0: loss_bbox = 0 ( 2 = 0 loss)
I0405 08:30:46.368768 10805 solver.cpp:429] Test net output #1: loss_coverage = 2.5254e-09 ( 1 = 2.5254e-09 loss)
I0405 08:30:46.368774 10805 solver.cpp:429] Test net output #2: mAP = 0
I0405 08:30:46.368779 10805 solver.cpp:429] Test net output #3: precision = 0
I0405 08:30:46.368783 10805 solver.cpp:429] Test net output #4: recall = 0
... (identical results at iterations 2000 and 3000: loss_bbox = 0, mAP = precision = recall = 0)
```
6.54797e-06 I0405 10:55:37.950680 10805 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0405 10:55:37.950690 10805 solver.cpp:261] Train net output #1: loss_coverage = 3.29924e-09 ( 1 = 3.29924e-09 loss) I0405 10:55:37.950701 10805 sgd_solver.cpp:106] Iteration 3500, lr = 0.0001 I0405 11:02:49.500916 10805 solver.cpp:242] Iteration 3625 (0.289649 iter/s, 431.557s/125 iter), loss = 6.5465e-06 I0405 11:02:49.501060 10805 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0405 11:02:49.501070 10805 solver.cpp:261] Train net output #1: loss_coverage = 7.7442e-09 ( 1 = 7.7442e-09 loss) I0405 11:02:49.501082 10805 sgd_solver.cpp:106] Iteration 3625, lr = 0.0001 I0405 11:09:59.940454 10805 solver.cpp:242] Iteration 3750 (0.290396 iter/s, 430.446s/125 iter), loss = 6.54071e-06 I0405 11:09:59.940570 10805 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0405 11:09:59.940582 10805 solver.cpp:261] Train net output #1: loss_coverage = 1.39099e-08 ( 1 = 1.39099e-08 loss) I0405 11:09:59.940593 10805 sgd_solver.cpp:106] Iteration 3750, lr = 0.0001 I0405 11:17:10.870913 10805 solver.cpp:242] Iteration 3875 (0.290066 iter/s, 430.937s/125 iter), loss = 6.57502e-06 I0405 11:17:10.870985 10805 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0405 11:17:10.870993 10805 solver.cpp:261] Train net output #1: loss_coverage = 1.91127e-10 ( 1 = 1.91127e-10 loss) I0405 11:17:10.871006 10805 sgd_solver.cpp:106] Iteration 3875, lr = 0.0001 I0405 11:24:17.192687 10805 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_4000.caffemodel I0405 11:24:17.288594 10805 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_4000.solverstate I0405 11:24:17.363762 10805 solver.cpp:362] Iteration 4000, Testing net (#0) I0405 11:24:17.363786 10805 net.cpp:723] Ignoring source layer train_data I0405 11:24:17.363790 10805 net.cpp:723] Ignoring source layer train_label I0405 11:24:17.363795 10805 
net.cpp:723] Ignoring source layer train_transform I0405 11:24:49.471812 10805 solver.cpp:429] Test net output #0: loss_bbox = 0 ( 2 = 0 loss) I0405 11:24:49.471863 10805 solver.cpp:429] Test net output #1: loss_coverage = 8.36573e-10 ( 1 = 8.36573e-10 loss) I0405 11:24:49.471869 10805 solver.cpp:429] Test net output #2: mAP = 0 I0405 11:24:49.471873 10805 solver.cpp:429] Test net output #3: precision = 0 I0405 11:24:49.471877 10805 solver.cpp:429] Test net output #4: recall = 0 I0405 11:24:52.864939 10805 solver.cpp:242] Iteration 4000 (0.270562 iter/s, 462.001s/125 iter), loss = 6.54231e-06 I0405 11:24:52.864984 10805 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0405 11:24:52.864994 10805 solver.cpp:261] Train net output #1: loss_coverage = 9.87864e-09 ( 1 = 9.87864e-09 loss) I0405 11:24:52.865005 10805 sgd_solver.cpp:106] Iteration 4000, lr = 0.0001 I0405 11:32:03.987061 10805 solver.cpp:242] Iteration 4125 (0.289937 iter/s, 431.129s/125 iter), loss = 6.53122e-06 I0405 11:32:03.987184 10805 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0405 11:32:03.987193 10805 solver.cpp:261] Train net output #1: loss_coverage = 1.86242e-09 ( 1 = 1.86242e-09 loss) I0405 11:32:03.987207 10805 sgd_solver.cpp:106] Iteration 4125, lr = 0.0001 I0405 11:39:15.008373 10805 solver.cpp:242] Iteration 4250 (0.290004 iter/s, 431.028s/125 iter), loss = 6.53259e-06 I0405 11:39:15.008481 10805 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0405 11:39:15.008492 10805 solver.cpp:261] Train net output #1: loss_coverage = 3.45587e-09 ( 1 = 3.45587e-09 loss) I0405 11:39:15.008503 10805 sgd_solver.cpp:106] Iteration 4250, lr = 0.0001 I0405 11:46:26.112397 10805 solver.cpp:242] Iteration 4375 (0.289949 iter/s, 431.111s/125 iter), loss = 6.56894e-06 I0405 11:46:26.112471 10805 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0405 11:46:26.112481 10805 solver.cpp:261] Train net output #1: loss_coverage = 1.62321e-07 ( 1 = 
1.62321e-07 loss) I0405 11:46:26.112494 10805 sgd_solver.cpp:106] Iteration 4375, lr = 0.0001 I0405 11:53:36.412757 10805 solver.cpp:242] Iteration 4500 (0.29049 iter/s, 430.307s/125 iter), loss = 6.53572e-06 I0405 11:53:36.412880 10805 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0405 11:53:36.412891 10805 solver.cpp:261] Train net output #1: loss_coverage = 1.66248e-09 ( 1 = 1.66248e-09 loss) I0405 11:53:36.412904 10805 sgd_solver.cpp:106] Iteration 4500, lr = 0.0001 I0405 12:00:46.703377 10805 solver.cpp:242] Iteration 4625 (0.290497 iter/s, 430.297s/125 iter), loss = 6.53119e-06 I0405 12:00:46.703514 10805 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0405 12:00:46.703526 10805 solver.cpp:261] Train net output #1: loss_coverage = 5.9152e-10 ( 1 = 5.9152e-10 loss) I0405 12:00:46.703537 10805 sgd_solver.cpp:106] Iteration 4625, lr = 0.0001 I0405 12:07:57.257758 10805 solver.cpp:242] Iteration 4750 (0.290319 iter/s, 430.561s/125 iter), loss = 6.56478e-06 I0405 12:07:57.257838 10805 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0405 12:07:57.257849 10805 solver.cpp:261] Train net output #1: loss_coverage = 4.66453e-09 ( 1 = 4.66453e-09 loss) I0405 12:07:57.257861 10805 sgd_solver.cpp:106] Iteration 4750, lr = 0.0001
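A loss_bbox that is exactly 0 from the very first iteration, combined with mAP/precision/recall stuck at 0, usually means the bounding-box labels never reach the loss (empty, malformed, or zero-area boxes after dataset conversion). One quick way to rule that out is to scan the KITTI-format label files before building the dataset. This is only a rough sketch, not part of DIGITS; check_kitti_labels is a made-up helper:

```python
import os

def check_kitti_labels(label_dir):
    """Scan KITTI-format label files and report entries that DetectNet
    would silently ignore: lines with too few fields, or boxes whose
    area is zero or negative."""
    bad = []
    for fname in sorted(os.listdir(label_dir)):
        path = os.path.join(label_dir, fname)
        with open(path) as f:
            for lineno, line in enumerate(f, 1):
                fields = line.split()
                if len(fields) < 15:  # a KITTI label line has 15+ fields
                    bad.append((fname, lineno, "too few fields"))
                    continue
                # fields 4..7 are left, top, right, bottom in pixels
                left, top, right, bottom = map(float, fields[4:8])
                if right <= left or bottom <= top:
                    bad.append((fname, lineno, "zero/negative area box"))
    return bad
```

Running this over both the train and val label folders takes seconds; any reported line is a label the loss never sees.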
Hi, I'm sorry, I do not have a fix for the bounding box issue yet. Did you have any progress? How does the coverage output prediction look?
I haven't got good coverage. May I know how you uploaded the input images and label set? Do we have to use the prepare_kitti_data.py code for the train/validation split? I uploaded mine directly to the virtual server, and I think that may be one of the reasons.
I uploaded to the directory directly as two folders:
train: images, labels
val: images, labels
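For reference, that layout means every image in images/ needs a same-stem .txt in labels/ for both splits, or the object-detection loader has nothing to pair it with. A small sanity check along these lines can catch a broken pairing (check_split is a hypothetical helper, not a DIGITS utility):

```python
import os

def check_split(root):
    """Verify that train/ and val/ each pair every image with a label
    file of the same stem (a.png <-> a.txt), as the DIGITS
    object-detection dataset layout expects."""
    problems = []
    for split in ("train", "val"):
        images = {os.path.splitext(f)[0]
                  for f in os.listdir(os.path.join(root, split, "images"))}
        labels = {os.path.splitext(f)[0]
                  for f in os.listdir(os.path.join(root, split, "labels"))}
        problems += [(split, s, "missing label") for s in sorted(images - labels)]
        problems += [(split, s, "missing image") for s in sorted(labels - images)]
    return problems
```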
prepare_kitti_data.py will (I believe) set up that directory structure for the KITTI data set. Also, for me, the graph shows valid bboxloss_val and coverageloss_val values.
Also, have you got coverage working correctly? I used the setup you mentioned but have got no coverage or bounding boxes yet, although in the classifier step it does show a difference in the detected pixel tones. May I know your object size, and whether you got as far as coverage?
I am getting an output like this. As I am a beginner, can you please help me interpret it?
"cvg/classifier" (Weights, Convolution layer): 1,025 learned parameters
    Data shape: [1 1024 1 1], Mean: -0.000232986, Std deviation: 0.031378
"cvg/classifier" (Activation)
    Data shape: [1 62 62], Mean: -46.7877, Std deviation: 9.44972
"coverage" (Activation)
    Data shape: [1 62 62], Mean: 2.6783e-08, Std deviation: 1.03183e-06
"bbox/regressor" (Weights, Convolution layer): 4,100 learned parameters
    Data shape: [4 1024 1 1], Mean: -5.34631e-37, Std deviation: 0.0
"bboxes" (Activation)
    Data shape: [4 62 62], Mean: 1.05164e-33, Std deviation: 0.0
"bbox-list" (Activation)
    Data shape: [50 5], Mean: 0.0, Std deviation: 0.0
Total learned parameters: 5,978,677
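One thing these numbers show clearly: the bbox/regressor weights have a standard deviation of 0.0, so that layer outputs a constant and can never regress a box, while cvg/classifier did learn something. With pycaffe you could collect net.params[name][0].data for each layer into a dict and flag the dead ones; the helper below is a hypothetical sketch of that check on plain arrays:

```python
import numpy as np

def dead_layers(params, tol=1e-30):
    """Given {layer_name: weight array}, flag layers whose weights have
    (near-)zero standard deviation -- like the bbox/regressor above
    (std 0.0), such a layer produces a constant output."""
    return [name for name, w in params.items()
            if np.asarray(w).std() <= tol]
```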
Also, may I know why you haven't specified data_param in your DetectNet { backend: LMDB source: "examples/kitti/kitti_train_images.lmdb" batch_size: 10 } for any of the sets?
My coverage looks OK. Just no bounding boxes around the predicted coverage.
I was always under the impression this was because the clustering threshold was too high and that's why I wasn't seeing any boxes. However, when I tried manipulating the script to set the threshold low, I still did not see any bounding boxes.
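For what it's worth, the clustering stage can only group boxes from grid cells whose coverage value survives the threshold. A toy version of that first step (candidate_cells is made up; DIGITS' actual post-processing also merges the surviving rectangles) makes the point:

```python
import numpy as np

def candidate_cells(coverage, threshold=0.6):
    """Threshold the predicted coverage map; each surviving grid cell
    contributes its predicted box to the clustering step. If nothing
    survives even at a very low threshold, the problem is upstream
    of clustering."""
    ys, xs = np.where(coverage >= threshold)
    return list(zip(ys.tolist(), xs.tolist()))
```

With a coverage activation whose mean is around 2.7e-08, as in the stats posted above, even a threshold of 0.01 yields no cells, so lowering the clustering threshold cannot help when the network predicts essentially no coverage.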
My image size is: 1248x364.
We select the LMDB we created when we configure the network in DIGITS, so there is no need to specify it again in the network description.
Can you upload the output you see when you test one image? You can save the webpage, zip it and upload it here. Let me have a look and see if what you're facing is similar to my issue.
Thanks for the file. Will compare with my results and let you know ASAP.
https://gist.github.com/jbarker-nvidia/127947d8a961bfbe2d0d403dd9bed2aa
Meanwhile, please check this publicly available prototxt and see if it makes sense for the problem at hand. If it does, resize your images to fit the prototxt's expected format: 1280x1280 instead of 1000x1000.
https://devblogs.nvidia.com/parallelforall/exploring-spacenet-dataset-using-digits/
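If you do resize the 1000x1000 images to 1280x1280, remember that KITTI label coordinates are in pixels and must be scaled by the same factors, or the boxes will no longer line up with the objects. A minimal sketch (scale_boxes is a hypothetical helper):

```python
def scale_boxes(boxes, src_size, dst_size):
    """Rescale (left, top, right, bottom) pixel boxes when resizing
    images, e.g. from 1000x1000 tiles to the 1280x1280 the prototxt
    expects. src_size and dst_size are (width, height)."""
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    return [(l * sx, t * sy, r * sx, b * sy)
            for (l, t, r, b) in boxes]
```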
Really thankful. Meanwhile I will check against the trained nets you mentioned.
I am attaching custom code to try on your dataset, with the stride value changed. Hope it works for you. Also, do you have any modifications to suggest for my dataset?
name: "DetectNet"
layer { name: "train_data" type: "Data" top: "data" data_param { backend: LMDB source: "examples/kitti/kitti_train_images.lmdb" batch_size: 10 } include: { phase: TRAIN } }
layer { name: "train_label" type: "Data" top: "label" data_param { backend: LMDB source: "examples/kitti/kitti_train_labels.lmdb" batch_size: 10 } include: { phase: TRAIN } }
layer { name: "val_data" type: "Data" top: "data" data_param { backend: LMDB source: "examples/kitti/kitti_test_images.lmdb" batch_size: 6 } include: { phase: TEST stage: "val" } }
layer { name: "val_label" type: "Data" top: "label" data_param { backend: LMDB source: "examples/kitti/kitti_test_labels.lmdb" batch_size: 6 } include: { phase: TEST stage: "val" } }
layer { name: "deploy_data" type: "Input" top: "data" input_param { shape { dim: 1 dim: 3 dim: 384 dim: 1248 } } include: { phase: TEST not_stage: "val" } }
layer { name: "train_transform" type: "DetectNetTransformation" bottom: "data" bottom: "label" top: "transformed_data" top: "transformed_label" detectnet_groundtruth_param: { stride: 8 scale_cvg: 0.4 gridbox_type: GRIDBOX_MIN coverage_type: RECTANGULAR min_cvg_len: 20 obj_norm: true image_size_x: 1248 image_size_y: 384 crop_bboxes: true object_class: { src: 1 dst: 0} # obj class 1 -> cvg index 0 } detectnet_augmentation_param: { crop_prob: 1 shift_x: 32 shift_y: 32 flip_prob: 0.5 rotation_prob: 0 max_rotate_degree: 5 scale_prob: 0.4 scale_min: 0.8 scale_max: 1.2 hue_rotation_prob: 0.8 hue_rotation: 30 desaturation_prob: 0.8 desaturation_max: 0.8 } transform_param: { mean_value: 127 } include: { phase: TRAIN } }
layer { name: "val_transform" type: "DetectNetTransformation" bottom: "data" bottom: "label" top: "transformed_data" top: "transformed_label" detectnet_groundtruth_param: { stride: 8 scale_cvg: 0.4 gridbox_type: GRIDBOX_MIN coverage_type: RECTANGULAR min_cvg_len: 20 obj_norm: true image_size_x: 1248 image_size_y: 384 crop_bboxes: false object_class: { src: 1 dst: 0} # obj class 1 -> cvg index 0 } transform_param: { mean_value: 127 } include: { phase: TEST stage: "val" } }
layer { name: "deploy_transform" type: "Power" bottom: "data" top: "transformed_data" power_param { shift: -127 } include: { phase: TEST not_stage: "val" } }
layer { name: "slice-label" type: "Slice" bottom: "transformed_label" top: "foreground-label" top: "bbox-label" top: "size-label" top: "obj-label" top: "coverage-label" slice_param { slice_dim: 1 slice_point: 1 slice_point: 5 slice_point: 7 slice_point: 8 } include { phase: TRAIN } include { phase: TEST stage: "val" } }
layer { name: "coverage-block" type: "Concat" bottom: "foreground-label" bottom: "foreground-label" bottom: "foreground-label" bottom: "foreground-label" top: "coverage-block" concat_param { concat_dim: 1 } include { phase: TRAIN } include { phase: TEST stage: "val" } }
layer { name: "size-block" type: "Concat" bottom: "size-label" bottom: "size-label" top: "size-block" concat_param { concat_dim: 1 } include { phase: TRAIN } include { phase: TEST stage: "val" } }
layer { name: "obj-block" type: "Concat" bottom: "obj-label" bottom: "obj-label" bottom: "obj-label" bottom: "obj-label" top: "obj-block" concat_param { concat_dim: 1 } include { phase: TRAIN } include { phase: TEST stage: "val" } }
layer { name: "bb-label-norm" type: "Eltwise" bottom: "bbox-label" bottom: "size-block" top: "bbox-label-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } }
layer { name: "bb-obj-norm" type: "Eltwise" bottom: "bbox-label-norm" bottom: "obj-block" top: "bbox-obj-label-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } }
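To see what stride: 8 in detectnet_groundtruth_param means for the labels: the transformation layer rasterises each ground-truth box onto a grid with one cell per 8x8 pixel patch, and that grid is what the coverage output is trained against. A much-simplified sketch (coverage_grid is illustrative only; the real layer also honours GRIDBOX_MIN, scale_cvg and min_cvg_len):

```python
import numpy as np

def coverage_grid(boxes, image_w, image_h, stride=8):
    """Rasterise ground-truth (left, top, right, bottom) pixel boxes
    onto the stride-spaced grid that DetectNetTransformation trains
    the coverage output against: a cell is marked 'covered' when it
    overlaps a box."""
    grid = np.zeros((image_h // stride, image_w // stride), dtype=np.float32)
    for left, top, right, bottom in boxes:
        gy0, gy1 = int(top) // stride, int(bottom) // stride
        gx0, gx1 = int(left) // stride, int(right) // stride
        grid[gy0:gy1 + 1, gx0:gx1 + 1] = 1.0
    return grid
```

An object not much larger than the stride covers only a cell or two, which is one reason very small objects train poorly at stride 8 and why reducing the stride is sometimes suggested.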
######################################################################
######################################################################
layer { name: "conv1/7x7_s2" type: "Convolution" bottom: "transformed_data" top: "conv1/7x7_s2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 3 kernel_size: 7 stride: 2 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "conv1/relu_7x7" type: "ReLU" bottom: "conv1/7x7_s2" top: "conv1/7x7_s2" }
layer { name: "pool1/3x3_s2" type: "Pooling" bottom: "conv1/7x7_s2" top: "pool1/3x3_s2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "pool1/norm1" type: "LRN" bottom: "pool1/3x3_s2" top: "pool1/norm1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "conv2/3x3_reduce" type: "Convolution" bottom: "pool1/norm1" top: "conv2/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "conv2/relu_3x3_reduce" type: "ReLU" bottom: "conv2/3x3_reduce" top: "conv2/3x3_reduce" }
layer { name: "conv2/3x3" type: "Convolution" bottom: "conv2/3x3_reduce" top: "conv2/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "conv2/relu_3x3" type: "ReLU" bottom: "conv2/3x3" top: "conv2/3x3" }
layer { name: "conv2/norm2" type: "LRN" bottom: "conv2/3x3" top: "conv2/norm2" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "pool2/3x3_s2" type: "Pooling" bottom: "conv2/norm2" top: "pool2/3x3_s2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "inception_3a/1x1" type: "Convolution" bottom: "pool2/3x3_s2" top: "inception_3a/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3a/relu_1x1" type: "ReLU" bottom: "inception_3a/1x1" top: "inception_3a/1x1" }
layer { name: "inception_3a/3x3_reduce" type: "Convolution" bottom: "pool2/3x3_s2" top: "inception_3a/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3a/relu_3x3_reduce" type: "ReLU" bottom: "inception_3a/3x3_reduce" top: "inception_3a/3x3_reduce" }
layer { name: "inception_3a/3x3" type: "Convolution" bottom: "inception_3a/3x3_reduce" top: "inception_3a/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3a/relu_3x3" type: "ReLU" bottom: "inception_3a/3x3" top: "inception_3a/3x3" }
layer { name: "inception_3a/5x5_reduce" type: "Convolution" bottom: "pool2/3x3_s2" top: "inception_3a/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 16 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3a/relu_5x5_reduce" type: "ReLU" bottom: "inception_3a/5x5_reduce" top: "inception_3a/5x5_reduce" } layer { name: "inception_3a/5x5" type: "Convolution" bottom: "inception_3a/5x5_reduce" top: "inception_3a/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3a/relu_5x5" type: "ReLU" bottom: "inception_3a/5x5" top: "inception_3a/5x5" }
layer { name: "inception_3a/pool" type: "Pooling" bottom: "pool2/3x3_s2" top: "inception_3a/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } }
layer { name: "inception_3a/pool_proj" type: "Convolution" bottom: "inception_3a/pool" top: "inception_3a/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3a/relu_pool_proj" type: "ReLU" bottom: "inception_3a/pool_proj" top: "inception_3a/pool_proj" }
layer { name: "inception_3a/output" type: "Concat" bottom: "inception_3a/1x1" bottom: "inception_3a/3x3" bottom: "inception_3a/5x5" bottom: "inception_3a/pool_proj" top: "inception_3a/output" }
layer { name: "inception_3b/1x1" type: "Convolution" bottom: "inception_3a/output" top: "inception_3b/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3b/relu_1x1" type: "ReLU" bottom: "inception_3b/1x1" top: "inception_3b/1x1" }
layer { name: "inception_3b/3x3_reduce" type: "Convolution" bottom: "inception_3a/output" top: "inception_3b/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_3x3_reduce" type: "ReLU" bottom: "inception_3b/3x3_reduce" top: "inception_3b/3x3_reduce" } layer { name: "inception_3b/3x3" type: "Convolution" bottom: "inception_3b/3x3_reduce" top: "inception_3b/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_3x3" type: "ReLU" bottom: "inception_3b/3x3" top: "inception_3b/3x3" }
layer { name: "inception_3b/5x5_reduce" type: "Convolution" bottom: "inception_3a/output" top: "inception_3b/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_5x5_reduce" type: "ReLU" bottom: "inception_3b/5x5_reduce" top: "inception_3b/5x5_reduce" } layer { name: "inception_3b/5x5" type: "Convolution" bottom: "inception_3b/5x5_reduce" top: "inception_3b/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_5x5" type: "ReLU" bottom: "inception_3b/5x5" top: "inception_3b/5x5" }
layer { name: "inception_3b/pool" type: "Pooling" bottom: "inception_3a/output" top: "inception_3b/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_3b/pool_proj" type: "Convolution" bottom: "inception_3b/pool" top: "inception_3b/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_pool_proj" type: "ReLU" bottom: "inception_3b/pool_proj" top: "inception_3b/pool_proj" } layer { name: "inception_3b/output" type: "Concat" bottom: "inception_3b/1x1" bottom: "inception_3b/3x3" bottom: "inception_3b/5x5" bottom: "inception_3b/pool_proj" top: "inception_3b/output" }
layer { name: "pool3/3x3_s2" type: "Pooling" bottom: "inception_3b/output" top: "pool3/3x3_s2" pooling_param { pool: MAX kernel_size: 1 stride: 1 } }
layer { name: "inception_4a/1x1" type: "Convolution" bottom: "pool3/3x3_s2" top: "inception_4a/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4a/relu_1x1" type: "ReLU" bottom: "inception_4a/1x1" top: "inception_4a/1x1" }
layer { name: "inception_4a/3x3_reduce" type: "Convolution" bottom: "pool3/3x3_s2" top: "inception_4a/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4a/relu_3x3_reduce" type: "ReLU" bottom: "inception_4a/3x3_reduce" top: "inception_4a/3x3_reduce" }
layer { name: "inception_4a/3x3" type: "Convolution" bottom: "inception_4a/3x3_reduce" top: "inception_4a/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 208 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4a/relu_3x3" type: "ReLU" bottom: "inception_4a/3x3" top: "inception_4a/3x3" }
layer { name: "inception_4a/5x5_reduce" type: "Convolution" bottom: "pool3/3x3_s2" top: "inception_4a/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 16 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4a/relu_5x5_reduce" type: "ReLU" bottom: "inception_4a/5x5_reduce" top: "inception_4a/5x5_reduce" } layer { name: "inception_4a/5x5" type: "Convolution" bottom: "inception_4a/5x5_reduce" top: "inception_4a/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 48 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4a/relu_5x5" type: "ReLU" bottom: "inception_4a/5x5" top: "inception_4a/5x5" } layer { name: "inception_4a/pool" type: "Pooling" bottom: "pool3/3x3_s2" top: "inception_4a/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4a/pool_proj" type: "Convolution" bottom: "inception_4a/pool" top: "inception_4a/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4a/relu_pool_proj" type: "ReLU" bottom: "inception_4a/pool_proj" top: "inception_4a/pool_proj" } layer { name: "inception_4a/output" type: "Concat" bottom: "inception_4a/1x1" bottom: "inception_4a/3x3" bottom: "inception_4a/5x5" bottom: "inception_4a/pool_proj" top: "inception_4a/output" }
layer { name: "inception_4b/1x1" type: "Convolution" bottom: "inception_4a/output" top: "inception_4b/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 160 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4b/relu_1x1" type: "ReLU" bottom: "inception_4b/1x1" top: "inception_4b/1x1" }
layer { name: "inception_4b/3x3_reduce" type: "Convolution" bottom: "inception_4a/output" top: "inception_4b/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 112 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4b/relu_3x3_reduce" type: "ReLU" bottom: "inception_4b/3x3_reduce" top: "inception_4b/3x3_reduce" }
layer { name: "inception_4b/3x3" type: "Convolution" bottom: "inception_4b/3x3_reduce" top: "inception_4b/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 224 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4b/relu_3x3" type: "ReLU" bottom: "inception_4b/3x3" top: "inception_4b/3x3" }
layer { name: "inception_4b/5x5_reduce" type: "Convolution" bottom: "inception_4a/output" top: "inception_4b/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 24 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4b/relu_5x5_reduce" type: "ReLU" bottom: "inception_4b/5x5_reduce" top: "inception_4b/5x5_reduce" }
layer { name: "inception_4b/5x5" type: "Convolution" bottom: "inception_4b/5x5_reduce" top: "inception_4b/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4b/relu_5x5" type: "ReLU" bottom: "inception_4b/5x5" top: "inception_4b/5x5" }
layer { name: "inception_4b/pool" type: "Pooling" bottom: "inception_4a/output" top: "inception_4b/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } }
layer { name: "inception_4b/pool_proj" type: "Convolution" bottom: "inception_4b/pool" top: "inception_4b/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4b/relu_pool_proj" type: "ReLU" bottom: "inception_4b/pool_proj" top: "inception_4b/pool_proj" }
layer { name: "inception_4b/output" type: "Concat" bottom: "inception_4b/1x1" bottom: "inception_4b/3x3" bottom: "inception_4b/5x5" bottom: "inception_4b/pool_proj" top: "inception_4b/output" }
layer { name: "inception_4c/1x1" type: "Convolution" bottom: "inception_4b/output" top: "inception_4c/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4c/relu_1x1" type: "ReLU" bottom: "inception_4c/1x1" top: "inception_4c/1x1" }
layer { name: "inception_4c/3x3_reduce" type: "Convolution" bottom: "inception_4b/output" top: "inception_4c/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4c/relu_3x3_reduce" type: "ReLU" bottom: "inception_4c/3x3_reduce" top: "inception_4c/3x3_reduce" } layer { name: "inception_4c/3x3" type: "Convolution" bottom: "inception_4c/3x3_reduce" top: "inception_4c/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_3x3" type: "ReLU" bottom: "inception_4c/3x3" top: "inception_4c/3x3" } layer { name: "inception_4c/5x5_reduce" type: "Convolution" bottom: "inception_4b/output" top: "inception_4c/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 24 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_5x5_reduce" type: "ReLU" bottom: "inception_4c/5x5_reduce" top: "inception_4c/5x5_reduce" } layer { name: "inception_4c/5x5" type: "Convolution" bottom: "inception_4c/5x5_reduce" top: "inception_4c/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_5x5" type: "ReLU" bottom: "inception_4c/5x5" top: "inception_4c/5x5" } layer { name: "inception_4c/pool" type: "Pooling" bottom: "inception_4b/output" top: "inception_4c/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4c/pool_proj" type: "Convolution" bottom: "inception_4c/pool" top: "inception_4c/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_pool_proj" type: "ReLU" 
bottom: "inception_4c/pool_proj" top: "inception_4c/pool_proj" } layer { name: "inception_4c/output" type: "Concat" bottom: "inception_4c/1x1" bottom: "inception_4c/3x3" bottom: "inception_4c/5x5" bottom: "inception_4c/pool_proj" top: "inception_4c/output" }
layer { name: "inception_4d/1x1" type: "Convolution" bottom: "inception_4c/output" top: "inception_4d/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 112 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_1x1" type: "ReLU" bottom: "inception_4d/1x1" top: "inception_4d/1x1" } layer { name: "inception_4d/3x3_reduce" type: "Convolution" bottom: "inception_4c/output" top: "inception_4d/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 144 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_3x3_reduce" type: "ReLU" bottom: "inception_4d/3x3_reduce" top: "inception_4d/3x3_reduce" } layer { name: "inception_4d/3x3" type: "Convolution" bottom: "inception_4d/3x3_reduce" top: "inception_4d/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 288 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_3x3" type: "ReLU" bottom: "inception_4d/3x3" top: "inception_4d/3x3" } layer { name: "inception_4d/5x5_reduce" type: "Convolution" bottom: "inception_4c/output" top: "inception_4d/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_5x5_reduce" type: "ReLU" bottom: "inception_4d/5x5_reduce" top: "inception_4d/5x5_reduce" } layer { name: "inception_4d/5x5" type: "Convolution" bottom: "inception_4d/5x5_reduce" top: "inception_4d/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 2 
kernel_size: 5 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_5x5" type: "ReLU" bottom: "inception_4d/5x5" top: "inception_4d/5x5" } layer { name: "inception_4d/pool" type: "Pooling" bottom: "inception_4c/output" top: "inception_4d/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4d/pool_proj" type: "Convolution" bottom: "inception_4d/pool" top: "inception_4d/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_pool_proj" type: "ReLU" bottom: "inception_4d/pool_proj" top: "inception_4d/pool_proj" } layer { name: "inception_4d/output" type: "Concat" bottom: "inception_4d/1x1" bottom: "inception_4d/3x3" bottom: "inception_4d/5x5" bottom: "inception_4d/pool_proj" top: "inception_4d/output" }
layer { name: "inception_4e/1x1" type: "Convolution" bottom: "inception_4d/output" top: "inception_4e/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_1x1" type: "ReLU" bottom: "inception_4e/1x1" top: "inception_4e/1x1" } layer { name: "inception_4e/3x3_reduce" type: "Convolution" bottom: "inception_4d/output" top: "inception_4e/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 160 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_3x3_reduce" type: "ReLU" bottom: "inception_4e/3x3_reduce" top: "inception_4e/3x3_reduce" } layer { name: "inception_4e/3x3" type: "Convolution" bottom: "inception_4e/3x3_reduce" top: "inception_4e/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 320 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_3x3" type: "ReLU" bottom: "inception_4e/3x3" top: "inception_4e/3x3" } layer { name: "inception_4e/5x5_reduce" type: "Convolution" bottom: "inception_4d/output" top: "inception_4e/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_5x5_reduce" type: "ReLU" bottom: "inception_4e/5x5_reduce" top: "inception_4e/5x5_reduce" } layer { name: "inception_4e/5x5" type: "Convolution" bottom: "inception_4e/5x5_reduce" top: "inception_4e/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 2 
kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_5x5" type: "ReLU" bottom: "inception_4e/5x5" top: "inception_4e/5x5" } layer { name: "inception_4e/pool" type: "Pooling" bottom: "inception_4d/output" top: "inception_4e/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4e/pool_proj" type: "Convolution" bottom: "inception_4e/pool" top: "inception_4e/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_pool_proj" type: "ReLU" bottom: "inception_4e/pool_proj" top: "inception_4e/pool_proj" } layer { name: "inception_4e/output" type: "Concat" bottom: "inception_4e/1x1" bottom: "inception_4e/3x3" bottom: "inception_4e/5x5" bottom: "inception_4e/pool_proj" top: "inception_4e/output" }
layer { name: "inception_5a/1x1" type: "Convolution" bottom: "inception_4e/output" top: "inception_5a/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_1x1" type: "ReLU" bottom: "inception_5a/1x1" top: "inception_5a/1x1" }
layer { name: "inception_5a/3x3_reduce" type: "Convolution" bottom: "inception_4e/output" top: "inception_5a/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 160 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_3x3_reduce" type: "ReLU" bottom: "inception_5a/3x3_reduce" top: "inception_5a/3x3_reduce" }
layer { name: "inception_5a/3x3" type: "Convolution" bottom: "inception_5a/3x3_reduce" top: "inception_5a/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 320 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_3x3" type: "ReLU" bottom: "inception_5a/3x3" top: "inception_5a/3x3" } layer { name: "inception_5a/5x5_reduce" type: "Convolution" bottom: "inception_4e/output" top: "inception_5a/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_5x5_reduce" type: "ReLU" bottom: "inception_5a/5x5_reduce" top: "inception_5a/5x5_reduce" } layer { name: "inception_5a/5x5" type: "Convolution" bottom: "inception_5a/5x5_reduce" top: "inception_5a/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_5x5" type: "ReLU" bottom: "inception_5a/5x5" top: "inception_5a/5x5" } layer { name: "inception_5a/pool" type: "Pooling" bottom: "inception_4e/output" top: "inception_5a/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_5a/pool_proj" type: "Convolution" bottom: "inception_5a/pool" top: "inception_5a/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_pool_proj" type: "ReLU" bottom: "inception_5a/pool_proj" top: "inception_5a/pool_proj" } layer { name: "inception_5a/output" type: "Concat" bottom: 
"inception_5a/1x1" bottom: "inception_5a/3x3" bottom: "inception_5a/5x5" bottom: "inception_5a/pool_proj" top: "inception_5a/output" }
layer { name: "inception_5b/1x1" type: "Convolution" bottom: "inception_5a/output" top: "inception_5b/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_1x1" type: "ReLU" bottom: "inception_5b/1x1" top: "inception_5b/1x1" } layer { name: "inception_5b/3x3_reduce" type: "Convolution" bottom: "inception_5a/output" top: "inception_5b/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_3x3_reduce" type: "ReLU" bottom: "inception_5b/3x3_reduce" top: "inception_5b/3x3_reduce" } layer { name: "inception_5b/3x3" type: "Convolution" bottom: "inception_5b/3x3_reduce" top: "inception_5b/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_3x3" type: "ReLU" bottom: "inception_5b/3x3" top: "inception_5b/3x3" } layer { name: "inception_5b/5x5_reduce" type: "Convolution" bottom: "inception_5a/output" top: "inception_5b/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 48 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_5x5_reduce" type: "ReLU" bottom: "inception_5b/5x5_reduce" top: "inception_5b/5x5_reduce" } layer { name: "inception_5b/5x5" type: "Convolution" bottom: "inception_5b/5x5_reduce" top: "inception_5b/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 2 
kernel_size: 5 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_5x5" type: "ReLU" bottom: "inception_5b/5x5" top: "inception_5b/5x5" } layer { name: "inception_5b/pool" type: "Pooling" bottom: "inception_5a/output" top: "inception_5b/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_5b/pool_proj" type: "Convolution" bottom: "inception_5b/pool" top: "inception_5b/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_pool_proj" type: "ReLU" bottom: "inception_5b/pool_proj" top: "inception_5b/pool_proj" } layer { name: "inception_5b/output" type: "Concat" bottom: "inception_5b/1x1" bottom: "inception_5b/3x3" bottom: "inception_5b/5x5" bottom: "inception_5b/pool_proj" top: "inception_5b/output" } layer { name: "pool5/drop_s1" type: "Dropout" bottom: "inception_5b/output" top: "pool5/drop_s1" dropout_param { dropout_ratio: 0.4 } } layer { name: "cvg/classifier" type: "Convolution" bottom: "pool5/drop_s1" top: "cvg/classifier" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 1 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0. } } } layer { name: "coverage/sig" type: "Sigmoid" bottom: "cvg/classifier" top: "coverage" } layer { name: "bbox/regressor" type: "Convolution" bottom: "pool5/drop_s1" top: "bboxes" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 4 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0. } } }
######################################################################
Loss layers
######################################################################
layer { name: "bbox_mask" type: "Eltwise" bottom: "bboxes" bottom: "coverage-block" top: "bboxes-masked" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bbox-norm" type: "Eltwise" bottom: "bboxes-masked" bottom: "size-block" top: "bboxes-masked-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bbox-obj-norm" type: "Eltwise" bottom: "bboxes-masked-norm" bottom: "obj-block" top: "bboxes-obj-masked-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } }
layer { name: "bbox_loss" type: "L1Loss" bottom: "bboxes-obj-masked-norm" bottom: "bbox-obj-label-norm" top: "loss_bbox" loss_weight: 2 include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "coverage_loss" type: "EuclideanLoss" bottom: "coverage" bottom: "coverage-label" top: "loss_coverage" include { phase: TRAIN } include { phase: TEST stage: "val" } }
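The Eltwise PROD chain above zeroes the bbox regression at grid cells with no object, so `L1Loss` only penalizes cells where the coverage label is set; background cells contribute nothing to `loss_bbox`. A minimal numeric sketch of that masking (illustrative only, not the Caffe implementation):

```python
# Illustrative sketch of the Eltwise-PROD masking that feeds L1Loss:
# predicted per-cell bbox offsets are multiplied elementwise by the
# coverage mask, so background cells add zero to the loss.

def masked_l1(pred, target, coverage):
    """pred/target: per-cell 4-vectors; coverage: per-cell 0/1 mask."""
    loss = 0.0
    for p, t, c in zip(pred, target, coverage):
        loss += sum(abs(pi * c - ti * c) for pi, ti in zip(p, t))
    return loss

pred     = [[1.0, 2.0, 3.0, 4.0], [9.0, 9.0, 9.0, 9.0]]
target   = [[1.5, 2.0, 2.0, 4.0], [0.0, 0.0, 0.0, 0.0]]
coverage = [1.0, 0.0]   # second cell is background: masked out
print(masked_l1(pred, target, coverage))   # 1.5
```

If no cell ever has coverage 1 (e.g. because the ground-truth boxes were too small to rasterize), this loss is identically zero and the regressor never learns.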
layer { type: 'Python' name: 'cluster' bottom: 'coverage' bottom: 'bboxes' top: 'bbox-list' python_param { module: 'caffe.layers.detectnet.clustering' layer: 'ClusterDetections' param_str : '1248, 352, 8, 0.6, 3, 0.02, 22, 1' } include: { phase: TEST } }
layer { type: 'Python' name: 'cluster_gt' bottom: 'coverage-label' bottom: 'bbox-label' top: 'bbox-list-label' python_param { module: 'caffe.layers.detectnet.clustering' layer: 'ClusterGroundtruth' param_str : '1248, 352, 8, 1' } include: { phase: TEST stage: "val" } } layer { type: 'Python' name: 'score' bottom: 'bbox-list-label' bottom: 'bbox-list' top: 'bbox-list-scored' python_param { module: 'caffe.layers.detectnet.mean_ap' layer: 'ScoreDetections' } include: { phase: TEST stage: "val" } } layer { type: 'Python' name: 'mAP' bottom: 'bbox-list-scored' top: 'mAP' top: 'precision' top: 'recall' python_param { module: 'caffe.layers.detectnet.mean_ap' layer: 'mAP' param_str : '1248, 352, 8' } include: { phase: TEST stage: "val" } }
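The `ClusterDetections` param_str `'1248, 352, 8, 0.6, 3, 0.02, 22, 1'` packs image width, height, stride, coverage threshold, rectangle-grouping threshold and eps, minimum box height, and class count (these field names are inferred from the DIGITS clustering layer, so treat them as an assumption). If the coverage threshold (0.6 here) is higher than any predicted coverage value, no boxes are ever emitted, which is one plausible reason for an empty bbox-list and zero mAP. A rough sketch of the thresholding step, not the actual DIGITS code:

```python
# Hypothetical post-processing sketch: threshold the coverage map and
# lift per-cell bbox predictions to absolute pixel rectangles.
# Constants mirror the param_str '1248, 352, 8, 0.6, ...' but the
# interpretation is an assumption, not the exact ClusterDetections code.

STRIDE = 8
CVG_THRESHOLD = 0.6

def propose_boxes(coverage, bboxes):
    """coverage: {(gx, gy): score}; bboxes: {(gx, gy): (l, t, r, b)}
    offsets relative to the cell centre."""
    out = []
    for cell, score in coverage.items():
        if score < CVG_THRESHOLD:
            continue
        gx, gy = cell
        cx, cy = gx * STRIDE + STRIDE / 2, gy * STRIDE + STRIDE / 2
        l, t, r, b = bboxes[cell]
        out.append((cx - l, cy - t, cx + r, cy + b, score))
    return out

coverage = {(12, 12): 0.9, (40, 20): 0.3}       # second cell is below threshold
bboxes   = {(12, 12): (30, 20, 30, 20), (40, 20): (5, 5, 5, 5)}
print(propose_boxes(coverage, bboxes))
# one box: cell (12, 12) centre is (100, 100) -> (70.0, 80.0, 130.0, 120.0, 0.9)
```

Lowering the coverage threshold in the param_str is a quick way to check whether the network is predicting weak but non-zero coverage.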
Did the custom script produce bounding boxes for your dataset?
Sorry, got very busy yesterday. I'll definitely have a look today!
On Apr 11, 2017 10:57 AM, "sulthanashafi" notifications@github.com wrote:
I am attaching custom code to try on your dataset with the stride value changed. I hope it works for you. Also, do you have any modifications to suggest for my dataset?
On 10-Apr-2017, at 3:45 PM, shreyasramesh notifications@github.com wrote:
Thanks for the file. Will compare with my results and let you know ASAP.
https://gist.github.com/jbarker-nvidia/127947d8a961bfbe2d0d403dd9bed2aa Meanwhile, please check this publicly available prototxt and see if it makes sense for your problem at hand. If it does, resize your images to fit the above prototxt's image format: 1280x1280 instead of 1000x1000.
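When resizing images to match a prototxt's input size (e.g. 1000x1000 to 1280x1280), the KITTI label boxes must be scaled by the same factors, or the generated coverage labels will no longer line up with the objects. A small helper sketch (assumes plain axis-aligned stretching with no padding; the KITTI line layout is the standard 15-field format):

```python
# Sketch: rescale the bbox fields of a KITTI-format label line when the
# image is resized to the network input size. Assumes simple stretching
# with no padding or aspect-ratio preservation.

def scale_kitti_bbox(line, src_wh, dst_wh):
    """Rescale fields 5-8 (left, top, right, bottom) of a KITTI label line."""
    sx = dst_wh[0] / src_wh[0]
    sy = dst_wh[1] / src_wh[1]
    f = line.split()
    l, t, r, b = map(float, f[4:8])
    f[4:8] = [f"{l * sx:.2f}", f"{t * sy:.2f}", f"{r * sx:.2f}", f"{b * sy:.2f}"]
    return " ".join(f)

line = "car 0.0 0 0.0 100.0 200.0 300.0 400.0 0 0 0 0 0 0 0"
print(scale_kitti_bbox(line, (1000, 1000), (1280, 1280)))
# bbox fields become 128.00 256.00 384.00 512.00
```

Forgetting this rescaling is another way to end up with labels that never overlap the coverage grid.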
https://devblogs.nvidia.com/parallelforall/exploring-spacenet-dataset-using-digits/
DetectNet network
Data/Input layers
name: "DetectNet" layer { name: "train_data" type: "Data" top: "data" data_param { backend: LMDB source: "examples/kitti/kitti_train_images.lmdb" batch_size: 10 } include: { phase: TRAIN } } layer { name: "train_label" type: "Data" top: "label" data_param { backend: LMDB source: "examples/kitti/kitti_train_labels.lmdb" batch_size: 10 } include: { phase: TRAIN } } layer { name: "val_data" type: "Data" top: "data" data_param { backend: LMDB source: "examples/kitti/kitti_test_images.lmdb" batch_size: 6 } include: { phase: TEST stage: "val" } } layer { name: "val_label" type: "Data" top: "label" data_param { backend: LMDB source: "examples/kitti/kitti_test_labels.lmdb" batch_size: 6 } include: { phase: TEST stage: "val" } } layer { name: "deploy_data" type: "Input" top: "data" input_param { shape { dim: 1 dim: 3 dim: 384 dim: 1248 } } include: { phase: TEST not_stage: "val" } }
Data transformation layers
layer { name: "train_transform" type: "DetectNetTransformation" bottom: "data" bottom: "label" top: "transformed_data" top: "transformed_label" detectnet_groundtruth_param: { stride: 8 scale_cvg: 0.4 gridbox_type: GRIDBOX_MIN coverage_type: RECTANGULAR min_cvg_len: 20 obj_norm: true image_size_x: 1248 image_size_y: 384 crop_bboxes: true object_class: { src: 1 dst: 0} # obj class 1 -> cvg index 0 } detectnet_augmentation_param: { crop_prob: 1 shift_x: 32 shift_y: 32 flip_prob: 0.5 rotation_prob: 0 max_rotate_degree: 5 scale_prob: 0.4 scale_min: 0.8 scale_max: 1.2 hue_rotation_prob: 0.8 hue_rotation: 30 desaturation_prob: 0.8 desaturation_max: 0.8 } transform_param: { mean_value: 127 } include: { phase: TRAIN } } layer { name: "val_transform" type: "DetectNetTransformation" bottom: "data" bottom: "label" top: "transformed_data" top: "transformed_label" detectnet_groundtruth_param: { stride: 8 scale_cvg: 0.4 gridbox_type: GRIDBOX_MIN coverage_type: RECTANGULAR min_cvg_len: 20 obj_norm: true image_size_x: 1248 image_size_y: 384 crop_bboxes: false object_class: { src: 1 dst: 0} # obj class 1 -> cvg index 0 } transform_param: { mean_value: 127 } include: { phase: TEST stage: "val" } } layer { name: "deploy_transform" type: "Power" bottom: "data" top: "transformed_data" power_param { shift: -127 } include: { phase: TEST not_stage: "val" } }
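The `DetectNetTransformation` layer rasterizes each ground-truth box onto a stride-8 grid: cells falling inside the box receive coverage 1 and carry the box geometry, and boxes smaller than `min_cvg_len` (20 px here) produce no covered cells at all, which is a common cause of all-zero precision/recall/mAP. A rough pure-Python sketch of the grid mapping (not the DIGITS code; it ignores `scale_cvg` shrinking and the `GRIDBOX_MIN` details):

```python
# Hypothetical sketch of DetectNet-style coverage-label rasterization:
# mark every stride-8 grid cell whose centre lies inside a ground-truth
# box, skipping boxes below min_cvg_len. Simplified relative to DIGITS.

STRIDE = 8  # detectnet_groundtruth_param stride

def coverage_grid(boxes, img_w=1248, img_h=384, min_cvg_len=20):
    gw, gh = img_w // STRIDE, img_h // STRIDE
    grid = [[0.0] * gw for _ in range(gh)]
    for (l, t, r, b) in boxes:
        if (r - l) < min_cvg_len or (b - t) < min_cvg_len:
            continue  # too small: contributes no coverage signal at all
        for gy in range(gh):
            for gx in range(gw):
                cx, cy = gx * STRIDE + STRIDE / 2, gy * STRIDE + STRIDE / 2
                if l <= cx <= r and t <= cy <= b:
                    grid[gy][gx] = 1.0
    return grid

grid = coverage_grid([(100, 100, 180, 160)])   # one 80x60 box
print(sum(v for row in grid for v in row))     # 88.0 covered cells
tiny = coverage_grid([(10, 10, 25, 25)])       # 15x15 box < min_cvg_len
print(sum(v for row in tiny for v in row))     # 0.0 -> nothing to learn from
```

If most of your objects are small relative to `min_cvg_len` and the stride, the network sees an almost-empty coverage target, so checking the rasterized labels is a good first debugging step.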
Label conversion layers
layer { name: "slice-label" type: "Slice" bottom: "transformed_label" top: "foreground-label" top: "bbox-label" top: "size-label" top: "obj-label" top: "coverage-label" slice_param { slice_dim: 1 slice_point: 1 slice_point: 5 slice_point: 7 slice_point: 8 } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "coverage-block" type: "Concat" bottom: "foreground-label" bottom: "foreground-label" bottom: "foreground-label" bottom: "foreground-label" top: "coverage-block" concat_param { concat_dim: 1 } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "size-block" type: "Concat" bottom: "size-label" bottom: "size-label" top: "size-block" concat_param { concat_dim: 1 } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "obj-block" type: "Concat" bottom: "obj-label" bottom: "obj-label" bottom: "obj-label" bottom: "obj-label" top: "obj-block" concat_param { concat_dim: 1 } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bb-label-norm" type: "Eltwise" bottom: "bbox-label" bottom: "size-block" top: "bbox-label-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bb-obj-norm" type: "Eltwise" bottom: "bbox-label-norm" bottom: "obj-block" top: "bbox-obj-label-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } }
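The `slice-label` layer splits the 9-channel transformed label along the channel axis at points 1, 5, 7, 8, yielding foreground (1 ch), bbox corners (4 ch), size normalizers (2 ch), object weighting (1 ch), and the coverage target (1 ch); the `Concat` layers then tile the 1- and 2-channel pieces up to 4 channels so they can be multiplied elementwise against the 4-channel bbox tensor. The same bookkeeping on a plain list of channels:

```python
# Sketch of the slice/concat bookkeeping done by "slice-label" and the
# Concat blocks: slice 9 label channels at points 1, 5, 7, 8, then tile
# the 1-/2-channel pieces to 4 channels for elementwise masking.

def slice_channels(label, points=(1, 5, 7, 8)):
    bounds = [0, *points, len(label)]
    return [label[a:b] for a, b in zip(bounds, bounds[1:])]

channels = [f"ch{i}" for i in range(9)]
fg, bbox, size, obj, cvg = slice_channels(channels)
print(len(fg), len(bbox), len(size), len(obj), len(cvg))  # 1 4 2 1 1

coverage_block = fg * 4    # Concat: foreground tiled to 4 channels
size_block = size * 2      # Concat: (w, h) normalizer tiled to 4
obj_block = obj * 4
assert len(coverage_block) == len(size_block) == len(obj_block) == len(bbox)
```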
######################################################################
Start of convolutional network
######################################################################
layer { name: "conv1/7x7_s2" type: "Convolution" bottom: "transformed_data" top: "conv1/7x7_s2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 3 kernel_size: 7 stride: 2 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "conv1/relu_7x7" type: "ReLU" bottom: "conv1/7x7_s2" top: "conv1/7x7_s2" }
layer { name: "pool1/3x3_s2" type: "Pooling" bottom: "conv1/7x7_s2" top: "pool1/3x3_s2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "pool1/norm1" type: "LRN" bottom: "pool1/3x3_s2" top: "pool1/norm1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "conv2/3x3_reduce" type: "Convolution" bottom: "pool1/norm1" top: "conv2/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "conv2/relu_3x3_reduce" type: "ReLU" bottom: "conv2/3x3_reduce" top: "conv2/3x3_reduce" }
layer { name: "conv2/3x3" type: "Convolution" bottom: "conv2/3x3_reduce" top: "conv2/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "conv2/relu_3x3" type: "ReLU" bottom: "conv2/3x3" top: "conv2/3x3" }
layer { name: "conv2/norm2" type: "LRN" bottom: "conv2/3x3" top: "conv2/norm2" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "pool2/3x3_s2" type: "Pooling" bottom: "conv2/norm2" top: "pool2/3x3_s2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "inception_3a/1x1" type: "Convolution" bottom: "pool2/3x3_s2" top: "inception_3a/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3a/relu_1x1" type: "ReLU" bottom: "inception_3a/1x1" top: "inception_3a/1x1" }
layer { name: "inception_3a/3x3_reduce" type: "Convolution" bottom: "pool2/3x3_s2" top: "inception_3a/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3a/relu_3x3_reduce" type: "ReLU" bottom: "inception_3a/3x3_reduce" top: "inception_3a/3x3_reduce" }
layer { name: "inception_3a/3x3" type: "Convolution" bottom: "inception_3a/3x3_reduce" top: "inception_3a/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3a/relu_3x3" type: "ReLU" bottom: "inception_3a/3x3" top: "inception_3a/3x3" }
layer { name: "inception_3a/5x5_reduce" type: "Convolution" bottom: "pool2/3x3_s2" top: "inception_3a/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 16 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3a/relu_5x5_reduce" type: "ReLU" bottom: "inception_3a/5x5_reduce" top: "inception_3a/5x5_reduce" } layer { name: "inception_3a/5x5" type: "Convolution" bottom: "inception_3a/5x5_reduce" top: "inception_3a/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3a/relu_5x5" type: "ReLU" bottom: "inception_3a/5x5" top: "inception_3a/5x5" }
layer { name: "inception_3a/pool" type: "Pooling" bottom: "pool2/3x3_s2" top: "inception_3a/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } }
layer { name: "inception_3a/pool_proj" type: "Convolution" bottom: "inception_3a/pool" top: "inception_3a/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3a/relu_pool_proj" type: "ReLU" bottom: "inception_3a/pool_proj" top: "inception_3a/pool_proj" }
layer { name: "inception_3a/output" type: "Concat" bottom: "inception_3a/1x1" bottom: "inception_3a/3x3" bottom: "inception_3a/5x5" bottom: "inception_3a/pool_proj" top: "inception_3a/output" }
layer { name: "inception_3b/1x1" type: "Convolution" bottom: "inception_3a/output" top: "inception_3b/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3b/relu_1x1" type: "ReLU" bottom: "inception_3b/1x1" top: "inception_3b/1x1" }
layer { name: "inception_3b/3x3_reduce" type: "Convolution" bottom: "inception_3a/output" top: "inception_3b/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_3x3_reduce" type: "ReLU" bottom: "inception_3b/3x3_reduce" top: "inception_3b/3x3_reduce" } layer { name: "inception_3b/3x3" type: "Convolution" bottom: "inception_3b/3x3_reduce" top: "inception_3b/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_3x3" type: "ReLU" bottom: "inception_3b/3x3" top: "inception_3b/3x3" }
layer { name: "inception_3b/5x5_reduce" type: "Convolution" bottom: "inception_3a/output" top: "inception_3b/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_5x5_reduce" type: "ReLU" bottom: "inception_3b/5x5_reduce" top: "inception_3b/5x5_reduce" } layer { name: "inception_3b/5x5" type: "Convolution" bottom: "inception_3b/5x5_reduce" top: "inception_3b/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_5x5" type: "ReLU" bottom: "inception_3b/5x5" top: "inception_3b/5x5" }
layer { name: "inception_3b/pool" type: "Pooling" bottom: "inception_3a/output" top: "inception_3b/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_3b/pool_proj" type: "Convolution" bottom: "inception_3b/pool" top: "inception_3b/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_pool_proj" type: "ReLU" bottom: "inception_3b/pool_proj" top: "inception_3b/pool_proj" } layer { name: "inception_3b/output" type: "Concat" bottom: "inception_3b/1x1" bottom: "inception_3b/3x3" bottom: "inception_3b/5x5" bottom: "inception_3b/pool_proj" top: "inception_3b/output" }
layer { name: "pool3/3x3_s2" type: "Pooling" bottom: "inception_3b/output" top: "pool3/3x3_s2" pooling_param { pool: MAX kernel_size: 1 stride: 1 } }
layer { name: "inception_4a/1x1" type: "Convolution" bottom: "pool3/3x3_s2" top: "inception_4a/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4a/relu_1x1" type: "ReLU" bottom: "inception_4a/1x1" top: "inception_4a/1x1" }
layer { name: "inception_4a/3x3_reduce" type: "Convolution" bottom: "pool3/3x3_s2" top: "inception_4a/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4a/relu_3x3_reduce" type: "ReLU" bottom: "inception_4a/3x3_reduce" top: "inception_4a/3x3_reduce" }
layer { name: "inception_4a/3x3" type: "Convolution" bottom: "inception_4a/3x3_reduce" top: "inception_4a/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 208 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4a/relu_3x3" type: "ReLU" bottom: "inception_4a/3x3" top: "inception_4a/3x3" }
layer { name: "inception_4a/5x5_reduce" type: "Convolution" bottom: "pool3/3x3_s2" top: "inception_4a/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 16 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4a/relu_5x5_reduce" type: "ReLU" bottom: "inception_4a/5x5_reduce" top: "inception_4a/5x5_reduce" } layer { name: "inception_4a/5x5" type: "Convolution" bottom: "inception_4a/5x5_reduce" top: "inception_4a/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 48 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4a/relu_5x5" type: "ReLU" bottom: "inception_4a/5x5" top: "inception_4a/5x5" } layer { name: "inception_4a/pool" type: "Pooling" bottom: "pool3/3x3_s2" top: "inception_4a/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4a/pool_proj" type: "Convolution" bottom: "inception_4a/pool" top: "inception_4a/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4a/relu_pool_proj" type: "ReLU" bottom: "inception_4a/pool_proj" top: "inception_4a/pool_proj" } layer { name: "inception_4a/output" type: "Concat" bottom: "inception_4a/1x1" bottom: "inception_4a/3x3" bottom: "inception_4a/5x5" bottom: "inception_4a/pool_proj" top: "inception_4a/output" }
layer { name: "inception_4b/1x1" type: "Convolution" bottom: "inception_4a/output" top: "inception_4b/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 160 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4b/relu_1x1" type: "ReLU" bottom: "inception_4b/1x1" top: "inception_4b/1x1" } layer { name: "inception_4b/3x3_reduce" type: "Convolution" bottom: "inception_4a/output" top: "inception_4b/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 112 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_3x3_reduce" type: "ReLU" bottom: "inception_4b/3x3_reduce" top: "inception_4b/3x3_reduce" } layer { name: "inception_4b/3x3" type: "Convolution" bottom: "inception_4b/3x3_reduce" top: "inception_4b/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 224 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_3x3" type: "ReLU" bottom: "inception_4b/3x3" top: "inception_4b/3x3" } layer { name: "inception_4b/5x5_reduce" type: "Convolution" bottom: "inception_4a/output" top: "inception_4b/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 24 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_5x5_reduce" type: "ReLU" bottom: "inception_4b/5x5_reduce" top: "inception_4b/5x5_reduce" } layer { name: "inception_4b/5x5" type: "Convolution" bottom: "inception_4b/5x5_reduce" top: "inception_4b/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_5x5" type: "ReLU" bottom: "inception_4b/5x5" top: "inception_4b/5x5" } layer { name: "inception_4b/pool" type: "Pooling" bottom: "inception_4a/output" top: "inception_4b/pool" 
pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4b/pool_proj" type: "Convolution" bottom: "inception_4b/pool" top: "inception_4b/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_pool_proj" type: "ReLU" bottom: "inception_4b/pool_proj" top: "inception_4b/pool_proj" } layer { name: "inception_4b/output" type: "Concat" bottom: "inception_4b/1x1" bottom: "inception_4b/3x3" bottom: "inception_4b/5x5" bottom: "inception_4b/pool_proj" top: "inception_4b/output" }
layer { name: "inception_4c/1x1" type: "Convolution" bottom: "inception_4b/output" top: "inception_4c/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4c/relu_1x1" type: "ReLU" bottom: "inception_4c/1x1" top: "inception_4c/1x1" }
layer { name: "inception_4c/3x3_reduce" type: "Convolution" bottom: "inception_4b/output" top: "inception_4c/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4c/relu_3x3_reduce" type: "ReLU" bottom: "inception_4c/3x3_reduce" top: "inception_4c/3x3_reduce" } layer { name: "inception_4c/3x3" type: "Convolution" bottom: "inception_4c/3x3_reduce" top: "inception_4c/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_3x3" type: "ReLU" bottom: "inception_4c/3x3" top: "inception_4c/3x3" } layer { name: "inception_4c/5x5_reduce" type: "Convolution" bottom: "inception_4b/output" top: "inception_4c/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 24 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_5x5_reduce" type: "ReLU" bottom: "inception_4c/5x5_reduce" top: "inception_4c/5x5_reduce" } layer { name: "inception_4c/5x5" type: "Convolution" bottom: "inception_4c/5x5_reduce" top: "inception_4c/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_5x5" type: "ReLU" bottom: "inception_4c/5x5" top: "inception_4c/5x5" } layer { name: "inception_4c/pool" type: "Pooling" bottom: "inception_4b/output" top: "inception_4c/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4c/pool_proj" type: "Convolution" bottom: "inception_4c/pool" top: "inception_4c/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_pool_proj" type: "ReLU" 
bottom: "inception_4c/pool_proj" top: "inception_4c/pool_proj" } layer { name: "inception_4c/output" type: "Concat" bottom: "inception_4c/1x1" bottom: "inception_4c/3x3" bottom: "inception_4c/5x5" bottom: "inception_4c/pool_proj" top: "inception_4c/output" }
layer { name: "inception_4d/1x1" type: "Convolution" bottom: "inception_4c/output" top: "inception_4d/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 112 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_1x1" type: "ReLU" bottom: "inception_4d/1x1" top: "inception_4d/1x1" } layer { name: "inception_4d/3x3_reduce" type: "Convolution" bottom: "inception_4c/output" top: "inception_4d/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 144 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_3x3_reduce" type: "ReLU" bottom: "inception_4d/3x3_reduce" top: "inception_4d/3x3_reduce" } layer { name: "inception_4d/3x3" type: "Convolution" bottom: "inception_4d/3x3_reduce" top: "inception_4d/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 288 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_3x3" type: "ReLU" bottom: "inception_4d/3x3" top: "inception_4d/3x3" } layer { name: "inception_4d/5x5_reduce" type: "Convolution" bottom: "inception_4c/output" top: "inception_4d/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_5x5_reduce" type: "ReLU" bottom: "inception_4d/5x5_reduce" top: "inception_4d/5x5_reduce" } layer { name: "inception_4d/5x5" type: "Convolution" bottom: "inception_4d/5x5_reduce" top: "inception_4d/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 2 
kernel_size: 5 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_5x5" type: "ReLU" bottom: "inception_4d/5x5" top: "inception_4d/5x5" } layer { name: "inception_4d/pool" type: "Pooling" bottom: "inception_4c/output" top: "inception_4d/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4d/pool_proj" type: "Convolution" bottom: "inception_4d/pool" top: "inception_4d/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_pool_proj" type: "ReLU" bottom: "inception_4d/pool_proj" top: "inception_4d/pool_proj" } layer { name: "inception_4d/output" type: "Concat" bottom: "inception_4d/1x1" bottom: "inception_4d/3x3" bottom: "inception_4d/5x5" bottom: "inception_4d/pool_proj" top: "inception_4d/output" }
layer { name: "inception_4e/1x1" type: "Convolution" bottom: "inception_4d/output" top: "inception_4e/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_1x1" type: "ReLU" bottom: "inception_4e/1x1" top: "inception_4e/1x1" } layer { name: "inception_4e/3x3_reduce" type: "Convolution" bottom: "inception_4d/output" top: "inception_4e/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 160 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_3x3_reduce" type: "ReLU" bottom: "inception_4e/3x3_reduce" top: "inception_4e/3x3_reduce" } layer { name: "inception_4e/3x3" type: "Convolution" bottom: "inception_4e/3x3_reduce" top: "inception_4e/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 320 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_3x3" type: "ReLU" bottom: "inception_4e/3x3" top: "inception_4e/3x3" } layer { name: "inception_4e/5x5_reduce" type: "Convolution" bottom: "inception_4d/output" top: "inception_4e/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_5x5_reduce" type: "ReLU" bottom: "inception_4e/5x5_reduce" top: "inception_4e/5x5_reduce" } layer { name: "inception_4e/5x5" type: "Convolution" bottom: "inception_4e/5x5_reduce" top: "inception_4e/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 2 
kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_5x5" type: "ReLU" bottom: "inception_4e/5x5" top: "inception_4e/5x5" } layer { name: "inception_4e/pool" type: "Pooling" bottom: "inception_4d/output" top: "inception_4e/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4e/pool_proj" type: "Convolution" bottom: "inception_4e/pool" top: "inception_4e/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_pool_proj" type: "ReLU" bottom: "inception_4e/pool_proj" top: "inception_4e/pool_proj" } layer { name: "inception_4e/output" type: "Concat" bottom: "inception_4e/1x1" bottom: "inception_4e/3x3" bottom: "inception_4e/5x5" bottom: "inception_4e/pool_proj" top: "inception_4e/output" }
layer { name: "inception_5a/1x1" type: "Convolution" bottom: "inception_4e/output" top: "inception_5a/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_1x1" type: "ReLU" bottom: "inception_5a/1x1" top: "inception_5a/1x1" }
layer { name: "inception_5a/3x3_reduce" type: "Convolution" bottom: "inception_4e/output" top: "inception_5a/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 160 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_3x3_reduce" type: "ReLU" bottom: "inception_5a/3x3_reduce" top: "inception_5a/3x3_reduce" }
layer { name: "inception_5a/3x3" type: "Convolution" bottom: "inception_5a/3x3_reduce" top: "inception_5a/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 320 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_3x3" type: "ReLU" bottom: "inception_5a/3x3" top: "inception_5a/3x3" } layer { name: "inception_5a/5x5_reduce" type: "Convolution" bottom: "inception_4e/output" top: "inception_5a/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_5x5_reduce" type: "ReLU" bottom: "inception_5a/5x5_reduce" top: "inception_5a/5x5_reduce" } layer { name: "inception_5a/5x5" type: "Convolution" bottom: "inception_5a/5x5_reduce" top: "inception_5a/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_5x5" type: "ReLU" bottom: "inception_5a/5x5" top: "inception_5a/5x5" } layer { name: "inception_5a/pool" type: "Pooling" bottom: "inception_4e/output" top: "inception_5a/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_5a/pool_proj" type: "Convolution" bottom: "inception_5a/pool" top: "inception_5a/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_pool_proj" type: "ReLU" bottom: "inception_5a/pool_proj" top: "inception_5a/pool_proj" } layer { name: "inception_5a/output" type: "Concat" bottom: 
"inception_5a/1x1" bottom: "inception_5a/3x3" bottom: "inception_5a/5x5" bottom: "inception_5a/pool_proj" top: "inception_5a/output" }
layer { name: "inception_5b/1x1" type: "Convolution" bottom: "inception_5a/output" top: "inception_5b/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_1x1" type: "ReLU" bottom: "inception_5b/1x1" top: "inception_5b/1x1" } layer { name: "inception_5b/3x3_reduce" type: "Convolution" bottom: "inception_5a/output" top: "inception_5b/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_3x3_reduce" type: "ReLU" bottom: "inception_5b/3x3_reduce" top: "inception_5b/3x3_reduce" } layer { name: "inception_5b/3x3" type: "Convolution" bottom: "inception_5b/3x3_reduce" top: "inception_5b/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_3x3" type: "ReLU" bottom: "inception_5b/3x3" top: "inception_5b/3x3" } layer { name: "inception_5b/5x5_reduce" type: "Convolution" bottom: "inception_5a/output" top: "inception_5b/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 48 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_5x5_reduce" type: "ReLU" bottom: "inception_5b/5x5_reduce" top: "inception_5b/5x5_reduce" } layer { name: "inception_5b/5x5" type: "Convolution" bottom: "inception_5b/5x5_reduce" top: "inception_5b/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 2 
kernel_size: 5 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_5x5" type: "ReLU" bottom: "inception_5b/5x5" top: "inception_5b/5x5" } layer { name: "inception_5b/pool" type: "Pooling" bottom: "inception_5a/output" top: "inception_5b/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_5b/pool_proj" type: "Convolution" bottom: "inception_5b/pool" top: "inception_5b/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_pool_proj" type: "ReLU" bottom: "inception_5b/pool_proj" top: "inception_5b/pool_proj" } layer { name: "inception_5b/output" type: "Concat" bottom: "inception_5b/1x1" bottom: "inception_5b/3x3" bottom: "inception_5b/5x5" bottom: "inception_5b/pool_proj" top: "inception_5b/output" } layer { name: "pool5/drop_s1" type: "Dropout" bottom: "inception_5b/output" top: "pool5/drop_s1" dropout_param { dropout_ratio: 0.4 } } layer { name: "cvg/classifier" type: "Convolution" bottom: "pool5/drop_s1" top: "cvg/classifier" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 1 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0. } } } layer { name: "coverage/sig" type: "Sigmoid" bottom: "cvg/classifier" top: "coverage" } layer { name: "bbox/regressor" type: "Convolution" bottom: "pool5/drop_s1" top: "bboxes" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 4 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0. } } }
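The two 1x1 convolution heads above ("cvg/classifier" and "bbox/regressor") emit one coverage score and four box coordinates per cell of the downsampled grid. A minimal numpy sketch of the output geometry, assuming the 1248x384 input and stride 8 used elsewhere in this prototxt:

```python
import numpy as np

# Assumed input geometry, matching the deploy_data shape in this prototxt.
image_w, image_h, stride = 1248, 384, 8
grid_w, grid_h = image_w // stride, image_h // stride

# "coverage": one sigmoid-activated objectness score per grid cell.
coverage = np.zeros((1, 1, grid_h, grid_w))
# "bboxes": four box coordinates per grid cell.
bboxes = np.zeros((1, 4, grid_h, grid_w))
```

If either of these maps stays near zero during training, the clustering stage downstream has nothing to group, which matches the "no bounding boxes at all" symptom in this thread.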
######################################################################
# End of convolutional network
######################################################################

# Convert bboxes
layer { name: "bbox_mask" type: "Eltwise" bottom: "bboxes" bottom: "coverage-block" top: "bboxes-masked" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bbox-norm" type: "Eltwise" bottom: "bboxes-masked" bottom: "size-block" top: "bboxes-masked-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bbox-obj-norm" type: "Eltwise" bottom: "bboxes-masked-norm" bottom: "obj-block" top: "bboxes-obj-masked-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } }
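Each of the three Eltwise layers above is an elementwise product: predicted boxes are zeroed wherever the ground-truth coverage says there is no object, then scaled by the size and objectness normalizers. A minimal numpy sketch (the tensor shapes are illustrative assumptions, not taken from the layer definitions):

```python
import numpy as np

grid_h, grid_w = 48, 156  # assumed stride-8 grid for a 1248x384 input
bboxes = np.ones((4, grid_h, grid_w))
coverage_block = np.zeros((4, grid_h, grid_w))  # foreground mask tiled to 4 channels
coverage_block[:, 10:20, 30:40] = 1.0           # pretend one object region
size_block = np.full((4, grid_h, grid_w), 0.5)  # per-box size normalizers
obj_block = np.ones((4, grid_h, grid_w))        # per-box objectness normalizers

# Eltwise { operation: PROD } is just elementwise multiplication:
bboxes_masked = bboxes * coverage_block
bboxes_obj_masked_norm = bboxes_masked * size_block * obj_block
```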
# Loss layers
layer { name: "bbox_loss" type: "L1Loss" bottom: "bboxes-obj-masked-norm" bottom: "bbox-obj-label-norm" top: "loss_bbox" loss_weight: 2 include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "coverage_loss" type: "EuclideanLoss" bottom: "coverage" bottom: "coverage-label" top: "loss_coverage" include { phase: TRAIN } include { phase: TEST stage: "val" } }
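The bbox loss is an L1 distance on the masked, normalized box tensors (weighted 2x via loss_weight) and the coverage loss is Euclidean. A small sketch of the two criteria, assuming Caffe's usual 1/N and 1/(2N) batch normalization (verify against your Caffe fork's L1Loss implementation):

```python
import numpy as np

def l1_loss(pred, label):
    # L1Loss: summed absolute difference, normalized by batch size (assumption).
    return np.abs(pred - label).sum() / pred.shape[0]

def euclidean_loss(pred, label):
    # EuclideanLoss: summed squared difference over 2 * batch size.
    return ((pred - label) ** 2).sum() / (2 * pred.shape[0])

pred = np.zeros((2, 2))
label = np.ones((2, 2))
total = 2 * l1_loss(pred, label) + euclidean_loss(pred, label)  # loss_weight: 2
```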
# Cluster bboxes
layer { type: 'Python' name: 'cluster' bottom: 'coverage' bottom: 'bboxes' top: 'bbox-list' python_param { module: 'caffe.layers.detectnet.clustering' layer: 'ClusterDetections' param_str : '1248, 352, 8, 0.6, 3, 0.02, 22, 1' } include: { phase: TEST } }
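On the question asked elsewhere in this thread about what the cluster layer's parameters mean: from my reading of DIGITS' `caffe.layers.detectnet.clustering` source, the eight comma-separated values in param_str map, in order, to the fields below (the field names are my interpretation of the DIGITS source, so verify against your version). Note also that the first two values here are 1248 and 352, while the transformation layers in this prototxt use image_size_x: 1248 / image_size_y: 384; that mismatch is worth double-checking.

```python
# Hedged reading of the cluster layer's param_str; field order taken from my
# understanding of DIGITS' clustering.py (verify against your local copy).
FIELDS = [
    "image_size_x", "image_size_y", "stride",
    "gridbox_cvg_threshold",  # min coverage for a grid cell to propose a box
    "gridbox_rect_thresh",    # min neighbor count in OpenCV groupRectangles
    "gridbox_rect_eps",       # relative tolerance when merging rectangles
    "min_height",             # discard clustered boxes shorter than this (px)
    "num_classes",
]

def parse_param_str(param_str):
    return dict(zip(FIELDS, (float(v) for v in param_str.split(","))))

params = parse_param_str('1248, 352, 8, 0.6, 3, 0.02, 22, 1')
```

Lowering gridbox_cvg_threshold or min_height is a common first experiment when the network trains but no boxes survive clustering.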
# Calculate mean average precision
layer { type: 'Python' name: 'cluster_gt' bottom: 'coverage-label' bottom: 'bbox-label' top: 'bbox-list-label' python_param { module: 'caffe.layers.detectnet.clustering' layer: 'ClusterGroundtruth' param_str : '1248, 352, 8, 1' } include: { phase: TEST stage: "val" } } layer { type: 'Python' name: 'score' bottom: 'bbox-list-label' bottom: 'bbox-list' top: 'bbox-list-scored' python_param { module: 'caffe.layers.detectnet.mean_ap' layer: 'ScoreDetections' } include: { phase: TEST stage: "val" } } layer { type: 'Python' name: 'mAP' bottom: 'bbox-list-scored' top: 'mAP' top: 'precision' top: 'recall' python_param { module: 'caffe.layers.detectnet.mean_ap' layer: 'mAP' param_str : '1248, 352, 8' } include: { phase: TEST stage: "val" } }
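ScoreDetections matches each clustered detection against the clustered ground truth before mAP, precision, and recall are computed; the matching criterion boils down to intersection-over-union. A self-contained helper showing the idea (a sketch, not the exact DIGITS implementation):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0
```

With no detections surviving the clustering stage, every ground-truth box is a miss and mAP stays pinned at zero, which is why these metrics alone cannot tell you whether the coverage map or the clustering thresholds are at fault.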
No. It hasn't produced bounding boxes for me yet; it is still running. I will let you know if it gives any better output, but you may also try it, since ideally it should work. Also, do you know what the parameters other than stride and dimension in the cluster threshold section mean?
On 11-Apr-2017, at 12:46 PM, shreyasramesh notifications@github.com wrote:
Did the custom script produce bounding boxes for your dataset?
Sorry, got very busy yesterday. I'll definitely have a look today!
On Apr 11, 2017 10:57 AM, "sulthanashafi" notifications@github.com wrote:
I am attaching custom code to try on your dataset, changing the stride value. I hope it works for you. Also, do you have any modifications to suggest for my dataset?
On 10-Apr-2017, at 3:45 PM, shreyasramesh notifications@github.com wrote:
Thanks for the file. Will compare with my results and let you know ASAP.
https://gist.github.com/jbarker-nvidia/127947d8a961bfbe2d0d403dd9bed2aa Meanwhile, please check this publicly available prototxt and see if it makes sense for your problem at hand. If it does, resize your images to fit that prototxt's image format: 1280x1280 instead of 1000x1000.
https://devblogs.nvidia.com/parallelforall/exploring-spacenet-dataset-using-digits/
# DetectNet network
# Data/Input layers
name: "DetectNet" layer { name: "train_data" type: "Data" top: "data" data_param { backend: LMDB source: "examples/kitti/kitti_train_images.lmdb" batch_size: 10 } include: { phase: TRAIN } } layer { name: "train_label" type: "Data" top: "label" data_param { backend: LMDB source: "examples/kitti/kitti_train_labels.lmdb" batch_size: 10 } include: { phase: TRAIN } } layer { name: "val_data" type: "Data" top: "data" data_param { backend: LMDB source: "examples/kitti/kitti_test_images.lmdb" batch_size: 6 } include: { phase: TEST stage: "val" } } layer { name: "val_label" type: "Data" top: "label" data_param { backend: LMDB source: "examples/kitti/kitti_test_labels.lmdb" batch_size: 6 } include: { phase: TEST stage: "val" } } layer { name: "deploy_data" type: "Input" top: "data" input_param { shape { dim: 1 dim: 3 dim: 384 dim: 1248 } } include: { phase: TEST not_stage: "val" } }
# Data transformation layers
layer { name: "train_transform" type: "DetectNetTransformation" bottom: "data" bottom: "label" top: "transformed_data" top: "transformed_label" detectnet_groundtruth_param: { stride: 8 scale_cvg: 0.4 gridbox_type: GRIDBOX_MIN coverage_type: RECTANGULAR min_cvg_len: 20 obj_norm: true image_size_x: 1248 image_size_y: 384 crop_bboxes: true object_class: { src: 1 dst: 0} # obj class 1 -> cvg index 0 } detectnet_augmentation_param: { crop_prob: 1 shift_x: 32 shift_y: 32 flip_prob: 0.5 rotation_prob: 0 max_rotate_degree: 5 scale_prob: 0.4 scale_min: 0.8 scale_max: 1.2 hue_rotation_prob: 0.8 hue_rotation: 30 desaturation_prob: 0.8 desaturation_max: 0.8 } transform_param: { mean_value: 127 } include: { phase: TRAIN } } layer { name: "val_transform" type: "DetectNetTransformation" bottom: "data" bottom: "label" top: "transformed_data" top: "transformed_label" detectnet_groundtruth_param: { stride: 8 scale_cvg: 0.4 gridbox_type: GRIDBOX_MIN coverage_type: RECTANGULAR min_cvg_len: 20 obj_norm: true image_size_x: 1248 image_size_y: 384 crop_bboxes: false object_class: { src: 1 dst: 0} # obj class 1 -> cvg index 0 } transform_param: { mean_value: 127 } include: { phase: TEST stage: "val" } } layer { name: "deploy_transform" type: "Power" bottom: "data" top: "transformed_data" power_param { shift: -127 } include: { phase: TEST not_stage: "val" } }
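One detail in the groundtruth params above that often explains zero mAP/precision/recall: with stride: 8 and min_cvg_len: 20, ground-truth boxes are rasterized onto a coarse grid, and boxes that are very small on either side may contribute no coverage at all, so the network never sees a positive target. A hedged numpy sketch of the idea (my approximation of the rasterization, not the exact DetectNetTransformation code):

```python
import numpy as np

def coverage_map(boxes, image_w=1248, image_h=384, stride=8, min_cvg_len=20):
    """Approximate coverage rasterization: mark grid cells spanned by each
    (x1, y1, x2, y2) box, skipping boxes smaller than min_cvg_len on either
    side. This is an assumption about the layer's behavior, for intuition."""
    grid = np.zeros((image_h // stride, image_w // stride))
    for x1, y1, x2, y2 in boxes:
        if (x2 - x1) < min_cvg_len or (y2 - y1) < min_cvg_len:
            continue  # too small: contributes no coverage target
        grid[int(y1) // stride:int(y2) // stride + 1,
             int(x1) // stride:int(x2) // stride + 1] = 1.0
    return grid
```

If your objects (blood vessels, people in aerial imagery) are only a few pixels across, it is worth checking how many of your labels survive this size filter before blaming the network.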
# Label conversion layers
layer { name: "slice-label" type: "Slice" bottom: "transformed_label" top: "foreground-label" top: "bbox-label" top: "size-label" top: "obj-label" top: "coverage-label" slice_param { slice_dim: 1 slice_point: 1 slice_point: 5 slice_point: 7 slice_point: 8 } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "coverage-block" type: "Concat" bottom: "foreground-label" bottom: "foreground-label" bottom: "foreground-label" bottom: "foreground-label" top: "coverage-block" concat_param { concat_dim: 1 } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "size-block" type: "Concat" bottom: "size-label" bottom: "size-label" top: "size-block" concat_param { concat_dim: 1 } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "obj-block" type: "Concat" bottom: "obj-label" bottom: "obj-label" bottom: "obj-label" bottom: "obj-label" top: "obj-block" concat_param { concat_dim: 1 } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bb-label-norm" type: "Eltwise" bottom: "bbox-label" bottom: "size-block" top: "bbox-label-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bb-obj-norm" type: "Eltwise" bottom: "bbox-label-norm" bottom: "obj-block" top: "bbox-obj-label-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } }
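The slice-label layer cuts the transformed label tensor along the channel axis at points 1, 5, 7, and 8, giving 1 foreground channel, 4 bbox channels, 2 size channels, 1 objectness channel, and the remaining coverage channel(s). The same split in numpy (the total of 9 channels is an assumption for a single-class setup):

```python
import numpy as np

label = np.zeros((9, 48, 156))  # assumed: 9 channels on a stride-8 grid

# Same split points as slice_param { slice_point: 1 ... slice_point: 8 }:
foreground, bbox, size, obj, coverage = np.split(label, [1, 5, 7, 8], axis=0)
```

The Concat layers that follow simply tile foreground, size, and obj back up to 4 channels so they can be multiplied elementwise against the 4-channel bbox tensors.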
######################################################################
# Start of convolutional network
######################################################################
layer { name: "conv1/7x7_s2" type: "Convolution" bottom: "transformed_data" top: "conv1/7x7_s2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 3 kernel_size: 7 stride: 2 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "conv1/relu_7x7" type: "ReLU" bottom: "conv1/7x7_s2" top: "conv1/7x7_s2" }
layer { name: "pool1/3x3_s2" type: "Pooling" bottom: "conv1/7x7_s2" top: "pool1/3x3_s2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "pool1/norm1" type: "LRN" bottom: "pool1/3x3_s2" top: "pool1/norm1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "conv2/3x3_reduce" type: "Convolution" bottom: "pool1/norm1" top: "conv2/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "conv2/relu_3x3_reduce" type: "ReLU" bottom: "conv2/3x3_reduce" top: "conv2/3x3_reduce" }
layer { name: "conv2/3x3" type: "Convolution" bottom: "conv2/3x3_reduce" top: "conv2/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "conv2/relu_3x3" type: "ReLU" bottom: "conv2/3x3" top: "conv2/3x3" }
layer { name: "conv2/norm2" type: "LRN" bottom: "conv2/3x3" top: "conv2/norm2" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "pool2/3x3_s2" type: "Pooling" bottom: "conv2/norm2" top: "pool2/3x3_s2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
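After pool2 the feature map is downsampled 8x relative to the input (stride 2 in each of conv1, pool1, and pool2). Since pool3/3x3_s2 in this prototxt is configured with kernel 1 / stride 1 and there is no pool4 layer, that stride of 8 is carried all the way to the coverage and bbox heads, which is why the transformation and clustering layers both use stride: 8. The arithmetic:

```python
# Cumulative spatial stride, from the layer definitions in this prototxt:
layer_strides = {
    "conv1/7x7_s2": 2,
    "pool1/3x3_s2": 2,
    "pool2/3x3_s2": 2,
    "pool3/3x3_s2": 1,  # kernel 1 / stride 1: effectively a no-op here
}
net_stride = 1
for s in layer_strides.values():
    net_stride *= s
```

If you change any of these pooling strides, the stride values in detectnet_groundtruth_param and in the cluster layer's param_str must be updated to match, or the predicted grid and the label grid will disagree.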
layer { name: "inception_3a/1x1" type: "Convolution" bottom: "pool2/3x3_s2" top: "inception_3a/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3a/relu_1x1" type: "ReLU" bottom: "inception_3a/1x1" top: "inception_3a/1x1" }
layer { name: "inception_3a/3x3_reduce" type: "Convolution" bottom: "pool2/3x3_s2" top: "inception_3a/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3a/relu_3x3_reduce" type: "ReLU" bottom: "inception_3a/3x3_reduce" top: "inception_3a/3x3_reduce" }
layer { name: "inception_3a/3x3" type: "Convolution" bottom: "inception_3a/3x3_reduce" top: "inception_3a/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3a/relu_3x3" type: "ReLU" bottom: "inception_3a/3x3" top: "inception_3a/3x3" }
layer { name: "inception_3a/5x5_reduce" type: "Convolution" bottom: "pool2/3x3_s2" top: "inception_3a/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 16 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3a/relu_5x5_reduce" type: "ReLU" bottom: "inception_3a/5x5_reduce" top: "inception_3a/5x5_reduce" } layer { name: "inception_3a/5x5" type: "Convolution" bottom: "inception_3a/5x5_reduce" top: "inception_3a/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3a/relu_5x5" type: "ReLU" bottom: "inception_3a/5x5" top: "inception_3a/5x5" }
layer { name: "inception_3a/pool" type: "Pooling" bottom: "pool2/3x3_s2" top: "inception_3a/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } }
layer { name: "inception_3a/pool_proj" type: "Convolution" bottom: "inception_3a/pool" top: "inception_3a/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3a/relu_pool_proj" type: "ReLU" bottom: "inception_3a/pool_proj" top: "inception_3a/pool_proj" }
layer { name: "inception_3a/output" type: "Concat" bottom: "inception_3a/1x1" bottom: "inception_3a/3x3" bottom: "inception_3a/5x5" bottom: "inception_3a/pool_proj" top: "inception_3a/output" }
layer { name: "inception_3b/1x1" type: "Convolution" bottom: "inception_3a/output" top: "inception_3b/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3b/relu_1x1" type: "ReLU" bottom: "inception_3b/1x1" top: "inception_3b/1x1" }
layer { name: "inception_3b/3x3_reduce" type: "Convolution" bottom: "inception_3a/output" top: "inception_3b/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_3x3_reduce" type: "ReLU" bottom: "inception_3b/3x3_reduce" top: "inception_3b/3x3_reduce" } layer { name: "inception_3b/3x3" type: "Convolution" bottom: "inception_3b/3x3_reduce" top: "inception_3b/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_3x3" type: "ReLU" bottom: "inception_3b/3x3" top: "inception_3b/3x3" }
layer { name: "inception_3b/5x5_reduce" type: "Convolution" bottom: "inception_3a/output" top: "inception_3b/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_5x5_reduce" type: "ReLU" bottom: "inception_3b/5x5_reduce" top: "inception_3b/5x5_reduce" } layer { name: "inception_3b/5x5" type: "Convolution" bottom: "inception_3b/5x5_reduce" top: "inception_3b/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_5x5" type: "ReLU" bottom: "inception_3b/5x5" top: "inception_3b/5x5" }
layer { name: "inception_3b/pool" type: "Pooling" bottom: "inception_3a/output" top: "inception_3b/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_3b/pool_proj" type: "Convolution" bottom: "inception_3b/pool" top: "inception_3b/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_pool_proj" type: "ReLU" bottom: "inception_3b/pool_proj" top: "inception_3b/pool_proj" } layer { name: "inception_3b/output" type: "Concat" bottom: "inception_3b/1x1" bottom: "inception_3b/3x3" bottom: "inception_3b/5x5" bottom: "inception_3b/pool_proj" top: "inception_3b/output" }
layer { name: "pool3/3x3_s2" type: "Pooling" bottom: "inception_3b/output" top: "pool3/3x3_s2" pooling_param { pool: MAX kernel_size: 1 stride: 1 } }
layer { name: "inception_4a/1x1" type: "Convolution" bottom: "pool3/3x3_s2" top: "inception_4a/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4a/relu_1x1" type: "ReLU" bottom: "inception_4a/1x1" top: "inception_4a/1x1" }
layer { name: "inception_4a/3x3_reduce" type: "Convolution" bottom: "pool3/3x3_s2" top: "inception_4a/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4a/relu_3x3_reduce" type: "ReLU" bottom: "inception_4a/3x3_reduce" top: "inception_4a/3x3_reduce" }
layer { name: "inception_4a/3x3" type: "Convolution" bottom: "inception_4a/3x3_reduce" top: "inception_4a/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 208 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4a/relu_3x3" type: "ReLU" bottom: "inception_4a/3x3" top: "inception_4a/3x3" }
layer { name: "inception_4a/5x5_reduce" type: "Convolution" bottom: "pool3/3x3_s2" top: "inception_4a/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 16 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4a/relu_5x5_reduce" type: "ReLU" bottom: "inception_4a/5x5_reduce" top: "inception_4a/5x5_reduce" } layer { name: "inception_4a/5x5" type: "Convolution" bottom: "inception_4a/5x5_reduce" top: "inception_4a/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 48 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4a/relu_5x5" type: "ReLU" bottom: "inception_4a/5x5" top: "inception_4a/5x5" } layer { name: "inception_4a/pool" type: "Pooling" bottom: "pool3/3x3_s2" top: "inception_4a/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4a/pool_proj" type: "Convolution" bottom: "inception_4a/pool" top: "inception_4a/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4a/relu_pool_proj" type: "ReLU" bottom: "inception_4a/pool_proj" top: "inception_4a/pool_proj" } layer { name: "inception_4a/output" type: "Concat" bottom: "inception_4a/1x1" bottom: "inception_4a/3x3" bottom: "inception_4a/5x5" bottom: "inception_4a/pool_proj" top: "inception_4a/output" }
layer { name: "inception_4b/1x1" type: "Convolution" bottom: "inception_4a/output" top: "inception_4b/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 160 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4b/relu_1x1" type: "ReLU" bottom: "inception_4b/1x1" top: "inception_4b/1x1" } layer { name: "inception_4b/3x3_reduce" type: "Convolution" bottom: "inception_4a/output" top: "inception_4b/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 112 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_3x3_reduce" type: "ReLU" bottom: "inception_4b/3x3_reduce" top: "inception_4b/3x3_reduce" } layer { name: "inception_4b/3x3" type: "Convolution" bottom: "inception_4b/3x3_reduce" top: "inception_4b/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 224 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_3x3" type: "ReLU" bottom: "inception_4b/3x3" top: "inception_4b/3x3" } layer { name: "inception_4b/5x5_reduce" type: "Convolution" bottom: "inception_4a/output" top: "inception_4b/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 24 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_5x5_reduce" type: "ReLU" bottom: "inception_4b/5x5_reduce" top: "inception_4b/5x5_reduce" } layer { name: "inception_4b/5x5" type: "Convolution" bottom: "inception_4b/5x5_reduce" top: "inception_4b/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_5x5" type: "ReLU" bottom: "inception_4b/5x5" top: "inception_4b/5x5" } layer { name: "inception_4b/pool" type: "Pooling" bottom: "inception_4a/output" top: "inception_4b/pool" 
pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4b/pool_proj" type: "Convolution" bottom: "inception_4b/pool" top: "inception_4b/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_pool_proj" type: "ReLU" bottom: "inception_4b/pool_proj" top: "inception_4b/pool_proj" } layer { name: "inception_4b/output" type: "Concat" bottom: "inception_4b/1x1" bottom: "inception_4b/3x3" bottom: "inception_4b/5x5" bottom: "inception_4b/pool_proj" top: "inception_4b/output" }
layer { name: "inception_4c/1x1" type: "Convolution" bottom: "inception_4b/output" top: "inception_4c/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4c/relu_1x1" type: "ReLU" bottom: "inception_4c/1x1" top: "inception_4c/1x1" }
layer { name: "inception_4c/3x3_reduce" type: "Convolution" bottom: "inception_4b/output" top: "inception_4c/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4c/relu_3x3_reduce" type: "ReLU" bottom: "inception_4c/3x3_reduce" top: "inception_4c/3x3_reduce" } layer { name: "inception_4c/3x3" type: "Convolution" bottom: "inception_4c/3x3_reduce" top: "inception_4c/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_3x3" type: "ReLU" bottom: "inception_4c/3x3" top: "inception_4c/3x3" } layer { name: "inception_4c/5x5_reduce" type: "Convolution" bottom: "inception_4b/output" top: "inception_4c/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 24 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_5x5_reduce" type: "ReLU" bottom: "inception_4c/5x5_reduce" top: "inception_4c/5x5_reduce" } layer { name: "inception_4c/5x5" type: "Convolution" bottom: "inception_4c/5x5_reduce" top: "inception_4c/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_5x5" type: "ReLU" bottom: "inception_4c/5x5" top: "inception_4c/5x5" } layer { name: "inception_4c/pool" type: "Pooling" bottom: "inception_4b/output" top: "inception_4c/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4c/pool_proj" type: "Convolution" bottom: "inception_4c/pool" top: "inception_4c/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_pool_proj" type: "ReLU" 
bottom: "inception_4c/pool_proj" top: "inception_4c/pool_proj" } layer { name: "inception_4c/output" type: "Concat" bottom: "inception_4c/1x1" bottom: "inception_4c/3x3" bottom: "inception_4c/5x5" bottom: "inception_4c/pool_proj" top: "inception_4c/output" }
layer { name: "inception_4d/1x1" type: "Convolution" bottom: "inception_4c/output" top: "inception_4d/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 112 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_1x1" type: "ReLU" bottom: "inception_4d/1x1" top: "inception_4d/1x1" } layer { name: "inception_4d/3x3_reduce" type: "Convolution" bottom: "inception_4c/output" top: "inception_4d/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 144 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_3x3_reduce" type: "ReLU" bottom: "inception_4d/3x3_reduce" top: "inception_4d/3x3_reduce" } layer { name: "inception_4d/3x3" type: "Convolution" bottom: "inception_4d/3x3_reduce" top: "inception_4d/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 288 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_3x3" type: "ReLU" bottom: "inception_4d/3x3" top: "inception_4d/3x3" } layer { name: "inception_4d/5x5_reduce" type: "Convolution" bottom: "inception_4c/output" top: "inception_4d/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_5x5_reduce" type: "ReLU" bottom: "inception_4d/5x5_reduce" top: "inception_4d/5x5_reduce" } layer { name: "inception_4d/5x5" type: "Convolution" bottom: "inception_4d/5x5_reduce" top: "inception_4d/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 2 
kernel_size: 5 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_5x5" type: "ReLU" bottom: "inception_4d/5x5" top: "inception_4d/5x5" } layer { name: "inception_4d/pool" type: "Pooling" bottom: "inception_4c/output" top: "inception_4d/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4d/pool_proj" type: "Convolution" bottom: "inception_4d/pool" top: "inception_4d/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_pool_proj" type: "ReLU" bottom: "inception_4d/pool_proj" top: "inception_4d/pool_proj" } layer { name: "inception_4d/output" type: "Concat" bottom: "inception_4d/1x1" bottom: "inception_4d/3x3" bottom: "inception_4d/5x5" bottom: "inception_4d/pool_proj" top: "inception_4d/output" }
layer { name: "inception_4e/1x1" type: "Convolution" bottom: "inception_4d/output" top: "inception_4e/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_1x1" type: "ReLU" bottom: "inception_4e/1x1" top: "inception_4e/1x1" } layer { name: "inception_4e/3x3_reduce" type: "Convolution" bottom: "inception_4d/output" top: "inception_4e/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 160 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_3x3_reduce" type: "ReLU" bottom: "inception_4e/3x3_reduce" top: "inception_4e/3x3_reduce" } layer { name: "inception_4e/3x3" type: "Convolution" bottom: "inception_4e/3x3_reduce" top: "inception_4e/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 320 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_3x3" type: "ReLU" bottom: "inception_4e/3x3" top: "inception_4e/3x3" } layer { name: "inception_4e/5x5_reduce" type: "Convolution" bottom: "inception_4d/output" top: "inception_4e/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_5x5_reduce" type: "ReLU" bottom: "inception_4e/5x5_reduce" top: "inception_4e/5x5_reduce" } layer { name: "inception_4e/5x5" type: "Convolution" bottom: "inception_4e/5x5_reduce" top: "inception_4e/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 2 
kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_5x5" type: "ReLU" bottom: "inception_4e/5x5" top: "inception_4e/5x5" } layer { name: "inception_4e/pool" type: "Pooling" bottom: "inception_4d/output" top: "inception_4e/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4e/pool_proj" type: "Convolution" bottom: "inception_4e/pool" top: "inception_4e/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_pool_proj" type: "ReLU" bottom: "inception_4e/pool_proj" top: "inception_4e/pool_proj" } layer { name: "inception_4e/output" type: "Concat" bottom: "inception_4e/1x1" bottom: "inception_4e/3x3" bottom: "inception_4e/5x5" bottom: "inception_4e/pool_proj" top: "inception_4e/output" }
layer { name: "inception_5a/1x1" type: "Convolution" bottom: "inception_4e/output" top: "inception_5a/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_1x1" type: "ReLU" bottom: "inception_5a/1x1" top: "inception_5a/1x1" }
layer { name: "inception_5a/3x3_reduce" type: "Convolution" bottom: "inception_4e/output" top: "inception_5a/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 160 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_3x3_reduce" type: "ReLU" bottom: "inception_5a/3x3_reduce" top: "inception_5a/3x3_reduce" }
layer { name: "inception_5a/3x3" type: "Convolution" bottom: "inception_5a/3x3_reduce" top: "inception_5a/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 320 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_3x3" type: "ReLU" bottom: "inception_5a/3x3" top: "inception_5a/3x3" } layer { name: "inception_5a/5x5_reduce" type: "Convolution" bottom: "inception_4e/output" top: "inception_5a/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_5x5_reduce" type: "ReLU" bottom: "inception_5a/5x5_reduce" top: "inception_5a/5x5_reduce" } layer { name: "inception_5a/5x5" type: "Convolution" bottom: "inception_5a/5x5_reduce" top: "inception_5a/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_5x5" type: "ReLU" bottom: "inception_5a/5x5" top: "inception_5a/5x5" } layer { name: "inception_5a/pool" type: "Pooling" bottom: "inception_4e/output" top: "inception_5a/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_5a/pool_proj" type: "Convolution" bottom: "inception_5a/pool" top: "inception_5a/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_pool_proj" type: "ReLU" bottom: "inception_5a/pool_proj" top: "inception_5a/pool_proj" } layer { name: "inception_5a/output" type: "Concat" bottom: 
"inception_5a/1x1" bottom: "inception_5a/3x3" bottom: "inception_5a/5x5" bottom: "inception_5a/pool_proj" top: "inception_5a/output" }
layer { name: "inception_5b/1x1" type: "Convolution" bottom: "inception_5a/output" top: "inception_5b/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_1x1" type: "ReLU" bottom: "inception_5b/1x1" top: "inception_5b/1x1" } layer { name: "inception_5b/3x3_reduce" type: "Convolution" bottom: "inception_5a/output" top: "inception_5b/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 1 decay_mult: 0 } convolution_param { num_output: 192 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_3x3_reduce" type: "ReLU" bottom: "inception_5b/3x3_reduce" top: "inception_5b/3x3_reduce" } layer { name: "inception_5b/3x3" type: "Convolution" bottom: "inception_5b/3x3_reduce" top: "inception_5b/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_3x3" type: "ReLU" bottom: "inception_5b/3x3" top: "inception_5b/3x3" } layer { name: "inception_5b/5x5_reduce" type: "Convolution" bottom: "inception_5a/output" top: "inception_5b/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 48 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_5x5_reduce" type: "ReLU" bottom: "inception_5b/5x5_reduce" top: "inception_5b/5x5_reduce" } layer { name: "inception_5b/5x5" type: "Convolution" bottom: "inception_5b/5x5_reduce" top: "inception_5b/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 2 
kernel_size: 5 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_5x5" type: "ReLU" bottom: "inception_5b/5x5" top: "inception_5b/5x5" } layer { name: "inception_5b/pool" type: "Pooling" bottom: "inception_5a/output" top: "inception_5b/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_5b/pool_proj" type: "Convolution" bottom: "inception_5b/pool" top: "inception_5b/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_pool_proj" type: "ReLU" bottom: "inception_5b/pool_proj" top: "inception_5b/pool_proj" } layer { name: "inception_5b/output" type: "Concat" bottom: "inception_5b/1x1" bottom: "inception_5b/3x3" bottom: "inception_5b/5x5" bottom: "inception_5b/pool_proj" top: "inception_5b/output" } layer { name: "pool5/drop_s1" type: "Dropout" bottom: "inception_5b/output" top: "pool5/drop_s1" dropout_param { dropout_ratio: 0.4 } } layer { name: "cvg/classifier" type: "Convolution" bottom: "pool5/drop_s1" top: "cvg/classifier" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 1 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0. } } } layer { name: "coverage/sig" type: "Sigmoid" bottom: "cvg/classifier" top: "coverage" } layer { name: "bbox/regressor" type: "Convolution" bottom: "pool5/drop_s1" top: "bboxes" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 4 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0. } } }
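At the end of the listing above, the two 1x1 convolution heads share the dropout output: cvg/classifier plus the sigmoid produces one coverage value per grid cell, and bbox/regressor produces four box coordinates per cell. A rough numpy sketch of just the shapes involved (channel and grid sizes here are illustrative, not taken from the network):

```python
import numpy as np

def detectnet_heads(features, w_cvg, w_bbox):
    # Both heads are 1x1 convolutions, i.e. per-cell linear maps over channels.
    c, h, w = features.shape
    flat = features.reshape(c, -1)                      # (C, H*W)
    coverage = 1.0 / (1.0 + np.exp(-(w_cvg @ flat)))    # sigmoid -> (1, H*W)
    bboxes = w_bbox @ flat                              # (4, H*W)
    return coverage.reshape(1, h, w), bboxes.reshape(4, h, w)

feat = np.random.randn(1024, 24, 78)                    # illustrative sizes
cvg, bb = detectnet_heads(feat,
                          0.01 * np.random.randn(1, 1024),
                          0.01 * np.random.randn(4, 1024))
print(cvg.shape, bb.shape)
```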
######################################################################
# End of convolutional network
######################################################################
# Convert bboxes
layer { name: "bbox_mask" type: "Eltwise" bottom: "bboxes" bottom: "coverage-block" top: "bboxes-masked" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bbox-norm" type: "Eltwise" bottom: "bboxes-masked" bottom: "size-block" top: "bboxes-masked-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bbox-obj-norm" type: "Eltwise" bottom: "bboxes-masked-norm" bottom: "obj-block" top: "bboxes-obj-masked-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } }
# Loss layers
layer { name: "bbox_loss" type: "L1Loss" bottom: "bboxes-obj-masked-norm" bottom: "bbox-obj-label-norm" top: "loss_bbox" loss_weight: 2 include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "coverage_loss" type: "EuclideanLoss" bottom: "coverage" bottom: "coverage-label" top: "loss_coverage" include { phase: TRAIN } include { phase: TEST stage: "val" } }
# Cluster bboxes
layer { type: 'Python' name: 'cluster' bottom: 'coverage' bottom: 'bboxes' top: 'bbox-list' python_param { module: 'caffe.layers.detectnet.clustering' layer: 'ClusterDetections' param_str : '1248, 352, 8, 0.6, 3, 0.02, 22, 1' } include: { phase: TEST } }
# Calculate mean average precision
layer { type: 'Python' name: 'cluster_gt' bottom: 'coverage-label' bottom: 'bbox-label' top: 'bbox-list-label' python_param { module: 'caffe.layers.detectnet.clustering' layer: 'ClusterGroundtruth' param_str : '1248, 352, 8, 1' } include: { phase: TEST stage: "val" } } layer { type: 'Python' name: 'score' bottom: 'bbox-list-label' bottom: 'bbox-list' top: 'bbox-list-scored' python_param { module: 'caffe.layers.detectnet.mean_ap' layer: 'ScoreDetections' } include: { phase: TEST stage: "val" } } layer { type: 'Python' name: 'mAP' bottom: 'bbox-list-scored' top: 'mAP' top: 'precision' top: 'recall' python_param { module: 'caffe.layers.detectnet.mean_ap' layer: 'mAP' param_str : '1248, 352, 8' } include: { phase: TEST stage: "val" } }
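For context on the scoring step: ScoreDetections matches each predicted box against the clustered ground truth by overlap. A minimal intersection-over-union helper (a simplification of what mean_ap.py computes) looks like:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 1/3 overlap
```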
It is from the deploy prototxt. In param_str: "1248, 352, 8, 0.6, 3, 0.02, 22, 1", the first two values are the image dimensions, the third is the stride, and the last is the number of classes. May I know what the other values (0.6, 3, 0.02, 22) are used for and how to change them?
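From my reading of DIGITS' caffe.layers.detectnet.clustering source, the eight values map, in order, to image width, image height, stride, a coverage threshold, the rectangle-grouping threshold and eps passed to cv2.groupRectangles, a minimum box height, and the number of classes. The field names below follow that reading and may differ between versions:

```python
# Hypothetical helper: field names follow ClusterDetections in
# caffe.layers.detectnet.clustering as I read it; verify against your version.
FIELDS = ("image_size_x", "image_size_y", "stride",
          "gridbox_cvg_threshold",   # 0.6:  min coverage to keep a grid cell
          "gridbox_rect_thresh",     # 3:    min grouped rectangles per cluster
          "gridbox_rect_eps",        # 0.02: grouping tolerance
          "min_height",              # 22:   discard boxes shorter than this
          "num_classes")

def parse_param_str(s):
    return dict(zip(FIELDS, (float(v) for v in s.split(","))))

print(parse_param_str("1248, 352, 8, 0.6, 3, 0.02, 22, 1"))
```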
On 11-Apr-2017, at 12:46 PM, shreyasramesh notifications@github.com wrote:
Did the custom script produce bounding boxes for your dataset?
Sorry, got very busy yesterday. I'll definitely have a look today!
On Apr 11, 2017 10:57 AM, "sulthanashafi" notifications@github.com wrote:
I am attaching custom code for trying on your dataset with a changed stride value. Hope it will work for you. Also, do you have any modifications to suggest for my dataset?
On 10-Apr-2017, at 3:45 PM, shreyasramesh notifications@github.com wrote:
Thanks for the file. Will compare with my results and let you know ASAP.
Meanwhile, please check this publicly available prototxt and see whether it makes sense for your problem at hand: https://gist.github.com/jbarker-nvidia/127947d8a961bfbe2d0d403dd9bed2aa If it does, resize your images to fit that prototxt's image format: 1280x1280 instead of 1000x1000.
https://devblogs.nvidia.com/parallelforall/exploring-spacenet-dataset-using-digits/
# DetectNet network
# Data/Input layers
name: "DetectNet" layer { name: "train_data" type: "Data" top: "data" data_param { backend: LMDB source: "examples/kitti/kitti_train_images.lmdb" batch_size: 10 } include: { phase: TRAIN } } layer { name: "train_label" type: "Data" top: "label" data_param { backend: LMDB source: "examples/kitti/kitti_train_labels.lmdb" batch_size: 10 } include: { phase: TRAIN } } layer { name: "val_data" type: "Data" top: "data" data_param { backend: LMDB source: "examples/kitti/kitti_test_images.lmdb" batch_size: 6 } include: { phase: TEST stage: "val" } } layer { name: "val_label" type: "Data" top: "label" data_param { backend: LMDB source: "examples/kitti/kitti_test_labels.lmdb" batch_size: 6 } include: { phase: TEST stage: "val" } } layer { name: "deploy_data" type: "Input" top: "data" input_param { shape { dim: 1 dim: 3 dim: 384 dim: 1248 } } include: { phase: TEST not_stage: "val" } }
# Data transformation layers
layer { name: "train_transform" type: "DetectNetTransformation" bottom: "data" bottom: "label" top: "transformed_data" top: "transformed_label" detectnet_groundtruth_param: { stride: 8 scale_cvg: 0.4 gridbox_type: GRIDBOX_MIN coverage_type: RECTANGULAR min_cvg_len: 20 obj_norm: true image_size_x: 1248 image_size_y: 384 crop_bboxes: true object_class: { src: 1 dst: 0} # obj class 1 -> cvg index 0 } detectnet_augmentation_param: { crop_prob: 1 shift_x: 32 shift_y: 32 flip_prob: 0.5 rotation_prob: 0 max_rotate_degree: 5 scale_prob: 0.4 scale_min: 0.8 scale_max: 1.2 hue_rotation_prob: 0.8 hue_rotation: 30 desaturation_prob: 0.8 desaturation_max: 0.8 } transform_param: { mean_value: 127 } include: { phase: TRAIN } } layer { name: "val_transform" type: "DetectNetTransformation" bottom: "data" bottom: "label" top: "transformed_data" top: "transformed_label" detectnet_groundtruth_param: { stride: 8 scale_cvg: 0.4 gridbox_type: GRIDBOX_MIN coverage_type: RECTANGULAR min_cvg_len: 20 obj_norm: true image_size_x: 1248 image_size_y: 384 crop_bboxes: false object_class: { src: 1 dst: 0} # obj class 1 -> cvg index 0 } transform_param: { mean_value: 127 } include: { phase: TEST stage: "val" } } layer { name: "deploy_transform" type: "Power" bottom: "data" top: "transformed_data" power_param { shift: -127 } include: { phase: TEST not_stage: "val" } }
# Label conversion layers
layer { name: "slice-label" type: "Slice" bottom: "transformed_label" top: "foreground-label" top: "bbox-label" top: "size-label" top: "obj-label" top: "coverage-label" slice_param { slice_dim: 1 slice_point: 1 slice_point: 5 slice_point: 7 slice_point: 8 } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "coverage-block" type: "Concat" bottom: "foreground-label" bottom: "foreground-label" bottom: "foreground-label" bottom: "foreground-label" top: "coverage-block" concat_param { concat_dim: 1 } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "size-block" type: "Concat" bottom: "size-label" bottom: "size-label" top: "size-block" concat_param { concat_dim: 1 } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "obj-block" type: "Concat" bottom: "obj-label" bottom: "obj-label" bottom: "obj-label" bottom: "obj-label" top: "obj-block" concat_param { concat_dim: 1 } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bb-label-norm" type: "Eltwise" bottom: "bbox-label" bottom: "size-block" top: "bbox-label-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bb-obj-norm" type: "Eltwise" bottom: "bbox-label-norm" bottom: "obj-block" top: "bbox-obj-label-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } }
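The `slice-label` layer splits the transformed label blob along its channel axis at points 1, 5, 7 and 8, yielding the foreground mask (1 channel), bbox targets (4), size normalisers (2), objectness (1), and per-class coverage (the remainder). The equivalent NumPy slicing, for a hypothetical 9-channel label blob (single class):

```python
import numpy as np

def slice_label(label):
    """Mimic the Slice layer: split the channel axis at 1, 5, 7, 8."""
    foreground = label[:, 0:1]   # foreground-label: objectness mask
    bbox       = label[:, 1:5]   # bbox-label: 4 coordinate channels
    size       = label[:, 5:7]   # size-label: normalisation channels
    obj        = label[:, 7:8]   # obj-label
    coverage   = label[:, 8:]    # coverage-label: one channel per class
    return foreground, bbox, size, obj, coverage

label = np.zeros((10, 9, 48, 156))  # batch, channels, grid_h, grid_w
print([p.shape[1] for p in slice_label(label)])  # [1, 4, 2, 1, 1]
```

The Concat layers that follow simply tile `foreground` and `obj` to 4 channels (and `size` to 4) so they can be multiplied element-wise against the 4-channel bbox maps.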
######################################################################
# Start of convolutional network
######################################################################
layer { name: "conv1/7x7_s2" type: "Convolution" bottom: "transformed_data" top: "conv1/7x7_s2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 3 kernel_size: 7 stride: 2 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "conv1/relu_7x7" type: "ReLU" bottom: "conv1/7x7_s2" top: "conv1/7x7_s2" }
layer { name: "pool1/3x3_s2" type: "Pooling" bottom: "conv1/7x7_s2" top: "pool1/3x3_s2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "pool1/norm1" type: "LRN" bottom: "pool1/3x3_s2" top: "pool1/norm1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "conv2/3x3_reduce" type: "Convolution" bottom: "pool1/norm1" top: "conv2/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "conv2/relu_3x3_reduce" type: "ReLU" bottom: "conv2/3x3_reduce" top: "conv2/3x3_reduce" }
layer { name: "conv2/3x3" type: "Convolution" bottom: "conv2/3x3_reduce" top: "conv2/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "conv2/relu_3x3" type: "ReLU" bottom: "conv2/3x3" top: "conv2/3x3" }
layer { name: "conv2/norm2" type: "LRN" bottom: "conv2/3x3" top: "conv2/norm2" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "pool2/3x3_s2" type: "Pooling" bottom: "conv2/norm2" top: "pool2/3x3_s2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "inception_3a/1x1" type: "Convolution" bottom: "pool2/3x3_s2" top: "inception_3a/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3a/relu_1x1" type: "ReLU" bottom: "inception_3a/1x1" top: "inception_3a/1x1" }
layer { name: "inception_3a/3x3_reduce" type: "Convolution" bottom: "pool2/3x3_s2" top: "inception_3a/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3a/relu_3x3_reduce" type: "ReLU" bottom: "inception_3a/3x3_reduce" top: "inception_3a/3x3_reduce" }
layer { name: "inception_3a/3x3" type: "Convolution" bottom: "inception_3a/3x3_reduce" top: "inception_3a/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3a/relu_3x3" type: "ReLU" bottom: "inception_3a/3x3" top: "inception_3a/3x3" }
layer { name: "inception_3a/5x5_reduce" type: "Convolution" bottom: "pool2/3x3_s2" top: "inception_3a/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 16 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3a/relu_5x5_reduce" type: "ReLU" bottom: "inception_3a/5x5_reduce" top: "inception_3a/5x5_reduce" } layer { name: "inception_3a/5x5" type: "Convolution" bottom: "inception_3a/5x5_reduce" top: "inception_3a/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3a/relu_5x5" type: "ReLU" bottom: "inception_3a/5x5" top: "inception_3a/5x5" }
layer { name: "inception_3a/pool" type: "Pooling" bottom: "pool2/3x3_s2" top: "inception_3a/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } }
layer { name: "inception_3a/pool_proj" type: "Convolution" bottom: "inception_3a/pool" top: "inception_3a/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3a/relu_pool_proj" type: "ReLU" bottom: "inception_3a/pool_proj" top: "inception_3a/pool_proj" }
layer { name: "inception_3a/output" type: "Concat" bottom: "inception_3a/1x1" bottom: "inception_3a/3x3" bottom: "inception_3a/5x5" bottom: "inception_3a/pool_proj" top: "inception_3a/output" }
layer { name: "inception_3b/1x1" type: "Convolution" bottom: "inception_3a/output" top: "inception_3b/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_3b/relu_1x1" type: "ReLU" bottom: "inception_3b/1x1" top: "inception_3b/1x1" }
layer { name: "inception_3b/3x3_reduce" type: "Convolution" bottom: "inception_3a/output" top: "inception_3b/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_3x3_reduce" type: "ReLU" bottom: "inception_3b/3x3_reduce" top: "inception_3b/3x3_reduce" } layer { name: "inception_3b/3x3" type: "Convolution" bottom: "inception_3b/3x3_reduce" top: "inception_3b/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_3x3" type: "ReLU" bottom: "inception_3b/3x3" top: "inception_3b/3x3" }
layer { name: "inception_3b/5x5_reduce" type: "Convolution" bottom: "inception_3a/output" top: "inception_3b/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_5x5_reduce" type: "ReLU" bottom: "inception_3b/5x5_reduce" top: "inception_3b/5x5_reduce" } layer { name: "inception_3b/5x5" type: "Convolution" bottom: "inception_3b/5x5_reduce" top: "inception_3b/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_5x5" type: "ReLU" bottom: "inception_3b/5x5" top: "inception_3b/5x5" }
layer { name: "inception_3b/pool" type: "Pooling" bottom: "inception_3a/output" top: "inception_3b/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_3b/pool_proj" type: "Convolution" bottom: "inception_3b/pool" top: "inception_3b/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_3b/relu_pool_proj" type: "ReLU" bottom: "inception_3b/pool_proj" top: "inception_3b/pool_proj" } layer { name: "inception_3b/output" type: "Concat" bottom: "inception_3b/1x1" bottom: "inception_3b/3x3" bottom: "inception_3b/5x5" bottom: "inception_3b/pool_proj" top: "inception_3b/output" }
layer { name: "pool3/3x3_s2" type: "Pooling" bottom: "inception_3b/output" top: "pool3/3x3_s2" pooling_param { pool: MAX kernel_size: 1 stride: 1 } } # kernel_size 1, stride 1 makes this pooling a no-op, presumably to keep the overall network stride at 8, matching the stride: 8 in the transformation and clustering layers
layer { name: "inception_4a/1x1" type: "Convolution" bottom: "pool3/3x3_s2" top: "inception_4a/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 192 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4a/relu_1x1" type: "ReLU" bottom: "inception_4a/1x1" top: "inception_4a/1x1" }
layer { name: "inception_4a/3x3_reduce" type: "Convolution" bottom: "pool3/3x3_s2" top: "inception_4a/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4a/relu_3x3_reduce" type: "ReLU" bottom: "inception_4a/3x3_reduce" top: "inception_4a/3x3_reduce" }
layer { name: "inception_4a/3x3" type: "Convolution" bottom: "inception_4a/3x3_reduce" top: "inception_4a/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 208 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4a/relu_3x3" type: "ReLU" bottom: "inception_4a/3x3" top: "inception_4a/3x3" }
layer { name: "inception_4a/5x5_reduce" type: "Convolution" bottom: "pool3/3x3_s2" top: "inception_4a/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 16 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4a/relu_5x5_reduce" type: "ReLU" bottom: "inception_4a/5x5_reduce" top: "inception_4a/5x5_reduce" } layer { name: "inception_4a/5x5" type: "Convolution" bottom: "inception_4a/5x5_reduce" top: "inception_4a/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 48 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4a/relu_5x5" type: "ReLU" bottom: "inception_4a/5x5" top: "inception_4a/5x5" } layer { name: "inception_4a/pool" type: "Pooling" bottom: "pool3/3x3_s2" top: "inception_4a/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4a/pool_proj" type: "Convolution" bottom: "inception_4a/pool" top: "inception_4a/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4a/relu_pool_proj" type: "ReLU" bottom: "inception_4a/pool_proj" top: "inception_4a/pool_proj" } layer { name: "inception_4a/output" type: "Concat" bottom: "inception_4a/1x1" bottom: "inception_4a/3x3" bottom: "inception_4a/5x5" bottom: "inception_4a/pool_proj" top: "inception_4a/output" }
layer { name: "inception_4b/1x1" type: "Convolution" bottom: "inception_4a/output" top: "inception_4b/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 160 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4b/relu_1x1" type: "ReLU" bottom: "inception_4b/1x1" top: "inception_4b/1x1" } layer { name: "inception_4b/3x3_reduce" type: "Convolution" bottom: "inception_4a/output" top: "inception_4b/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 112 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_3x3_reduce" type: "ReLU" bottom: "inception_4b/3x3_reduce" top: "inception_4b/3x3_reduce" } layer { name: "inception_4b/3x3" type: "Convolution" bottom: "inception_4b/3x3_reduce" top: "inception_4b/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 224 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_3x3" type: "ReLU" bottom: "inception_4b/3x3" top: "inception_4b/3x3" } layer { name: "inception_4b/5x5_reduce" type: "Convolution" bottom: "inception_4a/output" top: "inception_4b/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 24 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_5x5_reduce" type: "ReLU" bottom: "inception_4b/5x5_reduce" top: "inception_4b/5x5_reduce" } layer { name: "inception_4b/5x5" type: "Convolution" bottom: "inception_4b/5x5_reduce" top: "inception_4b/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_5x5" type: "ReLU" bottom: "inception_4b/5x5" top: "inception_4b/5x5" } layer { name: "inception_4b/pool" type: "Pooling" bottom: "inception_4a/output" top: "inception_4b/pool" 
pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4b/pool_proj" type: "Convolution" bottom: "inception_4b/pool" top: "inception_4b/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4b/relu_pool_proj" type: "ReLU" bottom: "inception_4b/pool_proj" top: "inception_4b/pool_proj" } layer { name: "inception_4b/output" type: "Concat" bottom: "inception_4b/1x1" bottom: "inception_4b/3x3" bottom: "inception_4b/5x5" bottom: "inception_4b/pool_proj" top: "inception_4b/output" }
layer { name: "inception_4c/1x1" type: "Convolution" bottom: "inception_4b/output" top: "inception_4c/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4c/relu_1x1" type: "ReLU" bottom: "inception_4c/1x1" top: "inception_4c/1x1" }
layer { name: "inception_4c/3x3_reduce" type: "Convolution" bottom: "inception_4b/output" top: "inception_4c/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } }
layer { name: "inception_4c/relu_3x3_reduce" type: "ReLU" bottom: "inception_4c/3x3_reduce" top: "inception_4c/3x3_reduce" } layer { name: "inception_4c/3x3" type: "Convolution" bottom: "inception_4c/3x3_reduce" top: "inception_4c/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_3x3" type: "ReLU" bottom: "inception_4c/3x3" top: "inception_4c/3x3" } layer { name: "inception_4c/5x5_reduce" type: "Convolution" bottom: "inception_4b/output" top: "inception_4c/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 24 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_5x5_reduce" type: "ReLU" bottom: "inception_4c/5x5_reduce" top: "inception_4c/5x5_reduce" } layer { name: "inception_4c/5x5" type: "Convolution" bottom: "inception_4c/5x5_reduce" top: "inception_4c/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_5x5" type: "ReLU" bottom: "inception_4c/5x5" top: "inception_4c/5x5" } layer { name: "inception_4c/pool" type: "Pooling" bottom: "inception_4b/output" top: "inception_4c/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4c/pool_proj" type: "Convolution" bottom: "inception_4c/pool" top: "inception_4c/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4c/relu_pool_proj" type: "ReLU" 
bottom: "inception_4c/pool_proj" top: "inception_4c/pool_proj" } layer { name: "inception_4c/output" type: "Concat" bottom: "inception_4c/1x1" bottom: "inception_4c/3x3" bottom: "inception_4c/5x5" bottom: "inception_4c/pool_proj" top: "inception_4c/output" }
layer { name: "inception_4d/1x1" type: "Convolution" bottom: "inception_4c/output" top: "inception_4d/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 112 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_1x1" type: "ReLU" bottom: "inception_4d/1x1" top: "inception_4d/1x1" } layer { name: "inception_4d/3x3_reduce" type: "Convolution" bottom: "inception_4c/output" top: "inception_4d/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 144 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_3x3_reduce" type: "ReLU" bottom: "inception_4d/3x3_reduce" top: "inception_4d/3x3_reduce" } layer { name: "inception_4d/3x3" type: "Convolution" bottom: "inception_4d/3x3_reduce" top: "inception_4d/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 288 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_3x3" type: "ReLU" bottom: "inception_4d/3x3" top: "inception_4d/3x3" } layer { name: "inception_4d/5x5_reduce" type: "Convolution" bottom: "inception_4c/output" top: "inception_4d/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_5x5_reduce" type: "ReLU" bottom: "inception_4d/5x5_reduce" top: "inception_4d/5x5_reduce" } layer { name: "inception_4d/5x5" type: "Convolution" bottom: "inception_4d/5x5_reduce" top: "inception_4d/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 pad: 2 
kernel_size: 5 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_5x5" type: "ReLU" bottom: "inception_4d/5x5" top: "inception_4d/5x5" } layer { name: "inception_4d/pool" type: "Pooling" bottom: "inception_4c/output" top: "inception_4d/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4d/pool_proj" type: "Convolution" bottom: "inception_4d/pool" top: "inception_4d/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4d/relu_pool_proj" type: "ReLU" bottom: "inception_4d/pool_proj" top: "inception_4d/pool_proj" } layer { name: "inception_4d/output" type: "Concat" bottom: "inception_4d/1x1" bottom: "inception_4d/3x3" bottom: "inception_4d/5x5" bottom: "inception_4d/pool_proj" top: "inception_4d/output" }
layer { name: "inception_4e/1x1" type: "Convolution" bottom: "inception_4d/output" top: "inception_4e/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_1x1" type: "ReLU" bottom: "inception_4e/1x1" top: "inception_4e/1x1" } layer { name: "inception_4e/3x3_reduce" type: "Convolution" bottom: "inception_4d/output" top: "inception_4e/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 160 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_3x3_reduce" type: "ReLU" bottom: "inception_4e/3x3_reduce" top: "inception_4e/3x3_reduce" } layer { name: "inception_4e/3x3" type: "Convolution" bottom: "inception_4e/3x3_reduce" top: "inception_4e/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 320 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_3x3" type: "ReLU" bottom: "inception_4e/3x3" top: "inception_4e/3x3" } layer { name: "inception_4e/5x5_reduce" type: "Convolution" bottom: "inception_4d/output" top: "inception_4e/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_5x5_reduce" type: "ReLU" bottom: "inception_4e/5x5_reduce" top: "inception_4e/5x5_reduce" } layer { name: "inception_4e/5x5" type: "Convolution" bottom: "inception_4e/5x5_reduce" top: "inception_4e/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 2 
kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_5x5" type: "ReLU" bottom: "inception_4e/5x5" top: "inception_4e/5x5" } layer { name: "inception_4e/pool" type: "Pooling" bottom: "inception_4d/output" top: "inception_4e/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_4e/pool_proj" type: "Convolution" bottom: "inception_4e/pool" top: "inception_4e/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_4e/relu_pool_proj" type: "ReLU" bottom: "inception_4e/pool_proj" top: "inception_4e/pool_proj" } layer { name: "inception_4e/output" type: "Concat" bottom: "inception_4e/1x1" bottom: "inception_4e/3x3" bottom: "inception_4e/5x5" bottom: "inception_4e/pool_proj" top: "inception_4e/output" }
layer { name: "inception_5a/1x1" type: "Convolution" bottom: "inception_4e/output" top: "inception_5a/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_1x1" type: "ReLU" bottom: "inception_5a/1x1" top: "inception_5a/1x1" }
layer { name: "inception_5a/3x3_reduce" type: "Convolution" bottom: "inception_4e/output" top: "inception_5a/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 160 kernel_size: 1 weight_filler { type: "xavier" std: 0.09 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_3x3_reduce" type: "ReLU" bottom: "inception_5a/3x3_reduce" top: "inception_5a/3x3_reduce" }
layer { name: "inception_5a/3x3" type: "Convolution" bottom: "inception_5a/3x3_reduce" top: "inception_5a/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 320 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_3x3" type: "ReLU" bottom: "inception_5a/3x3" top: "inception_5a/3x3" } layer { name: "inception_5a/5x5_reduce" type: "Convolution" bottom: "inception_4e/output" top: "inception_5a/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" std: 0.2 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_5x5_reduce" type: "ReLU" bottom: "inception_5a/5x5_reduce" top: "inception_5a/5x5_reduce" } layer { name: "inception_5a/5x5" type: "Convolution" bottom: "inception_5a/5x5_reduce" top: "inception_5a/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 2 kernel_size: 5 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_5x5" type: "ReLU" bottom: "inception_5a/5x5" top: "inception_5a/5x5" } layer { name: "inception_5a/pool" type: "Pooling" bottom: "inception_4e/output" top: "inception_5a/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_5a/pool_proj" type: "Convolution" bottom: "inception_5a/pool" top: "inception_5a/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5a/relu_pool_proj" type: "ReLU" bottom: "inception_5a/pool_proj" top: "inception_5a/pool_proj" } layer { name: "inception_5a/output" type: "Concat" bottom: 
"inception_5a/1x1" bottom: "inception_5a/3x3" bottom: "inception_5a/5x5" bottom: "inception_5a/pool_proj" top: "inception_5a/output" }
layer { name: "inception_5b/1x1" type: "Convolution" bottom: "inception_5a/output" top: "inception_5b/1x1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_1x1" type: "ReLU" bottom: "inception_5b/1x1" top: "inception_5b/1x1" } layer { name: "inception_5b/3x3_reduce" type: "Convolution" bottom: "inception_5a/output" top: "inception_5b/3x3_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 1 decay_mult: 0 } convolution_param { num_output: 192 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_3x3_reduce" type: "ReLU" bottom: "inception_5b/3x3_reduce" top: "inception_5b/3x3_reduce" } layer { name: "inception_5b/3x3" type: "Convolution" bottom: "inception_5b/3x3_reduce" top: "inception_5b/3x3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_3x3" type: "ReLU" bottom: "inception_5b/3x3" top: "inception_5b/3x3" } layer { name: "inception_5b/5x5_reduce" type: "Convolution" bottom: "inception_5a/output" top: "inception_5b/5x5_reduce" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 48 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_5x5_reduce" type: "ReLU" bottom: "inception_5b/5x5_reduce" top: "inception_5b/5x5_reduce" } layer { name: "inception_5b/5x5" type: "Convolution" bottom: "inception_5b/5x5_reduce" top: "inception_5b/5x5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 pad: 2 
kernel_size: 5 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_5x5" type: "ReLU" bottom: "inception_5b/5x5" top: "inception_5b/5x5" } layer { name: "inception_5b/pool" type: "Pooling" bottom: "inception_5a/output" top: "inception_5b/pool" pooling_param { pool: MAX kernel_size: 3 stride: 1 pad: 1 } } layer { name: "inception_5b/pool_proj" type: "Convolution" bottom: "inception_5b/pool" top: "inception_5b/pool_proj" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" std: 0.1 } bias_filler { type: "constant" value: 0.2 } } } layer { name: "inception_5b/relu_pool_proj" type: "ReLU" bottom: "inception_5b/pool_proj" top: "inception_5b/pool_proj" } layer { name: "inception_5b/output" type: "Concat" bottom: "inception_5b/1x1" bottom: "inception_5b/3x3" bottom: "inception_5b/5x5" bottom: "inception_5b/pool_proj" top: "inception_5b/output" } layer { name: "pool5/drop_s1" type: "Dropout" bottom: "inception_5b/output" top: "pool5/drop_s1" dropout_param { dropout_ratio: 0.4 } } layer { name: "cvg/classifier" type: "Convolution" bottom: "pool5/drop_s1" top: "cvg/classifier" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 1 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0. } } } layer { name: "coverage/sig" type: "Sigmoid" bottom: "cvg/classifier" top: "coverage" } layer { name: "bbox/regressor" type: "Convolution" bottom: "pool5/drop_s1" top: "bboxes" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 4 kernel_size: 1 weight_filler { type: "xavier" std: 0.03 } bias_filler { type: "constant" value: 0. } } }
######################################################################
# End of convolutional network
######################################################################
# Convert bboxes
layer { name: "bbox_mask" type: "Eltwise" bottom: "bboxes" bottom: "coverage-block" top: "bboxes-masked" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bbox-norm" type: "Eltwise" bottom: "bboxes-masked" bottom: "size-block" top: "bboxes-masked-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "bbox-obj-norm" type: "Eltwise" bottom: "bboxes-masked-norm" bottom: "obj-block" top: "bboxes-obj-masked-norm" eltwise_param { operation: PROD } include { phase: TRAIN } include { phase: TEST stage: "val" } }
# Loss layers
layer { name: "bbox_loss" type: "L1Loss" bottom: "bboxes-obj-masked-norm" bottom: "bbox-obj-label-norm" top: "loss_bbox" loss_weight: 2 include { phase: TRAIN } include { phase: TEST stage: "val" } } layer { name: "coverage_loss" type: "EuclideanLoss" bottom: "coverage" bottom: "coverage-label" top: "loss_coverage" include { phase: TRAIN } include { phase: TEST stage: "val" } }
# Cluster bboxes
layer { type: 'Python' name: 'cluster' bottom: 'coverage' bottom: 'bboxes' top: 'bbox-list' python_param { module: 'caffe.layers.detectnet.clustering' layer: 'ClusterDetections' param_str : '1248, 352, 8, 0.6, 3, 0.02, 22, 1' } include: { phase: TEST } }
# Calculate mean average precision
layer { type: 'Python' name: 'cluster_gt' bottom: 'coverage-label' bottom: 'bbox-label' top: 'bbox-list-label' python_param { module: 'caffe.layers.detectnet.clustering' layer: 'ClusterGroundtruth' param_str : '1248, 352, 8, 1' } include: { phase: TEST stage: "val" } } layer { type: 'Python' name: 'score' bottom: 'bbox-list-label' bottom: 'bbox-list' top: 'bbox-list-scored' python_param { module: 'caffe.layers.detectnet.mean_ap' layer: 'ScoreDetections' } include: { phase: TEST stage: "val" } } layer { type: 'Python' name: 'mAP' bottom: 'bbox-list-scored' top: 'mAP' top: 'precision' top: 'recall' python_param { module: 'caffe.layers.detectnet.mean_ap' layer: 'mAP' param_str : '1248, 352, 8' } include: { phase: TEST stage: "val" } }
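To see why `loss_bbox` can sit at exactly zero (as in the training log further down), note that the Eltwise PROD layers above mask the bbox regressor output with the ground-truth coverage map before the L1 loss is applied. Here is a minimal pure-Python sketch of that masking plus the two weighted losses; the toy numbers, shapes, and loss normalisation are assumptions for illustration only, not the exact Caffe implementation:

```python
# Toy sketch of DetectNet's loss plumbing (assumed shapes/normalisation).
coverage_gt = [1.0, 0.0, 1.0, 0.0]   # per-grid-cell object presence
bbox_pred   = [0.2, 0.5, 0.1, 0.9]   # one bbox channel over 4 grid cells
bbox_gt     = [0.3, 0.0, 0.4, 0.0]
cvg_pred    = [0.9, 0.2, 0.8, 0.1]   # output of coverage/sig

# "bbox_mask" (Eltwise PROD): zero predictions where there is no object,
# so empty cells contribute nothing to the bbox loss.
bbox_masked = [p * c for p, c in zip(bbox_pred, coverage_gt)]

def l1_loss(pred, target):
    # mean absolute error (normalisation assumed)
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def euclidean_loss(pred, target):
    # sum of squared differences / (2 * N), as in Caffe's EuclideanLoss
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / (2 * len(pred))

# loss_weight: 2 on the bbox loss, default 1 on the coverage loss.
total = 2 * l1_loss(bbox_masked, bbox_gt) + euclidean_loss(cvg_pred, coverage_gt)

# If every ground-truth coverage cell is zero (e.g. labels that never match
# the training image resolution), bbox_masked is all zeros and loss_bbox is
# exactly 0 -- the symptom several people report in this thread.
```

This is one reason to double-check that the dataset's label coordinates actually fall inside the image dimensions the network was configured for.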
I think/hope we have this documented somewhere, but for now check out this piece of code: https://github.com/NVIDIA/caffe/blob/v0.15.14/python/caffe/layers/detectnet/clustering.py#L108-L115
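As a rough guide to the `param_str` in the `ClusterDetections` layer above, the values are parsed positionally; the field names below are taken from my reading of the linked clustering.py, so verify the order against that file before relying on it:

```python
# Positional parsing of the ClusterDetections param_str (field names are my
# reading of the linked clustering.py -- verify against that file).
param_str = '1248, 352, 8, 0.6, 3, 0.02, 22, 1'
names = ['image_size_x', 'image_size_y', 'stride',
         'gridbox_cvg_threshold', 'gridbox_rect_thresh',
         'gridbox_rect_eps', 'min_height', 'num_classes']
params = dict(zip(names, (float(v) for v in param_str.split(','))))
```

Lowering `gridbox_cvg_threshold` (0.6 here) makes clustering accept weaker coverage activations, which is worth trying when no bounding boxes are produced at inference time.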
At first glance, it looks like my network's coverage layer prediction is different from yours. (Picture attached). However, it looks like your network's output for cvg/classifier layer is similar to mine.
Please refer to the link provided by @lukeyeager for the param names.
Thanks @lukeyeager and shreyasramesh. I am not getting even the coverage value. I will also try lowering the threshold parameters mentioned. Please let me know of any other changes that could be applied.
(Picture from shreyasramesh's comment above: https://cloud.githubusercontent.com/assets/25034088/24919794/6e3dc21e-1f02-11e7-8663-219d6e2342d2.png)
Hi, I am trying the changes one by one; I haven't seen any improvement so far. Could I get the log file from a working KITTI dataset run? I lost mine, and it would help to compare against a known-good log. Please also keep me updated on the DetectNet improvements.
Hi, I am getting a log file with all-zero values. Can anyone suggest where I am going wrong?
I0413 06:08:56.441606 22206 net.cpp:159] Memory required for data: 5099532756 I0413 06:08:56.441609 22206 net.cpp:222] mAP does not need backward computation. I0413 06:08:56.441613 22206 net.cpp:222] score does not need backward computation. I0413 06:08:56.441617 22206 net.cpp:222] cluster_gt does not need backward computation. I0413 06:08:56.441622 22206 net.cpp:222] cluster does not need backward computation. I0413 06:08:56.441649 22206 net.cpp:220] coverage_loss needs backward computation. I0413 06:08:56.441654 22206 net.cpp:220] bbox_loss needs backward computation. I0413 06:08:56.441659 22206 net.cpp:220] bbox-obj-norm needs backward computation. I0413 06:08:56.441663 22206 net.cpp:220] bbox-norm needs backward computation. I0413 06:08:56.441668 22206 net.cpp:220] bbox_mask needs backward computation. I0413 06:08:56.441673 22206 net.cpp:220] bboxes_bbox/regressor_0_split needs backward computation. I0413 06:08:56.441678 22206 net.cpp:220] bbox/regressor needs backward computation. I0413 06:08:56.441682 22206 net.cpp:220] coverage_coverage/sig_0_split needs backward computation. I0413 06:08:56.441686 22206 net.cpp:220] coverage/sig needs backward computation. I0413 06:08:56.441690 22206 net.cpp:220] cvg/classifier needs backward computation. I0413 06:08:56.441695 22206 net.cpp:220] pool5/drop_s1_pool5/drop_s1_0_split needs backward computation. I0413 06:08:56.441699 22206 net.cpp:220] pool5/drop_s1 needs backward computation. I0413 06:08:56.441704 22206 net.cpp:220] inception_5b/output needs backward computation. I0413 06:08:56.441709 22206 net.cpp:220] inception_5b/relu_pool_proj needs backward computation. I0413 06:08:56.441712 22206 net.cpp:220] inception_5b/pool_proj needs backward computation. I0413 06:08:56.441716 22206 net.cpp:220] inception_5b/pool needs backward computation. I0413 06:08:56.441721 22206 net.cpp:220] inception_5b/relu_5x5 needs backward computation.
I0413 06:08:56.441725 22206 net.cpp:220] inception_5b/5x5 needs backward computation. I0413 06:08:56.441730 22206 net.cpp:220] inception_5b/relu_5x5_reduce needs backward computation. I0413 06:08:56.441733 22206 net.cpp:220] inception_5b/5x5_reduce needs backward computation. I0413 06:08:56.441737 22206 net.cpp:220] inception_5b/relu_3x3 needs backward computation. I0413 06:08:56.441741 22206 net.cpp:220] inception_5b/3x3 needs backward computation. I0413 06:08:56.441745 22206 net.cpp:220] inception_5b/relu_3x3_reduce needs backward computation. I0413 06:08:56.441750 22206 net.cpp:220] inception_5b/3x3_reduce needs backward computation. I0413 06:08:56.441753 22206 net.cpp:220] inception_5b/relu_1x1 needs backward computation. I0413 06:08:56.441756 22206 net.cpp:220] inception_5b/1x1 needs backward computation. I0413 06:08:56.441761 22206 net.cpp:220] inception_5a/output_inception_5a/output_0_split needs backward computation. I0413 06:08:56.441766 22206 net.cpp:220] inception_5a/output needs backward computation. I0413 06:08:56.441771 22206 net.cpp:220] inception_5a/relu_pool_proj needs backward computation. I0413 06:08:56.441774 22206 net.cpp:220] inception_5a/pool_proj needs backward computation. I0413 06:08:56.441778 22206 net.cpp:220] inception_5a/pool needs backward computation. I0413 06:08:56.441782 22206 net.cpp:220] inception_5a/relu_5x5 needs backward computation. I0413 06:08:56.441787 22206 net.cpp:220] inception_5a/5x5 needs backward computation. I0413 06:08:56.441790 22206 net.cpp:220] inception_5a/relu_5x5_reduce needs backward computation. I0413 06:08:56.441794 22206 net.cpp:220] inception_5a/5x5_reduce needs backward computation. I0413 06:08:56.441798 22206 net.cpp:220] inception_5a/relu_3x3 needs backward computation. I0413 06:08:56.441802 22206 net.cpp:220] inception_5a/3x3 needs backward computation. I0413 06:08:56.441805 22206 net.cpp:220] inception_5a/relu_3x3_reduce needs backward computation. 
I0413 06:08:56.441809 22206 net.cpp:220] inception_5a/3x3_reduce needs backward computation. I0413 06:08:56.441813 22206 net.cpp:220] inception_5a/relu_1x1 needs backward computation. I0413 06:08:56.441817 22206 net.cpp:220] inception_5a/1x1 needs backward computation. I0413 06:08:56.441820 22206 net.cpp:220] inception_4e/output_inception_4e/output_0_split needs backward computation. I0413 06:08:56.441825 22206 net.cpp:220] inception_4e/output needs backward computation. I0413 06:08:56.441830 22206 net.cpp:220] inception_4e/relu_pool_proj needs backward computation. I0413 06:08:56.441834 22206 net.cpp:220] inception_4e/pool_proj needs backward computation. I0413 06:08:56.441844 22206 net.cpp:220] inception_4e/pool needs backward computation. I0413 06:08:56.441848 22206 net.cpp:220] inception_4e/relu_5x5 needs backward computation. I0413 06:08:56.441853 22206 net.cpp:220] inception_4e/5x5 needs backward computation. I0413 06:08:56.441856 22206 net.cpp:220] inception_4e/relu_5x5_reduce needs backward computation. I0413 06:08:56.441860 22206 net.cpp:220] inception_4e/5x5_reduce needs backward computation. I0413 06:08:56.441864 22206 net.cpp:220] inception_4e/relu_3x3 needs backward computation. I0413 06:08:56.441867 22206 net.cpp:220] inception_4e/3x3 needs backward computation. I0413 06:08:56.441871 22206 net.cpp:220] inception_4e/relu_3x3_reduce needs backward computation. I0413 06:08:56.441875 22206 net.cpp:220] inception_4e/3x3_reduce needs backward computation. I0413 06:08:56.441879 22206 net.cpp:220] inception_4e/relu_1x1 needs backward computation. I0413 06:08:56.441884 22206 net.cpp:220] inception_4e/1x1 needs backward computation. I0413 06:08:56.441887 22206 net.cpp:220] inception_4d/output_inception_4d/output_0_split needs backward computation. I0413 06:08:56.441891 22206 net.cpp:220] inception_4d/output needs backward computation. I0413 06:08:56.441896 22206 net.cpp:220] inception_4d/relu_pool_proj needs backward computation. 
I0413 06:08:56.441900 22206 net.cpp:220] inception_4d/pool_proj needs backward computation. I0413 06:08:56.441905 22206 net.cpp:220] inception_4d/pool needs backward computation. I0413 06:08:56.441908 22206 net.cpp:220] inception_4d/relu_5x5 needs backward computation. I0413 06:08:56.441912 22206 net.cpp:220] inception_4d/5x5 needs backward computation. I0413 06:08:56.441916 22206 net.cpp:220] inception_4d/relu_5x5_reduce needs backward computation. I0413 06:08:56.441920 22206 net.cpp:220] inception_4d/5x5_reduce needs backward computation. I0413 06:08:56.441925 22206 net.cpp:220] inception_4d/relu_3x3 needs backward computation. I0413 06:08:56.441927 22206 net.cpp:220] inception_4d/3x3 needs backward computation. I0413 06:08:56.441931 22206 net.cpp:220] inception_4d/relu_3x3_reduce needs backward computation. I0413 06:08:56.441934 22206 net.cpp:220] inception_4d/3x3_reduce needs backward computation. I0413 06:08:56.441938 22206 net.cpp:220] inception_4d/relu_1x1 needs backward computation. I0413 06:08:56.441942 22206 net.cpp:220] inception_4d/1x1 needs backward computation. I0413 06:08:56.441946 22206 net.cpp:220] inception_4c/output_inception_4c/output_0_split needs backward computation. I0413 06:08:56.441951 22206 net.cpp:220] inception_4c/output needs backward computation. I0413 06:08:56.441956 22206 net.cpp:220] inception_4c/relu_pool_proj needs backward computation. I0413 06:08:56.441958 22206 net.cpp:220] inception_4c/pool_proj needs backward computation. I0413 06:08:56.441962 22206 net.cpp:220] inception_4c/pool needs backward computation. I0413 06:08:56.441967 22206 net.cpp:220] inception_4c/relu_5x5 needs backward computation. I0413 06:08:56.441969 22206 net.cpp:220] inception_4c/5x5 needs backward computation. I0413 06:08:56.441973 22206 net.cpp:220] inception_4c/relu_5x5_reduce needs backward computation. I0413 06:08:56.441977 22206 net.cpp:220] inception_4c/5x5_reduce needs backward computation. 
I0413 06:08:56.441982 22206 net.cpp:220] inception_4c/relu_3x3 needs backward computation. I0413 06:08:56.441985 22206 net.cpp:220] inception_4c/3x3 needs backward computation. I0413 06:08:56.441988 22206 net.cpp:220] inception_4c/relu_3x3_reduce needs backward computation. I0413 06:08:56.441992 22206 net.cpp:220] inception_4c/3x3_reduce needs backward computation. I0413 06:08:56.441997 22206 net.cpp:220] inception_4c/relu_1x1 needs backward computation. I0413 06:08:56.441999 22206 net.cpp:220] inception_4c/1x1 needs backward computation. I0413 06:08:56.442003 22206 net.cpp:220] inception_4b/output_inception_4b/output_0_split needs backward computation. I0413 06:08:56.442008 22206 net.cpp:220] inception_4b/output needs backward computation. I0413 06:08:56.442013 22206 net.cpp:220] inception_4b/relu_pool_proj needs backward computation. I0413 06:08:56.442020 22206 net.cpp:220] inception_4b/pool_proj needs backward computation. I0413 06:08:56.442024 22206 net.cpp:220] inception_4b/pool needs backward computation. I0413 06:08:56.442028 22206 net.cpp:220] inception_4b/relu_5x5 needs backward computation. I0413 06:08:56.442032 22206 net.cpp:220] inception_4b/5x5 needs backward computation. I0413 06:08:56.442036 22206 net.cpp:220] inception_4b/relu_5x5_reduce needs backward computation. I0413 06:08:56.442039 22206 net.cpp:220] inception_4b/5x5_reduce needs backward computation. I0413 06:08:56.442044 22206 net.cpp:220] inception_4b/relu_3x3 needs backward computation. I0413 06:08:56.442047 22206 net.cpp:220] inception_4b/3x3 needs backward computation. I0413 06:08:56.442050 22206 net.cpp:220] inception_4b/relu_3x3_reduce needs backward computation. I0413 06:08:56.442054 22206 net.cpp:220] inception_4b/3x3_reduce needs backward computation. I0413 06:08:56.442059 22206 net.cpp:220] inception_4b/relu_1x1 needs backward computation. I0413 06:08:56.442062 22206 net.cpp:220] inception_4b/1x1 needs backward computation. 
I0413 06:08:56.442066 22206 net.cpp:220] inception_4a/output_inception_4a/output_0_split needs backward computation. I0413 06:08:56.442070 22206 net.cpp:220] inception_4a/output needs backward computation. I0413 06:08:56.442075 22206 net.cpp:220] inception_4a/relu_pool_proj needs backward computation. I0413 06:08:56.442078 22206 net.cpp:220] inception_4a/pool_proj needs backward computation. I0413 06:08:56.442082 22206 net.cpp:220] inception_4a/pool needs backward computation. I0413 06:08:56.442086 22206 net.cpp:220] inception_4a/relu_5x5 needs backward computation. I0413 06:08:56.442090 22206 net.cpp:220] inception_4a/5x5 needs backward computation. I0413 06:08:56.442095 22206 net.cpp:220] inception_4a/relu_5x5_reduce needs backward computation. I0413 06:08:56.442097 22206 net.cpp:220] inception_4a/5x5_reduce needs backward computation. I0413 06:08:56.442101 22206 net.cpp:220] inception_4a/relu_3x3 needs backward computation. I0413 06:08:56.442106 22206 net.cpp:220] inception_4a/3x3 needs backward computation. I0413 06:08:56.442109 22206 net.cpp:220] inception_4a/relu_3x3_reduce needs backward computation. I0413 06:08:56.442112 22206 net.cpp:220] inception_4a/3x3_reduce needs backward computation. I0413 06:08:56.442116 22206 net.cpp:220] inception_4a/relu_1x1 needs backward computation. I0413 06:08:56.442121 22206 net.cpp:220] inception_4a/1x1 needs backward computation. I0413 06:08:56.442124 22206 net.cpp:220] pool3/3x3_s2_pool3/3x3_s2_0_split needs backward computation. I0413 06:08:56.442128 22206 net.cpp:220] pool3/3x3_s2 needs backward computation. I0413 06:08:56.442132 22206 net.cpp:220] inception_3b/output needs backward computation. I0413 06:08:56.442138 22206 net.cpp:220] inception_3b/relu_pool_proj needs backward computation. I0413 06:08:56.442142 22206 net.cpp:220] inception_3b/pool_proj needs backward computation. I0413 06:08:56.442147 22206 net.cpp:220] inception_3b/pool needs backward computation. 
I0413 06:08:56.442150 22206 net.cpp:220] inception_3b/relu_5x5 needs backward computation. I0413 06:08:56.442154 22206 net.cpp:220] inception_3b/5x5 needs backward computation. I0413 06:08:56.442157 22206 net.cpp:220] inception_3b/relu_5x5_reduce needs backward computation. I0413 06:08:56.442162 22206 net.cpp:220] inception_3b/5x5_reduce needs backward computation. I0413 06:08:56.442165 22206 net.cpp:220] inception_3b/relu_3x3 needs backward computation. I0413 06:08:56.442169 22206 net.cpp:220] inception_3b/3x3 needs backward computation. I0413 06:08:56.442173 22206 net.cpp:220] inception_3b/relu_3x3_reduce needs backward computation. I0413 06:08:56.442178 22206 net.cpp:220] inception_3b/3x3_reduce needs backward computation. I0413 06:08:56.442181 22206 net.cpp:220] inception_3b/relu_1x1 needs backward computation. I0413 06:08:56.442185 22206 net.cpp:220] inception_3b/1x1 needs backward computation. I0413 06:08:56.442189 22206 net.cpp:220] inception_3a/output_inception_3a/output_0_split needs backward computation. I0413 06:08:56.442199 22206 net.cpp:220] inception_3a/output needs backward computation. I0413 06:08:56.442204 22206 net.cpp:220] inception_3a/relu_pool_proj needs backward computation. I0413 06:08:56.442209 22206 net.cpp:220] inception_3a/pool_proj needs backward computation. I0413 06:08:56.442212 22206 net.cpp:220] inception_3a/pool needs backward computation. I0413 06:08:56.442216 22206 net.cpp:220] inception_3a/relu_5x5 needs backward computation. I0413 06:08:56.442220 22206 net.cpp:220] inception_3a/5x5 needs backward computation. I0413 06:08:56.442224 22206 net.cpp:220] inception_3a/relu_5x5_reduce needs backward computation. I0413 06:08:56.442229 22206 net.cpp:220] inception_3a/5x5_reduce needs backward computation. I0413 06:08:56.442232 22206 net.cpp:220] inception_3a/relu_3x3 needs backward computation. I0413 06:08:56.442235 22206 net.cpp:220] inception_3a/3x3 needs backward computation. 
I0413 06:08:56.442239 22206 net.cpp:220] inception_3a/relu_3x3_reduce needs backward computation. I0413 06:08:56.442243 22206 net.cpp:220] inception_3a/3x3_reduce needs backward computation. I0413 06:08:56.442247 22206 net.cpp:220] inception_3a/relu_1x1 needs backward computation. I0413 06:08:56.442251 22206 net.cpp:220] inception_3a/1x1 needs backward computation. I0413 06:08:56.442255 22206 net.cpp:220] pool2/3x3_s2_pool2/3x3_s2_0_split needs backward computation. I0413 06:08:56.442260 22206 net.cpp:220] pool2/3x3_s2 needs backward computation. I0413 06:08:56.442265 22206 net.cpp:220] conv2/norm2 needs backward computation. I0413 06:08:56.442268 22206 net.cpp:220] conv2/relu_3x3 needs backward computation. I0413 06:08:56.442272 22206 net.cpp:220] conv2/3x3 needs backward computation. I0413 06:08:56.442276 22206 net.cpp:220] conv2/relu_3x3_reduce needs backward computation. I0413 06:08:56.442281 22206 net.cpp:220] conv2/3x3_reduce needs backward computation. I0413 06:08:56.442284 22206 net.cpp:220] pool1/norm1 needs backward computation. I0413 06:08:56.442289 22206 net.cpp:220] pool1/3x3_s2 needs backward computation. I0413 06:08:56.442293 22206 net.cpp:220] conv1/relu_7x7 needs backward computation. I0413 06:08:56.442297 22206 net.cpp:220] conv1/7x7_s2 needs backward computation. I0413 06:08:56.442301 22206 net.cpp:222] bb-obj-norm does not need backward computation. I0413 06:08:56.442307 22206 net.cpp:222] bb-label-norm does not need backward computation. I0413 06:08:56.442313 22206 net.cpp:222] obj-block_obj-block_0_split does not need backward computation. I0413 06:08:56.442318 22206 net.cpp:222] obj-block does not need backward computation. I0413 06:08:56.442325 22206 net.cpp:222] size-block_size-block_0_split does not need backward computation. I0413 06:08:56.442329 22206 net.cpp:222] size-block does not need backward computation. I0413 06:08:56.442335 22206 net.cpp:222] coverage-block does not need backward computation. 
I0413 06:08:56.442342 22206 net.cpp:222] coverage-label_slice-label_4_split does not need backward computation. I0413 06:08:56.442348 22206 net.cpp:222] obj-label_slice-label_3_split does not need backward computation. I0413 06:08:56.442353 22206 net.cpp:222] size-label_slice-label_2_split does not need backward computation. I0413 06:08:56.442358 22206 net.cpp:222] bbox-label_slice-label_1_split does not need backward computation. I0413 06:08:56.442364 22206 net.cpp:222] foreground-label_slice-label_0_split does not need backward computation. I0413 06:08:56.442370 22206 net.cpp:222] slice-label does not need backward computation. I0413 06:08:56.442375 22206 net.cpp:222] val_transform does not need backward computation. I0413 06:08:56.442380 22206 net.cpp:222] val_label does not need backward computation. I0413 06:08:56.442384 22206 net.cpp:222] val_data does not need backward computation. I0413 06:08:56.442387 22206 net.cpp:264] This network produces output loss_bbox I0413 06:08:56.442391 22206 net.cpp:264] This network produces output loss_coverage I0413 06:08:56.442395 22206 net.cpp:264] This network produces output mAP I0413 06:08:56.442399 22206 net.cpp:264] This network produces output precision I0413 06:08:56.442409 22206 net.cpp:264] This network produces output recall I0413 06:08:56.442546 22206 net.cpp:284] Network initialization done. I0413 06:08:56.443408 22206 solver.cpp:60] Solver scaffolding done. 
I0413 06:08:56.448447 22206 caffe.cpp:231] Starting Optimization I0413 06:08:56.448458 22206 solver.cpp:304] Solving I0413 06:08:56.448462 22206 solver.cpp:305] Learning Rate Policy: step I0413 06:08:56.454710 22206 solver.cpp:362] Iteration 0, Testing net (#0) I0413 06:08:56.454725 22206 net.cpp:723] Ignoring source layer train_data I0413 06:08:56.454730 22206 net.cpp:723] Ignoring source layer train_label I0413 06:08:56.454733 22206 net.cpp:723] Ignoring source layer train_transform I0413 06:09:23.007010 22206 solver.cpp:429] Test net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:09:23.007134 22206 solver.cpp:429] Test net output #1: loss_coverage = 305.735 ( 1 = 305.735 loss) I0413 06:09:23.007150 22206 solver.cpp:429] Test net output #2: mAP = 0 I0413 06:09:23.007155 22206 solver.cpp:429] Test net output #3: precision = 0 I0413 06:09:23.007159 22206 solver.cpp:429] Test net output #4: recall = 0 I0413 06:09:40.952916 22206 solver.cpp:242] Iteration 0 (0 iter/s, 44.5051s/40 iter), loss = 317.739 I0413 06:09:40.952960 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:09:40.952968 22206 solver.cpp:261] Train net output #1: loss_coverage = 318.719 ( 1 = 318.719 loss) I0413 06:09:40.952993 22206 sgd_solver.cpp:106] Iteration 0, lr = 0.001 I0413 06:12:09.236304 22206 solver.cpp:242] Iteration 40 (0.26975 iter/s, 148.286s/40 iter), loss = -6.43825e-20 I0413 06:12:09.236418 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:12:09.236426 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 06:12:09.236436 22206 sgd_solver.cpp:106] Iteration 40, lr = 0.001 I0413 06:14:37.587092 22206 solver.cpp:242] Iteration 80 (0.269627 iter/s, 148.353s/40 iter), loss = -6.43825e-20 I0413 06:14:37.587157 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:14:37.587165 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 06:14:37.587175 22206 
sgd_solver.cpp:106] Iteration 80, lr = 0.001 I0413 06:17:05.949126 22206 solver.cpp:242] Iteration 120 (0.269607 iter/s, 148.364s/40 iter), loss = -6.43825e-20 I0413 06:17:05.949256 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:17:05.949266 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 06:17:05.949278 22206 sgd_solver.cpp:106] Iteration 120, lr = 0.001 I0413 06:19:34.343139 22206 solver.cpp:242] Iteration 160 (0.269549 iter/s, 148.396s/40 iter), loss = -6.43825e-20 I0413 06:19:34.343253 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:19:34.343263 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 06:19:34.343273 22206 sgd_solver.cpp:106] Iteration 160, lr = 0.001 I0413 06:22:02.687221 22206 solver.cpp:242] Iteration 200 (0.269639 iter/s, 148.346s/40 iter), loss = -6.43825e-20 I0413 06:22:02.687297 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:22:02.687307 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 06:22:02.687319 22206 sgd_solver.cpp:106] Iteration 200, lr = 0.001 I0413 06:24:31.148965 22206 solver.cpp:242] Iteration 240 (0.269426 iter/s, 148.464s/40 iter), loss = -6.43825e-20 I0413 06:24:31.149034 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:24:31.149042 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 06:24:31.149052 22206 sgd_solver.cpp:106] Iteration 240, lr = 0.001 I0413 06:26:59.616809 22206 solver.cpp:242] Iteration 280 (0.269415 iter/s, 148.47s/40 iter), loss = -6.43825e-20 I0413 06:26:59.616961 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:26:59.616971 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 06:26:59.616981 22206 sgd_solver.cpp:106] Iteration 280, lr = 0.001 I0413 06:29:27.940598 22206 solver.cpp:242] Iteration 320 
(0.269676 iter/s, 148.326s/40 iter), loss = -6.43825e-20 I0413 06:29:27.940678 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:29:27.940688 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 06:29:27.940698 22206 sgd_solver.cpp:106] Iteration 320, lr = 0.001 I0413 06:29:31.628262 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_322.caffemodel I0413 06:29:31.759322 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_322.solverstate I0413 06:31:57.534430 22206 solver.cpp:242] Iteration 360 (0.267387 iter/s, 149.596s/40 iter), loss = -6.43825e-20 I0413 06:31:57.534548 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:31:57.534557 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 06:31:57.534569 22206 sgd_solver.cpp:106] Iteration 360, lr = 0.001 I0413 06:34:26.713977 22206 solver.cpp:242] Iteration 400 (0.268129 iter/s, 149.182s/40 iter), loss = -6.43825e-20 I0413 06:34:26.714052 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:34:26.714061 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 06:34:26.714072 22206 sgd_solver.cpp:106] Iteration 400, lr = 0.001 I0413 06:36:56.036567 22206 solver.cpp:242] Iteration 440 (0.267872 iter/s, 149.325s/40 iter), loss = -6.43825e-20 I0413 06:36:56.036634 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:36:56.036643 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 06:36:56.036653 22206 sgd_solver.cpp:106] Iteration 440, lr = 0.001 I0413 06:39:25.477144 22206 solver.cpp:242] Iteration 480 (0.267661 iter/s, 149.443s/40 iter), loss = -6.43825e-20 I0413 06:39:25.477215 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 06:39:25.477223 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) 
Here is a representative excerpt of the training log (the full log repeats the same zero losses on every line):

```
I0413 06:39:25.477234 22206 sgd_solver.cpp:106] Iteration 480, lr = 0.001
I0413 06:41:54.831908 22206 solver.cpp:242] Iteration 520 (0.267815 iter/s, 149.357s/40 iter), loss = -6.43825e-20
I0413 06:41:54.831998 22206 solver.cpp:261]     Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 06:41:54.832008 22206 solver.cpp:261]     Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 06:41:54.832020 22206 sgd_solver.cpp:106] Iteration 520, lr = 0.001
...
I0413 07:49:39.681165 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_1610.caffemodel
I0413 07:49:39.773512 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1610.solverstate
I0413 07:49:39.850608 22206 solver.cpp:362] Iteration 1610, Testing net (#0)
I0413 07:49:58.498661 22206 solver.cpp:429]     Test net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:49:58.498684 22206 solver.cpp:429]     Test net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:49:58.498710 22206 solver.cpp:429]     Test net output #2: mAP = 0
I0413 07:49:58.498716 22206 solver.cpp:429]     Test net output #3: precision = 0
I0413 07:49:58.498721 22206 solver.cpp:429]     Test net output #4: recall = 0
...
I0413 09:28:47.519222 22206 sgd_solver.cpp:106] Iteration 3200, lr = 0.0001
...
I0413 10:46:09.319939 22206 solver.cpp:242] Iteration 4440 (0.26813 iter/s, 149.182s/40 iter), loss = -6.43825e-20
```

The pattern never changes: from the first logged iteration through iteration 4440, `loss_bbox` and `loss_coverage` are exactly 0, the total loss stays at -6.43825e-20, and every validation pass (iterations 1610, 3220, ...) reports mAP = 0, precision = 0, recall = 0. The learning rate steps from 0.001 down to 0.0001 at iteration 3200 with no effect.
output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 10:46:09.320122 22206 sgd_solver.cpp:106] Iteration 4440, lr = 0.0001 I0413 10:48:38.501461 22206 solver.cpp:242] Iteration 4480 (0.268126 iter/s, 149.184s/40 iter), loss = -6.43825e-20 I0413 10:48:38.501576 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 10:48:38.501586 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 10:48:38.501597 22206 sgd_solver.cpp:106] Iteration 4480, lr = 0.0001 I0413 10:50:19.204316 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_4508.caffemodel I0413 10:50:19.296655 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_4508.solverstate I0413 10:51:07.778781 22206 solver.cpp:242] Iteration 4520 (0.267954 iter/s, 149.279s/40 iter), loss = -6.43825e-20 I0413 10:51:07.778888 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 10:51:07.778899 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 10:51:07.778910 22206 sgd_solver.cpp:106] Iteration 4520, lr = 0.0001 I0413 10:53:36.721230 22206 solver.cpp:242] Iteration 4560 (0.268556 iter/s, 148.945s/40 iter), loss = -6.43825e-20 I0413 10:53:36.721348 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 10:53:36.721357 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 10:53:36.721369 22206 sgd_solver.cpp:106] Iteration 4560, lr = 0.0001 I0413 10:56:05.601032 22206 solver.cpp:242] Iteration 4600 (0.268669 iter/s, 148.882s/40 iter), loss = -6.43825e-20 I0413 10:56:05.601160 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 10:56:05.601171 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 10:56:05.601181 22206 sgd_solver.cpp:106] Iteration 4600, lr = 0.0001 I0413 10:58:34.820750 22206 solver.cpp:242] Iteration 4640 (0.268057 iter/s, 149.222s/40 iter), loss = -6.43825e-20 
I0413 10:58:34.820994 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 10:58:34.821004 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 10:58:34.821015 22206 sgd_solver.cpp:106] Iteration 4640, lr = 0.0001 I0413 11:01:03.921664 22206 solver.cpp:242] Iteration 4680 (0.268271 iter/s, 149.103s/40 iter), loss = -6.43825e-20 I0413 11:01:03.921782 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:01:03.921792 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:01:03.921803 22206 sgd_solver.cpp:106] Iteration 4680, lr = 0.0001 I0413 11:03:33.017868 22206 solver.cpp:242] Iteration 4720 (0.268279 iter/s, 149.098s/40 iter), loss = -6.43825e-20 I0413 11:03:33.017990 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:03:33.018000 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:03:33.018012 22206 sgd_solver.cpp:106] Iteration 4720, lr = 0.0001 I0413 11:06:02.146970 22206 solver.cpp:242] Iteration 4760 (0.26822 iter/s, 149.131s/40 iter), loss = -6.43825e-20 I0413 11:06:02.147102 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:06:02.147112 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:06:02.147123 22206 sgd_solver.cpp:106] Iteration 4760, lr = 0.0001 I0413 11:08:31.073314 22206 solver.cpp:242] Iteration 4800 (0.268585 iter/s, 148.928s/40 iter), loss = -6.43825e-20 I0413 11:08:31.073456 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:08:31.073467 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:08:31.073477 22206 sgd_solver.cpp:106] Iteration 4800, lr = 0.0001 I0413 11:10:19.107978 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_4830.caffemodel I0413 11:10:19.199867 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file 
snapshot_iter_4830.solverstate I0413 11:10:19.276968 22206 solver.cpp:362] Iteration 4830, Testing net (#0) I0413 11:10:19.276990 22206 net.cpp:723] Ignoring source layer train_data I0413 11:10:19.276995 22206 net.cpp:723] Ignoring source layer train_label I0413 11:10:19.276999 22206 net.cpp:723] Ignoring source layer train_transform I0413 11:10:37.890264 22206 solver.cpp:429] Test net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:10:37.890288 22206 solver.cpp:429] Test net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:10:37.890295 22206 solver.cpp:429] Test net output #2: mAP = 0 I0413 11:10:37.890300 22206 solver.cpp:429] Test net output #3: precision = 0 I0413 11:10:37.890305 22206 solver.cpp:429] Test net output #4: recall = 0 I0413 11:11:18.855345 22206 solver.cpp:242] Iteration 4840 (0.238401 iter/s, 167.784s/40 iter), loss = -6.43825e-20 I0413 11:11:18.855469 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:11:18.855479 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:11:18.855490 22206 sgd_solver.cpp:106] Iteration 4840, lr = 0.0001 I0413 11:13:47.931879 22206 solver.cpp:242] Iteration 4880 (0.268315 iter/s, 149.079s/40 iter), loss = -6.43825e-20 I0413 11:13:47.931987 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:13:47.931996 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:13:47.932008 22206 sgd_solver.cpp:106] Iteration 4880, lr = 0.0001 I0413 11:16:17.053846 22206 solver.cpp:242] Iteration 4920 (0.268233 iter/s, 149.124s/40 iter), loss = -6.43825e-20 I0413 11:16:17.053958 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:16:17.053969 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:16:17.053980 22206 sgd_solver.cpp:106] Iteration 4920, lr = 0.0001 I0413 11:18:46.184485 22206 solver.cpp:242] Iteration 4960 (0.268217 iter/s, 149.133s/40 iter), loss = 
-6.43825e-20 I0413 11:18:46.184582 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:18:46.184592 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:18:46.184602 22206 sgd_solver.cpp:106] Iteration 4960, lr = 0.0001 I0413 11:21:15.249085 22206 solver.cpp:242] Iteration 5000 (0.268336 iter/s, 149.067s/40 iter), loss = -6.43825e-20 I0413 11:21:15.249160 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:21:15.249169 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:21:15.249181 22206 sgd_solver.cpp:106] Iteration 5000, lr = 0.0001 I0413 11:23:44.405105 22206 solver.cpp:242] Iteration 5040 (0.268172 iter/s, 149.158s/40 iter), loss = -6.43825e-20 I0413 11:23:44.405216 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:23:44.405226 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:23:44.405237 22206 sgd_solver.cpp:106] Iteration 5040, lr = 0.0001 I0413 11:26:13.267356 22206 solver.cpp:242] Iteration 5080 (0.268701 iter/s, 148.864s/40 iter), loss = -6.43825e-20 I0413 11:26:13.267424 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:26:13.267433 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:26:13.267443 22206 sgd_solver.cpp:106] Iteration 5080, lr = 0.0001 I0413 11:28:42.219928 22206 solver.cpp:242] Iteration 5120 (0.268538 iter/s, 148.955s/40 iter), loss = -6.43825e-20 I0413 11:28:42.220067 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:28:42.220077 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:28:42.220088 22206 sgd_solver.cpp:106] Iteration 5120, lr = 0.0001 I0413 11:30:37.844563 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_5152.caffemodel I0413 11:30:37.936465 22206 sgd_solver.cpp:273] Snapshotting solver state to binary 
proto file snapshot_iter_5152.solverstate I0413 11:31:11.507647 22206 solver.cpp:242] Iteration 5160 (0.267935 iter/s, 149.29s/40 iter), loss = -6.43825e-20 I0413 11:31:11.507751 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:31:11.507761 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:31:11.507772 22206 sgd_solver.cpp:106] Iteration 5160, lr = 0.0001 I0413 11:33:40.654230 22206 solver.cpp:242] Iteration 5200 (0.268189 iter/s, 149.149s/40 iter), loss = -6.43825e-20 I0413 11:33:40.654347 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:33:40.654357 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:33:40.654368 22206 sgd_solver.cpp:106] Iteration 5200, lr = 0.0001 I0413 11:36:09.591799 22206 solver.cpp:242] Iteration 5240 (0.268565 iter/s, 148.94s/40 iter), loss = -6.43825e-20 I0413 11:36:09.591915 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:36:09.591925 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:36:09.591936 22206 sgd_solver.cpp:106] Iteration 5240, lr = 0.0001 I0413 11:38:38.533434 22206 solver.cpp:242] Iteration 5280 (0.268558 iter/s, 148.944s/40 iter), loss = -6.43825e-20 I0413 11:38:38.533537 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:38:38.533547 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:38:38.533558 22206 sgd_solver.cpp:106] Iteration 5280, lr = 0.0001 I0413 11:41:07.509539 22206 solver.cpp:242] Iteration 5320 (0.268496 iter/s, 148.978s/40 iter), loss = -6.43825e-20 I0413 11:41:07.509606 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:41:07.509615 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:41:07.509626 22206 sgd_solver.cpp:106] Iteration 5320, lr = 0.0001 I0413 11:43:36.430819 22206 solver.cpp:242] 
Iteration 5360 (0.268594 iter/s, 148.923s/40 iter), loss = -6.43825e-20 I0413 11:43:36.430930 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:43:36.430940 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:43:36.430951 22206 sgd_solver.cpp:106] Iteration 5360, lr = 0.0001 I0413 11:46:05.434841 22206 solver.cpp:242] Iteration 5400 (0.268445 iter/s, 149.006s/40 iter), loss = -6.43825e-20 I0413 11:46:05.434968 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:46:05.434978 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:46:05.434988 22206 sgd_solver.cpp:106] Iteration 5400, lr = 0.0001 I0413 11:48:34.255635 22206 solver.cpp:242] Iteration 5440 (0.268776 iter/s, 148.823s/40 iter), loss = -6.43825e-20 I0413 11:48:34.255708 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:48:34.255717 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:48:34.255728 22206 sgd_solver.cpp:106] Iteration 5440, lr = 0.0001 I0413 11:50:37.201242 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_5474.caffemodel I0413 11:50:37.292891 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_5474.solverstate I0413 11:51:03.436975 22206 solver.cpp:242] Iteration 5480 (0.268126 iter/s, 149.184s/40 iter), loss = -6.43825e-20 I0413 11:51:03.437016 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:51:03.437024 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:51:03.437034 22206 sgd_solver.cpp:106] Iteration 5480, lr = 0.0001 I0413 11:53:32.492372 22206 solver.cpp:242] Iteration 5520 (0.268353 iter/s, 149.058s/40 iter), loss = -6.43825e-20 I0413 11:53:32.492514 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:53:32.492525 22206 solver.cpp:261] Train net output #1: 
loss_coverage = 0 ( 1 = 0 loss) I0413 11:53:32.492535 22206 sgd_solver.cpp:106] Iteration 5520, lr = 0.0001 I0413 11:56:01.845796 22206 solver.cpp:242] Iteration 5560 (0.267817 iter/s, 149.356s/40 iter), loss = -6.43825e-20 I0413 11:56:01.845908 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:56:01.845919 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:56:01.845930 22206 sgd_solver.cpp:106] Iteration 5560, lr = 0.0001 I0413 11:58:30.763732 22206 solver.cpp:242] Iteration 5600 (0.2686 iter/s, 148.92s/40 iter), loss = -6.43825e-20 I0413 11:58:30.763833 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 11:58:30.763842 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 11:58:30.763854 22206 sgd_solver.cpp:106] Iteration 5600, lr = 0.0001 I0413 12:00:59.810398 22206 solver.cpp:242] Iteration 5640 (0.268368 iter/s, 149.049s/40 iter), loss = -6.43825e-20 I0413 12:00:59.810508 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:00:59.810519 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:00:59.810530 22206 sgd_solver.cpp:106] Iteration 5640, lr = 0.0001 I0413 12:03:28.841716 22206 solver.cpp:242] Iteration 5680 (0.268396 iter/s, 149.033s/40 iter), loss = -6.43825e-20 I0413 12:03:28.841830 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:03:28.841840 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:03:28.841852 22206 sgd_solver.cpp:106] Iteration 5680, lr = 0.0001 I0413 12:05:57.901038 22206 solver.cpp:242] Iteration 5720 (0.268346 iter/s, 149.061s/40 iter), loss = -6.43825e-20 I0413 12:05:57.901135 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:05:57.901145 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:05:57.901156 22206 sgd_solver.cpp:106] Iteration 
5720, lr = 0.0001 I0413 12:08:27.050364 22206 solver.cpp:242] Iteration 5760 (0.268184 iter/s, 149.152s/40 iter), loss = -6.43825e-20 I0413 12:08:27.050463 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:08:27.050473 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:08:27.050484 22206 sgd_solver.cpp:106] Iteration 5760, lr = 0.0001 I0413 12:10:37.396387 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_5796.caffemodel I0413 12:10:37.488693 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_5796.solverstate I0413 12:10:56.210398 22206 solver.cpp:242] Iteration 5800 (0.268164 iter/s, 149.162s/40 iter), loss = -6.43825e-20 I0413 12:10:56.210441 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:10:56.210450 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:10:56.210460 22206 sgd_solver.cpp:106] Iteration 5800, lr = 0.0001 I0413 12:13:25.227550 22206 solver.cpp:242] Iteration 5840 (0.268422 iter/s, 149.019s/40 iter), loss = -6.43825e-20 I0413 12:13:25.227655 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:13:25.227665 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:13:25.227676 22206 sgd_solver.cpp:106] Iteration 5840, lr = 0.0001 I0413 12:15:54.367265 22206 solver.cpp:242] Iteration 5880 (0.268201 iter/s, 149.142s/40 iter), loss = -6.43825e-20 I0413 12:15:54.367399 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:15:54.367416 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:15:54.367434 22206 sgd_solver.cpp:106] Iteration 5880, lr = 0.0001 I0413 12:18:23.401654 22206 solver.cpp:242] Iteration 5920 (0.268391 iter/s, 149.037s/40 iter), loss = -6.43825e-20 I0413 12:18:23.401799 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 
12:18:23.401809 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:18:23.401820 22206 sgd_solver.cpp:106] Iteration 5920, lr = 0.0001 I0413 12:20:52.562824 22206 solver.cpp:242] Iteration 5960 (0.268162 iter/s, 149.163s/40 iter), loss = -6.43825e-20 I0413 12:20:52.562935 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:20:52.562944 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:20:52.562955 22206 sgd_solver.cpp:106] Iteration 5960, lr = 0.0001 I0413 12:23:21.625830 22206 solver.cpp:242] Iteration 6000 (0.268339 iter/s, 149.065s/40 iter), loss = -6.43825e-20 I0413 12:23:21.625941 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:23:21.625950 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:23:21.625960 22206 sgd_solver.cpp:106] Iteration 6000, lr = 0.0001 I0413 12:25:50.510841 22206 solver.cpp:242] Iteration 6040 (0.26866 iter/s, 148.887s/40 iter), loss = -6.43825e-20 I0413 12:25:50.510959 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:25:50.510969 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:25:50.510980 22206 sgd_solver.cpp:106] Iteration 6040, lr = 0.0001 I0413 12:28:19.695318 22206 solver.cpp:242] Iteration 6080 (0.26812 iter/s, 149.187s/40 iter), loss = -6.43825e-20 I0413 12:28:19.695426 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:28:19.695436 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:28:19.695448 22206 sgd_solver.cpp:106] Iteration 6080, lr = 0.0001 I0413 12:30:37.608603 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_6118.caffemodel I0413 12:30:37.700292 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_6118.solverstate I0413 12:30:48.915585 22206 solver.cpp:242] Iteration 6120 (0.268056 
iter/s, 149.222s/40 iter), loss = -6.43825e-20 I0413 12:30:48.915632 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:30:48.915639 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:30:48.915649 22206 sgd_solver.cpp:106] Iteration 6120, lr = 0.0001 I0413 12:33:18.196563 22206 solver.cpp:242] Iteration 6160 (0.267947 iter/s, 149.283s/40 iter), loss = -6.43825e-20 I0413 12:33:18.196668 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:33:18.196678 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:33:18.196689 22206 sgd_solver.cpp:106] Iteration 6160, lr = 0.0001 I0413 12:35:47.248986 22206 solver.cpp:242] Iteration 6200 (0.268358 iter/s, 149.055s/40 iter), loss = -6.43825e-20 I0413 12:35:47.249079 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:35:47.249089 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:35:47.249099 22206 sgd_solver.cpp:106] Iteration 6200, lr = 0.0001 I0413 12:38:16.190786 22206 solver.cpp:242] Iteration 6240 (0.268557 iter/s, 148.944s/40 iter), loss = -6.43825e-20 I0413 12:38:16.190904 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:38:16.190914 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:38:16.190924 22206 sgd_solver.cpp:106] Iteration 6240, lr = 0.0001 I0413 12:40:45.315899 22206 solver.cpp:242] Iteration 6280 (0.268227 iter/s, 149.127s/40 iter), loss = -6.43825e-20 I0413 12:40:45.316043 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:40:45.316054 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:40:45.316066 22206 sgd_solver.cpp:106] Iteration 6280, lr = 0.0001 I0413 12:43:14.356199 22206 solver.cpp:242] Iteration 6320 (0.26838 iter/s, 149.042s/40 iter), loss = -6.43825e-20 I0413 12:43:14.356298 22206 solver.cpp:261] 
Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:43:14.356307 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:43:14.356318 22206 sgd_solver.cpp:106] Iteration 6320, lr = 0.0001 I0413 12:45:43.493907 22206 solver.cpp:242] Iteration 6360 (0.268205 iter/s, 149.14s/40 iter), loss = -6.43825e-20 I0413 12:45:43.494006 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:45:43.494016 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:45:43.494027 22206 sgd_solver.cpp:106] Iteration 6360, lr = 0.0001 I0413 12:48:12.582765 22206 solver.cpp:242] Iteration 6400 (0.268292 iter/s, 149.091s/40 iter), loss = -6.43825e-20 I0413 12:48:12.582886 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:48:12.582897 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:48:12.582908 22206 sgd_solver.cpp:106] Iteration 6400, lr = 1e-05 I0413 12:50:38.043826 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_6440.caffemodel I0413 12:50:38.135321 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_6440.solverstate I0413 12:50:38.208289 22206 solver.cpp:362] Iteration 6440, Testing net (#0) I0413 12:50:38.208309 22206 net.cpp:723] Ignoring source layer train_data I0413 12:50:38.208314 22206 net.cpp:723] Ignoring source layer train_label I0413 12:50:38.208317 22206 net.cpp:723] Ignoring source layer train_transform I0413 12:50:56.811767 22206 solver.cpp:429] Test net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:50:56.811790 22206 solver.cpp:429] Test net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:50:56.811796 22206 solver.cpp:429] Test net output #2: mAP = 0 I0413 12:50:56.811801 22206 solver.cpp:429] Test net output #3: precision = 0 I0413 12:50:56.811805 22206 solver.cpp:429] Test net output #4: recall = 0 I0413 12:51:00.520951 22206 solver.cpp:242] Iteration 6440 
(0.238179 iter/s, 167.941s/40 iter), loss = -6.43825e-20 I0413 12:51:00.520995 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:51:00.521003 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:51:00.521014 22206 sgd_solver.cpp:106] Iteration 6440, lr = 1e-05 I0413 12:53:29.562093 22206 solver.cpp:242] Iteration 6480 (0.268378 iter/s, 149.043s/40 iter), loss = -6.43825e-20 I0413 12:53:29.562188 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:53:29.562198 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:53:29.562208 22206 sgd_solver.cpp:106] Iteration 6480, lr = 1e-05 I0413 12:55:58.574651 22206 solver.cpp:242] Iteration 6520 (0.26843 iter/s, 149.015s/40 iter), loss = -6.43825e-20 I0413 12:55:58.574770 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:55:58.574780 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:55:58.574791 22206 sgd_solver.cpp:106] Iteration 6520, lr = 1e-05 I0413 12:58:27.761814 22206 solver.cpp:242] Iteration 6560 (0.268116 iter/s, 149.189s/40 iter), loss = -6.43825e-20 I0413 12:58:27.761919 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 12:58:27.761929 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 12:58:27.761940 22206 sgd_solver.cpp:106] Iteration 6560, lr = 1e-05 I0413 13:00:56.901226 22206 solver.cpp:242] Iteration 6600 (0.268202 iter/s, 149.142s/40 iter), loss = -6.43825e-20 I0413 13:00:56.901410 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 13:00:56.901429 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 13:00:56.901448 22206 sgd_solver.cpp:106] Iteration 6600, lr = 1e-05 I0413 13:03:25.910713 22206 solver.cpp:242] Iteration 6640 (0.268435 iter/s, 149.012s/40 iter), loss = -6.43825e-20 I0413 13:03:25.910823 22206 
solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 13:03:25.910832 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 13:03:25.910843 22206 sgd_solver.cpp:106] Iteration 6640, lr = 1e-05 I0413 13:05:55.244607 22206 solver.cpp:242] Iteration 6680 (0.267852 iter/s, 149.336s/40 iter), loss = -6.43825e-20 I0413 13:05:55.244720 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 13:05:55.244731 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 13:05:55.244742 22206 sgd_solver.cpp:106] Iteration 6680, lr = 1e-05 I0413 13:08:24.363471 22206 solver.cpp:242] Iteration 6720 (0.268238 iter/s, 149.121s/40 iter), loss = -6.43825e-20 I0413 13:08:24.363572 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 13:08:24.363582 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 13:08:24.363593 22206 sgd_solver.cpp:106] Iteration 6720, lr = 1e-05 I0413 13:10:53.399459 22206 solver.cpp:242] Iteration 6760 (0.268388 iter/s, 149.038s/40 iter), loss = -6.43825e-20 I0413 13:10:53.399587 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 13:10:53.399597 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 13:10:53.399607 22206 sgd_solver.cpp:106] Iteration 6760, lr = 1e-05 I0413 13:10:57.139369 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_6762.caffemodel I0413 13:10:57.231012 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_6762.solverstate I0413 13:13:22.875146 22206 solver.cpp:242] Iteration 6800 (0.267598 iter/s, 149.478s/40 iter), loss = -6.43825e-20 I0413 13:13:22.875260 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 ( 2 = 0 loss) I0413 13:13:22.875272 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 ( 1 = 0 loss) I0413 13:13:22.875283 22206 sgd_solver.cpp:106] Iteration 6800, lr = 1e-05 
```
I0413 13:15:51.919904 22206 solver.cpp:242] Iteration 6840 (0.268372 iter/s, 149.047s/40 iter), loss = -6.43825e-20
I0413 13:15:51.920022 22206 solver.cpp:261]     Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 13:15:51.920032 22206 solver.cpp:261]     Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 13:15:51.920043 22206 sgd_solver.cpp:106] Iteration 6840, lr = 1e-05

[... the same zero losses repeat every 40 iterations from 6840 through 9640, with snapshots written at iterations 7084, 7406, 7728, 8050, 8372, 8694, 9016, 9338 and 9660, and the learning rate dropping to 1e-06 at iteration 9600 ...]

I0413 14:31:00.803562 22206 solver.cpp:362] Iteration 8050, Testing net (#0)
I0413 14:31:00.803586 22206 net.cpp:723] Ignoring source layer train_data
I0413 14:31:00.803591 22206 net.cpp:723] Ignoring source layer train_label
I0413 14:31:00.803594 22206 net.cpp:723] Ignoring source layer train_transform
I0413 14:31:19.506942 22206 solver.cpp:429]     Test net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 14:31:19.506966 22206 solver.cpp:429]     Test net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 14:31:19.506973 22206 solver.cpp:429]     Test net output #2: mAP = 0
I0413 14:31:19.506978 22206 solver.cpp:429]     Test net output #3: precision = 0
I0413 14:31:19.506983 22206 solver.cpp:429]     Test net output #4: recall = 0

[... identical all-zero test output again at iteration 9660 ...]

I0413 16:12:01.052984 22206 solver.cpp:347] Optimization Done.
I0413 16:12:01.052987 22206 caffe.cpp:234] Optimization Done.
```
Now I am getting an accuracy graph, as shown here: https://cloud.githubusercontent.com/assets/26631312/25208518/e02ba350-2592-11e7-87a9-8b99cb4cb9ec.png How can I improve the mAP value?
What param/model/dataset changes helped in increasing the accuracy to the current value?
Please send the custom DetectNet network you are using now. I will change it and send it back for you to try. If possible, send one of the images being processed, or let me know the properties of the images you are using. I hope I can help you.
I'm using the custom network that I sent you before. Can you change that and send it back? I'll try it out. Thanks!
As mentioned before, my images are 1248x384, with 2000 training images and 500 test images.
Hi there!
I'm trying to use the DetectNet network on DIGITS to detect blood vessels in ultrasound images (only one blood vessel per image). My images are originally 128x128 pixels and grayscale, so I resized them to 512x512 to make it easier for DetectNet to place the bounding box, and converted them to RGB by stacking the same image into a 3-channel matrix. I also compress the images as JPEG.
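For reference, the preprocessing described above (integer upscaling plus channel replication) can be sketched roughly like this; the function name is hypothetical and nearest-neighbour upscaling via `np.repeat` is just one simple choice (an interpolating resize from PIL or OpenCV would also work):

```python
import numpy as np

def gray_to_rgb_upscaled(img, scale=4):
    """Upscale a 2-D grayscale image by an integer factor (nearest
    neighbour) and replicate it into three identical channels so an
    RGB pipeline like DetectNet's will accept it.
    (Hypothetical helper, not part of DIGITS.)"""
    up = np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)
    return np.stack([up, up, up], axis=-1)

# 128x128 grayscale -> 512x512 RGB, as in the post above
gray = np.zeros((128, 128), dtype=np.uint8)
rgb = gray_to_rgb_upscaled(gray)
print(rgb.shape)  # (512, 512, 3)
```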
Here are an example image I'm using and its label plotted with MATLAB:
For the image labels I use the following (KITTI) format; this is the label for the image above: `car 0.0 0 0.0 12.0 172.0 402.0 318.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0`. I have checked, and ALL the bounding boxes are between 50x50 and 400x400 pixels, so that shouldn't be the problem.
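A label line like the one above can be generated with a small helper (hypothetical function name). In the KITTI format DetectNet consumes, only the class name and the four bbox coordinates (left, top, right, bottom) matter for 2-D detection; the remaining fields (truncation, occlusion, alpha, and the 3-D dimensions/location/rotation) can be zeroed:

```python
def kitti_label(left, top, right, bottom, cls="car"):
    """Build a 15-field KITTI-format label line for DetectNet.
    Fields after the bbox are unused for 2-D detection and zeroed.
    (Hypothetical helper, not part of DIGITS.)"""
    fields = [cls, "0.0", "0", "0.0",
              f"{left:.1f}", f"{top:.1f}", f"{right:.1f}", f"{bottom:.1f}"]
    fields += ["0.0"] * 7  # 3-D dims, location, rotation_y: unused here
    return " ".join(fields)

print(kitti_label(12, 172, 402, 318))
# car 0.0 0 0.0 12.0 172.0 402.0 318.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
```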
Here are the parameters I have used in the last run and also the results I got:
But as you can see, the mAP value is always zero, and when I test a single image or a list of images, the network doesn't place any bounding box (even on images I used for training!).
As you can see, it doesn't place the bounding box, yet the coverage map is correct and identifies the blood vessel properly. Moreover, on the occasional images where it does place a bounding box (usually training images), the box is always correctly placed.
Finally, I'll post the code in caffe of the network we are using. Maybe I've missed something and you can point it out to me:
Detectnet.txt
With all this, I wonder the following: 1) After some epochs, the network starts to overfit without the mAP having increased at all, even with 130k images... Do you have any idea of what could be going wrong?
2) With the amount of images that I'm using, I don't understand why it doesn't place the box even after 300 epochs. Do you think the sizes are an issue? Should we grow the 512x512 images to 1024x1024 via padding (to keep the bounding boxes between 50x50 and 400x400)?
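If we do try that padding experiment, the key detail is shifting the label coordinates by the same offset as the pixels. A minimal sketch (hypothetical helper; centre-padding with zeros assumed):

```python
import numpy as np

def pad_image_and_box(img, box, target=1024):
    """Zero-pad an (H, W, C) image to target x target, centring the
    original pixels, and shift the (left, top, right, bottom) box by
    the same offset so the label stays aligned with the image.
    (Hypothetical helper, not part of DIGITS.)"""
    h, w = img.shape[:2]
    oy, ox = (target - h) // 2, (target - w) // 2
    padded = np.zeros((target, target, img.shape[2]), dtype=img.dtype)
    padded[oy:oy + h, ox:ox + w] = img
    l, t, r, b = box
    return padded, (l + ox, t + oy, r + ox, b + oy)

img = np.ones((512, 512, 3), dtype=np.uint8)
padded, box = pad_image_and_box(img, (12, 172, 402, 318))
print(padded.shape, box)  # (1024, 1024, 3) (268, 428, 658, 574)
```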
3) I am aware that the Python layers take parameters via param_str (https://github.com/gheinrich/caffe/blob/08c0ce38d1cd144aad11d62ee0045d12265b6cbf/examples/kitti/detectnet_network.prototxt#L2502). I have tried changing them, but I cannot lower the threshold enough for the network to place more bounding boxes... How should we tune them so the network places exactly one bounding box per image? It seems it shouldn't be hard, given that the coverage map is almost perfect.
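Since only one vessel per image is expected, one workaround worth considering is bypassing the clustering layer's threshold tuning entirely and post-processing the coverage map yourself: take its peak and emit a single box centred there. This is a simplified sketch, not DetectNet's actual clustering (which also uses the bbox regressor output and rectangle grouping); the function name, the fixed box size, and the threshold are all assumptions for illustration:

```python
import numpy as np

def box_from_coverage(coverage, stride=16, box_size=128, threshold=0.1):
    """Find the peak of a DetectNet-style coverage map (one value per
    stride x stride cell of the input image) and return a single
    fixed-size (left, top, right, bottom) box centred on it, or None
    if the peak is below the threshold.
    (Hypothetical post-processing sketch, not DetectNet's clusterer.)"""
    if coverage.max() < threshold:
        return None
    y, x = np.unravel_index(np.argmax(coverage), coverage.shape)
    cx = x * stride + stride // 2  # cell centre in image pixels
    cy = y * stride + stride // 2
    half = box_size // 2
    return (cx - half, cy - half, cx + half, cy + half)

# 512x512 image with stride 16 -> 32x32 coverage map
cov = np.zeros((32, 32))
cov[10, 12] = 0.9  # strong activation at cell (row 10, col 12)
print(box_from_coverage(cov))  # (136, 104, 264, 232)
```

A fixed box size is crude, but since the goal is "a point roughly at the centre of the vessel" rather than a tight fit, the peak location alone may already be enough.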
4) Any other suggestion or recommendations will be highly appreciated.
Thank you very much. I really need some help here. Thanks! :)