tensorflow / models

Models and examples built with TensorFlow

Segmentation fault #6801

Closed. RizanPSTU closed this issue 5 years ago.

RizanPSTU commented 5 years ago

Running on Colab, in /root/models/research/deeplab

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see:

testBuildDeepLabWithDensePredictionCell (main.DeeplabModelTest) ... WARNING:tensorflow:From /usr/lib/python2.7/contextlib.py:84: test_session (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use self.session() or self.cached_session() instead. 2019-05-16 22:35:26.621976: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2019-05-16 22:35:26.836012: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-05-16 22:35:26.836555: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x555773ea3440 executing computations on platform CUDA. Devices: 2019-05-16 22:35:26.836591: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): Tesla T4, Compute Capability 7.5 2019-05-16 22:35:26.838446: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz 2019-05-16 22:35:26.838661: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x555773ea2d60 executing computations on platform Host. Devices: 2019-05-16 22:35:26.838693: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): , 2019-05-16 22:35:26.838994: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59 pciBusID: 0000:00:04.0 totalMemory: 14.73GiB freeMemory: 14.60GiB 2019-05-16 22:35:26.839018: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-05-16 22:35:26.840638: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-05-16 22:35:26.840670: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-05-16 22:35:26.840681: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-05-16 22:35:26.840901: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0. 2019-05-16 22:35:26.840936: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4523 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5) WARNING:tensorflow:From /root/models/research/deeplab/core/feature_extractor.py:196: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer. ok testBuildDeepLabv2 (main.DeeplabModelTest) ... 
2019-05-16 22:35:28.706723: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-05-16 22:35:28.706797: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-05-16 22:35:28.706812: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-05-16 22:35:28.706825: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-05-16 22:35:28.707135: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4523 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5) 2019-05-16 22:35:33.337877: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-05-16 22:35:33.337949: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-05-16 22:35:33.337965: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-05-16 22:35:33.337978: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-05-16 22:35:33.338230: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4523 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5) 2019-05-16 22:35:39.616532: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-05-16 22:35:39.616603: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-05-16 22:35:39.616618: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-05-16 22:35:39.616630: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-05-16 22:35:39.616881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4523 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5) 2019-05-16 22:35:41.594860: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-05-16 22:35:41.594933: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-05-16 22:35:41.594948: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-05-16 22:35:41.594961: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-05-16 22:35:41.595227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4523 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5) ok testForwardpassDeepLabv3plus (main.DeeplabModelTest) ... 
2019-05-16 22:35:44.074012: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-05-16 22:35:44.074081: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-05-16 22:35:44.074096: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-05-16 22:35:44.074109: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-05-16 22:35:44.074409: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4523 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5) ok testWrongDeepLabVariant (main.DeeplabModelTest) ... ok test_session (main.DeeplabModelTest) Use cached_session instead. (deprecated) ... skipped 'Not a test.'


Ran 5 tests in 19.954s

OK (skipped=1) Downloading VOCtrainval_11-May-2012.tar to ./pascal_voc_seg % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 388 0 388 0 0 57 0 --:--:-- 0:00:06 --:--:-- 85 100 57.9M 0 57.9M 0 0 7254k 0 --:--:-- 0:00:08 --:--:-- 67.3M Uncompressing Rizan VOCtrainval_11-May-2012.tar Removing the color map in ground truth annotations... Converting PASCAL VOC 2012 dataset... ('main ar split ', ['./pascal_voc_seg/VOCdevkit/VOC2012/ImageSets/Segmentation/train.txt', './pascal_voc_seg/VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt']) ('for Rizan split', './pascal_voc_seg/VOCdevkit/VOC2012/ImageSets/Segmentation/train.txt') ('Vitore datasetsplit ', './pascal_voc_seg/VOCdevkit/VOC2012/ImageSets/Segmentation/train.txt') ('Rizan dataset ', 'train') Processing Riztrain('Rizan f ', ['re500500leaf639', 're500500leaf692', 're500500leaf1075', 're500500leaf1117', 're500500leaf1697', 're500500leaf240', 're500500leaf596', 're500500leaf744', 're500500leaf910', 're500500leaf1212', 're500500leaf1087', 're500500leaf898', 're500500leaf1670', 're500500leaf406', 're500500leaf1176', 're500500leaf1029', 're500500leaf831', 're500500leaf641', 're500500leaf1155', 're500500leaf282', 're500500leaf1480', 're500500leaf42', 're500500leaf1356', 're500500leaf429', 're500500leaf1659', 're500500leaf498', 're500500leaf889', 're500500leaf352', 're500500leaf1192', 're500500leaf1028', 're500500leaf1391', 're500500leaf816', 're500500leaf1539', 're500500leaf228', 're500500leaf1370', 're500500leaf1482', 're500500leaf1641', 're500500leaf1533', 're500500leaf716', 're500500leaf1332', 're500500leaf1475', 're500500leaf1653', 're500500leaf62', 're500500leaf1575', 're500500leaf1089', 're500500leaf700', 're500500leaf1459', 're500500leaf574', 're500500leaf984', 're500500leaf1106', 're500500leaf1462', 're500500leaf915', 're500500leaf659', 're500500leaf1036', 're500500leaf1096', 're500500leaf728', 're500500leaf1765', 're500500leaf1346', 're500500leaf1721', 're500500leaf812', 're500500leaf233', 're500500leaf593', 're500500leaf685', 're500500leaf1314', 're500500leaf499', 're500500leaf483', 're500500leaf181', 're500500leaf146', 're500500leaf1030', 're500500leaf249', 're500500leaf124', 're500500leaf138', 're500500leaf1020', 're500500leaf305', 're500500leaf1306', 're500500leaf1772', 're500500leaf1610', 're500500leaf1300', 're500500leaf760', 're500500leaf973', 're500500leaf1429', 're500500leaf60', 're500500leaf788', 're500500leaf506', 're500500leaf1100', 're500500leaf601', 're500500leaf1341', 're500500leaf1590', 're500500leaf278', 're500500leaf21', 're500500leaf815', 're500500leaf1256', 're500500leaf1043', 're500500leaf156', 're500500leaf527', 're500500leaf505', 're500500leaf1738', 're500500leaf1109', 're500500leaf713', 're500500leaf546', 're500500leaf772', 're500500leaf27', 're500500leaf1510', 're500500leaf160', 're500500leaf1723', 're500500leaf1203', 're500500leaf676', 're500500leaf1454', 're500500leaf1486', 're500500leaf1455', 're500500leaf1515', 're500500leaf695', 're500500leaf1751', 're500500leaf1181', 're500500leaf1307', 're500500leaf1714', 're500500leaf403', 're500500leaf982', 're500500leaf141', 're500500leaf471', 're500500leaf1270', 're500500leaf1528', 're500500leaf942', 're500500leaf578', 're500500leaf1000', 're500500leaf299', 're500500leaf978', 're500500leaf1679', 're500500leaf568', 're500500leaf817', 're500500leaf1595', 're500500leaf515', 're500500leaf477', 're500500leaf790', 're500500leaf245', 're500500leaf203', 're500500leaf1211', 're500500leaf1095', 're500500leaf969', 
're500500leaf1194', 're500500leaf1101', 're500500leaf1630', 're500500leaf35', 're500500leaf1674', 're500500leaf177', 're500500leaf1442', 're500500leaf295', 're500500leaf75', 're500500leaf1631', 're500500leaf1021', 're500500leaf331', 're500500leaf759', 're500500leaf1357', 're500500leaf1611', 're500500leaf1033', 're500500leaf706', 're500500leaf1255', 're500500leaf1518', 're500500leaf1180', 're500500leaf1146', 're500500leaf833', 're500500leaf422', 're500500leaf464', 're500500leaf25', 're500500leaf159', 're500500leaf274', 're500500leaf1719', 're500500leaf1214', 're500500leaf223', 're500500leaf1221', 're500500leaf1272', 're500500leaf1400', 're500500leaf852', 're500500leaf818', 're500500leaf1435', 're500500leaf793', 're500500leaf1798', 're500500leaf1403', 're500500leaf665', 're500500leaf873', 're500500leaf1223', 're500500leaf705', 're500500leaf1384', 're500500leaf1497', 're500500leaf946', 're500500leaf686', 're500500leaf310', 're500500leaf1724', 're500500leaf662', 're500500leaf1531', 're500500leaf201', 're500500leaf633', 're500500leaf377', 're500500leaf1156', 're500500leaf683', 're500500leaf927', 're500500leaf1064', 're500500leaf57', 're500500leaf963', 're500500leaf1815', 're500500leaf1545', 're500500leaf23', 're500500leaf714', 're500500leaf599', 're500500leaf1436', 're500500leaf151', 're500500leaf823', 're500500leaf549', 're500500leaf799', 're500500leaf895', 're500500leaf548', 're500500leaf162', 're500500leaf142', 're500500leaf421', 're500500leaf276', 're500500leaf503', 're500500leaf709', 're500500leaf891', 're500500leaf689', 're500500leaf1660', 're500500leaf17', 're500500leaf1383', 're500500leaf1572', 're500500leaf1338', 're500500leaf303', 're500500leaf170', 're500500leaf858', 're500500leaf522', 're500500leaf1770', 're500500leaf569', 're500500leaf908', 're500500leaf435', 're500500leaf622', 're500500leaf930', 're500500leaf820', 're500500leaf552', 're500500leaf239', 're500500leaf433', 're500500leaf824', 're500500leaf1681', 're500500leaf1340', 're500500leaf118', 're500500leaf757', 're500500leaf606', 're500500leaf1288', 're500500leaf1003', 're500500leaf1755', 're500500leaf1279', 're500500leaf609', 're500500leaf1561', 're500500leaf1059', 're500500leaf1331', 're500500leaf213', 're500500leaf339', 're500500leaf1425', 're500500leaf1133', 're500500leaf148', 're500500leaf800', 're500500leaf86', 're500500leaf254', 're500500leaf484', 're500500leaf381', 're500500leaf822', 're500500leaf504', 're500500leaf440', 're500500leaf263', 're500500leaf486', 're500500leaf646', 're500500leaf205', 're500500leaf298', 're500500leaf1268', 're500500leaf830', 're500500leaf192', 're500500leaf1797', 're500500leaf914', 're500500leaf1552', 're500500leaf306', 're500500leaf607', 're500500leaf947', 're500500leaf1079', 're500500leaf1032', 're500500leaf1577', 're500500leaf1642', 're500500leaf925', 're500500leaf995', 're500500leaf1524', 're500500leaf1485', 're500500leaf862', 're500500leaf905', 're500500leaf847', 're500500leaf887', 're500500leaf1603', 're500500leaf412', 're500500leaf1808', 're500500leaf1791', 're500500leaf917', 're500500leaf351', 're500500leaf736', 're500500leaf1636', 're500500leaf1206', 're500500leaf1360', 're500500leaf1361', 're500500leaf1702', 're500500leaf721', 're500500leaf1634', 're500500leaf172', 're500500leaf859', 're500500leaf266', 're500500leaf1011', 're500500leaf1511', 're500500leaf107', 're500500leaf1756', 're500500leaf1612', 're500500leaf941', 're500500leaf734', 're500500leaf750', 're500500leaf1405', 're500500leaf106', 're500500leaf701', 're500500leaf1077', 're500500leaf1495', 're500500leaf280', 
're500500leaf110', 're500500leaf1508', 're500500leaf297', 're500500leaf96', 're500500leaf259', 're500500leaf1190', 're500500leaf1513', 're500500leaf1186', 're500500leaf1428', 're500500leaf819', 're500500leaf1152', 're500500leaf1424', 're500500leaf153', 're500500leaf1567', 're500500leaf373', 're500500leaf1451', 're500500leaf853', 're500500leaf6', 're500500leaf1339', 're500500leaf1068', 're500500leaf1010', 're500500leaf1174', 're500500leaf1655', 're500500leaf1149', 're500500leaf673', 're500500leaf514', 're500500leaf1369', 're500500leaf1421', 're500500leaf762', 're500500leaf1185', 're500500leaf687', 're500500leaf394', 're500500leaf120', 're500500leaf1748', 're500500leaf966', 're500500leaf554', 're500500leaf849', 're500500leaf46', 're500500leaf1303', 're500500leaf1210', 're500500leaf1037', 're500500leaf1121', 're500500leaf1468', 're500500leaf967', 're500500leaf1254', 're500500leaf551', 're500500leaf1602', 're500500leaf944', 're500500leaf1046', 're500500leaf1229', 're500500leaf489', 're500500leaf958', 're500500leaf1197', 're500500leaf1289', 're500500leaf1588', 're500500leaf558', 're500500leaf367', 're500500leaf22', 're500500leaf918', 're500500leaf531', 're500500leaf1235', 're500500leaf840', 're500500leaf588', 're500500leaf1355', 're500500leaf1496', 're500500leaf528', 're500500leaf557', 're500500leaf1364', 're500500leaf1569', 're500500leaf1044', 're500500leaf398', 're500500leaf876', 're500500leaf88', 're500500leaf455', 're500500leaf1165', 're500500leaf1163', 're500500leaf356', 're500500leaf1541', 're500500leaf542', 're500500leaf169', 're500500leaf1287', 're500500leaf1312', 're500500leaf126', 're500500leaf1458', 're500500leaf264', 're500500leaf214', 're500500leaf1237', 're500500leaf1582', 're500500leaf807', 're500500leaf1664', 're500500leaf693', 're500500leaf904', 're500500leaf449', 're500500leaf115', 're500500leaf300', 're500500leaf628', 're500500leaf383', 're500500leaf640', 're500500leaf877', 're500500leaf431', 're500500leaf193', 're500500leaf1473', 're500500leaf890', 're500500leaf271', 're500500leaf507', 're500500leaf418', 're500500leaf1620', 're500500leaf702', 're500500leaf1734', 're500500leaf1090', 're500500leaf131', 're500500leaf703', 're500500leaf32', 're500500leaf751', 're500500leaf333', 're500500leaf1733', 're500500leaf1053', 're500500leaf474', 're500500leaf1740', 're500500leaf382', 're500500leaf1591', 're500500leaf238', 're500500leaf395', 're500500leaf416', 're500500leaf1484', 're500500leaf880', 're500500leaf1173', 're500500leaf989', 're500500leaf1563', 're500500leaf1560', 're500500leaf1199', 're500500leaf1329', 're500500leaf292', 're500500leaf1621', 're500500leaf324', 're500500leaf1313', 're500500leaf971', 're500500leaf803', 're500500leaf1219', 're500500leaf1061', 're500500leaf645', 're500500leaf1683', 're500500leaf475', 're500500leaf275', 're500500leaf1574', 're500500leaf29', 're500500leaf1600', 're500500leaf1460', 're500500leaf1', 're500500leaf1142', 're500500leaf320', 're500500leaf1493', 're500500leaf467', 're500500leaf1026', 're500500leaf912', 're500500leaf1259', 're500500leaf1607', 're500500leaf72', 're500500leaf789', 're500500leaf1758', 're500500leaf668', 're500500leaf1701', 're500500leaf1739', 're500500leaf450', 're500500leaf842', 're500500leaf1788', 're500500leaf1387', 're500500leaf307', 're500500leaf452', 're500500leaf1227', 're500500leaf863', 're500500leaf1623', 're500500leaf137', 're500500leaf1789', 're500500leaf1698', 're500500leaf1669', 're500500leaf1226', 're500500leaf811', 're500500leaf1198', 're500500leaf1640', 're500500leaf180', 're500500leaf1519', 're500500leaf1774', 
're500500leaf1500', 're500500leaf667', 're500500leaf1251', 're500500leaf38', 're500500leaf1228', 're500500leaf1498', 're500500leaf53', 're500500leaf1373', 're500500leaf1298', 're500500leaf1762', 're500500leaf353', 're500500leaf792', 're500500leaf50', 're500500leaf206', 're500500leaf664', 're500500leaf986', 're500500leaf672', 're500500leaf208', 're500500leaf591', 're500500leaf1281', 're500500leaf139', 're500500leaf185', 're500500leaf1433', 're500500leaf150', 're500500leaf1696', 're500500leaf900', 're500500leaf1813', 're500500leaf296', 're500500leaf972', 're500500leaf1626', 're500500leaf397', 're500500leaf1284', 're500500leaf497', 're500500leaf1580', 're500500leaf1461', 're500500leaf1352', 're500500leaf1333', 're500500leaf1246', 're500500leaf1506', 're500500leaf916', 're500500leaf1527', 're500500leaf1257', 're500500leaf1672', 're500500leaf1463', 're500500leaf756', 're500500leaf1347', 're500500leaf1042', 're500500leaf425', 're500500leaf167', 're500500leaf860', 're500500leaf711', 're500500leaf1266', 're500500leaf1418', 're500500leaf262', 're500500leaf336', 're500500leaf1559', 're500500leaf1342', 're500500leaf1220', 're500500leaf376', 're500500leaf359', 're500500leaf575', 're500500leaf1326', 're500500leaf188', 're500500leaf1768', 're500500leaf742', 're500500leaf393', 're500500leaf1248', 're500500leaf1801', 're500500leaf322', 're500500leaf1700', 're500500leaf335', 're500500leaf637', 're500500leaf619', 're500500leaf1586', 're500500leaf1647', 're500500leaf196', 're500500leaf766', 're500500leaf186', 're500500leaf848', 're500500leaf808', 're500500leaf66', 're500500leaf1353', 're500500leaf1343', 're500500leaf13', 're500500leaf1048', 're500500leaf770', 're500500leaf996', 're500500leaf570', 're500500leaf1274', 're500500leaf1472', 're500500leaf525', 're500500leaf688', 're500500leaf749', 're500500leaf612', 're500500leaf1576', 're500500leaf614', 're500500leaf195', 're500500leaf1710', 're500500leaf7', 're500500leaf1397', 're500500leaf1483', 're500500leaf644', 're500500leaf1159', 're500500leaf121', 're500500leaf1706', 're500500leaf882', 're500500leaf761', 're500500leaf1785', 're500500leaf3', 're500500leaf1232', 're500500leaf945', 're500500leaf907', 're500500leaf1195', 're500500leaf446', 're500500leaf1135', 're500500leaf1189', 're500500leaf999', 're500500leaf851', 're500500leaf1726', 're500500leaf1477', 're500500leaf287', 're500500leaf1805', 're500500leaf594', 're500500leaf379', 're500500leaf573', 're500500leaf1140', 're500500leaf409', 're500500leaf1179', 're500500leaf962', 're500500leaf1124', 're500500leaf511', 're500500leaf1054', 're500500leaf119', 're500500leaf1718', 're500500leaf49', 're500500leaf600', 're500500leaf346', 're500500leaf426', 're500500leaf924', 're500500leaf739', 're500500leaf345', 're500500leaf1447', 're500500leaf1204', 're500500leaf1742', 're500500leaf1143', 're500500leaf1804', 're500500leaf1402', 're500500leaf45', 're500500leaf737', 're500500leaf1245', 're500500leaf1017', 're500500leaf938', 're500500leaf95', 're500500leaf556', 're500500leaf1058', 're500500leaf494', 're500500leaf289', 're500500leaf1649', 're500500leaf1119', 're500500leaf152', 're500500leaf14', 're500500leaf1296', 're500500leaf321', 're500500leaf1231', 're500500leaf1693', 're500500leaf1457', 're500500leaf1069', 're500500leaf1554', 're500500leaf155', 're500500leaf997', 're500500leaf1171', 're500500leaf1105', 're500500leaf951', 're500500leaf1557', 're500500leaf1627', 're500500leaf712', 're500500leaf1638', 're500500leaf1779', 're500500leaf1367', 're500500leaf804', 're500500leaf897', 're500500leaf1667', 're500500leaf794', 
're500500leaf1168', 're500500leaf99', 're500500leaf1450', 're500500leaf1157', 're500500leaf755', 're500500leaf1247', 're500500leaf1542', 're500500leaf1571', 're500500leaf1643', 're500500leaf1013', 're500500leaf269', 're500500leaf844', 're500500leaf315', 're500500leaf921', 're500500leaf225', 're500500leaf1737', 're500500leaf566', 're500500leaf534', 're500500leaf1253', 're500500leaf1810', 're500500leaf1629', 're500500leaf408', 're500500leaf1080', 're500500leaf1471', 're500500leaf670', 're500500leaf11', 're500500leaf325', 're500500leaf1446', 're500500leaf1676', 're500500leaf396', 're500500leaf1345', 're500500leaf796', 're500500leaf869', 're500500leaf434', 're500500leaf960', 're500500leaf1144', 're500500leaf1578', 're500500leaf940', 're500500leaf533', 're500500leaf798', 're500500leaf1809', 're500500leaf660', 're500500leaf71', 're500500leaf47', 're500500leaf763', 're500500leaf954', 're500500leaf441', 're500500leaf1158', 're500500leaf961', 're500500leaf92', 're500500leaf133', 're500500leaf1128', 're500500leaf1041', 're500500leaf1045', 're500500leaf1573', 're500500leaf430', 're500500leaf31', 're500500leaf632', 're500500leaf232', 're500500leaf1616', 're500500leaf279', 're500500leaf722', 're500500leaf1088', 're500500leaf797', 're500500leaf1570', 're500500leaf370', 're500500leaf776', 're500500leaf1295', 're500500leaf653', 're500500leaf1269', 're500500leaf1735', 're500500leaf585', 're500500leaf492', 're500500leaf730', 're500500leaf1555', 're500500leaf738', 're500500leaf550', 're500500leaf1547', 're500500leaf357', 're500500leaf1814', 're500500leaf1234', 're500500leaf1408', 're500500leaf361', 're500500leaf888', 're500500leaf881', 're500500leaf344', 're500500leaf101', 're500500leaf1302', 're500500leaf1305', 're500500leaf1072', 're500500leaf147', 're500500leaf194', 're500500leaf1639', 're500500leaf657', 're500500leaf445', 're500500leaf362', 're500500leaf211', 're500500leaf785', 're500500leaf1301', 're500500leaf43', 're500500leaf1193', 're500500leaf1172', 're500500leaf677', 're500500leaf934', 're500500leaf1492', 're500500leaf1278', 're500500leaf839', 're500500leaf861', 're500500leaf1614', 're500500leaf1196', 're500500leaf54', 're500500leaf26', 're500500leaf893', 're500500leaf650', 're500500leaf463', 're500500leaf451', 're500500leaf1291', 're500500leaf643', 're500500leaf956', 're500500leaf459', 're500500leaf1759', 're500500leaf461', 're500500leaf604', 're500500leaf438', 're500500leaf285', 're500500leaf1745', 're500500leaf1224', 're500500leaf582', 're500500leaf1652', 're500500leaf1534', 're500500leaf1414', 're500500leaf1084', 're500500leaf680', 're500500leaf834', 're500500leaf1632', 're500500leaf1685', 're500500leaf67', 're500500leaf617', 're500500leaf414', 're500500leaf1083', 're500500leaf1441', 're500500leaf1619', 're500500leaf1799', 're500500leaf1426', 're500500leaf371', 're500500leaf1273', 're500500leaf630', 're500500leaf777', 're500500leaf1699', 're500500leaf1132', 're500500leaf1725', 're500500leaf1731', 're500500leaf841', 're500500leaf1420', 're500500leaf1766', 're500500leaf1097', 're500500leaf922', 're500500leaf726', 're500500leaf1562', 're500500leaf428', 're500500leaf1688', 're500500leaf179', 're500500leaf1750', 're500500leaf63', 're500500leaf1476', 're500500leaf1512', 're500500leaf1584', 're500500leaf1297', 're500500leaf771', 're500500leaf929', 're500500leaf212', 're500500leaf1019', 're500500leaf52', 're500500leaf154', 're500500leaf1781', 're500500leaf993', 're500500leaf1131', 're500500leaf1469', 're500500leaf1038', 're500500leaf913', 're500500leaf545', 're500500leaf202', 're500500leaf1449', 
're500500leaf719', 're500500leaf157', 're500500leaf718', 're500500leaf1663', 're500500leaf1039', 're500500leaf1108', 're500500leaf1716', 're500500leaf988', 're500500leaf1694', 're500500leaf105', 're500500leaf950', 're500500leaf143', 're500500leaf767', 're500500leaf968', 're500500leaf616', 're500500leaf1635', 're500500leaf906', 're500500leaf1767', 're500500leaf33', 're500500leaf1365', 're500500leaf1615', 're500500leaf510', 're500500leaf826', 're500500leaf293', 're500500leaf200', 're500500leaf1182', 're500500leaf1715', 're500500leaf1680', 're500500leaf1689', 're500500leaf1646', 're500500leaf1708', 're500500leaf420', 're500500leaf787', 're500500leaf73', 're500500leaf1556', 're500500leaf883', 're500500leaf835']) ('Rizan num of file ', 908) 2019-05-16 22:36:06.012112: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2019-05-16 22:36:06.176837: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-05-16 22:36:06.177350: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x558880053340 executing computations on platform CUDA. Devices: 2019-05-16 22:36:06.177382: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): Tesla T4, Compute Capability 7.5 2019-05-16 22:36:06.179047: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz 2019-05-16 22:36:06.179267: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5588800531e0 executing computations on platform Host. Devices: 2019-05-16 22:36:06.179326: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): , 2019-05-16 22:36:06.179622: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59 pciBusID: 0000:00:04.0 totalMemory: 14.73GiB freeMemory: 14.60GiB 2019-05-16 22:36:06.179647: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-05-16 22:36:06.180158: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-05-16 22:36:06.180179: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-05-16 22:36:06.180189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-05-16 22:36:06.180435: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0. 
2019-05-16 22:36:06.180476: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14202 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5) 2019-05-16 22:36:06.182972: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-05-16 22:36:06.183011: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-05-16 22:36:06.183026: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-05-16 22:36:06.183038: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-05-16 22:36:06.183258: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14202 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)

Converting image 1/908 shard 0WARNING:tensorflow:From ./build_voc2012_data.py:124: init (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version. Instructions for updating: Use tf.gfile.GFile. Converting image 227/908 shard 0 Converting image 454/908 shard 1 Converting image 681/908 shard 2 Converting image 908/908 shard 3 ('for Rizan split', './pascal_voc_seg/VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt') ('Vitore datasetsplit ', './pascal_voc_seg/VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt') ('Rizan dataset ', 'val') Processing Rizval('Rizan f ', ['re500500leaf835', 're500500leaf949', 're500500leaf93', 're500500leaf1022', 're500500leaf485', 're500500leaf1780', 're500500leaf1065', 're500500leaf1727', 're500500leaf405', 're500500leaf65', 're500500leaf61', 're500500leaf1505', 're500500leaf724', 're500500leaf1218', 're500500leaf427', 're500500leaf70', 're500500leaf1276', 're500500leaf1267', 're500500leaf1416', 're500500leaf769', 're500500leaf241', 're500500leaf476', 're500500leaf328', 're500500leaf392', 're500500leaf288', 're500500leaf856', 're500500leaf198', 're500500leaf56', 're500500leaf1169', 're500500leaf696', 're500500leaf674', 're500500leaf1188', 're500500leaf648', 're500500leaf1394', 're500500leaf562', 're500500leaf465', 're500500leaf1316', 're500500leaf165', 're500500leaf222', 're500500leaf509', 're500500leaf1280', 're500500leaf1525', 're500500leaf1648', 're500500leaf1409', 're500500leaf1438', 're500500leaf1153', 're500500leaf1145', 're500500leaf663', 're500500leaf1783', 're500500leaf741', 're500500leaf128', 're500500leaf1448', 're500500leaf1666', 're500500leaf502', 're500500leaf1317', 're500500leaf1656', 're500500leaf518', 're500500leaf1060', 're500500leaf1134', 're500500leaf565', 're500500leaf715', 're500500leaf782', 're500500leaf684', 're500500leaf1299', 're500500leaf175', 're500500leaf1705', 're500500leaf1564', 're500500leaf18', 're500500leaf870', 're500500leaf79', 're500500leaf1215', 're500500leaf1587', 're500500leaf348', 're500500leaf1116', 're500500leaf40', 're500500leaf884', 're500500leaf294', 're500500leaf902', 're500500leaf364', 're500500leaf943', 're500500leaf1099', 're500500leaf158', 're500500leaf636', 're500500leaf1444', 're500500leaf1201', 're500500leaf1330', 're500500leaf255', 're500500leaf1502', 're500500leaf602', 're500500leaf368', 're500500leaf1763', 're500500leaf843', 're500500leaf358', 're500500leaf39', 're500500leaf1427', 're500500leaf48', 're500500leaf1490', 're500500leaf1325', 're500500leaf1752', 're500500leaf340', 're500500leaf1568', 're500500leaf1209', 're500500leaf1645', 're500500leaf1114', 're500500leaf230', 're500500leaf520', 're500500leaf81', 're500500leaf590', 're500500leaf1359', 're500500leaf584', 're500500leaf540', 're500500leaf1110', 're500500leaf1136', 're500500leaf462', 're500500leaf217', 're500500leaf1549', 're500500leaf699', 're500500leaf629', 're500500leaf108', 're500500leaf1200', 're500500leaf987', 're500500leaf402', 're500500leaf134', 're500500leaf669', 're500500leaf34', 're500500leaf260', 're500500leaf327', 're500500leaf774', 're500500leaf827', 're500500leaf1177', 're500500leaf399', 're500500leaf1337', 're500500leaf581', 're500500leaf247', 're500500leaf901', 're500500leaf91', 're500500leaf1122', 're500500leaf8', 're500500leaf122', 're500500leaf1014', 're500500leaf1790', 're500500leaf1082', 're500500leaf250', 're500500leaf1074', 're500500leaf302', 're500500leaf350', 're500500leaf113', 're500500leaf974', 're500500leaf1123', 're500500leaf1794', 're500500leaf727', 're500500leaf1671', 
're500500leaf791', 're500500leaf226', 're500500leaf532', 're500500leaf1334', 're500500leaf273', 're500500leaf1456', 're500500leaf1439', 're500500leaf488', 're500500leaf219', 're500500leaf1760', 're500500leaf976', 're500500leaf360', 're500500leaf258', 're500500leaf98', 're500500leaf1806', 're500500leaf975', 're500500leaf1126', 're500500leaf1066', 're500500leaf174', 're500500leaf4', 're500500leaf1452', 're500500leaf1520', 're500500leaf1293', 're500500leaf1665', 're500500leaf603', 're500500leaf1761', 're500500leaf378', 're500500leaf1566', 're500500leaf1009', 're500500leaf1423', 're500500leaf1811', 're500500leaf1063', 're500500leaf1208', 're500500leaf735', 're500500leaf955', 're500500leaf114', 're500500leaf932', 're500500leaf1378', 're500500leaf69', 're500500leaf1035', 're500500leaf178', 're500500leaf936', 're500500leaf343', 're500500leaf1535', 're500500leaf899', 're500500leaf1536', 're500500leaf1396', 're500500leaf281', 're500500leaf694', 're500500leaf1668', 're500500leaf1167', 're500500leaf454', 're500500leaf28', 're500500leaf746', 're500500leaf1249', 're500500leaf845', 're500500leaf1523', 're500500leaf78', 're500500leaf117', 're500500leaf868', 're500500leaf638', 're500500leaf543', 're500500leaf829', 're500500leaf1127', 're500500leaf524', 're500500leaf1478', 're500500leaf386', 're500500leaf1786', 're500500leaf1277', 're500500leaf411', 're500500leaf894', 're500500leaf655', 're500500leaf1800', 're500500leaf611', 're500500leaf1431', 're500500leaf308', 're500500leaf878', 're500500leaf647', 're500500leaf210', 're500500leaf1376', 're500500leaf854', 're500500leaf731', 're500500leaf51', 're500500leaf337', 're500500leaf1686', 're500500leaf1407', 're500500leaf681', 're500500leaf500', 're500500leaf1320', 're500500leaf1415', 're500500leaf1216', 're500500leaf1263', 're500500leaf1031', 're500500leaf164', 're500500leaf825', 're500500leaf1654', 're500500leaf144', 're500500leaf270', 're500500leaf1413', 're500500leaf555', 're500500leaf923', 're500500leaf885', 're500500leaf610', 're500500leaf1548', 're500500leaf1625', 're500500leaf1319', 're500500leaf564', 're500500leaf1129', 're500500leaf111', 're500500leaf832', 're500500leaf1385', 're500500leaf537', 're500500leaf1025', 're500500leaf1379', 're500500leaf291', 're500500leaf990', 're500500leaf1380', 're500500leaf1160', 're500500leaf355', 're500500leaf1467', 're500500leaf1543', 're500500leaf1690', 're500500leaf330', 're500500leaf456', 're500500leaf1507', 're500500leaf470', 're500500leaf184', 're500500leaf237', 're500500leaf204', 're500500leaf928', 're500500leaf1244', 're500500leaf1008', 're500500leaf481', 're500500leaf758', 're500500leaf234', 're500500leaf1453', 're500500leaf1115', 're500500leaf182', 're500500leaf1323', 're500500leaf661', 're500500leaf1432', 're500500leaf697', 're500500leaf519', 're500500leaf775', 're500500leaf1782', 're500500leaf631', 're500500leaf1464', 're500500leaf671', 're500500leaf1522', 're500500leaf163', 're500500leaf1661', 're500500leaf1271', 're500500leaf68', 're500500leaf9', 're500500leaf747', 're500500leaf1004', 're500500leaf12', 're500500leaf729', 're500500leaf1658', 're500500leaf85', 're500500leaf103', 're500500leaf874', 're500500leaf618', 're500500leaf539', 're500500leaf1260', 're500500leaf326', 're500500leaf1057', 're500500leaf341', 're500500leaf1242', 're500500leaf1094', 're500500leaf301', 're500500leaf334', 're500500leaf1264', 're500500leaf567', 're500500leaf41', 're500500leaf948', 're500500leaf745', 're500500leaf1746', 're500500leaf1292', 're500500leaf1526', 're500500leaf15', 're500500leaf1445', 're500500leaf30', 
're500500leaf469', 're500500leaf526', 're500500leaf419', 're500500leaf620', 're500500leaf1530', 're500500leaf1052', 're500500leaf149', 're500500leaf682', 're500500leaf1771', 're500500leaf1207', 're500500leaf1784', 're500500leaf1544', 're500500leaf1137', 're500500leaf1732', 're500500leaf1007', 're500500leaf1056', 're500500leaf886', 're500500leaf1175', 're500500leaf1417', 're500500leaf400', 're500500leaf1111', 're500500leaf1514', 're500500leaf76', 're500500leaf140', 're500500leaf1213', 're500500leaf1103', 're500500leaf384', 're500500leaf1776', 're500500leaf242', 're500500leaf1252', 're500500leaf743', 're500500leaf1027', 're500500leaf779', 're500500leaf795', 're500500leaf1605', 're500500leaf1796', 're500500leaf74', 're500500leaf227', 're500500leaf130', 're500500leaf1062', 're500500leaf635', 're500500leaf992', 're500500leaf495', 're500500leaf977', 're500500leaf1538', 're500500leaf1728', 're500500leaf231', 're500500leaf1049', 're500500leaf82', 're500500leaf508', 're500500leaf780', 're500500leaf626', 're500500leaf58', 're500500leaf953', 're500500leaf838', 're500500leaf437', 're500500leaf235', 're500500leaf1749', 're500500leaf535', 're500500leaf1002', 're500500leaf1529', 're500500leaf1604', 're500500leaf1335', 're500500leaf1162', 're500500leaf244', 're500500leaf1599', 're500500leaf1692', 're500500leaf374', 're500500leaf1644', 're500500leaf1147', 're500500leaf521', 're500500leaf1362', 're500500leaf516', 're500500leaf1178', 're500500leaf1581', 're500500leaf83', 're500500leaf1055', 're500500leaf1673', 're500500leaf784', 're500500leaf436', 're500500leaf268', 're500500leaf1592', 're500500leaf460', 're500500leaf77', 're500500leaf480', 're500500leaf872', 're500500leaf1764', 're500500leaf952', 're500500leaf1722', 're500500leaf1388', 're500500leaf100', 're500500leaf1366', 're500500leaf417', 're500500leaf190', 're500500leaf939', 're500500leaf197', 're500500leaf867', 're500500leaf583', 're500500leaf1018', 're500500leaf1707', 're500500leaf1775', 're500500leaf1637', 're500500leaf1757', 're500500leaf964', 're500500leaf347', 're500500leaf781', 're500500leaf656', 're500500leaf112', 're500500leaf1120', 're500500leaf410', 're500500leaf733', 're500500leaf444', 're500500leaf1487', 're500500leaf1597', 're500500leaf1657', 're500500leaf1023', 're500500leaf229', 're500500leaf493', 're500500leaf1593', 're500500leaf850', 're500500leaf1736', 're500500leaf530', 're500500leaf1351', 're500500leaf866', 're500500leaf1225', 're500500leaf90', 're500500leaf937', 're500500leaf55', 're500500leaf589', 're500500leaf1398', 're500500leaf1191', 're500500leaf1092', 're500500leaf704', 're500500leaf1324', 're500500leaf256', 're500500leaf1311', 're500500leaf318', 're500500leaf1091', 're500500leaf809', 're500500leaf1389', 're500500leaf615', 're500500leaf1792', 're500500leaf1703', 're500500leaf496', 're500500leaf221', 're500500leaf1377', 're500500leaf1050', 're500500leaf723', 're500500leaf1516', 're500500leaf1695', 're500500leaf491', 're500500leaf513', 're500500leaf1098', 're500500leaf216', 're500500leaf323', 're500500leaf765', 're500500leaf407', 're500500leaf919', 're500500leaf957', 're500500leaf161', 're500500leaf458', 're500500leaf1430', 're500500leaf1309', 're500500leaf560', 're500500leaf909', 're500500leaf487', 're500500leaf1410', 're500500leaf1488', 're500500leaf1589', 're500500leaf931', 're500500leaf1239', 're500500leaf517', 're500500leaf1622', 're500500leaf768', 're500500leaf1687', 're500500leaf1802', 're500500leaf979', 're500500leaf1624', 're500500leaf1777', 're500500leaf472', 're500500leaf1382', 're500500leaf725', 're500500leaf1258', 
're500500leaf387', 're500500leaf1374', 're500500leaf1729', 're500500leaf828', 're500500leaf129', 're500500leaf1104', 're500500leaf224', 're500500leaf136', 're500500leaf1503', 're500500leaf1118', 're500500leaf20', 're500500leaf1491', 're500500leaf1419', 're500500leaf1747', 're500500leaf1532', 're500500leaf625', 're500500leaf1465', 're500500leaf1047', 're500500leaf1183', 're500500leaf1598', 're500500leaf1166', 're500500leaf1294', 're500500leaf1285', 're500500leaf778', 're500500leaf316', 're500500leaf666', 're500500leaf424', 're500500leaf1713', 're500500leaf980', 're500500leaf1282', 're500500leaf608', 're500500leaf1187', 're500500leaf875', 're500500leaf1399', 're500500leaf1650', 're500500leaf786', 're500500leaf1546', 're500500leaf473', 're500500leaf97', 're500500leaf981', 're500500leaf1601', 're500500leaf354', 're500500leaf257', 're500500leaf1078', 're500500leaf135', 're500500leaf1238', 're500500leaf1067', 're500500leaf388', 're500500leaf1479', 're500500leaf1071', 're500500leaf679', 're500500leaf1434', 're500500leaf123', 're500500leaf1618', 're500500leaf1130', 're500500leaf1594', 're500500leaf1321', 're500500leaf1328', 're500500leaf1540', 're500500leaf579', 're500500leaf59', 're500500leaf413', 're500500leaf1286', 're500500leaf1230', 're500500leaf332', 're500500leaf576', 're500500leaf1711', 're500500leaf1778', 're500500leaf87', 're500500leaf1691', 're500500leaf1034', 're500500leaf304', 're500500leaf1754', 're500500leaf220', 're500500leaf1315', 're500500leaf2', 're500500leaf1633', 're500500leaf1275', 're500500leaf309', 're500500leaf1085', 're500500leaf199', 're500500leaf1675', 're500500leaf1617', 're500500leaf753', 're500500leaf586', 're500500leaf1354', 're500500leaf94', 're500500leaf690', 're500500leaf529', 're500500leaf329', 're500500leaf501', 're500500leaf561', 're500500leaf385', 're500500leaf468', 're500500leaf342', 're500500leaf415', 're500500leaf1730', 're500500leaf926', 're500500leaf836', 're500500leaf1372', 're500500leaf448', 're500500leaf1070', 're500500leaf1682', 're500500leaf634', 're500500leaf896', 're500500leaf597', 're500500leaf183', 're500500leaf457', 're500500leaf1113', 're500500leaf1015', 're500500leaf1720', 're500500leaf490', 're500500leaf1422', 're500500leaf658', 're500500leaf1704', 're500500leaf1558', 're500500leaf246', 're500500leaf1753', 're500500leaf1265', 're500500leaf209', 're500500leaf1395', 're500500leaf1565', 're500500leaf1148', 're500500leaf1741', 're500500leaf1769', 're500500leaf707', 're500500leaf595', 're500500leaf985', 're500500leaf1076', 're500500leaf10', 're500500leaf802', 're500500leaf783', 're500500leaf1350', 're500500leaf389', 're500500leaf1016', 're500500leaf752', 're500500leaf773', 're500500leaf168', 're500500leaf432', 're500500leaf810', 're500500leaf19', 're500500leaf1499', 're500500leaf1304', 're500500leaf1236', 're500500leaf44', 're500500leaf466', 're500500leaf36', 're500500leaf1816', 're500500leaf1709', 're500500leaf613', 're500500leaf1202', 're500500leaf1537', 're500500leaf290', 're500500leaf1349', 're500500leaf1440', 're500500leaf1466', 're500500leaf1322', 're500500leaf478', 're500500leaf1001', 're500500leaf251', 're500500leaf720', 're500500leaf572', 're500500leaf243', 're500500leaf1262', 're500500leaf1437', 're500500leaf1609', 're500500leaf311', 're500500leaf252', 're500500leaf1138', 're500500leaf1184', 're500500leaf1081', 're500500leaf372', 're500500leaf132', 're500500leaf37', 're500500leaf1243', 're500500leaf1040', 're500500leaf482', 're500500leaf1005', 're500500leaf89', 're500500leaf1371', 're500500leaf754', 're500500leaf675', 
're500500leaf1743', 're500500leaf109', 're500500leaf991', 're500500leaf442', 're500500leaf691', 're500500leaf864', 're500500leaf623', 're500500leaf627', 're500500leaf621', 're500500leaf366', 're500500leaf191', 're500500leaf1222', 're500500leaf1358', 're500500leaf1677', 're500500leaf998', 're500500leaf1579', 're500500leaf965', 're500500leaf732', 're500500leaf994', 're500500leaf401', 're500500leaf846', 're500500leaf1509', 're500500leaf1585', 're500500leaf1481', 're500500leaf563', 're500500leaf1773', 're500500leaf580', 're500500leaf1073', 're500500leaf1107', 're500500leaf1553', 're500500leaf1596', 're500500leaf277', 're500500leaf102', 're500500leaf1283', 're500500leaf265', 're500500leaf1678', 're500500leaf1240', 're500500leaf698', 're500500leaf283', 're500500leaf598', 're500500leaf654', 're500500leaf1551', 're500500leaf1470', 're500500leaf855', 're500500leaf764', 're500500leaf363', 're500500leaf338', 're500500leaf865', 're500500leaf1613', 're500500leaf1411', 're500500leaf375', 're500500leaf189', 're500500leaf1390', 're500500leaf1381', 're500500leaf920', 're500500leaf933', 're500500leaf236', 're500500leaf801', 're500500leaf380', 're500500leaf1608', 're500500leaf652', 're500500leaf577', 're500500leaf1161', 're500500leaf1744', 're500500leaf1141', 're500500leaf404', 're500500leaf319', 're500500leaf267', 're500500leaf453', 're500500leaf1712', 're500500leaf1501', 're500500leaf1401', 're500500leaf1151', 're500500leaf1093', 're500500leaf717', 're500500leaf390', 're500500leaf1205', 're500500leaf166', 're500500leaf1308', 're500500leaf1606', 're500500leaf892', 're500500leaf127', 're500500leaf1392', 're500500leaf284', 're500500leaf740', 're500500leaf314', 're500500leaf286', 're500500leaf547', 're500500leaf1489', 're500500leaf24', 're500500leaf544', 're500500leaf678', 're500500leaf1375', 're500500leaf317', 're500500leaf1318', 're500500leaf1310', 're500500leaf207', 're500500leaf104', 're500500leaf173', 're500500leaf911', 're500500leaf1787', 're500500leaf80', 're500500leaf1051', 're500500leaf1170', 're500500leaf1583', 're500500leaf1628', 're500500leaf1521', 're500500leaf1217', 're500500leaf587', 're500500leaf1443', 're500500leaf1348', 're500500leaf1807', 're500500leaf983', 're500500leaf1474', 're500500leaf125', 're500500leaf1406', 're500500leaf1504', 're500500leaf1233', 're500500leaf857', 're500500leaf935', 're500500leaf871', 're500500leaf171', 're500500leaf1662', 're500500leaf176', 're500500leaf1793', 're500500leaf1290', 're500500leaf571', 're500500leaf272', 're500500leaf651', 're500500leaf391', 're500500leaf1803', 're500500leaf253', 're500500leaf439', 're500500leaf1150', 're500500leaf553', 're500500leaf813', 're500500leaf592', 're500500leaf1550', 're500500leaf805', 're500500leaf312', 're500500leaf1795', 're500500leaf1651', 're500500leaf479', 're500500leaf523', 're500500leaf1086', 're500500leaf837', 're500500leaf369', 're500500leaf1336', 're500500leaf218', 're500500leaf1261', 're500500leaf84', 're500500leaf1024', 're500500leaf1154', 're500500leaf879', 're500500leaf821', 're500500leaf1102', 're500500leaf1412', 're500500leaf1363', 're500500leaf447', 're500500leaf349', 're500500leaf1250', 're500500leaf1494', 're500500leaf806', 're500500leaf1684', 're500500leaf1164', 're500500leaf624', 're500500leaf536', 're500500leaf1139', 're500500leaf710', 're500500leaf5', 're500500leaf16', 're500500leaf1125', 're500500leaf261', 're500500leaf1006', 're500500leaf1517', 're500500leaf1393', 're500500leaf116', 're500500leaf970', 're500500leaf512', 're500500leaf1344', 're500500leaf605', 're500500leaf64', 're500500leaf1404', 
're500500leaf145', 're500500leaf1327', 're500500leaf1386', 're500500leaf1112', 're500500leaf538', 're500500leaf313', 're500500leaf814', 're500500leaf903', 're500500leaf708', 're500500leaf959', 're500500leaf649', 're500500leaf642', 're500500leaf1717', 're500500leaf541', 're500500leaf1241', 're500500leaf215', 're500500leaf1012', 're500500leaf1368', 're500500leaf443', 're500500leaf365', 're500500leaf748', 're500500leaf423', 're500500leaf1812', 're500500leaf248', 're500500leaf559']) ('Rizan num of file ', 908) 2019-05-16 22:36:08.925685: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-05-16 22:36:08.925761: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-05-16 22:36:08.925777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-05-16 22:36:08.925787: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-05-16 22:36:08.926340: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14202 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5) 2019-05-16 22:36:08.928408: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-05-16 22:36:08.928450: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-05-16 22:36:08.928464: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-05-16 22:36:08.928473: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-05-16 22:36:08.928849: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14202 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5) Converting image 227/908 shard 0 Converting image 454/908 shard 1 Converting image 681/908 shard 2 Converting image 908/908 shard 3 --2019-05-16 22:36:11-- http://download.tensorflow.org/models/deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz Resolving download.tensorflow.org (download.tensorflow.org)... 172.217.25.112, 2404:6800:4004:819::2010 Connecting to download.tensorflow.org (download.tensorflow.org)|172.217.25.112|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 23882985 (23M) [application/x-tar] Saving to: ‘deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz’

deeplabv3_mnv2_pasc 100%[===================>] 22.78M 40.2MB/s in 0.6s

2019-05-16 22:36:12 (40.2 MB/s) - ‘deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz’ saved [23882985/23882985]

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see:

INFO:tensorflow:Training on trainval set WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py:423: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer. WARNING:tensorflow:From /root/models/research/deeplab/core/preprocess_utils.py:203: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/training/learning_rate_decay_v2.py:321: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Deprecated in favor of operator or tf.math.divide. WARNING:tensorflow:From /root/models/research/deeplab/core/feature_extractor.py:196: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/layers/core.py:143: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. Instructions for updating: Please use rate instead of keep_prob. Rate should be set to rate = 1 - keep_prob. WARNING:tensorflow:From /root/models/research/deeplab/train.py:418: Print (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2018-08-20. Instructions for updating: Use tf.print instead of tf.Print. Note that tf.print returns a no-output operator that directly prints the output. Outside of defuns or eager mode, this operator will not be executed unless it is directly specified in session.run or used as a control dependency for other operators. This is only a concern in graph mode. Below is an example of how to ensure tf.print executes in graph mode:

    sess = tf.Session()
    with sess.as_default():
        tensor = tf.range(10)
        print_op = tf.print(tensor)
        with tf.control_dependencies([print_op]):
          out = tf.add(tensor, tensor)
        sess.run(out)

Additionally, to use tf.print in python 2.7, users must make sure to import the following:

from __future__ import print_function

INFO:tensorflow:Initializing model from path: /root/models/research/deeplab/datasets/pascal_voc_seg/init_models/deeplabv3_mnv2_pascal_train_aug/model.ckpt-30000 INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Graph was finalized. 2019-05-16 22:36:22.836414: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2019-05-16 22:36:22.995164: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-05-16 22:36:22.995756: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5653101dec60 executing computations on platform CUDA. Devices: 2019-05-16 22:36:22.995790: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): Tesla T4, Compute Capability 7.5 2019-05-16 22:36:22.997581: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz 2019-05-16 22:36:22.997783: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5653101deb00 executing computations on platform Host. Devices: 2019-05-16 22:36:22.997813: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): , 2019-05-16 22:36:22.998103: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59 pciBusID: 0000:00:04.0 totalMemory: 14.73GiB freeMemory: 14.60GiB 2019-05-16 22:36:22.998126: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-05-16 22:36:22.998663: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-05-16 22:36:22.998686: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-05-16 22:36:22.998702: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-05-16 22:36:22.998922: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0. 2019-05-16 22:36:22.998957: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14202 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5) INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Saving checkpoints for 0 into /root/models/research/deeplab/datasets/pascal_voc_seg/exp/train_on_trainval_set_mobilenetv2/train/model.ckpt. Segmentation fault (core dumped)

tensorflowbutler commented 5 years ago

Thank you for your post. We noticed you have not filled out the following fields in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.

- What is the top-level directory of the model you are using
- Have I written custom code
- OS Platform and Distribution
- TensorFlow installed from
- TensorFlow version
- Bazel version
- CUDA/cuDNN version
- GPU model and memory
- Exact command to reproduce

RizanPSTU commented 5 years ago

Changed file: build_voc2012_data.py

    # Copyright 2018 The TensorFlow Authors All Rights Reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    # ==============================================================================

"""Converts PASCAL VOC 2012 data to TFRecord file format with Example protos. PASCAL VOC 2012 dataset is expected to have the following directory structure:

FLAGS = tf.app.flags.FLAGS

tf.app.flags.DEFINE_string('image_folder', './VOCdevkit/VOC2012/JPEGImages', 'Folder containing images.')

tf.app.flags.DEFINE_string( 'semantic_segmentation_folder', './VOCdevkit/VOC2012/SegmentationClassRaw', 'Folder containing semantic segmentation annotations.')

tf.app.flags.DEFINE_string( 'list_folder', './VOCdevkit/VOC2012/ImageSets/Segmentation', 'Folder containing lists for training and validation')

tf.app.flags.DEFINE_string( 'output_dir', './tfrecord', 'Path to save converted SSTable of TensorFlow examples.')

_NUM_SHARDS = 4

def _convert_dataset(dataset_split):
  """Converts the specified dataset split to TFRecord format.

  Args:
    dataset_split: The dataset split (e.g., train, test).

  Raises:
    RuntimeError: If loaded image and label have different shape.
  """
  print("Vitore datasetsplit ", dataset_split)
  dataset = os.path.basename(dataset_split)[:-4]
  print("Rizan dataset ", dataset)
  sys.stdout.write('Processing Riz' + dataset)
  filenames = [x.strip('\r\n') for x in open(dataset_split, 'r')]
  print("Rizan f ", filenames)
  num_images = len(filenames)
  print("Rizan num of file ", num_images)
  num_per_shard = int(math.ceil(num_images / float(_NUM_SHARDS)))

  image_reader = build_data.ImageReader('jpeg', channels=3)
  label_reader = build_data.ImageReader('png', channels=1)

  for shard_id in range(_NUM_SHARDS):
    output_filename = os.path.join(
        FLAGS.output_dir,
        '%s-%05d-of-%05d.tfrecord' % (dataset, shard_id, _NUM_SHARDS))
    with tf.python_io.TFRecordWriter(output_filename) as tfrecord_writer:
      start_idx = shard_id * num_per_shard
      end_idx = min((shard_id + 1) * num_per_shard, num_images)
      for i in range(start_idx, end_idx):
        sys.stdout.write('\r>> Converting image %d/%d shard %d' % (
            i + 1, len(filenames), shard_id))
        print("riz 1")
        sys.stdout.flush()
        #print("riz 2")
        # Read the image.
        image_filename = os.path.join(
            FLAGS.image_folder, filenames[i] + '.' + FLAGS.image_format)
        #rint("riz 3")
        #rint(image_filename)
        image_data = tf.gfile.FastGFile(image_filename, 'rb').read()
        #rint("riz 4")
        height, width = image_reader.read_image_dims(image_data)
        # Read the semantic segmentation annotation.
        seg_filename = os.path.join(
            FLAGS.semantic_segmentation_folder,
            filenames[i] + '.' + FLAGS.label_format)
        seg_data = tf.gfile.FastGFile(seg_filename, 'rb').read()
        seg_height, seg_width = label_reader.read_image_dims(seg_data)
        if height != seg_height or width != seg_width:
          raise RuntimeError('Shape mismatched between image and label.')
        # Convert to tf example.
        example = build_data.image_seg_to_tfexample(
            image_data, filenames[i], height, width, seg_data)
        tfrecord_writer.write(example.SerializeToString())
    sys.stdout.write('\n')
    sys.stdout.flush()

def main(unused_argv):
  dataset_splits = tf.gfile.Glob(os.path.join(FLAGS.list_folder, '*.txt'))
  print("main ar split ", dataset_splits)
  for dataset_split in dataset_splits:
    print("for Rizan split", dataset_split)
    _convert_dataset(dataset_split)


if __name__ == '__main__':
  tf.app.run()
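
For reference, this converter is normally driven by download_and_convert_voc2012.sh; a minimal manual invocation, assuming the working directory and folder layout implied by the flag defaults shown above, would look roughly like:

    # Sketch only: run from models/research/deeplab/datasets, with the VOC
    # folders laid out as in the flag defaults above.
    python ./build_voc2012_data.py \
      --image_folder="./VOCdevkit/VOC2012/JPEGImages" \
      --semantic_segmentation_folder="./VOCdevkit/VOC2012/SegmentationClassRaw" \
      --list_folder="./VOCdevkit/VOC2012/ImageSets/Segmentation" \
      --output_dir="./tfrecord"

Note that main() globs every *.txt file in --list_folder and writes one set of tfrecord shards per list, which matters for the trainval split discussed at the end of this thread.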

Changed file: data_generator.py

# Copyright 2018 The TensorFlow Authors All Rights Reserved. (Standard Apache License 2.0 header.)

"""Wrapper for providing semantic segmentaion data. The SegmentationDataset class provides both images and annotations (semantic segmentation and/or instance segmentation) for TensorFlow. Currently, we support the following datasets:

  1. PASCAL VOC 2012 (http://host.robots.ox.ac.uk/pascal/VOC/voc2012/). PASCAL VOC 2012 semantic segmentation dataset annotates 20 foreground objects (e.g., bike, person, and so on) and leaves all the other semantic classes as one background class. The dataset contains 1464, 1449, and 1456 annotated images for the training, validation and test respectively.
  2. Cityscapes dataset (https://www.cityscapes-dataset.com) The Cityscapes dataset contains 19 semantic labels (such as road, person, car, and so on) for urban street scenes.
  3. ADE20K dataset (http://groups.csail.mit.edu/vision/datasets/ADE20K) The ADE20K dataset contains 150 semantic labels for both urban street scenes and indoor scenes.

References:
  M. Everingham, S. M. A. Eslami, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The pascal visual object classes challenge: a retrospective," IJCV, 2014.
  M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The cityscapes dataset for semantic urban scene understanding," In Proc. of CVPR, 2016.
  B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, A. Torralba, "Scene Parsing through ADE20K dataset," In Proc. of CVPR, 2017.
"""

import collections
import os
import tensorflow as tf

from deeplab import common
from deeplab import input_preprocess


# Named tuple to describe the dataset properties.
DatasetDescriptor = collections.namedtuple(
    'DatasetDescriptor', [
        'splits_to_sizes',  # Splits of the dataset into training, val and test.
        'num_classes',  # Number of semantic classes, including the
                        # background class (if exists). For example, there
                        # are 20 foreground classes + 1 background class in
                        # the PASCAL VOC 2012 dataset. Thus, we set
                        # num_classes=21.
        'ignore_label',  # Ignore label value.
    ])

_CITYSCAPES_INFORMATION = DatasetDescriptor(
    splits_to_sizes={
        'train': 2975,
        'val': 500,
    },
    num_classes=19,
    ignore_label=255,
)

_PASCAL_VOC_SEG_INFORMATION = DatasetDescriptor(
    splits_to_sizes={
        'train': 908,
        'trainval': 908,
    },
    num_classes=2,
    ignore_label=255,
)

_ADE20K_INFORMATION = DatasetDescriptor(
    splits_to_sizes={
        'train': 20210,  # num of samples in images/training
        'val': 2000,  # num of samples in images/validation
    },
    num_classes=151,
    ignore_label=0,
)

_DATASETS_INFORMATION = {
    'cityscapes': _CITYSCAPES_INFORMATION,
    'pascal_voc_seg': _PASCAL_VOC_SEG_INFORMATION,
    'ade20k': _ADE20K_INFORMATION,
}

# Default file pattern of TFRecord of TensorFlow Example.
_FILE_PATTERN = '%s-*'


def get_cityscapes_dataset_name():
  return 'cityscapes'


class Dataset(object):
  """Represents input dataset for deeplab model."""

  def __init__(self,
               dataset_name,
               split_name,
               dataset_dir,
               batch_size,
               crop_size,
               min_resize_value=None,
               max_resize_value=None,
               resize_factor=None,
               min_scale_factor=1.,
               max_scale_factor=1.,
               scale_factor_step_size=0,
               model_variant=None,
               num_readers=1,
               is_training=False,
               should_shuffle=False,
               should_repeat=False):
    """Initializes the dataset.

    Args:
      dataset_name: Dataset name.
      split_name: A train/val Split name.
      dataset_dir: The directory of the dataset sources.
      batch_size: Batch size.
      crop_size: The size used to crop the image and label.
      min_resize_value: Desired size of the smaller image side.
      max_resize_value: Maximum allowed size of the larger image side.
      resize_factor: Resized dimensions are multiple of factor plus one.
      min_scale_factor: Minimum scale factor value.
      max_scale_factor: Maximum scale factor value.
      scale_factor_step_size: The step size from min scale factor to max scale
        factor. The input is randomly scaled based on the value of
        (min_scale_factor, max_scale_factor, scale_factor_step_size).
      model_variant: Model variant (string) for choosing how to mean-subtract
        the images. See feature_extractor.network_map for supported model
        variants.
      num_readers: Number of readers for data provider.
      is_training: Boolean, if dataset is for training or not.
      should_shuffle: Boolean, if should shuffle the input data.
      should_repeat: Boolean, if should repeat the input data.

    Raises:
      ValueError: Dataset name and split name are not supported.
    """
    if dataset_name not in _DATASETS_INFORMATION:
      raise ValueError('The specified dataset is not supported yet.')
    self.dataset_name = dataset_name

splits_to_sizes = _DATASETS_INFORMATION[dataset_name].splits_to_sizes

if split_name not in splits_to_sizes:
  raise ValueError('data split name %s not recognized' % split_name)

if model_variant is None:
  tf.logging.warning('Please specify a model_variant. See '
                     'feature_extractor.network_map for supported model '
                     'variants.')

self.split_name = split_name
self.dataset_dir = dataset_dir
self.batch_size = batch_size
self.crop_size = crop_size
self.min_resize_value = min_resize_value
self.max_resize_value = max_resize_value
self.resize_factor = resize_factor
self.min_scale_factor = min_scale_factor
self.max_scale_factor = max_scale_factor
self.scale_factor_step_size = scale_factor_step_size
self.model_variant = model_variant
self.num_readers = num_readers
self.is_training = is_training
self.should_shuffle = should_shuffle
self.should_repeat = should_repeat

self.num_of_classes = _DATASETS_INFORMATION[self.dataset_name].num_classes
self.ignore_label = _DATASETS_INFORMATION[self.dataset_name].ignore_label

  def _parse_function(self, example_proto):
    """Function to parse the example proto.

    Args:
      example_proto: Proto in the format of tf.Example.

    Returns:
      A dictionary with parsed image, label, height, width and image name.

    Raises:
      ValueError: Label is of wrong shape.
    """

# Currently only supports jpeg and png.
# Need to use this logic because the shape is not known for
# tf.image.decode_image and we rely on this info to
# extend label if necessary.
def _decode_image(content, channels):
  return tf.cond(
      tf.image.is_jpeg(content),
      lambda: tf.image.decode_jpeg(content, channels),
      lambda: tf.image.decode_png(content, channels))

features = {
    'image/encoded':
        tf.FixedLenFeature((), tf.string, default_value=''),
    'image/filename':
        tf.FixedLenFeature((), tf.string, default_value=''),
    'image/format':
        tf.FixedLenFeature((), tf.string, default_value='jpeg'),
    'image/height':
        tf.FixedLenFeature((), tf.int64, default_value=0),
    'image/width':
        tf.FixedLenFeature((), tf.int64, default_value=0),
    'image/segmentation/class/encoded':
        tf.FixedLenFeature((), tf.string, default_value=''),
    'image/segmentation/class/format':
        tf.FixedLenFeature((), tf.string, default_value='png'),
}

parsed_features = tf.parse_single_example(example_proto, features)

image = _decode_image(parsed_features['image/encoded'], channels=3)

label = None
if self.split_name != common.TEST_SET:
  label = _decode_image(
      parsed_features['image/segmentation/class/encoded'], channels=1)

image_name = parsed_features['image/filename']
if image_name is None:
  image_name = tf.constant('')

sample = {
    common.IMAGE: image,
    common.IMAGE_NAME: image_name,
    common.HEIGHT: parsed_features['image/height'],
    common.WIDTH: parsed_features['image/width'],
}

if label is not None:
  if label.get_shape().ndims == 2:
    label = tf.expand_dims(label, 2)
  elif label.get_shape().ndims == 3 and label.shape.dims[2] == 1:
    pass
  else:
    raise ValueError('Input label shape must be [height, width], or '
                     '[height, width, 1].')

  label.set_shape([None, None, 1])

  sample[common.LABELS_CLASS] = label

return sample

  def _preprocess_image(self, sample):
    """Preprocesses the image and label.

    Args:
      sample: A sample containing image and label.

    Returns:
      sample: Sample with preprocessed image and label.

    Raises:
      ValueError: Ground truth label not provided during training.
    """
    image = sample[common.IMAGE]
    label = sample[common.LABELS_CLASS]

original_image, image, label = input_preprocess.preprocess_image_and_label(
    image=image,
    label=label,
    crop_height=self.crop_size[0],
    crop_width=self.crop_size[1],
    min_resize_value=self.min_resize_value,
    max_resize_value=self.max_resize_value,
    resize_factor=self.resize_factor,
    min_scale_factor=self.min_scale_factor,
    max_scale_factor=self.max_scale_factor,
    scale_factor_step_size=self.scale_factor_step_size,
    ignore_label=self.ignore_label,
    is_training=self.is_training,
    model_variant=self.model_variant)

sample[common.IMAGE] = image

if not self.is_training:
  # Original image is only used during visualization.
  sample[common.ORIGINAL_IMAGE] = original_image

if label is not None:
  sample[common.LABEL] = label

# Remove common.LABEL_CLASS key in the sample since it is only used to
# derive label and not used in training and evaluation.
sample.pop(common.LABELS_CLASS, None)

return sample

  def get_one_shot_iterator(self):
    """Gets an iterator that iterates across the dataset once.

    Returns:
      An iterator of type tf.data.Iterator.
    """

files = self._get_all_files()

dataset = (
    tf.data.TFRecordDataset(files, num_parallel_reads=self.num_readers)
    .map(self._parse_function, num_parallel_calls=self.num_readers)
    .map(self._preprocess_image, num_parallel_calls=self.num_readers))

if self.should_shuffle:
  dataset = dataset.shuffle(buffer_size=100)

if self.should_repeat:
  dataset = dataset.repeat()  # Repeat forever for training.
else:
  dataset = dataset.repeat(1)

dataset = dataset.batch(self.batch_size).prefetch(self.batch_size)
return dataset.make_one_shot_iterator()

  def _get_all_files(self):
    """Gets all the files to read data from.

    Returns:
      A list of input files.
    """
    file_pattern = _FILE_PATTERN
    file_pattern = os.path.join(self.dataset_dir,
                                file_pattern % self.split_name)
    return tf.gfile.Glob(file_pattern)

Changed file: train_utils.py

# Copyright 2018 The TensorFlow Authors All Rights Reserved. (Standard Apache License 2.0 header.)

"""Utility functions for training."""

import six

import tensorflow as tf

from deeplab.core import preprocess_utils

def _div_maybe_zero(total_loss, num_present):
  """Normalizes the total loss with the number of present pixels."""
  return tf.to_float(num_present > 0) * tf.div(total_loss,
                                               tf.maximum(1e-5, num_present))

def add_softmax_cross_entropy_loss_for_each_scale(scales_to_logits,
                                                  labels,
                                                  num_classes,
                                                  ignore_label,
                                                  loss_weight=1.0,
                                                  upsample_logits=True,
                                                  hard_example_mining_step=0,
                                                  top_k_percent_pixels=1.0,
                                                  scope=None):
  """Adds softmax cross entropy loss for logits of each scale.

  Args:
    scales_to_logits: A map from logits names for different scales to logits.
      The logits have shape [batch, logits_height, logits_width, num_classes].
    labels: Groundtruth labels with shape [batch, image_height, image_width, 1].
    num_classes: Integer, number of target classes.
    ignore_label: Integer, label to ignore.
    loss_weight: Float, loss weight.
    upsample_logits: Boolean, upsample logits or not.
    hard_example_mining_step: An integer, the training step in which the hard
      example mining kicks off. Note that we gradually reduce the mining
      percent to the top_k_percent_pixels. For example, if
      hard_example_mining_step = 100K and top_k_percent_pixels = 0.25, then
      mining percent will gradually reduce from 100% to 25% until 100K steps
      after which we only mine top 25% pixels.
    top_k_percent_pixels: A float, the value lies in [0.0, 1.0]. When its value
      < 1.0, only compute the loss for the top k percent pixels (e.g., the top
      20% pixels). This is useful for hard pixel mining.
    scope: String, the scope for the loss.

  Raises:
    ValueError: Label or logits is None.
  """
  if labels is None:
    raise ValueError('No label for softmax cross entropy loss.')

  for scale, logits in six.iteritems(scales_to_logits):
    loss_scope = None
    if scope:
      loss_scope = '%s_%s' % (scope, scale)

if upsample_logits:
  # Label is not downsampled, and instead we upsample logits.
  logits = tf.image.resize_bilinear(
      logits,
      preprocess_utils.resolve_shape(labels, 4)[1:3],
      align_corners=True)
  scaled_labels = labels
else:
  # Label is downsampled to the same size as logits.
  scaled_labels = tf.image.resize_nearest_neighbor(
      labels,
      preprocess_utils.resolve_shape(logits, 4)[1:3],
      align_corners=True)

scaled_labels = tf.reshape(scaled_labels, shape=[-1])
# removed
# not_ignore_mask = tf.to_float(tf.not_equal(scaled_labels,
#                                            ignore_label)) * loss_weight

# weights must be tuned
ignore_weight = 0
label0_weight = 1
label1_weight = 15

not_ignore_mask = (
    tf.to_float(tf.equal(scaled_labels, 0)) * label0_weight +
    tf.to_float(tf.equal(scaled_labels, 1)) * label1_weight +
    tf.to_float(tf.equal(scaled_labels, ignore_label)) * ignore_weight)

one_hot_labels = tf.one_hot(
    scaled_labels, num_classes, on_value=1.0, off_value=0.0)

if top_k_percent_pixels == 1.0:
  # Compute the loss for all pixels.
  tf.losses.softmax_cross_entropy(
      one_hot_labels,
      tf.reshape(logits, shape=[-1, num_classes]),
      weights=not_ignore_mask,
      scope=loss_scope)
else:
  logits = tf.reshape(logits, shape=[-1, num_classes])
  weights = not_ignore_mask
  with tf.name_scope(loss_scope, 'softmax_hard_example_mining',
                     [logits, one_hot_labels, weights]):
    one_hot_labels = tf.stop_gradient(
        one_hot_labels, name='labels_stop_gradient')
    pixel_losses = tf.nn.softmax_cross_entropy_with_logits_v2(
        labels=one_hot_labels,
        logits=logits,
        name='pixel_losses')
    weighted_pixel_losses = tf.multiply(pixel_losses, weights)
    num_pixels = tf.to_float(tf.shape(logits)[0])
    # Compute the top_k_percent pixels based on current training step.
    if hard_example_mining_step == 0:
      # Directly focus on the top_k pixels.
      top_k_pixels = tf.to_int32(top_k_percent_pixels * num_pixels)
    else:
      # Gradually reduce the mining percent to top_k_percent_pixels.
      global_step = tf.to_float(tf.train.get_or_create_global_step())
      ratio = tf.minimum(1.0, global_step / hard_example_mining_step)
      top_k_pixels = tf.to_int32(
          (ratio * top_k_percent_pixels + (1.0 - ratio)) * num_pixels)
    top_k_losses, _ = tf.nn.top_k(weighted_pixel_losses,
                                  k=top_k_pixels,
                                  sorted=True,
                                  name='top_k_percent_pixels')
    total_loss = tf.reduce_sum(top_k_losses)
    num_present = tf.reduce_sum(
        tf.to_float(tf.not_equal(top_k_losses, 0.0)))
    loss = _div_maybe_zero(total_loss, num_present)
    tf.losses.add_loss(loss)

def get_model_init_fn(train_logdir,
                      tf_initial_checkpoint,
                      initialize_last_layer,
                      last_layers,
                      ignore_missing_vars=False):
  """Gets the function initializing model variables from a checkpoint.

  Args:
    train_logdir: Log directory for training.
    tf_initial_checkpoint: TensorFlow checkpoint for initialization.
    initialize_last_layer: Initialize last layer or not.
    last_layers: Last layers of the model.
    ignore_missing_vars: Ignore missing variables in the checkpoint.

  Returns:
    Initialization function.
  """
  if tf_initial_checkpoint is None:
    tf.logging.info('Not initializing the model from a checkpoint.')
    return None

  if tf.train.latest_checkpoint(train_logdir):
    tf.logging.info('Ignoring initialization; other checkpoint exists')
    return None

  tf.logging.info('Initializing model from path: %s', tf_initial_checkpoint)

  # Variables that will not be restored.
  exclude_list = ['global_step', 'logits']
  if not initialize_last_layer:
    exclude_list.extend(last_layers)

  variables_to_restore = tf.contrib.framework.get_variables_to_restore(
      exclude=exclude_list)

  if variables_to_restore:
    init_op, init_feed_dict = tf.contrib.framework.assign_from_checkpoint(
        tf_initial_checkpoint,
        variables_to_restore,
        ignore_missing_vars=ignore_missing_vars)
    global_step = tf.train.get_or_create_global_step()

    def restore_fn(unused_scaffold, sess):
      sess.run(init_op, init_feed_dict)
      sess.run([global_step])

    return restore_fn

  return None

def get_model_gradient_multipliers(last_layers, last_layer_gradient_multiplier):
  """Gets the gradient multipliers.

  The gradient multipliers will adjust the learning rates for model variables.
  For the task of semantic segmentation, the models are usually fine-tuned from
  the models trained on the task of image classification. To fine-tune the
  models, we usually set larger (e.g., 10 times larger) learning rate for the
  parameters of last layer.

  Args:
    last_layers: Scopes of last layers.
    last_layer_gradient_multiplier: The gradient multiplier for last layers.

  Returns:
    The gradient multiplier map with variables as key, and multipliers as value.
  """
  gradient_multipliers = {}

  for var in tf.model_variables():
    # Double the learning rate for biases.
    if 'biases' in var.op.name:
      gradient_multipliers[var.op.name] = 2.

    # Use larger learning rate for last layer variables.
    for layer in last_layers:
      if layer in var.op.name and 'biases' in var.op.name:
        gradient_multipliers[var.op.name] = 2 * last_layer_gradient_multiplier
        break
      elif layer in var.op.name:
        gradient_multipliers[var.op.name] = last_layer_gradient_multiplier
        break

  return gradient_multipliers

def get_model_learning_rate(learning_policy,
                            base_learning_rate,
                            learning_rate_decay_step,
                            learning_rate_decay_factor,
                            training_number_of_steps,
                            learning_power,
                            slow_start_step,
                            slow_start_learning_rate,
                            slow_start_burnin_type='none'):
  """Gets model's learning rate.

  Computes the model's learning rate for different learning policy. Right now,
  only "step" and "poly" are supported.

  (1) The learning policy for "step" is computed as follows:
      current_learning_rate = base_learning_rate *
        learning_rate_decay_factor ^ (global_step / learning_rate_decay_step)
      See tf.train.exponential_decay for details.

  (2) The learning policy for "poly" is computed as follows:
      current_learning_rate = base_learning_rate *
        (1 - global_step / training_number_of_steps) ^ learning_power

  Args:
    learning_policy: Learning rate policy for training.
    base_learning_rate: The base learning rate for model training.
    learning_rate_decay_step: Decay the base learning rate at a fixed step.
    learning_rate_decay_factor: The rate to decay the base learning rate.
    training_number_of_steps: Number of steps for training.
    learning_power: Power used for 'poly' learning policy.
    slow_start_step: Training model with small learning rate for the first few
      steps.
    slow_start_learning_rate: The learning rate employed during slow start.
    slow_start_burnin_type: The burnin type for the slow start stage. Can be
      `none` which means no burnin or `linear` which means the learning rate
      increases linearly from slow_start_learning_rate and reaches
      base_learning_rate after slow_start_steps.

  Returns:
    Learning rate for the specified learning policy.

  Raises:
    ValueError: If learning policy or slow start burnin type is not recognized.
  """
  global_step = tf.train.get_or_create_global_step()
  adjusted_global_step = global_step

  if slow_start_burnin_type != 'none':
    adjusted_global_step -= slow_start_step

  if learning_policy == 'step':
    learning_rate = tf.train.exponential_decay(
        base_learning_rate,
        adjusted_global_step,
        learning_rate_decay_step,
        learning_rate_decay_factor,
        staircase=True)
  elif learning_policy == 'poly':
    learning_rate = tf.train.polynomial_decay(
        base_learning_rate,
        adjusted_global_step,
        training_number_of_steps,
        end_learning_rate=0,
        power=learning_power)
  else:
    raise ValueError('Unknown learning policy.')

  adjusted_slow_start_learning_rate = slow_start_learning_rate
  if slow_start_burnin_type == 'linear':
    # Do linear burnin. Increase linearly from slow_start_learning_rate and
    # reach base_learning_rate after (global_step >= slow_start_steps).
    adjusted_slow_start_learning_rate = (
        slow_start_learning_rate +
        (base_learning_rate - slow_start_learning_rate) *
        tf.to_float(global_step) / slow_start_step)
  elif slow_start_burnin_type != 'none':
    raise ValueError('Unknown burnin type.')

  # Employ small learning rate at the first few steps for warm start.
  return tf.where(global_step < slow_start_step,
                  adjusted_slow_start_learning_rate,
                  learning_rate)

Changed file: local_test_mobilenetv2.sh

#!/bin/bash

# Copyright 2018 The TensorFlow Authors All Rights Reserved. (Standard Apache License 2.0 header.)

#
# This script is used to run local test on PASCAL VOC 2012 using MobileNet-v2.
# Users could also modify from this script for their use case.
#
# Usage:
#   # From the tensorflow/models/research/deeplab directory.
#   sh ./local_test_mobilenetv2.sh
#
#

# Exit immediately if a command exits with a non-zero status.

set -e

# Move one-level up to tensorflow/models/research directory.
cd ..

# Update PYTHONPATH.
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

# Set up the working environment.
CURRENT_DIR=$(pwd)
WORK_DIR="${CURRENT_DIR}/deeplab"

# Run model_test first to make sure the PYTHONPATH is correctly set.
python "${WORK_DIR}"/model_test.py -v

# Go to datasets folder and download PASCAL VOC 2012 segmentation dataset.
DATASET_DIR="datasets"
cd "${WORK_DIR}/${DATASET_DIR}"
sh download_and_convert_voc2012.sh

# Go back to original directory.
cd "${CURRENT_DIR}"

# Set up the working directories.
PASCAL_FOLDER="pascal_voc_seg"
EXP_FOLDER="exp/train_on_trainval_set_mobilenetv2"
INIT_FOLDER="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/init_models"
TRAIN_LOGDIR="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/${EXP_FOLDER}/train"
EVAL_LOGDIR="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/${EXP_FOLDER}/eval"
VIS_LOGDIR="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/${EXP_FOLDER}/vis"
EXPORT_DIR="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/${EXP_FOLDER}/export"
mkdir -p "${INIT_FOLDER}"
mkdir -p "${TRAIN_LOGDIR}"
mkdir -p "${EVAL_LOGDIR}"
mkdir -p "${VIS_LOGDIR}"
mkdir -p "${EXPORT_DIR}"

# Copy locally the trained checkpoint as the initial checkpoint.
TF_INIT_ROOT="http://download.tensorflow.org/models"
CKPT_NAME="deeplabv3_mnv2_pascal_train_aug"
TF_INIT_CKPT="${CKPT_NAME}_2018_01_29.tar.gz"
cd "${INIT_FOLDER}"
wget -nd -c "${TF_INIT_ROOT}/${TF_INIT_CKPT}"
tar -xf "${TF_INIT_CKPT}"
cd "${CURRENT_DIR}"

PASCAL_DATASET="${WORK_DIR}/${DATASET_DIR}/${PASCAL_FOLDER}/tfrecord"

# Train 10 iterations.
NUM_ITERATIONS=10
python "${WORK_DIR}"/train.py \
  --logtostderr \
  --train_split="trainval" \
  --model_variant="mobilenet_v2" \
  --output_stride=16 \
  --train_crop_size="513,513" \
  --train_batch_size=4 \
  --training_number_of_steps="${NUM_ITERATIONS}" \
  --initialize_last_layer = False \
  --fine_tune_batch_norm = False \
  --last_layers_contain_logits_only=true \
  --tf_initial_checkpoint="${INIT_FOLDER}/${CKPT_NAME}/model.ckpt-30000" \
  --train_logdir="${TRAIN_LOGDIR}" \
  --dataset_dir="${PASCAL_DATASET}"
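
One detail worth double-checking in the train command above (an observation, not a confirmed cause of the crash): flags are usually written without spaces around '=', and with the spaced form a boolean flag such as --initialize_last_layer may not actually receive the value False. A sketch of the more conventional spelling, with the same values and the same variables defined earlier in this script:

    # Sketch only: identical command with conventional flag syntax.
    python "${WORK_DIR}"/train.py \
      --logtostderr \
      --train_split="trainval" \
      --model_variant="mobilenet_v2" \
      --output_stride=16 \
      --train_crop_size="513,513" \
      --train_batch_size=4 \
      --training_number_of_steps="${NUM_ITERATIONS}" \
      --initialize_last_layer=false \
      --fine_tune_batch_norm=false \
      --last_layers_contain_logits_only=true \
      --tf_initial_checkpoint="${INIT_FOLDER}/${CKPT_NAME}/model.ckpt-30000" \
      --train_logdir="${TRAIN_LOGDIR}" \
      --dataset_dir="${PASCAL_DATASET}"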

# Run evaluation. This performs eval over the full val split (1449 images) and
# will take a while.
# Using the provided checkpoint, one should expect mIOU=75.34%.
python "${WORK_DIR}"/eval.py \
  --logtostderr \
  --eval_split="val" \
  --model_variant="mobilenet_v2" \
  --eval_crop_size="513,513" \
  --checkpoint_dir="${TRAIN_LOGDIR}" \
  --eval_logdir="${EVAL_LOGDIR}" \
  --dataset_dir="${PASCAL_DATASET}" \
  --max_number_of_evaluations=1

# Visualize the results.
python "${WORK_DIR}"/vis.py \
  --logtostderr \
  --vis_split="val" \
  --model_variant="mobilenet_v2" \
  --vis_crop_size="513,513" \
  --checkpoint_dir="${TRAIN_LOGDIR}" \
  --vis_logdir="${VIS_LOGDIR}" \
  --dataset_dir="${PASCAL_DATASET}" \
  --max_number_of_iterations=1

# Export the trained checkpoint.
CKPT_PATH="${TRAIN_LOGDIR}/model.ckpt-${NUM_ITERATIONS}"
EXPORT_PATH="${EXPORT_DIR}/frozen_inference_graph.pb"

python "${WORK_DIR}"/export_model.py \
  --logtostderr \
  --checkpoint_path="${CKPT_PATH}" \
  --export_path="${EXPORT_PATH}" \
  --model_variant="mobilenet_v2" \
  --num_classes=2 \
  --crop_size=513 \
  --crop_size=513 \
  --inference_scales=1.0

# Run inference with the exported checkpoint.
# Please refer to the provided deeplab_demo.ipynb for an example.

All of this is running on Google Colab.

RizanPSTU commented 5 years ago

Found the problem: a trainval.txt list file needs to be added to the list folder alongside train.txt and val.txt.
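
For anyone else hitting this, a minimal sketch of that fix (the exact folder is an assumption based on the --list_folder flag used by the converter above) is to create the missing list file and then re-run build_voc2012_data.py, so that the trainval split is also converted to tfrecord shards before train.py is launched with --train_split="trainval":

    # Sketch only: build trainval.txt from the existing split lists.
    cd ./VOCdevkit/VOC2012/ImageSets/Segmentation
    cat train.txt val.txt | sort -u > trainval.txt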

paviddavid commented 5 years ago

@RizanPSTU What was the problem? I don't understand what you mean by trainval.txt. Where is it mentioned that such files are needed? What should these files contain? Thanks