[Closed] junyongyou closed this issue 3 years ago
I got a temporary solution by using tf.reset_default_graph(), but perhaps there are better approaches.
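A minimal sketch of that workaround, assuming TF1-style graph mode; `build_model` here is a hypothetical stand-in for the real per-image driver build, which defines variables such as efficientnet-b4/stem/conv2d/kernel:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

def build_model():
    # hypothetical stand-in for rebuilding the detector graph; the real code
    # defines many variables under names like efficientnet-b4/stem/conv2d/kernel
    return tf.compat.v1.get_variable('stem/conv2d/kernel', shape=[3, 3, 3, 48])

build_model()                        # first build succeeds
tf.compat.v1.reset_default_graph()   # drop the old graph and its variables
build_model()                        # rebuilding no longer raises "already exists"
```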
Try ServingDriver
I get the following error with ServingDriver. My input is two JPG images, both of size 400x400.
Code:

```python
imgs = []
for f in ['/tf/notebooks/resized100/158.jpg', '/tf/notebooks/resized100/5800.jpg']:
    imgs.append(np.array(Image.open(f)))

driver_serving = ServingDriver(
    'efficientdet-d0',
    '/tf/notebooks/automl/efficientdet/efficientdet-d0',
    batch_size=len(imgs))
driver_serving.build()
predictions = driver_serving.serve_images(imgs)
for i in range(len(imgs)):
    driver_serving.visualize(imgs[i], predictions[i])
```
```
InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/client/session.py in _do_call(self, fn, *args)
   1364     try:
-> 1365       return fn(*args)
   1366     except errors.OpError as e:

/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1349     return self._call_tf_sessionrun(options, feed_dict, fetch_list,
-> 1350                                     target_list, run_metadata)
   1351

/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
   1442                                   fetch_list, target_list,
-> 1443                                   run_metadata)
   1444

InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: Shapes of all inputs must match: values[0].shape = [1,7] != values[1].shape = [0,7]
	 [[{{node detections}}]]
	 [[detections/_1519]]
  (1) Invalid argument: Shapes of all inputs must match: values[0].shape = [1,7] != values[1].shape = [0,7]
	 [[{{node detections}}]]
0 successful operations. 0 derived errors ignored.

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
```
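The message suggests that per-image detection tensors with different row counts ([1,7] vs [0,7], i.e. one detection vs none) are being combined into a single batch at the `detections` node. A minimal numpy-only illustration of that shape-matching constraint, not the EfficientDet code itself:

```python
import numpy as np

dets_a = np.zeros((1, 7))  # one detection row (7 columns, as in the error)
dets_b = np.zeros((0, 7))  # zero detection rows

# Stacking per-image results with differing first dimensions fails, mirroring
# the "Shapes of all inputs must match" error above.
try:
    np.stack([dets_a, dets_b])
except ValueError:
    pass

# Padding every image to the same number of rows makes the batch stackable.
max_rows = max(dets_a.shape[0], dets_b.shape[0])
padded = [np.concatenate([d, np.zeros((max_rows - d.shape[0], 7))])
          for d in (dets_a, dets_b)]
batch = np.stack(padded)
```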
I have trained a model and want to run inference on multiple images of different sizes. It works normally if I process the images separately. However, when I try to do them in one script, even though the ModelInspector is built individually for each image, I still get a `ValueError: Variable efficientnet-b4/stem/conv2d/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?` See the full error trace below. Can anybody help with this? Thanks a lot.
```
File "C:/automl/efficientdet/model_inspect_steinsvik.py", line 219, in inference_single_image
  outputs_np = driver.inference(image_path, output_dir, config_dict)
File "C:\automl\efficientdet\inference.py", line 605, in inference
  self.params)
File "C:\automl\efficientdet\inference.py", line 118, in build_model
  class_outputs, box_outputs = model_arch(inputs, model_name, **kwargs)
File "C:\automl\efficientdet\efficientdet_arch.py", line 682, in efficientdet
  features = build_backbone(features, config)
File "C:\automl\efficientdet\efficientdet_arch.py", line 412, in build_backbone
  override_params=override_params)
File "C:\automl\efficientdet\backbone\efficientnet_builder.py", line 328, in build_model_base
  features = model(images, training=training, features_only=True)
File "C:\Users\junyong\AppData\Local\Continuum\anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 778, in __call__
  outputs = call_fn(cast_inputs, *args, **kwargs)
File "C:\Users\junyong\AppData\Local\Continuum\anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\autograph\impl\api.py", line 237, in wrapper
  raise e.ag_error_metadata.to_exception(e)
ValueError: in converted code:
```
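An alternative to tf.reset_default_graph() is to give each image its own tf.Graph, so variable names cannot collide across builds. A minimal sketch under that assumption; `build_model` is a hypothetical stand-in for the per-image graph construction:

```python
import tensorflow as tf

def build_model():
    # hypothetical stand-in for the per-image model build, which defines
    # variables like efficientnet-b4/stem/conv2d/kernel
    return tf.compat.v1.get_variable(
        'efficientnet-b4/stem/conv2d/kernel', shape=[3, 3, 3, 48])

image_paths = ['158.jpg', '5800.jpg']  # placeholder list
for _ in image_paths:
    # each image gets a fresh Graph, so the same variable name can be
    # created again without reuse=True or tf.reset_default_graph()
    with tf.Graph().as_default():
        build_model()
```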