juglab / n2v

This is the implementation of Noise2Void training.

Fixing exporting model weights #130

Closed: jdeschamps closed this 1 year ago

jdeschamps commented 1 year ago

This fixes https://github.com/juglab/n2v/issues/128.

However, the export only works once: running the same export cell a second time gives this error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In [15], line 1
----> 1 model.export_TF(name='Noise2Void - 2D SEM Example', 
      2                 description='This is the 2D Noise2Void example trained on SEM data in python.', 
      3                 authors=["Tim-Oliver Buchholz", "Alexander Krull", "Florian Jug"],
      4                 test_img=X_val[0,...,0], axes='YX',
      5                 patch_shape=patch_shape)

File ~/miniconda3/envs/n2v2/lib/python3.9/site-packages/csbdeep/models/base_model.py:32, in suppress_without_basedir.<locals>._suppress_without_basedir.<locals>.wrapper(*args, **kwargs)
     30     warn is False or warnings.warn("Suppressing call of '%s' (due to basedir=None)." % f.__name__)
     31 else:
---> 32     return f(*args, **kwargs)

File ~/git/n2v/n2v/models/n2v_standard.py:460, in N2V.export_TF(self, name, description, authors, test_img, axes, patch_shape, fname)
    457 assert input_n_dims == self.config.n_dim, 'Input and network dimensions do not match.'
    458 assert test_img.shape[axes.index('X')] == test_img.shape[
    459     axes.index('Y')], 'X and Y dimensions are not of same length.'
--> 460 test_output = self.predict(test_img, axes)
    461 # Extract central slice of Z-Stack
    462 if 'Z' in axes:

File ~/git/n2v/n2v/models/n2v_standard.py:408, in N2V.predict(self, img, axes, resizer, n_tiles, tta)
    405     pred = tta_backward(preds)
    406 else:
    407     pred = \
--> 408         self._predict_mean_and_scale(normalized, axes=new_axes, normalizer=None, resizer=resizer,
    409                                      n_tiles=new_n_tiles)[0]
    411 pred = self.__denormalize__(pred, means, stds)
    413 if 'C' in axes:

File ~/miniconda3/envs/n2v2/lib/python3.9/site-packages/csbdeep/models/care_standard.py:377, in CARE._predict_mean_and_scale(self, img, axes, normalizer, resizer, n_tiles)
    374 while not done:
    375     try:
    376         # raise tf.errors.ResourceExhaustedError(None,None,None) # tmp
--> 377         x = predict_tiled(self.keras_model,x,axes_in=net_axes_in,axes_out=net_axes_out,
    378                           n_tiles=n_tiles,block_sizes=net_axes_in_div_by,tile_overlaps=net_axes_in_overlaps,pbar=progress)
    379         # x has net_axes_out semantics
    380         done = True

File ~/miniconda3/envs/n2v2/lib/python3.9/site-packages/csbdeep/internals/predict.py:51, in predict_tiled(keras_model, x, n_tiles, block_sizes, tile_overlaps, axes_in, axes_out, pbar, **kwargs)
     48 """TODO."""
     50 if all(t==1 for t in n_tiles):
---> 51     pred = predict_direct(keras_model,x,axes_in,axes_out,**kwargs)
     52     if pbar is not None:
     53         pbar.update()

File ~/miniconda3/envs/n2v2/lib/python3.9/site-packages/csbdeep/internals/predict.py:41, in predict_direct(keras_model, x, axes_in, axes_out, **kwargs)
     39 len(axes_in) == x.ndim or _raise(ValueError())
     40 x = to_tensor(x,channel=channel_in,single_sample=single_sample)
---> 41 pred = from_tensor(keras_model.predict(x,**kwargs),channel=channel_out,single_sample=single_sample)
     42 len(axes_out) == pred.ndim or _raise(ValueError())
     43 return pred

File ~/miniconda3/envs/n2v2/lib/python3.9/site-packages/keras/engine/training_v1.py:1053, in Model.predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
    988 def predict(
    989     self,
    990     x,
   (...)
    997     use_multiprocessing=False,
    998 ):
    999     """Generates output predictions for the input samples.
   1000 
   1001     Computation is done in batches (see the `batch_size` arg.)
   (...)
   1051             that is not a multiple of the batch size.
   1052     """
-> 1053     self._assert_built_as_v1()
   1054     base_layer.keras_api_gauge.get_cell("predict").set(True)
   1055     self._check_call_args("predict")

File ~/miniconda3/envs/n2v2/lib/python3.9/site-packages/keras/engine/base_layer_v1.py:906, in Layer._assert_built_as_v1(self)
    904 def _assert_built_as_v1(self):
    905     if not hasattr(self, "_originally_built_as_v1"):
--> 906         raise ValueError(
    907             "Your Layer or Model is in an invalid state. "
    908             "This can happen for the following cases:\n "
    909             "1. You might be interleaving estimator/non-estimator models "
    910             "or interleaving models/layers made in "
    911             "tf.compat.v1.Graph.as_default() with models/layers created "
    912             "outside of it. "
    913             "Converting a model to an estimator (via model_to_estimator) "
    914             "invalidates all models/layers made before the conversion "
    915             "(even if they were not the model converted to an estimator). "
    916             "Similarly, making a layer or a model inside a "
    917             "a tf.compat.v1.Graph invalidates all layers/models you "
    918             "previously made outside of the graph.\n"
    919             "2. You might be using a custom keras layer implementation "
    920             "with custom __init__ which didn't call super().__init__. "
    921             " Please check the implementation of %s and its bases."
    922             % (type(self),)
    923         )

ValueError: Your Layer or Model is in an invalid state. This can happen for the following cases:
 1. You might be interleaving estimator/non-estimator models or interleaving models/layers made in 
tf.compat.v1.Graph.as_default() with models/layers created outside of it. Converting a model to an estimator (via model_to_estimator) invalidates all models/layers made before the conversion (even if they were not the model converted to an estimator). Similarly, making a layer or a model inside a a tf.compat.v1.Graph invalidates all layers/models you previously made outside of the graph.
 2. You might be using a custom keras layer implementation with custom __init__ which didn't call super().__init__.  Please check the implementation of <class 'keras.engine.functional.Functional'> and its bases.

Here is the suggestion from the TF error (reformatted so it is not all on a single line):

ValueError: Your Layer or Model is in an invalid state. This can happen for the following cases:

1. You might be interleaving estimator/non-estimator models or interleaving models/layers made in tf.compat.v1.Graph.as_default() with models/layers created outside of it. Converting a model to an estimator (via model_to_estimator) invalidates all models/layers made before the conversion (even if they were not the model converted to an estimator). Similarly, making a layer or a model inside a a tf.compat.v1.Graph invalidates all layers/models you previously made outside of the graph.
2. You might be using a custom keras layer implementation with custom __init__ which didn't call super().__init__. Please check the implementation of <class 'keras.engine.functional.Functional'> and its bases.
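
To be explicit, the failing pattern is just calling the export cell twice on the same model object. A minimal sketch (the model name and basedir are placeholders; `X_val` and `patch_shape` are the variables from the 2D SEM example notebook):

```python
from n2v.models import N2V

# Reload the trained model (config=None loads the saved configuration);
# name/basedir are placeholders for whatever was used during training.
model = N2V(config=None, name='n2v_2D_SEM', basedir='models')

export_kwargs = dict(
    name='Noise2Void - 2D SEM Example',
    description='This is the 2D Noise2Void example trained on SEM data in python.',
    authors=["Tim-Oliver Buchholz", "Alexander Krull", "Florian Jug"],
    test_img=X_val[0, ..., 0], axes='YX',
    patch_shape=patch_shape,
)

model.export_TF(**export_kwargs)  # first call: works, the model is exported
model.export_TF(**export_kwargs)  # second call: raises the ValueError above inside self.predict()
```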

@tibuch any clue as to what might be happening? Was this happening before as well?

Note: because this branch was based on the update_setup branch, this PR should be merged together with that one.
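
In the meantime, a possible workaround (an untested sketch, not part of this PR): reload the model from its basedir before each export, so that export_TF always runs on a freshly built keras model:

```python
from n2v.models import N2V

def export_fresh(name, basedir, **export_kwargs):
    # Hypothetical helper: re-instantiate the model from disk instead of reusing
    # the in-memory object, whose keras graph may have been invalidated by a
    # previous export. export_kwargs are the same arguments as in the sketch above.
    model = N2V(config=None, name=name, basedir=basedir)
    model.export_TF(**export_kwargs)
```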

jdeschamps commented 1 year ago

Note that this error also occurs when not using N2V2.