Closed marcglettig closed 3 years ago

I just installed the dca extension to use in scanpy and came across an issue regarding the optimizer. The issue occurs regardless of the optimizer chosen. As the dca command line interface does not support data in h5 format, I have not tried that option so far. The error occurred with every dataset I used, so I will not include one here; assume a scanpy adata object with raw counts. Gives me the following trace:

@gokceneraslan, any thoughts?
This was fixed here: https://github.com/theislab/scanpy/commit/88fc8d854e90976df7cfb47c92d3d517e92f2d7. Can you explicitly pass optimizer="RMSprop"?
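Something like this (a minimal sketch, assuming your AnnData object is called adata):

```python
import scanpy.external as sce

# pass the optimizer name explicitly instead of relying on the default
sce.pp.dca(adata, optimizer="RMSprop")
```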
Passing the optimizer explicitly leads to the following stack trace.
```
dca: Successfully preprocessed 17374 genes and 3844 cells.
---------------------------------------------------------------------------
FailedPreconditionError Traceback (most recent call last)
<ipython-input-64-eb3d6e7f203b> in <module>
----> 1 sce.pp.dca(adata, optimizer='RMSprop')
~/miniconda3/envs/mg/lib/python3.8/site-packages/scanpy/external/pp/_dca.py in dca(adata, mode, ae_type, normalize_per_cell, scale, log1p, hidden_size, hidden_dropout, batchnorm, activation, init, network_kwds, epochs, reduce_lr, early_stop, batch_size, optimizer, random_state, threads, learning_rate, verbose, training_kwds, return_model, return_info, copy)
152 raise ImportError('Please install dca package (>= 0.2.1) via `pip install dca`')
153
--> 154 return dca(
155 adata,
156 mode=mode,
~/miniconda3/envs/mg/lib/python3.8/site-packages/dca/api.py in dca(adata, mode, ae_type, normalize_per_cell, scale, log1p, hidden_size, hidden_dropout, batchnorm, activation, init, network_kwds, epochs, reduce_lr, early_stop, batch_size, optimizer, learning_rate, random_state, threads, verbose, training_kwds, return_model, return_info, copy, check_counts)
199 }
200
--> 201 hist = train(adata[adata.obs.dca_split == 'train'], net, **training_kwds)
202 res = net.predict(adata, mode, return_info, copy)
203 adata = res if copy else adata
~/miniconda3/envs/mg/lib/python3.8/site-packages/dca/train.py in train(adata, network, output_dir, optimizer, learning_rate, epochs, reduce_lr, output_subset, use_raw_as_output, early_stop, batch_size, clip_grad, save_weights, validation_split, tensorboard, verbose, threads, **kwds)
89 output = adata.raw.X if use_raw_as_output else adata.X
90
---> 91 loss = model.fit(inputs, output,
92 epochs=epochs,
93 batch_size=batch_size,
~/miniconda3/envs/mg/lib/python3.8/site-packages/keras/engine/training_v1.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
776
777 func = self._select_training_loop(x)
--> 778 return func.fit(
779 self,
780 x=x,
~/miniconda3/envs/mg/lib/python3.8/site-packages/keras/engine/training_arrays_v1.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
638 val_x, val_y, val_sample_weights = None, None, None
639
--> 640 return fit_loop(
641 model,
642 inputs=x,
~/miniconda3/envs/mg/lib/python3.8/site-packages/keras/engine/training_arrays_v1.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
374
375 # Get outputs.
--> 376 batch_outs = f(ins_batch)
377 if not isinstance(batch_outs, list):
378 batch_outs = [batch_outs]
~/miniconda3/envs/mg/lib/python3.8/site-packages/keras/backend.py in __call__(self, inputs)
4017 self._make_callable(feed_arrays, feed_symbols, symbol_vals, session)
4018
-> 4019 fetched = self._callable_fn(*array_vals,
4020 run_metadata=self.run_metadata)
4021 self._call_fetch_callbacks(fetched[-len(self._fetches):])
~/miniconda3/envs/mg/lib/python3.8/site-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
1478 try:
1479 run_metadata_ptr = tf_session.TF_NewBuffer() if run_metadata else None
-> 1480 ret = tf_session.TF_SessionRunCallable(self._session._session,
1481 self._handle, args,
1482 run_metadata_ptr)
FailedPreconditionError: Could not find variable training_4/RMSprop/batch_normalization_7/beta/rms. This could mean that the variable has been deleted. In TF1, it can also mean the variable is uninitialized. Debug info: container=localhost, status=Not found: Resource localhost/training_4/RMSprop/batch_normalization_7/beta/rms/N10tensorflow3VarE does not exist.
[[{{node training_4/RMSprop/RMSprop/update_batch_normalization_7/beta/mul/ReadVariableOp}}]]
```
Are dca and tf2 up-to-date?
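You can check the installed versions with, e.g. (a minimal sketch using only the standard library):

```python
from importlib.metadata import version  # Python 3.8+

# print the versions of the pip-installed dca and tensorflow packages
print(version("dca"), version("tensorflow"))
```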
I'm using dca 0.3.2 and tf 2.5.0, so yes, they are up to date. But maybe there is an issue with using dca as an external function from the scanpy package?
OK, some questions:

1) Are you able to reproduce this?

Code:

```python
import scanpy as sc
import scanpy.external as sce

adata = sc.datasets.paul15()
ad = sce.pp.dca(adata, verbose=True, copy=True, epochs=2)
ad
```

2) If you install the tf2-optimizers branch of DCA, does it fix the issue? E.g. `pip install git+https://github.com/theislab/dca@tf2-optimizers`

3) What is the version of keras?
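(You can check with e.g. `python -c "import keras; print(keras.__version__)"`.)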
Thank you for your answer. Unfortunately I run into the very same error with the code snippet you posted, and the error remains after installing the tf2-optimizers branch.

I'm using keras 2.4.3.
Transferred to DCA; will have a look in more detail when I have more time.
Two more questions: Are you running on CPU or GPU? Does it still happen if you downgrade to TF 2.4.1 (this seems like the only difference between our setups)?
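(For the downgrade, something like `pip install tensorflow==2.4.1` in the same environment should do.)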
Running on CPU or GPU does not change the error, and downgrading also did not work for me.
It turns out TF 2.5 is not compatible with the keras version I am using. Please install the latest dca, which requires keras 2.4.x and TF 2.4.x. Thanks!
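(E.g. something like `pip install --upgrade dca "keras~=2.4.0" "tensorflow~=2.4.0"`; the exact pins are an assumption based on the requirement above.)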